MiaoQIDS 🛡️ Quantum Firewall: a catgirl that analyzes pcap files
Pass. Audited by VirusTotal on May 10, 2026.
Overview
Type: OpenClaw Skill
Name: miao-qids
Version: 0.1.1
The skill implements a hybrid classical-quantum intrusion detection system, but it contains several high-risk vulnerabilities and behaviors. It uses 'pickle.load' and 'torch.load' to load model data in 'CNNmodel.py', 'QNNmodel.py', and 'FeatureSelection.py', which are known vectors for Remote Code Execution (RCE) if model files are tampered with. The MCP server in 'skill.py' is an unauthenticated HTTP server that accepts arbitrary file paths ('pcap_path') from POST requests, which could be exploited to process sensitive system files. Additionally, 'skill.py' performs external network requests to 'ip-api.com' to geolocate IP addresses extracted from the analyzed traffic, constituting a potential data leak.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Users may place too much trust in a “safe” or benign-looking result when reviewing network traffic.
The skill presents confidence outputs as probabilities, but the code manually boosts the benign-traffic confidence after computing the distribution, without documenting that calibration.
# Deliberately raise benign-traffic confidence to reduce false positives
if confidence_dict["善意流量"] > 0.5: confidence_dict["善意流量"] *= 1.1
Document any confidence calibration clearly, normalize scores after adjustment, and avoid wording that implies benign results are definitive.
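The recommendation above can be sketched as follows. This is a minimal illustration, not the skill's code: the class labels, the 1.1 factor, and the `calibrate` helper name mirror the snippet but are otherwise hypothetical.

```python
# Hypothetical sketch: apply a documented calibration factor, then
# renormalize so the outputs remain a valid probability distribution.
BENIGN_BOOST = 1.1  # documented calibration factor, mirroring the snippet above

def calibrate(confidence_dict):
    adjusted = dict(confidence_dict)
    # Boost benign confidence only above the 0.5 threshold, as the skill does
    if adjusted["benign"] > 0.5:
        adjusted["benign"] *= BENIGN_BOOST
    # Renormalize so the scores still sum to 1 after adjustment
    total = sum(adjusted.values())
    return {label: score / total for label, score in adjusted.items()}
```

After normalization the scores remain comparable as probabilities, so a boosted benign score no longer overstates total confidence.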
A malicious model file could run arbitrary Python code on the user’s machine when the detector starts.
pickle.load can execute code embedded in a crafted file; this loader is used for the required QNN model loaded from the model directory.
with open(filename, 'rb') as f:
    model_data = pickle.load(f)
Only use trusted, hash-verified model files; prefer safe tensor formats where possible; and restrict model_dir to trusted read-only artifacts.
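A hash-verification wrapper along the lines recommended above might look like this. It is a sketch under stated assumptions: the `load_verified_model` name and pinned-hash workflow are illustrative, and verification only mitigates tampering; the pinned artifact must still come from a trusted source.

```python
import hashlib
import pickle

# Hypothetical sketch: refuse to unpickle a model file unless its SHA-256
# digest matches a pinned value published alongside the skill.
def load_verified_model(path, expected_sha256):
    with open(path, "rb") as f:
        blob = f.read()
    digest = hashlib.sha256(blob).hexdigest()
    if digest != expected_sha256:
        # Abort before any pickle bytecode can run
        raise ValueError(f"model hash mismatch: {digest}")
    return pickle.loads(blob)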
Users must obtain unreviewed model files separately, increasing the chance of compromised or mismatched artifacts.
The required pretrained model artifacts are not included in the manifest and no trusted source, version, or checksum is provided for them.
Requires a pretrained CNN model (`cnn_mtd_final.pth`) and QNN model (`qnn_nodel.pkl`)
Publish the exact model artifacts or trusted download source with cryptographic hashes, and document the expected filenames consistently.
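Publishing such hashes could be as simple as generating a checksum manifest over the artifact directory. A minimal sketch, assuming an illustrative `build_manifest` helper; the directory layout and output format are not prescribed by the skill:

```python
import hashlib
import json
import pathlib

# Hypothetical sketch: produce a JSON manifest mapping each model artifact
# to its SHA-256 digest, for publication next to the download source.
def build_manifest(artifact_dir):
    manifest = {}
    for path in sorted(pathlib.Path(artifact_dir).glob("*")):
        if path.is_file():
            manifest[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    return json.dumps(manifest, indent=2)
```

Users can then verify a downloaded `cnn_mtd_final.pth` or `qnn_nodel.pkl` against the manifest before first use.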
If the server is exposed to untrusted clients, they could cause it to parse chosen local paths or consume CPU/memory on large PCAPs.
The local HTTP endpoint processes a user-supplied local file path, which is expected for PCAP analysis but should be kept within trusted local use.
请求格式(HTTP POST `/analyze`) ... `"pcap_path": "/path/to/your/file.pcap"`
Run it on localhost or a trusted network only, add authentication if exposed, and consider path allowlists and file-size limits.
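The path-allowlist and size-limit mitigations could be sketched as a validation step run before any PCAP is parsed. The `validate_pcap_path` helper, the allowlist directory, and the 100 MiB cap are illustrative assumptions, not part of the skill:

```python
import os

# Hypothetical sketch: reject client-supplied paths outside an allowlisted
# directory and cap the file size before handing the path to the analyzer.
def validate_pcap_path(pcap_path, allowed_dir, max_bytes=100 * 1024 * 1024):
    real = os.path.realpath(pcap_path)      # resolves "../" and symlinks
    root = os.path.realpath(allowed_dir)
    if os.path.commonpath([real, root]) != root:
        raise PermissionError("pcap_path outside the allowlisted directory")
    if os.path.getsize(real) > max_bytes:
        raise ValueError("pcap exceeds size limit")
    return real
```

Resolving with `realpath` before the prefix check matters: a naive string comparison can be bypassed with `..` segments or symlinks.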
IP addresses from analyzed traffic may be disclosed to ip-api.com and visible to network observers because the request uses HTTP.
The skill automatically sends extracted IP addresses to an external geolocation provider over HTTP. This matches the IP-location feature, but it is privacy-relevant.
url = f"http://ip-api.com/json/{ip}?lang=zh-CN"; response = requests.get(url, timeout=3)
Make IP lookups optional, document the provider, use HTTPS if supported, and avoid querying private/internal target addresses.
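The opt-in and private-address guards could be sketched with the standard-library `ipaddress` module. The `should_geolocate` helper is illustrative, and whether ip-api.com offers HTTPS on its free tier is not verified here:

```python
import ipaddress

# Hypothetical sketch: gate lookups behind an explicit opt-in flag and skip
# non-global addresses so internal network topology is never sent out.
def should_geolocate(ip, lookups_enabled):
    if not lookups_enabled:
        return False
    addr = ipaddress.ip_address(ip)
    # is_global excludes private, loopback, link-local and reserved ranges
    return addr.is_global
```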
Traffic-derived metadata may remain in the cache after analysis and could be accessed by other local users or later tasks.
The skill stores derived feature vectors from PCAP contents on disk for caching, which is purpose-aligned but creates retained network-derived data.
if self.use_cache: np.save(cache_file, features)
Use a private cache directory, document retention, and clear cached feature files when they are no longer needed.
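A private, cleanable cache directory along these lines would address the retention concern. The helper names and `miao-qids-cache-` prefix are illustrative assumptions:

```python
import os
import shutil
import tempfile

# Hypothetical sketch: keep derived feature files in a per-run, owner-only
# directory and remove the whole directory once analysis is done.
def private_cache_dir():
    path = tempfile.mkdtemp(prefix="miao-qids-cache-")  # created mode 0o700
    os.chmod(path, 0o700)  # owner-only, in case the umask was looser
    return path

def clear_cache(path):
    # Best-effort removal of all cached feature vectors
    shutil.rmtree(path, ignore_errors=True)
```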
