{
  "skill": {
    "slug": "llm-testing",
    "displayName": "LLM Testing",
    "summary": "Provides curated prompts to test LLM security, bias, privacy, alignment, and robustness for authorized AI safety and red team assessments.",
    "tags": { "latest": "1.0.0" },
    "stats": {
      "comments": 0,
      "downloads": 113,
      "installsAllTime": 0,
      "installsCurrent": 0,
      "stars": 0,
      "versions": 1
    },
    "createdAt": 1774107281252,
    "updatedAt": 1774107708111
  },
  "latestVersion": {
    "version": "1.0.0",
    "createdAt": 1774107281252,
    "changelog": "- Initial release of the llm-testing skill.\n- Provides curated prompts and wordlists for testing LLM security, safety, privacy, and bias.\n- Includes test categories for bias detection, data leakage, privacy boundaries, memory recall, and alignment/adversarial resistance.\n- Clear usage instructions, best practices, and ethical guidelines included.\n- Structured file organization for easy integration and expansion.\n- References to leading AI safety and red teaming frameworks.",
    "license": "MIT-0"
  },
  "metadata": null,
  "owner": {
    "handle": "pandaai-1337",
    "userId": "publishers:pandaai-1337",
    "displayName": "PandaAI-1337",
    "image": "https://avatars.githubusercontent.com/u/264713685?v=4"
  },
  "moderation": {
    "isSuspicious": true,
    "isMalwareBlocked": false,
    "verdict": "suspicious",
    "reasonCodes": ["suspicious.llm_suspicious"],
    "summary": "Detected: suspicious.llm_suspicious",
    "engineVersion": "v2.2.0",
    "updatedAt": 1774107708111
  }
}