{"skill":{"slug":"openclaw-langcache","displayName":"LangCache Semantic Caching for OpenClaw","summary":"This skill should be used when the user asks to \"enable semantic caching\", \"cache LLM responses\", \"reduce API costs\", \"speed up AI responses\", \"configure LangCache\", \"search the semantic cache\", \"store responses in cache\", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. Provides integration with the Redis LangCache managed service for semantic caching of prompts and responses.","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":1665,"installsAllTime":0,"installsCurrent":0,"stars":1,"versions":1},"createdAt":1770172516522,"updatedAt":1777524983898},"latestVersion":{"version":"1.0.0","createdAt":1770172516522,"changelog":"openclaw-langcache v1.0.0\n\n- Initial release.\n- Integrates Redis LangCache for semantic caching of LLM prompts and responses.\n- Provides bash scripts for searching, storing, deleting, and flushing cache entries.\n- Enforces a robust caching policy to prevent unsafe or context-sensitive data from being stored.\n- Supports both semantic and exact-match search strategies.\n- Includes usage documentation, environment variable setup, and example workflows.","license":null},"metadata":null,"owner":{"handle":"manvinder01","userId":"publishers:manvinder01","displayName":"manvinder01","image":"https://avatars.githubusercontent.com/u/16512475?v=4"},"moderation":{"isSuspicious":true,"isMalwareBlocked":false,"verdict":"suspicious","reasonCodes":["suspicious.llm_suspicious"],"summary":"Detected: suspicious.llm_suspicious","engineVersion":"v2.4.5","updatedAt":1777524983898}}