DuckDuckGo Search

v1.0.0

DuckDuckGo web search for private, tracker-free searching. Use when the user asks to search the web, find information online, or perform web-based research without...

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the included Python search client and SKILL.md examples. The code implements web, image, and JSON searches against DuckDuckGo endpoints — these capabilities align with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to call DuckDuckGo JSON and HTML endpoints and to perform HTML scraping for full results. It does not instruct reading local config, secrets, or transmitting data to third-party endpoints. Note: scraping is explicitly recommended (HTML scraping section), which is functionally expected but may have legal/TOS implications; the instructions also advise rate-limiting and caching.
Install Mechanism
There is no install spec (instruction-only), which minimizes risk, but the packaged code depends on Python libraries (requests, bs4) that are not declared in the skill metadata. The lack of declared dependencies is a minor coherence issue (the runtime will need these packages available). No downloads from arbitrary URLs are present.
Credentials
The skill requests no environment variables, credentials, or config paths. The code only uses network access to DuckDuckGo domains and a bundled User-Agent header; this is proportionate to the stated purpose.
Persistence & Privilege
The skill's `always` flag is false, and it does not request persistent system-wide privileges or alter other skills' configs. It appears to be a normal, invocable skill without elevated privileges.
Assessment
This skill appears to do only DuckDuckGo searches and HTML scraping, which matches its description. Before installing: (1) confirm your agent environment has Python, requests, and beautifulsoup4 (they are used but not declared); (2) be aware the skill will make outbound HTTP(S) requests to duckduckgo.com (ensure network policy permits this); (3) HTML scraping can have TOS or rate-limit implications—keep delays/caching enabled; and (4) review the included script yourself to ensure it meets your privacy and compliance needs. If you need stricter guarantees, request that the skill declare its dependencies and explicitly document network behavior in the metadata.
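The dependency caveat in (1) can be checked before install. A minimal standard-library sketch (missing_dependencies is a hypothetical helper; requests and bs4 are the packages the review names):

```python
import importlib.util

def missing_dependencies(packages):
    """Return the subset of top-level packages that cannot be imported."""
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

# The skill's undeclared runtime dependencies:
missing = missing_dependencies(["requests", "bs4"])
if missing:
    print(f"Install before enabling the skill: {', '.join(missing)}")
```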


Tags: duckduckgo · latest · privacy · search · web


SKILL.md

DuckDuckGo Web Search

Private web search using DuckDuckGo API for tracker-free information retrieval.

Core Features

  • Privacy-focused search (no tracking)
  • Instant answer support
  • Multiple search modes (web, images, videos, news)
  • JSON output for easy parsing
  • No API key required

Quick Start

Basic Web Search

import requests

def search_duckduckgo(query, max_results=10):
    """
    Perform DuckDuckGo search and return results.

    Args:
        query: Search query string
        max_results: Maximum number of results to return (default: 10)

    Returns:
        List of search results with title, url, description
    """
    url = "https://api.duckduckgo.com/"
    params = {
        "q": query,
        "format": "json",
        "no_html": 1,
        "skip_disambig": 0
    }

    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    data = response.json()

    # Extract results
    results = []

    # Abstract (instant answer)
    if data.get("Abstract"):
        results.append({
            "type": "instant_answer",
            "title": "Instant Answer",
            "content": data["Abstract"],
            "source": data.get("AbstractSource", "DuckDuckGo")
        })

    # Related topics
    if data.get("RelatedTopics"):
        for topic in data["RelatedTopics"][:max_results]:
            if isinstance(topic, dict) and topic.get("Text"):
                results.append({
                    "type": "related",
                    "title": topic.get("FirstURL", "").split("/")[-1].replace("-", " ").title(),
                    "content": topic["Text"],
                    "url": topic.get("FirstURL", "")
                })

    return results[:max_results]

Advanced Usage (HTML Scraping)

from bs4 import BeautifulSoup
import requests

def search_with_results(query, max_results=10):
    """
    Perform DuckDuckGo search and scrape actual results.

    Args:
        query: Search query string
        max_results: Maximum number of results to return

    Returns:
        List of search results with title, url, snippet
    """
    url = "https://duckduckgo.com/html/"
    params = {"q": query}

    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
    }

    response = requests.post(url, data=params, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    results = []
    for result in soup.find_all("a", class_="result__a", href=True)[:max_results]:
        # The snippet lives in an enclosing result__body div; guard against
        # layout changes where that parent is missing.
        body = result.find_parent("div", class_="result__body")
        results.append({
            "title": result.get_text(strip=True),
            "url": result["href"],
            "snippet": body.get_text(strip=True) if body else ""
        })

    return results

Search Operators

DuckDuckGo supports standard search operators:

Operator     Example                       Description
""           "exact phrase"                Exact phrase match
-            python -django                Exclude terms
site:        site:wikipedia.org history    Search specific site
filetype:    filetype:pdf report           Specific file types
intitle:     intitle:openclaw              Words in title
inurl:       inurl:docs/                   Words in URL
OR           docker OR kubernetes          Either term
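Operators compose freely in a single query string. A small sketch of building queries programmatically (build_query is a hypothetical helper, not part of the skill):

```python
def build_query(terms, site=None, filetype=None, exclude=()):
    """Compose a DuckDuckGo query string from parts using standard operators."""
    parts = [terms]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    parts.extend(f"-{term}" for term in exclude)
    return " ".join(parts)

build_query("history", site="wikipedia.org")  # 'history site:wikipedia.org'
build_query("python", exclude=["django"])     # 'python -django'
```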

Search Modes

Web Search

Default mode, searches across the web.

search_with_results("machine learning tutorial")

Images Search

import re
import requests

def get_vqd(query):
    """Fetch the per-query vqd token that DuckDuckGo's JSON endpoints require.

    The token is embedded in the HTML of a regular search page; the
    extraction pattern below is based on observed markup and may need
    updating if DuckDuckGo changes its page.
    """
    response = requests.get("https://duckduckgo.com/", params={"q": query}, timeout=10)
    match = re.search(r"""vqd=['"]?([\d-]+)""", response.text)
    return match.group(1) if match else ""

def search_images(query, max_results=10):
    url = "https://duckduckgo.com/i.js"
    params = {
        "q": query,
        "o": "json",
        "vqd": get_vqd(query),  # token extracted from a prior page load
        "f": ",,,",
        "p": "1"
    }

    response = requests.get(url, params=params, timeout=10)
    data = response.json()

    results = []
    for result in data.get("results", [])[:max_results]:
        results.append({
            "title": result.get("title", ""),
            "url": result.get("image", ""),
            "thumbnail": result.get("thumbnail", ""),
            "source": result.get("source", "")
        })

    return results

News Search

Add !news to the query:

search_duckduckgo("artificial intelligence !news")
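The same pattern generalizes to other modes via bangs. A tiny sketch (only !news appears in this document; the other bang names are assumptions about DuckDuckGo's own bangs):

```python
BANGS = {
    "news": "!news",   # shown in this SKILL.md
    "images": "!i",    # assumption: DuckDuckGo image bang
    "videos": "!v",    # assumption: DuckDuckGo video bang
}

def with_bang(query, mode):
    """Append the bang for a mode, if one is known; otherwise leave the query alone."""
    bang = BANGS.get(mode)
    return f"{query} {bang}" if bang else query
```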

Best Practices

Query Construction

Good queries:

  • "DuckDuckGo API documentation" 2024 (specific, recent)
  • site:github.com openclaw issues (targeted)
  • python machine learning tutorial filetype:pdf (resource-specific)

Avoid:

  • Vague single words ("search", "find")
  • Overly complex operators that might confuse results
  • Questions with multiple unrelated topics

Privacy Considerations

DuckDuckGo advantages:

  • ✅ No personal tracking
  • ✅ No search history stored
  • ✅ No user profiling
  • ✅ No forced personalized results

Performance Tips

  1. Use HTML scraping for actual results - The JSON API provides instant answers but limited result lists
  2. Add appropriate delays - Respect rate limits when making multiple queries
  3. Cache results - Store common searches to avoid repeated API calls
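Tips 2 and 3 can be combined in one thin wrapper. A sketch assuming the search_with_results function above (make_cached_search and the delay value are illustrative, not part of the skill):

```python
import time
from functools import lru_cache

def make_cached_search(search_fn, min_delay=1.0):
    """Wrap a search function with an LRU cache plus a minimum delay
    between real (non-cached) requests."""
    state = {"last": 0.0}

    @lru_cache(maxsize=128)
    def cached(query):
        wait = min_delay - (time.monotonic() - state["last"])
        if wait > 0:
            time.sleep(wait)  # respect rate limits between live requests
        state["last"] = time.monotonic()
        return tuple(search_fn(query))  # tuples keep cached results immutable

    return cached
```

For example, search = make_cached_search(search_with_results); repeated identical queries return from the cache without the delay or a network round trip.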

Error Handling

import time

def search_safely(query, retries=3):
    for attempt in range(retries):
        try:
            results = search_with_results(query)
            if results:
                return results
        except Exception as e:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # Exponential backoff

    return []

Output Formatting

Markdown Format

def format_results_markdown(results, query):
    output = f"# Search Results for: {query}\n\n"

    for i, result in enumerate(results, 1):
        output += f"## {i}. {result.get('title', 'Untitled')}\n\n"
        output += f"**URL:** {result.get('url', 'N/A')}\n\n"
        output += f"{result.get('snippet', result.get('content', 'N/A'))}\n\n"
        output += "---\n\n"

    return output

JSON Format

import json
from datetime import datetime

def format_results_json(results, query):
    return json.dumps({
        "query": query,
        "count": len(results),
        "results": results,
        "timestamp": datetime.now().isoformat()
    }, indent=2)

Common Patterns

Find Documentation

search_duckduckgo(f'{library_name} documentation filetype:md')

Recent Information

search_duckduckgo(f'{topic} 2024 news')

Troubleshooting

search_duckduckgo(f'{error_message} {tool_name} stackoverflow')

Technical Comparison

search_duckduckgo('postgresql vs mysql performance 2024')

Integration Example

class DuckDuckGoSearcher:
    def __init__(self):
        self.session = requests.Session()
        self.session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
        })

    def search(self, query, mode="web", max_results=10):
        """
        Unified search interface.

        Args:
            query: Search query
            mode: 'web', 'images', 'news'
            max_results: Maximum results

        Returns:
            Formatted results as list
        """
        if mode == "images":
            return self._search_images(query, max_results)
        elif mode == "news":
            return self._search_web(f"{query} !news", max_results)
        else:
            return self._search_web(query, max_results)

    def _search_web(self, query, max_results):
        # Delegate to the module-level search_with_results() defined above
        return search_with_results(query, max_results)

    def _search_images(self, query, max_results):
        # Delegate to the module-level search_images() defined above
        return search_images(query, max_results)

Resources

Official Documentation

References

  • HTML scraping patterns for result extraction
  • Rate limiting best practices
  • Result parsing and filtering

Files

2 total
