Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Authorship Credit Gen

v0.1.0

Use when determining author order on research manuscripts, assigning CRediT contributor roles for transparency, or documenting individual contributions to collaborative projects, and more.

0 stars · 243 downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for googolme/authorship-credit-gen.

Prompt preview: Install & Setup
Install the skill "Authorship Credit Gen" (googolme/authorship-credit-gen) from ClawHub.
Skill page: https://clawhub.ai/googolme/authorship-credit-gen
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install authorship-credit-gen

ClawHub CLI


npx clawhub@latest install authorship-credit-gen
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to determine fair author order, compute weighted contribution scores, analyze equity, and export to multiple formats. The included scripts/main.py implements CRediT role definitions and generation of text/json/xml/bilingual statements, but the claimed author-ordering algorithms, weighting/score functions, equity analysis and reporting, and docx/pdf export referenced in SKILL.md are absent. SKILL.md also imports from scripts.authorship_credit and references classes (e.g., AuthorshipCreditGenerator) that are not present in the code bundle; this is an incoherence between stated purpose and actual code.
Instruction Scope
Runtime instructions in SKILL.md show example usage that imports non-existent modules (scripts.authorship_credit) and calls functions not implemented in scripts/main.py. SKILL.md also references resources/ and references/ directories that are not included in the package. The CLI usage shown (--contributions, --guidelines, --output) may not be fully supported by the actual script: main.py contains argparse code, but the full CLI implementation was truncated in the scan. Overall, the instructions overreach relative to the provided code and are vague about what data is read and written.
Install Mechanism
No install spec is provided (instruction-only skill). The package includes a Python script and a minimal requirements.txt. There are no remote downloads or installers; nothing is written to disk by an install step. This is low-risk from an install-mechanism perspective.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. That is proportionate to the declared functionality. The code also does not appear to reference network endpoints or secrets in the visible portions.
Persistence & Privilege
The always flag is false, and there are no indications that the skill requests persistent elevated privileges or modifies other skills or global agent settings. Autonomous invocation is allowed by default, but that alone is not a red flag.
What to consider before installing
This skill is suspicious because the documentation and examples promise features (author-ordering algorithms, equity analysis, multiple export formats, and a different module/class API) that are not present or do not match the included script. Before installing or invoking it:

  1. Inspect the full scripts/main.py to confirm which CLI flags and functions are actually implemented and whether any parts are missing or truncated (a static-inspection sketch follows this list).
  2. Test the tool in an isolated environment with synthetic data to verify it behaves as expected; it should not require credentials.
  3. If you need the missing features (ordering, weighted scoring, docx/pdf export), ask the author for a corrected package or source; do not rely on the SKILL.md examples as authoritative.
  4. Because SKILL.md and the file manifest disagree, run the code locally on non-sensitive data until you have confirmed its behavior.
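
As a starting point for step 1, here is a minimal sketch of a static check that lists what scripts/main.py actually defines without executing the untrusted code. The CLAIMED names are an assumption drawn from the SKILL.md examples on this page, not an official manifest:

# verify_skill.py - hedged sketch: statically compare the names SKILL.md
# promises against what scripts/main.py defines, without running it.
import ast

# Assumed list of promised names, taken from the SKILL.md examples.
CLAIMED = [
    "AuthorshipCreditGen",
    "calculate_contribution_scores",
    "generate_author_order",
    "assign_credit_roles",
    "analyze_equity",
]

with open("scripts/main.py") as f:
    tree = ast.parse(f.read())

# Every function, method, and class name defined anywhere in the file.
defined = {
    node.name
    for node in ast.walk(tree)
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
}

for name in CLAIMED:
    print(f"{name}: {'present' if name in defined else 'MISSING'}")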

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c9gsb50b4j2g15hcwvhjmrd82vp4a
243 downloads
0 stars
1 version
Updated 4h ago
v0.1.0
MIT-0

Research Authorship and Contributor Credit Generator

When to Use This Skill

  • determining author order on research manuscripts
  • assigning CRediT contributor roles for transparency
  • documenting individual contributions to collaborative projects
  • resolving authorship disputes in multi-institutional research
  • preparing contributor statements for journal submissions
  • evaluating contribution equity in research teams

Quick Start

from scripts.main import AuthorshipCreditGen
from scripts.authorship_credit import AuthorshipCreditGenerator

# Initialize the tools (note: the security scan reports that
# scripts.authorship_credit is not present in the code bundle)
tool = AuthorshipCreditGen()
generator = AuthorshipCreditGenerator(guidelines="ICMJEv4")

# Document contributions
contributions = {
    "Dr. Sarah Chen": [
        "Conceptualization",
        "Methodology", 
        "Writing - Original Draft",
        "Supervision"
    ],
    "Dr. Michael Roberts": [
        "Data Curation",
        "Formal Analysis",
        "Writing - Review & Editing"
    ],
    "Dr. Lisa Zhang": [
        "Investigation",
        "Resources",
        "Validation"
    ]
}

# Generate fair authorship order
authorship = generator.determine_order(
    contributions=contributions,
    criteria=["intellectual_input", "execution", "writing", "supervision"],
    weights={"intellectual_input": 0.4, "execution": 0.3, "writing": 0.2, "supervision": 0.1}
)

print(f"First author: {authorship.first_author}")
print(f"Corresponding: {authorship.corresponding_author}")
print(f"Author order: {authorship.ordered_list}")

# Generate CRediT statement
credit_statement = generator.generate_credit_statement(
    contributions=contributions,
    format="journal_submission"
)

# Check for disputes
dispute_check = generator.check_equity_issues(authorship)
if dispute_check.has_issues:
    print(f"Recommendations: {dispute_check.recommendations}")

Core Capabilities

1. Generate Fair Authorship Orders

Analyze contributions using weighted criteria to determine equitable author ranking.

# Define weighted contribution criteria
weights = {
    "conceptualization": 0.25,
    "methodology_design": 0.20,
    "data_collection": 0.15,
    "analysis": 0.15,
    "manuscript_writing": 0.15,
    "supervision": 0.10
}

# Calculate contribution scores
scores = tool.calculate_contribution_scores(
    contributions=team_contributions,
    weights=weights
)

# Generate ordered author list
authorship_order = tool.generate_author_order(scores)
print(f"Recommended order: {authorship_order}")

2. Assign CRediT Roles

Map contributions to official CRediT (Contributor Roles Taxonomy) categories.

# Map contributions to CRediT roles
credit_roles = tool.assign_credit_roles(
    contributions=contributions,
    version="CRediT_2021"
)

# Generate CRediT statement for journal
statement = tool.generate_credit_statement(
    roles=credit_roles,
    format="JATS_XML"
)

# Validate role assignments
validation = tool.validate_credit_roles(credit_roles)
if validation.is_valid:
    print("CRediT roles properly assigned")

3. Detect Contribution Inequities

Identify potential authorship disputes before submission.

# Analyze contribution distribution
equity_analysis = tool.analyze_equity(
    contributions=contributions,
    thresholds={"min_substantial": 0.15}
)

# Flag potential issues
if equity_analysis.has_inequities:
    for issue in equity_analysis.issues:
        print(f"Warning: {issue.description}")
        print(f"Recommendation: {issue.recommendation}")

# Generate equity report
report = tool.generate_equity_report(equity_analysis)
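
Equity analysis is likewise among the features the scan could not find. A minimal sketch of the threshold check that min_substantial implies, assuming it means each listed author should hold at least that share of total credit:

def find_inequities(scores, min_substantial=0.15):
    # Flag authors whose share of the total credit falls below the
    # threshold; guard against an empty or all-zero score table.
    total = sum(scores.values()) or 1.0
    return [name for name, score in scores.items()
            if score / total < min_substantial]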

4. Generate Journal-Ready Statements

Create formatted contributor statements for various journal requirements.

# Generate a Nature-style statement
nature_statement = tool.generate_contributor_statement(
    style="Nature",
    include_competing_interests=True
)

# Generate a Science-style statement
science_statement = tool.generate_contributor_statement(
    style="Science",
    include_author_contributions=True
)

# Export in multiple formats
tool.export_statement(
    statement=nature_statement,
    formats=["docx", "pdf", "txt"]
)

Command Line Usage

python scripts/main.py --contributions contributions.json --guidelines ICMJE --output authorship_order.json
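
The schema main.py expects for contributions.json is undocumented, so verify it against the script's argparse code before relying on it. A plausible shape, mirroring the contributions dict from the Quick Start, for generating synthetic test data:

# make_sample.py - hedged sketch: write a synthetic contributions file
# shaped like the Quick Start dict; the real schema is unverified.
import json

contributions = {
    "Dr. Sarah Chen": ["Conceptualization", "Methodology"],
    "Dr. Michael Roberts": ["Data Curation", "Formal Analysis"],
}

with open("contributions.json", "w") as f:
    json.dump(contributions, f, indent=2)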

Best Practices

  • Discuss authorship expectations at project inception
  • Document contributions continuously throughout project
  • Review and agree on author order before submission
  • Include non-author contributors in acknowledgments

Quality Checklist

Before using this skill, ensure you have:

  • Clear understanding of your objectives
  • Necessary input data prepared and validated
  • Output requirements defined
  • Reviewed relevant documentation

After using this skill, verify:

  • Results meet your quality standards
  • Outputs are properly formatted
  • Any errors or warnings have been addressed
  • Results are documented appropriately

References

  • references/guide.md - Comprehensive user guide
  • references/examples/ - Working code examples
  • references/api-docs/ - Complete API documentation

Skill ID: 766 | Version: 1.0 | License: MIT
