sf-scrapper

v1.0.0

Scrape employee data from a logged-in SAP SuccessFactors browser session using browser automation. Use when: user provides an employee ID and wants employee...

Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description match the instructions: the skill performs UI scraping of SuccessFactors via a logged-in Chrome tab and the Browser Relay. No unrelated credentials, binaries, or installs are requested.
Instruction Scope
Instructions are focused on taking browser snapshots, navigating deep links, using search fallbacks, and extracting fields from the accessibility tree. One noteworthy design choice: the SKILL.md explicitly forbids using OData/API and directs the agent to rely solely on the UI. That is consistent with the stated purpose but increases reliance on an authenticated browser session and may bypass API audit trails — verify this behavior is acceptable for your organization's policies.
Install Mechanism
No install spec or code is present — this is instruction-only. That minimizes disk writes or third-party package risk. The only external dependency is the user-run OpenClaw Browser Relay Chrome extension (documented in the prerequisites).
Credentials
The skill requests no environment variables, credentials, or config paths. It does suggest an optional Base URL entry in TOOLS.md/workspace, which is non-sensitive and reasonable for deep-link construction.
Persistence & Privilege
The skill is not always-enabled and does not request persistent system privileges. It instructs runtime interactions with the user's Chrome session via the Browser Relay, which is expected for a UI-scraping skill.
Assessment
This skill is coherent: it scrapes data only from a logged-in Chrome SuccessFactors session via the Browser Relay and asks for no secrets or installs. Before installing, ensure you are authorized to view/scrape the requested employee data and that using UI scraping (instead of the official API) complies with your company's security and audit policies. Verify the OpenClaw Browser Relay extension is trusted and enabled only when you intend to share your session, and avoid running this skill against accounts or data you don't have explicit permission to access. Because the skill's source is unknown, consider only using it in a controlled environment or after reviewing the agent action logs produced when you run it.


latest: vk971hwmcvc2k7s62hq5ht4mr5s825xj7
278 downloads · 0 stars · 1 version · Updated 1mo ago
v1.0.0 · MIT-0

SF Scraper — SuccessFactors Browser Scraping Skill

Scrape employee data from a live, logged-in SAP SuccessFactors session via browser automation.

Prerequisites

  • User must have SAP SuccessFactors open and logged in on a Chrome tab.
  • OpenClaw Browser Relay Chrome extension must be active (badge ON) on that tab.
  • Use profile="chrome" for all browser calls.

Workflow

1. Discover the SuccessFactors Base URL

Take a snapshot of the attached Chrome tab to identify the current SF domain:

browser(action="snapshot", profile="chrome", compact=true)

Extract the base URL (e.g., https://<company>.successfactors.com or https://pmsalesdemo8.successfactors.com).
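As a sketch of this step, reducing the tab's current URL to its base can be done with standard URL parsing. The skill itself ships no code, so the `extract_base_url` helper name is hypothetical:

```python
from urllib.parse import urlparse

def extract_base_url(page_url: str) -> str:
    """Reduce a full SuccessFactors page URL to its scheme://host base."""
    parts = urlparse(page_url)
    return f"{parts.scheme}://{parts.netloc}"
```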

2. Navigate to Employee Search / People Profile

SuccessFactors supports deep-link URLs. Use the following pattern to go directly to an employee's profile:

Primary pattern (People Profile / Live Profile):

{base_url}/sf/liveprofile?selected_user={employee_id}

Fallback patterns if primary doesn't work:

{base_url}/xi/ui/peopleprofile/pages/index.xhtml?selected_user={employee_id}
{base_url}/sf/home?selected_user={employee_id}

Navigate using:

browser(action="navigate", profile="chrome", targetUrl="{constructed_url}")

Wait briefly for the page to load, then snapshot.
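The primary-then-fallback URL construction above can be sketched in Python (an illustrative helper, not part of the skill; an agent would try each URL in order, snapshotting after each navigation, until one lands on a profile page):

```python
def candidate_profile_urls(base_url: str, employee_id: str) -> list[str]:
    """Build the documented deep-link patterns, primary pattern first."""
    patterns = [
        "{b}/sf/liveprofile?selected_user={e}",
        "{b}/xi/ui/peopleprofile/pages/index.xhtml?selected_user={e}",
        "{b}/sf/home?selected_user={e}",
    ]
    # Strip a trailing slash so we never produce "//sf/...".
    return [p.format(b=base_url.rstrip("/"), e=employee_id) for p in patterns]
```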

3. If Deep Link Fails — Use Search

If the deep link lands on a generic page or errors, fall back to the global search:

  1. Snapshot the page to find the search bar (usually top-right, role: searchbox or textbox with name containing "Search").
  2. Click the search box, type the employee ID, press Enter.
  3. Snapshot results, click the matching employee profile link.
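Deciding when to fall back to search can be a simple text heuristic over the snapshot. The error markers below are assumptions, not strings SuccessFactors is guaranteed to emit:

```python
def deep_link_failed(snapshot_text: str, employee_id: str) -> bool:
    """Heuristic: the deep link likely failed if the snapshot shows an
    error marker or never mentions the requested employee ID."""
    markers = ("error", "page not found", "access denied")
    lowered = snapshot_text.lower()
    return any(m in lowered for m in markers) or employee_id not in snapshot_text
```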

4. Scrape Employee Data

Once on the profile page, take a snapshot:

browser(action="snapshot", profile="chrome", compact=true)

Extract the following fields from the rendered accessibility tree:

Field         Where to Look
Name          Page heading / heading role, or prominent text near avatar
Employee ID   Usually in a details section or the URL itself
Email         Look for link with mailto: or text containing @
Job Title     Near name, often under heading
Department    In profile details / info card
Manager       In profile details, often a clickable link
Location      In profile details section
Phone         In contact info section

Not all fields will always be visible — return what's available.
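Extraction from the flattened accessibility-tree text could look like the following sketch. The label patterns are assumptions about how SuccessFactors renders profile cards, so treat them as a starting point rather than the skill's actual logic:

```python
import re

def extract_fields(snapshot_text: str) -> dict[str, str]:
    """Pull whatever labeled fields are present; missing ones are omitted."""
    fields: dict[str, str] = {}
    # Email: any address appearing in the tree (mailto: link text included).
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", snapshot_text)
    if email:
        fields["Email"] = email.group(0)
    # Labeled rows such as "Department: Engineering".
    for label in ("Department", "Manager", "Location", "Phone"):
        m = re.search(rf"{label}\s*:?\s*(.+)", snapshot_text)
        if m:
            fields[label] = m.group(1).strip()
    return fields
```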

5. If Profile Page Uses Tabs/Sections

SuccessFactors profiles often have tabs (Personal Info, Employment Info, Job Info, etc.). If needed data isn't visible:

  1. Snapshot to find tab elements.
  2. Click the relevant tab (e.g., "Personal Information", "Job Information", "Employment Details").
  3. Snapshot again and extract.

6. Return Results

Format results clearly:

Employee: John Doe
ID: 12345
Email: john.doe@company.com
Title: Senior Developer
Department: Engineering
Manager: Jane Smith
Location: Bangalore, India

Only include fields that were actually found on the page. Do not guess or fabricate data.
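The "only include fields actually found" rule maps naturally to a small formatter (a hypothetical helper matching the example output above):

```python
def format_result(fields: dict[str, str]) -> str:
    """Render only the fields actually found, in a stable order."""
    order = ["Employee", "ID", "Email", "Title", "Department",
             "Manager", "Location", "Phone"]
    return "\n".join(f"{key}: {fields[key]}" for key in order if key in fields)
```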

Configuration

The user should create a config in TOOLS.md or the workspace with:

### SuccessFactors
- Base URL: https://yourcompany.successfactors.com

If no base URL is configured, discover it from the currently open tab.

Error Handling

  • Not logged in: If snapshot shows a login page, tell the user to log in first.
  • Access denied / no profile found: Report clearly — the user may lack permissions for that employee.
  • Page timeout: Retry snapshot once after 3 seconds. If still loading, inform the user.
  • Search returns multiple results: List them and ask the user to clarify.
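The retry-once rule for timeouts can be expressed as a wrapper around whatever snapshot call the agent uses. Both callables here are stand-ins, since the real browser tool lives outside this document:

```python
import time

def snapshot_with_retry(take_snapshot, is_loaded, delay_s: float = 3.0):
    """Take a snapshot; if the page still looks like it is loading,
    wait once and retry, per the error-handling rule above."""
    snap = take_snapshot()
    if not is_loaded(snap):
        time.sleep(delay_s)
        snap = take_snapshot()
    return snap
```

If the second snapshot is still a loading state, the agent should report that to the user rather than looping further.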

Batch Mode

If the user provides multiple employee IDs, iterate through each one sequentially using the same workflow. Collect the results and present them as a table.
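Collecting batch results into a table could be as simple as the sketch below (illustrative only; field names follow the earlier examples):

```python
def batch_table(results: list[dict[str, str]], columns: list[str]) -> str:
    """Render per-employee result dicts as a pipe-delimited table,
    filling '-' for fields that were not found on the page."""
    lines = [" | ".join(columns)]
    for row in results:
        lines.append(" | ".join(row.get(col, "-") for col in columns))
    return "\n".join(lines)
```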

Important Notes

  • Never use OData API, REST endpoints, or any programmatic API. This skill is purely browser-based scraping.
  • Always use profile="chrome" — never profile="openclaw" (we need the user's authenticated session).
  • Be patient with page loads — SF can be slow. Use snapshots to verify page state before extracting.
  • Respect the user's session — don't navigate away from SF without warning.
