ResilientLink is a web scraping API by Silent God Enterprise — extract clean, structured data from any URL.

For Developers

ResilientLink
API Docs.

Turn any URL into clean, structured data with a single API call. ResilientLink handles JavaScript rendering, caching, rate limits, and metadata extraction — so you don't have to.

Up and running in one request.

Authenticate with your API key and pass any URL. ResilientLink returns title, description, OG image, full metadata, and optional raw HTML — all in one clean JSON response.

Base URL & Authentication

All requests go to: https://api.resilientlink.silentgode.com

Pass your API key as a header on every request: X-API-Key: rl_your_key_here

Core Endpoint — POST /api/scrape
curl -X POST https://api.resilientlink.silentgode.com/api/scrape \
  -H "X-API-Key: rl_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com"}'

Returns: title, description, image, domain, cached — and much more.

JavaScript — fetch
const res = await fetch(
  'https://api.resilientlink.silentgode.com/api/scrape',
  {
    method: 'POST',
    headers: {
      'X-API-Key': 'YOUR_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ url: 'https://example.com' })
  }
);
Python — requests
import requests

res = requests.post(
    'https://api.resilientlink.silentgode.com/api/scrape',
    headers={'X-API-Key': 'YOUR_KEY'},
    json={'url': 'https://example.com'}
)
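Both examples above send the request but leave the response unread. Here is a minimal, hypothetical helper for unpacking the parsed body, assuming the envelope shape shown in the sample response later in these docs (a `success` flag plus a `data` object):

```python
def title_or_fail(payload):
    """Return the scraped title, raising if the API reported failure.

    `payload` is the parsed JSON body, e.g. res.json() in the
    requests example above.
    """
    if not payload.get("success"):
        raise RuntimeError("scrape failed")
    return payload["data"]["title"]
```

With the requests example, `print(title_or_fail(res.json()))` would print the scraped page title.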
Rate Limits by Plan
Free: 10 req / min
Starter: 50 req / min
Pro: 100 req / min
Enterprise: 200 req / min
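When a request exceeds your plan's limit, the API answers HTTP 429 (see the status codes below). A common client-side pattern is exponential backoff. The sketch below is a hypothetical helper built on the raw API with only the standard library; it is not part of any official SDK:

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api.resilientlink.silentgode.com/api/scrape"

def backoff_delays(attempts, base=1.0, cap=30.0):
    """Exponential backoff schedule: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return [min(base * 2 ** i, cap) for i in range(attempts)]

def scrape_with_retry(url, api_key, attempts=5):
    """POST /api/scrape, sleeping and retrying only on HTTP 429."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"url": url}).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )
    for delay in backoff_delays(attempts):
        try:
            with urllib.request.urlopen(req) as res:
                return json.load(res)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # 401, 451, etc. will not succeed on retry
            time.sleep(delay)  # rate limited: wait before the next attempt
    raise RuntimeError("rate limit: retries exhausted")
```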
Python SDK
pip install resilientlink

from resilientlink import ResilientLink
client = ResilientLink(api_key="YOUR_API_KEY")
result = client.scrape("https://example.com")
print(result["data"]["title"])
Node.js SDK
npm install resilientlink

const ResilientLink = require('resilientlink');
const client = new ResilientLink({ apiKey: 'YOUR_API_KEY' });
const result = await client.scrape('https://example.com');
console.log(result.data.title);
Step-by-step: First Request in Under 5 Minutes
  • Step 1 — Create your account. Sign up at resilientlink — no credit card required for the free plan.
  • Step 2 — Get your API key. From your dashboard, go to the API Key tab. Your key is prefixed with rl_.
  • Step 3 — Make your first request. Use the cURL example above or install one of the SDKs.
  • Step 4 — Handle the response. You'll receive a clean JSON object with all metadata fields — ready to use immediately in your application.
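The steps above can be sketched end to end with only the standard library. This is a hypothetical example; the `RESILIENTLINK_API_KEY` environment variable name is our convention, not something the docs mandate:

```python
import json
import urllib.request

def build_scrape_request(url, api_key):
    """Build the first request from Steps 2-3; validates the key prefix."""
    if not api_key.startswith("rl_"):
        raise ValueError("API keys are prefixed with rl_ (see Step 2)")
    return urllib.request.Request(
        "https://api.resilientlink.silentgode.com/api/scrape",
        data=json.dumps({"url": url}).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )

# Step 4 (performs a live request):
# import os
# req = build_scrape_request("https://example.com", os.environ["RESILIENTLINK_API_KEY"])
# with urllib.request.urlopen(req) as res:
#     print(json.load(res)["data"]["title"])
```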

What developers get from ResilientLink.

Structured Metadata
Every response includes title, description, image, Open Graph tags, Twitter card data, JSON-LD, canonical URL, SEO fields, and a full image list — parsed and ready to use.
JavaScript Rendering
Use waitForSelector or waitMs to handle SPAs and dynamic pages. ResilientLink's headless engine waits for your target element before scraping.
Screenshots & PDFs
Pass screenshot: true or pdf: true to receive a base64-encoded PNG or PDF of the rendered page. Available on paid plans.
Smart Caching
Responses are cached automatically to preserve your request quota. Pass bypassCache: true (Python: bypass_cache) to force a fresh scrape on demand.
Custom Headers
Send any custom request headers — Accept-Language, User-Agent, cookies, and more — to scrape geo-targeted or authenticated pages.
Timeout Control
Set a per-request timeout in milliseconds (Node) or seconds (Python). Default is 30s. Useful for slow or JS-heavy pages where you need predictable SLAs.
Clean JSON Response
Every scrape returns a consistently structured JSON object — predictable schema, no surprises. Fields never change between requests. Load directly into your database, dataframe, or frontend without custom parsing logic.
And there's still more to get from ResilientLink.
Webhooks, Metadata Streams, Proxy Network, Cache Layer, JS Rendering, Batch URL Processing, API Explorer, and more — all available depending on your plan. Explore the full feature set from your dashboard or the API panel at resilientlink.

Everything in a single response object.

A successful scrape returns a structured JSON object. Here's the full shape:

{
  "success": true,
  "cached": false,
  "tier": "...",
  "responseTime": 412,
  "data": {
    "url": "https://example.com",
    "title": "Example Domain",
    "description": "...",
    "image": "https://example.com/og.png",
    "domain": "example.com",
    "og": { "title": "...", "description": "...", "image": "..." },
    "twitter": { "card": "summary_large_image", "title": "..." },
    "content": { "wordCount": 423, "readTimeMinutes": 2, "headings": [] },
    "seo": { "keywords": "...", "robots": "index,follow", "canonical": "..." },
    "jsonLd": [],
    "images": [],
    "scrapedAt": "..."
  }
}
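Because the schema is fixed, the envelope flattens cleanly into a row for storage. A hypothetical helper, with field names taken from the sample above (the output column names are our choice):

```python
def response_row(payload):
    """Flatten a scrape response into a flat dict, e.g. for a database row."""
    data = payload["data"]
    return {
        "url": data["url"],
        "title": data.get("title"),
        "domain": data.get("domain"),
        "image": data.get("image"),
        "word_count": data.get("content", {}).get("wordCount"),
        "canonical": data.get("seo", {}).get("canonical"),
        "cached": payload.get("cached", False),
        "response_ms": payload.get("responseTime"),
    }
```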

All available scrape options.

Option (Node)    Option (Python)    Type     Description
returnHtml       return_html        boolean  Include raw HTML in the response
screenshot       screenshot         boolean  Return base64 PNG of the rendered page (paid)
pdf              pdf                boolean  Return base64 PDF of the rendered page (paid)
bypassCache      bypass_cache       boolean  Force a fresh scrape, skip cache
waitForSelector  wait_for_selector  string   CSS selector to wait for before scraping
waitMs           wait_ms            number   Wait N milliseconds before scraping
customHeaders    custom_headers     object   Custom HTTP headers to send with the request
timeout          timeout            number   Timeout in ms (Node) or seconds (Python). Default 30s
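For the raw API, a request body combines any of these options with the URL. A hypothetical builder that mirrors the Node-side camelCase names from the table (the snake_case keyword arguments are our choice):

```python
def scrape_body(url, *, return_html=False, screenshot=False, pdf=False,
                bypass_cache=False, wait_for_selector=None, wait_ms=None,
                custom_headers=None, timeout_ms=None):
    """Assemble a /api/scrape JSON body using the camelCase option names."""
    body = {"url": url}
    if return_html:
        body["returnHtml"] = True
    if screenshot:
        body["screenshot"] = True
    if pdf:
        body["pdf"] = True
    if bypass_cache:
        body["bypassCache"] = True
    if wait_for_selector is not None:
        body["waitForSelector"] = wait_for_selector
    if wait_ms is not None:
        body["waitMs"] = wait_ms
    if custom_headers:
        body["customHeaders"] = custom_headers
    if timeout_ms is not None:
        body["timeout"] = timeout_ms
    return body
```

For example, `scrape_body("https://example.com/app", wait_for_selector="#content", bypass_cache=True)` yields a body with only `url`, `waitForSelector`, and `bypassCache` set, so defaults never clutter the request.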

Predictable errors. No surprises.

All SDK errors throw a typed ResilientLinkError with a human-readable message and an HTTP status code. Handle them like this:

Python
from resilientlink import ResilientLink, ResilientLinkError

try:
    result = client.scrape("https://example.com")
except ResilientLinkError as e:
    print(e)             # human-readable
    print(e.status_code) # 401 | 429 | 451
Node.js
const { ResilientLinkError } = require('resilientlink');

try {
  const result = await client.scrape('https://example.com');
} catch (err) {
  if (err instanceof ResilientLinkError) {
    console.error(err.message);    // human-readable
    console.error(err.statusCode); // 401 | 429 | 451
  }
}
Status Codes
401 Invalid or missing API key
429 Rate limit exceeded — slow down or upgrade your plan
451 URL is blocked or legally restricted from scraping
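The table implies a simple retry policy: only 429 is transient, while 401 (bad key) and 451 (blocked URL) will fail again on retry. A hypothetical classifier:

```python
def should_retry(status_code):
    """Return True only for transient errors worth retrying (429).

    401 and 451 are permanent for that request and should be
    surfaced to the caller or skipped instead of retried.
    """
    return status_code == 429
```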

Pick the plan that fits your volume.

Free
Get started at no cost. Includes a generous monthly request allowance — ideal for prototyping, personal projects, and evaluating the API before committing to a paid plan.
Starter
For small production apps and indie developers. Higher request limits, faster response times, and access to the screenshot and PDF export features.
Pro
For teams with higher volume needs. Increased concurrency, priority queue access, and advanced options like custom headers and selector-based waiting.
Enterprise — 1,000,000 req/mo
Full-scale scraping infrastructure for high-volume pipelines. Dedicated support, SLA guarantees, and custom configuration for your use case.

Your API key is one sign-up away.