For Developers
ResilientLink API Docs
Turn any URL into clean, structured data with a single API call. ResilientLink handles JavaScript rendering, caching, rate limits, and metadata extraction — so you don't have to.
Quick Start
Up and running in one request.
Authenticate with your API key and pass any URL. ResilientLink returns title, description, OG image, full metadata, and optional raw HTML — all in one clean JSON response.
All requests go to: https://api.resilientlink.silentgode.com
Pass your API key as a header on every request: X-API-Key: rl_your_key_here
curl -X POST https://api.resilientlink.silentgode.com/api/scrape \
-H "X-API-Key: rl_your_key_here" \
-H "Content-Type: application/json" \
-d '{"url":"https://example.com"}'
Returns: title, description, image, domain, cached — and much more.
const res = await fetch(
'https://api.resilientlink.silentgode.com/api/scrape',
{
method: 'POST',
headers: {
'X-API-Key': 'YOUR_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({ url: 'https://example.com' })
}
);
import requests
res = requests.post(
'https://api.resilientlink.silentgode.com/api/scrape',
headers={'X-API-Key': 'YOUR_KEY'},
json={'url': 'https://example.com'}
)
| Plan | Rate limit |
|---|---|
| Free | 10 req / min |
| Starter | 50 req / min |
| Pro | 100 req / min |
| Enterprise | 200 req / min |
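The limits above are enforced server-side (a 429 is returned when you exceed them). If you want to stay under your tier's limit proactively, a simple client-side sliding-window throttle is enough — this is a sketch, not part of the SDK:

```python
import time
from collections import deque

class Throttle:
    """Client-side sliding-window throttle for a requests-per-minute limit."""

    def __init__(self, max_per_minute):
        self.max_per_minute = max_per_minute
        self.timestamps = deque()  # send times within the last 60 seconds

    def wait(self):
        now = time.monotonic()
        # Drop send times that have aged out of the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_minute:
            # Sleep until the oldest request leaves the window.
            time.sleep(60 - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

# Free tier: 10 requests per minute.
throttle = Throttle(10)
```

Call `throttle.wait()` before each scrape request; under the limit it returns immediately, over the limit it blocks just long enough to stay compliant.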
pip install resilientlink
from resilientlink import ResilientLink
client = ResilientLink(api_key="YOUR_API_KEY")
result = client.scrape("https://example.com")
print(result["data"]["title"])
npm install resilientlink
const ResilientLink = require('resilientlink');
const client = new ResilientLink({ apiKey: 'YOUR_API_KEY' });
const result = await client.scrape('https://example.com');
console.log(result.data.title);
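Per-request options use the Python snake_case names listed in the Options Reference below. Passing them as keyword arguments to scrape() is an assumption based on the SDK examples, not confirmed API — check the SDK for the exact call shape:

```python
# Options for scraping a dynamic page (Python snake_case names from
# the Options Reference). The selector "#app-loaded" is just an
# illustrative placeholder for your page's ready marker.
options = {
    "wait_for_selector": "#app-loaded",  # wait for the SPA to render
    "bypass_cache": True,                # force a fresh scrape
    "custom_headers": {"Accept-Language": "de-DE"},
}
# result = client.scrape("https://example.com", **options)
```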
- Step 1 — Create your account. Sign up at resilientlink ↗ — no credit card required for the free plan.
- Step 2 — Get your API key. From your dashboard, go to the API Key tab. Your key is prefixed with rl_.
- Step 3 — Make your first request. Use the cURL example above or install one of the SDKs.
- Step 4 — Handle the response. You'll receive a clean JSON object with all metadata fields — ready to use immediately in your application.
API Capabilities
What developers get from ResilientLink.
- Metadata extraction: title, description, image, Open Graph tags, Twitter card data, JSON-LD, canonical URL, SEO fields, and a full image list — parsed and ready to use.
- JavaScript rendering: pass waitForSelector or waitMs to handle SPAs and dynamic pages. ResilientLink's headless engine waits for your target element before scraping.
- Screenshots and PDFs: pass screenshot: true or pdf: true to receive a base64-encoded PNG or PDF of the rendered page. Available on paid plans.
- Cache control: pass bypassCache: true (Python: bypass_cache) to force a fresh scrape on demand.
- Custom headers: send Accept-Language, User-Agent, cookies, and more to scrape geo-targeted or authenticated pages.
Response Reference
Everything in a single response object.
A successful scrape returns a structured JSON object. Here's the full shape:
{
"success": true,
"cached": false,
"tier": "...",
"responseTime": 412,
"data": {
"url": "https://example.com",
"title": "Example Domain",
"description": "...",
"image": "https://example.com/og.png",
"domain": "example.com",
"og": { "title": "...", "description": "...", "image": "..." },
"twitter": { "card": "summary_large_image", "title": "..." },
"content": { "wordCount": 423, "readTimeMinutes": 2, "headings": [] },
"seo": { "keywords": "...", "robots": "index,follow", "canonical": "..." },
"jsonLd": [],
"images": [],
"scrapedAt": "..."
}
}
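Assuming the shape above, the response unpacks directly from the JSON body. The sample payload here is abridged to a few fields; nested sections may be absent on some pages, so defensive `.get()` access is a sensible habit:

```python
# Abridged sample payload matching the response shape above.
response = {
    "success": True,
    "cached": False,
    "data": {
        "url": "https://example.com",
        "title": "Example Domain",
        "domain": "example.com",
        "content": {"wordCount": 423, "readTimeMinutes": 2},
    },
}

if response["success"]:
    data = response["data"]
    print(f'{data["title"]} ({data["domain"]})')  # → Example Domain (example.com)
    # Optional sections may be missing; use .get() with defaults.
    words = data.get("content", {}).get("wordCount", 0)
    print(f"~{words} words")  # → ~423 words
```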
Options Reference
All available scrape options.
| Option (Node) | Option (Python) | Type | Description |
|---|---|---|---|
| returnHtml | return_html | boolean | Include raw HTML in the response |
| screenshot | screenshot | boolean | Return base64 PNG of the rendered page (paid) |
| pdf | pdf | boolean | Return base64 PDF of the rendered page (paid) |
| bypassCache | bypass_cache | boolean | Force a fresh scrape, skip cache |
| waitForSelector | wait_for_selector | string | CSS selector to wait for before scraping |
| waitMs | wait_ms | number | Wait N milliseconds before scraping |
| customHeaders | custom_headers | object | Custom HTTP headers to send with the request |
| timeout | timeout | number | Timeout in ms (Node) or seconds (Python). Default 30s |
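As the table shows, the Python option names are just snake_case versions of the Node camelCase names. A small conversion helper (illustrative only, not part of either SDK) makes the mapping mechanical if you're porting code between the two:

```python
import re

def to_snake_case(name):
    """Convert a camelCase option name (Node) to snake_case (Python)."""
    # Insert an underscore before each uppercase letter (except at the
    # start), then lowercase the result.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

# The documented options round-trip as expected:
for node_name in ["returnHtml", "bypassCache", "waitForSelector", "waitMs"]:
    print(node_name, "->", to_snake_case(node_name))
```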
Error Handling
Predictable errors. No surprises.
Both SDKs raise a typed ResilientLinkError with a human-readable message and the HTTP status code. Handle it like this:
from resilientlink import ResilientLink, ResilientLinkError
try:
result = client.scrape("https://example.com")
except ResilientLinkError as e:
print(e) # human-readable
print(e.status_code) # 401 | 429 | 451
const { ResilientLinkError } = require('resilientlink');
try {
const result = await client.scrape('https://example.com');
} catch (err) {
if (err instanceof ResilientLinkError) {
console.error(err.message); // human-readable
console.error(err.statusCode); // 401 | 429 | 451
}
}
| Status | Meaning |
|---|---|
| 401 | Invalid or missing API key |
| 429 | Rate limit exceeded — slow down or upgrade your plan |
| 451 | URL is blocked or legally restricted from scraping |
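A 429 is usually transient: back off and retry. The retry policy below is an assumption, not SDK behavior — it is a generic wrapper you can put around any scrape call, using the status_code attribute from the Python error-handling example above to decide what is retryable:

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, is_retryable=None):
    """Retry `call` with exponential backoff on retryable errors.

    `is_retryable(exc)` should return True for errors worth retrying,
    e.g. a ResilientLinkError whose status_code is 429. Errors that are
    not retryable (401, 451) are re-raised immediately.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            last_attempt = attempt == retries - 1
            if is_retryable is None or not is_retryable(exc) or last_attempt:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage with the Python SDK (assumes `client` from the examples above):
# result = with_backoff(
#     lambda: client.scrape("https://example.com"),
#     is_retryable=lambda e: getattr(e, "status_code", None) == 429,
# )
```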
Plans
Pick the plan that fits your volume.
Get Started
Your API key is one sign-up away.
- Sign up at resilientlink ↗ — no credit card required for the free plan
- Copy your API key from the Dashboard
- Install the SDK: pip install resilientlink or npm install resilientlink
- Make your first scrape request and inspect the structured response
- Upgrade your plan when you're ready to scale