Hintrix – Web scraping API that returns content and AI search audit
Hintrix is an API that lets AI agents read any website, even messy or JavaScript-heavy ones, and returns the important content as clean, structured text. Alongside the content, it reports whether the page is likely to be found and cited by AI search engines, so an agent can both consume a page and judge its AI-search readiness in a single call.
Article URL: https://hintrix.com/
Comments URL: https://news.ycombinator.com/item?id=47624857
Points: 1 · Comments: 0
Give your AI agent eyes on the web
hintrix lets AI agents read any website, extract structured data, crawl entire domains, and audit pages for AI search visibility. One API call returns clean Markdown and GEO diagnostics with actionable fixes.
request & response

```shell
curl -X POST https://hintrix.com/v1/scrape \
  -H "X-API-Key: hx_live_sk_..." \
  -d '{"url": "https://example.com", "mode": ["content", "audit"]}'
```

```json
{
  "agent": "reveal",
  "content": {
    "markdown": "# About Us\n\nWe build tools...",
    "word_count": 890
  },
  "audit": {
    "geo_score": 72,
    "tech_score": 85,
    "issues": [...]
  },
  "credits_used": 2
}
```

```javascript
// npm install hintrix
import { Hintrix } from 'hintrix';

const hx = new Hintrix('hx_live_sk_...');
const page = await hx.scrape('https://example.com', {
  mode: ['content', 'audit'],
});

console.log(page.content.markdown);
console.log(page.audit.geo_score); // 72

// Crawl an entire site and collect all pages
const { pages } = await hx.crawlAndCollect('https://example.com', {
  max_pages: 50,
  onProgress: (s) => console.log(`${s.pages_crawled} pages...`),
});
```
Content
LLM-ready output from any page
Clean Markdown, HTML, or plain text. Metadata, links, structured data, and Schema.org included.
- Static HTML and JS-rendered pages
- JSON APIs and data endpoints
- SPAs, React, Next.js, Vue
- Automatic JS detection
GEO Audit
Will AI search engines cite this page?
GEO readiness score with 80+ evidence-backed checks. Issues ranked by impact with copy-paste fixes.
- AI bot accessibility analysis
- Citation readiness scoring
- E-E-A-T and entity signals
- Structured data validation
01
Read any website as context
Your agent scrapes a URL and gets clean Markdown that fits directly into a prompt. No parsing, no HTML cleanup.
02
Research across entire domains
Crawl a full website and collect content from every page. Build knowledge bases, feed RAG pipelines, or summarize documentation.
03
Extract structured data
Pull prices, products, contacts, or any data as JSON from any page — including SPAs, JSON endpoints, and JS-rendered content.
04
Audit AI search visibility
Check if a page will be cited by Perplexity, ChatGPT Search, or Google AI Overviews. Get a score, issues, and copy-paste fixes.
05
Handle JavaScript-heavy pages
SPAs, React apps, and client-rendered pages that return empty HTML to normal crawlers. Rendered automatically when needed.
API endpoints
Five endpoints. Content extraction, auditing, structured data, batch processing, and multi-page crawls.
POST
/v1/scrape · agent: glance | reveal
Single URL to Markdown, HTML, or text. Optionally include GEO audit. Handles JS rendering automatically when needed.
1 credit (content) · 2 credits (content + audit) · +1 screenshot
POST
/v1/audit · agent: reveal
GEO readiness audit with scores, issues, and fixes. Checks AI bot access, citation readiness, E-E-A-T signals, structured data.
2 credits
POST
/v1/extract · agent: pinch
Structured data from any page. Define a schema with CSS selectors or let auto-detection handle it. Works with SPAs and JSON endpoints.
2 credits
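As a rough sketch of what a selector-based extract request could look like, here is a small helper that builds the request body. The `schema` field name and its `{ selector }` shape are assumptions for illustration, not documented API:

```javascript
// Hypothetical /v1/extract request body: a schema mapping output fields
// to CSS selectors on the target page. Field names are assumptions.
function buildExtractRequest(url, selectors) {
  return {
    url,
    schema: Object.fromEntries(
      Object.entries(selectors).map(([field, css]) => [field, { selector: css }])
    ),
  };
}

const req = buildExtractRequest('https://example.com/products', {
  name: 'h1.product-title',
  price: 'span.price',
});

console.log(JSON.stringify(req, null, 2));
```

With auto-detection, the schema would simply be omitted and the endpoint would infer the fields itself.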
POST
/v1/batch · agent: sweep
Submit multiple URLs in a single request. Async job processes them in parallel with a single status endpoint to poll.
same as per-URL cost · billed per item · +1 screenshot per URL
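Both async endpoints hand back a job that you poll on a status endpoint. A generic poll loop might look like the following; the `{ status, results }` job shape is an assumption for illustration, not the documented response format:

```javascript
// Minimal polling helper for async jobs (batch/crawl). Pass any function
// that fetches the current job state; the shape is illustrative only.
async function pollJob(fetchStatus, { intervalMs = 2000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus();
    if (job.status === 'completed') return job.results;
    if (job.status === 'failed') throw new Error('job failed');
    // Wait before the next status check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for job');
}
```

With the real API, `fetchStatus` would wrap a request to the job's status URL; the SDK's "polling helpers" presumably do something similar internally.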
POST
/v1/crawl · agent: sweep
Multi-page crawl with depth control. Async job with progress tracking. Content and audit for entire sites.
1 credit / page · +1 audit · JS rendering included
Capabilities
What you can do with hintrix.
▸ Markdown output: clean, structured content ready for any LLM pipeline
▸ GEO scoring: evidence-backed readiness score for AI search visibility
▸ JS rendering: full browser rendering for SPAs, detected automatically
▸ JSON & API content: read and parse JSON endpoints, RSS feeds, sitemaps
▸ Schema.org extraction: JSON-LD, microdata, and RDFa parsed and returned
▸ Fix suggestions: copy-paste fixes with severity, impact, and effort ratings
▸ Node.js SDK: npm install hintrix, a typed client with retries and polling helpers
▸ MCP server: native integration for Claude Code, Cursor, and Windsurf
▸ robots.txt analysis: which AI bots are allowed, blocked, or rate-limited
▸ SSRF protection: multi-layer defense against server-side request forgery
▸ PageSpeed scores: Core Web Vitals and performance metrics alongside audit results
▸ Screenshots: opt-in full-page screenshots for visual verification and diffing (+1 credit, not stored)
▸ Content diffing: compare page content across scrapes to see what changed; previous versions stored 7 days per URL, no extra cost
▸ Link health reports: broken link detection across pages and entire domains
▸ llms.txt generation: auto-generate and audit llms.txt files for AI-agent discoverability
Pricing
Pay per use. No subscriptions. Credits valid for 30 days — any new purchase extends by 30 days.
Free · $0 · 500 credits · on signup, no card
$5 pack · $5 · 2,500 credits · $0.002 / credit
$12 pack · $12 · 7,500 credits · $0.0016 / credit
$29 pack · $29 · 20,000 credits · $0.00145 / credit
Credit costs
Action · Credits
/v1/scrape · 1
/v1/scrape + audit · 2
/v1/audit · 2
/v1/extract · 2
/v1/batch · per-URL cost
/v1/crawl · 1 / page (+1 audit per page)
JS rendering · included
Cost examples
Scrape 1,000 blog posts for a RAG pipeline · 1,000 cr · $2.00
GEO audit 100 pages for a client · 200 cr · $0.40
Crawl a JS-rendered shop, 500 pages · 500 cr · $1.00
Extract product data from 200 listings · 400 cr · $0.80
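Assuming the credit table above and the $5-pack rate of $0.002 per credit, the cost examples can be reproduced with a few lines:

```javascript
// Per-action credit costs from the credit table; $0.002/credit ($5 pack).
const CREDITS = { scrape: 1, audit: 2, extract: 2, crawlPerPage: 1 };
const RATE = 0.002; // dollars per credit

function costUSD(credits) {
  return credits * RATE;
}

const ragScrapes = 1000 * CREDITS.scrape;      // 1,000 cr
const clientAudits = 100 * CREDITS.audit;      // 200 cr
const shopCrawl = 500 * CREDITS.crawlPerPage;  // 500 cr
const productExtracts = 200 * CREDITS.extract; // 400 cr

console.log(costUSD(ragScrapes));      // 2
console.log(costUSD(clientAudits));    // 0.4
console.log(costUSD(shopCrawl));       // 1
console.log(costUSD(productExtracts)); // 0.8
```

The larger packs lower the per-credit rate, so the same jobs cost less at the $12 ($0.0016) and $29 ($0.00145) tiers.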
What makes hintrix different
Most crawl APIs give you content or diagnostics. hintrix returns both in a single request.
Content + diagnostics, not either/or
Other tools return raw Markdown or SEO data. hintrix gives you clean, LLM-ready content and a GEO audit with scores and fixes from the same API call.
Pay per use, not per month
No subscriptions. Buy credit packs when you need them. Start with 500 free credits on signup, no credit card required — get 500 more by sharing on X. Credits are valid for 30 days; any new purchase extends all credits by 30 days.
Built for AI agents
MCP server for Claude Code, Cursor, and Windsurf. Markdown output optimized for LLM context windows. Auto-detection of JS rendering needs.
Actionable, not just data
Every issue comes with a severity rating, impact estimate, and a fix you can copy and paste. No interpretation needed.
Frequently asked questions
Common questions about hintrix.
What domains are blocked?
Social media platforms (Facebook, Instagram, Twitter/X, LinkedIn, TikTok, YouTube, Reddit, Pinterest, Snapchat, Threads) and dark web domains are blocked for legal and ethical reasons.
Do you respect robots.txt?
Yes, by default. You can override this per request with respect_robots_txt: false for URLs where you have explicit permission.
What User-Agent does hintrix use?
HintrixBot/1.0 (+https://hintrix.com/bot). Website owners can allow or block this in their robots.txt.
Do I need to verify my email?
Yes. A verification email is sent on signup. You must verify before making API calls.
What happens when JS rendering is needed?
hintrix uses full browser rendering by default for reliable content extraction. You can disable it with wait_for_js: false for faster plain HTTP scraping. No extra cost either way.
Do credits expire?
Credits are valid for 30 days. Any new purchase extends the expiry of all your credits. Active users effectively never lose credits.
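The stated rule amounts to a single expiry date for the whole balance that each purchase pushes 30 days out. A sketch (not hintrix's billing code):

```javascript
// Each purchase resets the expiry of ALL credits to 30 days from the
// purchase date. Sketch of the rule stated in the FAQ.
const DAY_MS = 24 * 60 * 60 * 1000;

function extendExpiry(purchaseDate) {
  return new Date(purchaseDate.getTime() + 30 * DAY_MS);
}

// Buy on Jan 1: credits expire Jan 31. Buy again on Jan 20: the whole
// balance (old credits included) now expires Feb 19.
console.log(extendExpiry(new Date('2025-01-01')).toISOString().slice(0, 10)); // 2025-01-31
console.log(extendExpiry(new Date('2025-01-20')).toISOString().slice(0, 10)); // 2025-02-19
```

So as long as you purchase at least once every 30 days, no credits ever lapse.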
Can I get a refund?
All credit purchases are final and non-refundable. However, failed API requests (4xx/5xx errors, connection failures) are automatically refunded — once per URL per 24 hours. The first failure is free; repeated failures on the same URL are charged normally.
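The refund deduplication described above can be sketched as a simple window check per URL; this is an illustration of the stated policy, not the actual billing logic:

```javascript
// A failed request on a URL is refunded only if no refund was already
// issued for that URL in the past 24 hours. Timestamps in milliseconds.
const WINDOW_MS = 24 * 60 * 60 * 1000;

function shouldRefund(lastRefundAt, failedAt) {
  // lastRefundAt: time of the previous refund for this URL, or null
  return lastRefundAt === null || failedAt - lastRefundAt >= WINDOW_MS;
}

const t0 = Date.parse('2025-01-01T00:00:00Z');
console.log(shouldRefund(null, t0));                     // true  (first failure)
console.log(shouldRefund(t0, t0 + 2 * 60 * 60 * 1000));  // false (2 h later)
console.log(shouldRefund(t0, t0 + 25 * 60 * 60 * 1000)); // true  (25 h later)
```

In other words, retrying a persistently failing URL in a tight loop is charged normally after the first refunded attempt.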