Detect AI slop before it ships.
VeracityAPI is a drop-in linter for AI outputs. It catches generic text slop, synthetic-looking images, and AI-generated voice risk before your app publishes, cites, trains on, or accepts content. One API call returns an action your code can branch on — allow, revise, human_review, or reject — plus the evidence behind it.
We don’t care who wrote it. We care whether it’s shippable. Workflow signals, not forensic proof. See methodology.
Think CI/CD checks, but for generated content.
Don’t build fragile probability thresholds. Just write your switch statement.
switch (result.recommended_action) {
  case "allow": publish(); break;
  case "revise": requestRevision(result.recommended_fixes); break;
  case "human_review": queueForReview(result.evidence); break;
  case "reject": block(); break;
}

{
"recommended_action": "human_review",
"risk_level": "high",
"primary_reason": "unsupported_generic_claims",
"evidence_count": 3,
"confidence": "high"
}

AI slop detection
Catch generic phrasing, low specificity, weak provenance, unsupported claims, and padded prose. Default route: revise or human_review.
AI forgery detection
Flag synthetic-looking image artifacts, suspicious composites, geometry/text/lighting issues, and weak provenance. Default route: human_review.
Synthetic voice detection
Flag synthetic-speech cues: cadence, breath, prosody, room tone, phoneme onset, and transcript mismatch. Default route: human_review.
Follow the path from signal → workflow → implementation.
New contextual links make the live site behave like a product map instead of a loose sitemap. Start with the signal taxonomy, inspect examples, then compare detector jobs and implementation paths.
What we detect
AI slop, weak provenance, synthetic-media cues, and routing boundaries.
Examples
Copy-paste queue, cron, LangChain, and moderation wrappers.
MCP
Local and hosted MCP tools for agent clients.
Comparisons
Benchmark-gated buyer guides for detector alternatives.
Blog
Launch notes on routing F1, linter positioning, and safe claims.
Error handling
Production retries, rate limits, billing, and validation behavior.
Don’t just detect slop. Fix it.
Set auto_revise:true on text requests. When the API returns revise, it can also return revised_text — a drop-in rewrite that removes generic filler while preserving supported claims.
{
"type": "text",
"content": "Our platform helps everyone do everything better...",
"auto_revise": true,
"context": { "format": "article", "intended_use": "publish" }
}
→ { "recommended_action": "revise", "revised_text": "..." }

One API call. Four routing actions.
VeracityAPI translates messy content signals into the one field your agent can route on: recommended_action. Green = allow, Yellow = revise, Orange = human_review, Red = reject.
allow
Low-risk content can proceed: publish, cite, index, accept, or train.
revise
Send content back to an agent/editor with evidence and recommended fixes.
human_review
Queue uncertainty for moderation, QA, source verification, or policy review.
reject
Block, quarantine, or discard high-risk content before it reaches users.
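The four actions above collapse into one resolver in practice. A minimal sketch, assuming the `recommended_action` and `revised_text` fields shown in this page's example responses; the helper itself is illustrative, not part of the SDK:

```typescript
// Illustrative resolver: decide what text, if any, is safe to autopublish.
// Field names follow the VeracityAPI response examples on this page.
interface AnalyzeResult {
  recommended_action: "allow" | "revise" | "human_review" | "reject";
  revised_text?: string;
}

// Returns the text to ship, or null when a human must intervene.
function resolvePublishableText(
  draft: string,
  result: AnalyzeResult
): string | null {
  if (result.recommended_action === "allow") return draft;
  if (result.recommended_action === "revise" && result.revised_text) {
    return result.revised_text; // drop-in rewrite from auto_revise:true
  }
  return null; // human_review or reject: never autopublish
}
```

A null return is the signal to queue for review or block, mirroring the switch statement at the top of this page.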
Installable TypeScript + Python clients
npm install @veracityapi/sdk or pip install veracityapi, then call typed helpers for text, image, audio, video, batch, and balance checks.
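For stacks that skip the SDK, the endpoint takes a plain JSON POST. A minimal sketch, assuming the request shape shown in this page's curl example; `buildTextRequest` and `analyzeText` are hypothetical helpers, not SDK exports:

```typescript
// Illustrative raw call to POST /v1/analyze (Node 18+, global fetch).
type Context = { format?: string; intended_use?: string; domain?: string };

// Builds the JSON body from the documented request fields.
function buildTextRequest(content: string, context: Context, autoRevise = false) {
  return { type: "text", content, auto_revise: autoRevise, context };
}

async function analyzeText(apiKey: string, content: string, context: Context) {
  const res = await fetch("https://api.veracityapi.com/v1/analyze", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildTextRequest(content, context)),
  });
  return res.json(); // { recommended_action, evidence, recommended_fixes, ... }
}
```

The typed SDK helpers wrap the same endpoint, so the payload shape is identical either way.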
Claude Desktop / MCP ready
Local MCP package support gives Claude Desktop, Cursor, and MCP clients structured VeracityAPI tools; hosted remote MCP is live for compatible custom connectors.
OpenAPI + llms.txt + agents.json + /for-agents
Point agents at veracityapi.com/llms.txt, /openapi.json, or /.well-known/agents.json so they know when to call the API and when not to.
No media-byte storage
Privacy-safe defaults avoid raw text retention; image/audio/video endpoints store no media bytes, base64, frames/contact sheets, or full media URLs.
Three places agents need content verification.
Pre-publish QA
Pre-publish moderation. Check agent-written posts, pages, captions, product copy, and summaries before they go live. Route generic slop or unsupported claims to revise or human_review.
RAG / training-data ingestion
RAG and training-data hygiene. Filter scraped or user-submitted text before it enters a knowledge base, fine-tuning set, or citation workflow. Keep synthetic sludge out of your agent’s context.
Async UGC moderation
Async UGC moderation. Screen uploaded images, voice notes, podcasts, and media assets for synthetic-media cues. Quarantine suspicious uploads for human review before publication.
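The RAG-ingestion use case above amounts to a gate in front of the knowledge base. A minimal sketch, assuming per-document analyze results in the shape shown on this page; the gate function is illustrative:

```typescript
// Illustrative ingestion gate: keep, rewrite, or drop a document
// before it enters a knowledge base or fine-tuning set.
interface DocResult {
  recommended_action: "allow" | "revise" | "human_review" | "reject";
  revised_text?: string;
}

// Returns the text to index, or null to keep it out of the KB.
function gateForIngestion(doc: string, result: DocResult): string | null {
  switch (result.recommended_action) {
    case "allow":
      return doc;                         // clean: index as-is
    case "revise":
      return result.revised_text ?? null; // index the de-slopped rewrite if present
    default:
      return null;                        // human_review / reject: exclude
  }
}
```

Dropping rather than queueing is a deliberate choice for ingestion pipelines, where excluding a borderline document is cheaper than reviewing it.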
Demo trust panels show why each action was chosen.
Each live demo now includes a Why this action? explanation, a Billing estimate, and modality-specific expected semantics so buyers can see the routing policy before reading raw JSON.
Text demo: generic travel safety copy should route to human_review
Generic safety claims and unsupported absolutes should not autopublish; the panel explains primary_reason, evidence, and rewrite guidance. Price: $0.005 / 1k text chars.
Image demo: low-risk sample should route to allow with provenance caveat
The fixture is intentionally low-risk professional/influencer imagery; the right action is allow while still reminding users to verify source/provenance for evidence claims. Price: $0.02 / image.
Audio demo: suspicious voice fixture should route to human_review
The generated fixture should demonstrate review routing, transcript return, and limitations around synthetic-speech cues. Price: $0.01 / audio request.
Custom paste is primary: paste your own draft, review, caption, or source snippet. Demo calls force privacy-safe defaults, cap input at 4,000 characters, and are rate limited.
estimatedCostCents: 0.5 · $0.005 / 1k text chars
Recommended action: human_review. Replace generic warnings with named examples, sourceable details, and concrete workflow guidance.
Sign up free Want the full API output, SDKs, image/audio/video, and higher limits? Sign up free.
"should always stay alert"
Vague, universally applicable advice lacking specificity or actionable detail.
"Pickpockets are everywhere"
Sweeping generalization without supporting evidence or useful context.
"major European cities"
No named cities, neighborhoods, timeframes, or source details.
{
"analysis_id": "demo_01KRA1EQPDJ7N2KHBXCQMGZYFJ",
"modality": "text",
"content_trust_score": 0.22,
"specificity_risk": 0.78,
"provenance_weakness": 0.78,
"synthetic_risk": 0.72,
"slop_risk": 0.78,
"confidence": "medium",
"primary_reason": "unsupported_generic_claims",
"evidence": [
{
"type": "generic_phrasing",
"severity": "high",
"span": "should always stay alert",
"explanation": "Vague, universally applicable advice lacking specificity or actionable detail."
},
{
"type": "hedging_and_absolutes",
"severity": "high",
"span": "Pickpockets are everywhere",
"explanation": "Sweeping generalization without supporting evidence or useful context."
},
{
"type": "absence_of_specificity",
"severity": "medium",
"span": "major European cities",
"explanation": "No named cities, neighborhoods, timeframes, or source details."
}
],
"recommended_fixes": [
"Replace generic warnings with named examples, locations, and sourceable details.",
"Remove absolute claims unless they are supported by evidence.",
"Add concrete decision guidance for the intended workflow."
],
"risk_level": "high",
"recommended_action": "human_review",
"model_version": "v0.1",
"limitations": [
"Scores are probabilistic workflow risk signals, not proof of AI authorship or truth.",
"v0.1 uses an LLM-backed structured scoring pass; treat synthetic_risk as texture risk, not ground-truth authorship detection.",
"English-calibrated at MVP; non-English content should be treated as experimental."
]
}

Dogfooded on production publishing workflows.
Used internally on production publishing workflows to route generated drafts, captions, and content-pipeline outputs for allow/revise/human_review decisions before publication. This is honest internal dogfood, not a fake customer-logo strip.
HTTPS image URLs only. Demo forces privacy-safe defaults, stores no image bytes or full URL, and uses IP/cookie rate limits. Demo image is synthetic/licensed sample content, not a real endorsement.

This influencer-product photo is hosted on veracityapi.com and preloaded with a sample trust result. Click Analyze image to run a fresh live call.
{
"analysis_id": "demo_img_PRELOADED_INFLUENCER",
"modality": "image",
"content_trust_score": 0.75,
"synthetic_image_risk": 0.25,
"synthetic_risk": 0.25,
"confidence": "medium",
"primary_reason": "visible_synthetic_media_cues",
"evidence": [
{
"type": "synthetic_texture",
"severity": "low",
"span": "facial skin and neck area",
"explanation": "Skin appears slightly over-smoothed with minimal visible pore detail, consistent with beauty filters or light retouching rather than synthetic generation."
},
{
"type": "other",
"severity": "low",
"span": "left hand holding product bottle",
"explanation": "Hand structure, finger joints, and nail definition appear anatomically plausible with natural proportions and realistic shadow detail."
},
{
"type": "low_specificity",
"severity": "low",
"span": "Beauty Tonic bottle label text and design",
"explanation": "Product label text is readable and perspective-aligned, without obvious text distortion artifacts typical of generated images."
},
{
"type": "other",
"severity": "low",
"span": "overall scene lighting from face to background fence",
"explanation": "Lighting direction and shadow placement appear consistent across the subject and environment."
}
],
"recommended_fixes": [
"No critical fixes needed; image appears consistent with professional photography or light post-processing.",
"If publishing, standard influencer disclosure practices apply regardless of synthetic risk assessment.",
"Verify original source/provenance if the image is used as evidence for a claim."
],
"risk_level": "low",
"recommended_action": "allow",
"model_version": "v0.1",
"limitations": [
"Scores are probabilistic workflow risk signals, not proof of AI authorship.",
"v0.1 image scoring uses a vision LLM, not a calibrated synthetic-image classifier.",
"VeracityAPI does not inspect EXIF, C2PA Content Credentials, or provenance metadata in v0.1."
]
}

HTTPS MP3/WAV/M4A/WebM/OGG URLs under 4 MB. Designed for async/post-upload media review. Demo forces privacy-safe defaults and stores no audio bytes, base64, or full URL. Demo audio is a generated fixture, not a real person.
This generated audio fixture is preloaded with a sample async media-review result. Click Analyze audio to run a fresh live call.
{
"analysis_id": "demo_aud_PRELOADED_VOICE_MESSAGE",
"content_trust_score": 0.1,
"synthetic_audio_risk": 0.9,
"workflow_risk": 0.85,
"synthetic_risk": 0.9,
"confidence": "medium",
"evidence": [
{
"type": "prosody_consistency",
"severity": "medium",
"span": "overall clip",
"explanation": "Delivery has unusually even cadence and synthetic/TTS-like smoothness; treat as a review signal, not evidence."
}
],
"recommended_fixes": [
"Request provenance or raw recording context before high-stakes publication.",
"Ask for a fresh recording or source-file metadata if the voice note affects trust, money, identity, or publication decisions."
],
"risk_level": "high",
"recommended_action": "human_review",
"model_version": "v0.1",
"limitations": [
"Gemini-powered audio workflow triage with transcript return, not evidence of AI generation.",
"Not voice-clone evidence, speaker identity verification, or forensic determination."
]
}

Video authenticity risk scoring
This is the shipped fixed demo video. Visitors can play the clip, copy its direct HTTPS URL, and inspect the preprocessed VeracityAPI check without incurring no-key video abuse costs. Authenticated customers call POST /v1/analyze with type:"video" and direct HTTPS videos capped at 30s/25MB.
video_v0

Six-frame contact-sheet + sanitized metadata triage for the playable fixture. Result: low apparent visual-manipulation risk from sampled frames, with provenance caveats. Not forensic proof.
{
"analysis_id": "demo_vid_BOOK_PAYOFF_PREPROCESSED",
"modality": "video",
"content_trust_score": 0.82,
"synthetic_video_risk": 0.18,
"synthetic_risk": 0.18,
"confidence": "medium",
"primary_reason": "low_apparent_visual_manipulation_risk",
"signals": {
"visual_synthetic_risk": 0.16,
"metadata_risk": 0.28
},
"evidence": [
{
"type": "visual_artifact",
"severity": "low",
"span": "six sampled contact-sheet frames",
"explanation": "Subject, room lighting, pose progression, and held Thailand booklet remain visually consistent across sampled frames; no obvious identity swap, gross compositing, or frame-level generation artifacts are visible in this contact sheet."
},
{
"type": "weak_provenance",
"severity": "medium",
"span": "container metadata",
"explanation": "The demo clip is a short web-hosted MP4 with limited capture provenance, so the result is a workflow signal rather than proof of original capture."
}
],
"recommended_fixes": [
"Allow for low-stakes demo/publishing QA, but keep source-chain checks for high-stakes verification.",
"If this clip affects identity, payment, legal, or compliance decisions, review the original source file, audio/video sync, metadata, and provenance before relying on it."
],
"risk_level": "low",
"recommended_action": "allow",
"model_version": "v0.1-video-preprocessed",
"limitations": [
"Preprocessed homepage fixture based on a six-frame contact sheet and sanitized metadata; not forensic proof of authenticity or manipulation.",
"Does not inspect full-frame timelines, source-chain provenance, camera sensor traces, or audio/video synchronization beyond MVP visual triage."
],
"billing": {
"units_analyzed": 1,
"bucket": "video_v0",
"price_cents": 5,
"remaining_balance_cents": 995
}
}

Built for agent decisions
Use the response to decide whether to allow, revise, human_review, or reject text, image, audio, and private-beta video content before publishing, citing, training, or moderation workflows.
Evidence over authorship vibes
Responses include trust/specificity risk, confidence, evidence spans, and recommended fixes — useful for automation and human QA.
Privacy-first logging
Privacy-safe public flows log hashes and metadata, not raw submitted content/media/full URLs.
Machine discoverable
Agents can read OpenAPI, llms.txt, agents.json, and sitemap.xml.
An AI output linter for agents
Input and pre-publish guardrails. VeracityAPI checks text slop and media forgery risk before agents ingest, cite, publish, moderate, or trust content — then returns an action instead of a naked probability score.
How agents use VeracityAPI
Call before publish, cite, train, or moderate; then route by allow, revise, human_review, or reject.
Example workflow costs
Text analyze-only is $0.005 per 1k characters; analyze + revise is $0.010 per 1k with auto_revise=true; image is $0.02; audio is $0.01; video (private beta) is $0.05 per successful analysis. Use /v1/balance for autonomous preflight.
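Those prices make preflight a small arithmetic check before a batch run. A minimal sketch, assuming the per-unit prices listed above; the helpers are illustrative, not official SDK utilities:

```typescript
// Illustrative preflight cost estimator, in cents, using the listed prices.
function estimateTextCents(chars: number, autoRevise = false): number {
  const perThousand = autoRevise ? 1.0 : 0.5; // $0.010 vs $0.005 per 1k chars
  return (chars / 1000) * perThousand;
}

// Flat per-unit prices for media modalities, in cents.
const MEDIA_CENTS = { image: 2, audio: 1, video: 5 } as const;

// Compare an estimate against the remaining_balance_cents
// returned by the balance endpoint before starting a run.
function canAfford(remainingBalanceCents: number, estimateCents: number): boolean {
  return remainingBalanceCents >= estimateCents;
}
```

For example, a 4,000-character analyze-only request estimates to 2 cents, so an agent holding the 995-cent balance shown in the video fixture's billing block can safely proceed.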
Operational evidence
Dogfooded on customer content pipelines and production demos with privacy-safe logging and machine-readable discovery surfaces.
Human-readable integration guide
Auth, schemas, installable TypeScript/Python SDKs, and tool-wrapper patterns.
Model, thresholds, limitations
v0.1 uses structured LLM scoring; not evidence of authorship. See current action policy.
Early calibration evidence
Customer pipeline dogfooding: 16 pages, 14 low/allow, 2 high/human_review.
Agent business-function playbooks
Publishing gates, caption pre-flight, SEO QA, Reddit source validation, competitor recon, manuscript QA, and more.
One endpoint. Action-readable output.
POST {type, content} for text, image URLs, or audio URLs plus lightweight context. Receive slop/forgery risk signals, evidence, fixes, limitations, and a recommended action.
Full docs OpenAPI JSON Agent instructions
npm install @veracityapi/sdk
pip install veracityapi
curl https://api.veracityapi.com/v1/analyze \
  -H "Authorization: Bearer $VERACITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"type":"text","content":"Paste article, review, caption, or source text here...","context":{"format":"article","intended_use":"publish","domain":"travel safety"}}'

Context controls action policy
text 20–100,000 chars · privacy-safe by default · format enum · intended_use enum
{
  "type": "text | image | audio",
  "content": "text content or HTTPS media URL",
  "context": { "format": "article", "intended_use": "publish", "domain": "travel safety" }
}

Scores, evidence, action
{
"analysis_id": "ana_...",
"content_trust_score": 0.22,
"specificity_risk": 0.78,
"provenance_weakness": 0.78,
"synthetic_risk": 0.72,
"slop_risk": 0.78,
"risk_level": "high",
"recommended_action": "human_review",
"confidence": "medium",
"evidence": [{ "type": "...", "severity": "...", "span": "...", "explanation": "..." }],
"recommended_fixes": ["..."],
"limitations": ["..."]
}

Start usage-based with prepaid credits and one balance for text, image, audio, and private-beta video workflows.
Free starter credit: $1.50 · Analyze only: $0.005 / 1k characters · Analyze + revise: $0.010 / 1k characters · Image: $0.02/image · Audio: $0.01/request · Video private beta: $0.05/successful analysis · Custom volume: contact sales.
Agents can call the balance endpoint before a run. Public demo remains free, no-key, capped, and rate limited.