For agents

Call VeracityAPI before your workflow trusts content.

Agents should call VeracityAPI at irreversible or high-leverage content checkpoints: before publishing, citing, indexing, training on, moderating, or accepting text, image URLs, audio URLs, and private-beta video URLs from uncertain sources.

Resources: get an API key, the OpenAPI spec, and llms.txt.

When to call

  • Before publishing generated drafts, captions, product copy, reviews, or SEO pages.
  • Before adding scraped/user-submitted text to RAG, training data, or citation workflows.
  • Before accepting uploaded images, voice notes, podcasts, short video clips, or media assets into a public workflow.
  • When an agent needs evidence-backed routing instead of a raw detector percentage.

When not to call

  • Do not use as forensic proof, legal proof, academic misconduct proof, speaker identity verification, or guaranteed truth detection.
  • Do not submit secrets, regulated personal data, or private media unless your policy permits it.
  • Do not block high-stakes human claims without review; route uncertainty to humans.

Agent decision policy templates

Policy: pre-publish QA

if (result.recommended_action === "allow") publish();
if (result.recommended_action === "revise") rewriteWith(result.evidence);
if (result.recommended_action === "human_review") queueEditor();
if (result.recommended_action === "reject") blockPublish();

Policy: RAG/source triage

if (result.recommended_action === "allow") indexSource();
if (result.recommended_action === "human_review") requireCorroboration();
if (result.recommended_action === "reject") quarantineSource();

Policy: UGC/media moderation

if (["image","audio","video"].includes(result.modality)) attachMediaReview(result);
if (["human_review","reject"].includes(result.recommended_action)) escalate();

Framework recipes

  • TypeScript SDK: npm install @veracityapi/sdk and call new VeracityAPI().analyze() (see the sketch after this list).
  • Python SDK: pip install veracityapi and call VeracityAPI().analyze().
  • OpenAI tool schema: import /openapi.json or define a tool around POST /v1/analyze.
  • Vercel AI SDK tool: expose analyzeContent and branch on recommended_action.
  • LangGraph conditional edge: route graph edges from allow, revise, human_review, and reject.
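
A minimal pre-publish gate built on the TypeScript SDK might look like the sketch below. It assumes analyze() accepts { type, content } and resolves to a result carrying recommended_action, evidence, and recommended_fixes (field names taken from this page); the named import, constructor options, and environment variable are illustrative, so check the SDK reference before relying on them.

import { VeracityAPI } from "@veracityapi/sdk";

// Assumed result shape, based on fields named on this page.
interface AnalyzeResult {
  recommended_action: "allow" | "revise" | "human_review" | "reject";
  primary_reason: string;
  evidence?: string[];
  recommended_fixes?: string[];
}

// Constructor options and env var name are assumptions; see the SDK reference.
const client = new VeracityAPI({ apiKey: process.env.VERACITY_API_KEY });

export async function gateDraft(draft: string): Promise<"publish" | "revise" | "queue" | "block"> {
  const result = (await client.analyze({ type: "text", content: draft })) as AnalyzeResult;

  // Branch on the stable routing action rather than a raw detector score.
  switch (result.recommended_action) {
    case "allow":
      return "publish";
    case "revise":
      return "revise"; // feed result.evidence and recommended_fixes to the rewriting step
    case "human_review":
      return "queue"; // queue for an editor with the evidence attached
    default:
      return "block";
  }
}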

Primary reason enum-like values

Use modality, recommended_action, and primary_reason for stable branching. Current enum-like values include unsupported_generic_claims, weak_provenance, synthetic_texture, visible_synthetic_media_cues, synthetic_speech_cues, workflow_context_risk, low_risk_content, and low_risk_media.
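
For example, an agent can attach a short operator-facing note keyed on primary_reason before queueing or escalating. The notes below are illustrative glosses of the enum names, not official definitions, and the value set may grow, so keep a fallback branch.

// Illustrative notes per primary_reason; not official definitions.
const reasonNotes: Record<string, string> = {
  unsupported_generic_claims: "claims lack support; request sources or corroboration",
  weak_provenance: "origin unclear; ask for provenance metadata",
  synthetic_texture: "text reads as machine-generated; route to revision",
  visible_synthetic_media_cues: "image shows synthetic artifacts; flag for media review",
  synthetic_speech_cues: "audio shows synthetic-speech cues; flag for media review",
  workflow_context_risk: "risk comes from how the content is used; check the workflow",
  low_risk_content: "no notable risk signals for text",
  low_risk_media: "no notable risk signals for media",
};

function noteFor(result: { primary_reason: string }): string {
  // Fallback covers values added after this page was written.
  return reasonNotes[result.primary_reason] ?? "unrecognized reason; route to human review";
}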

Endpoint

POST https://api.veracityapi.com/v1/analyze
Authorization: Bearer DOC_KEY
Content-Type: application/json

{"type":"text|image|audio|video","content":"..."}

Use the canonical unified endpoint for all single-item modalities.
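
Without an SDK, a plain HTTP call against the unified endpoint is enough. The sketch below assumes a Node 18+ or browser runtime with fetch and that the JSON response includes the recommended_action and related fields named on this page; retries and error handling are omitted.

// Minimal raw HTTP call; supply your real key via an environment variable.
async function analyze(type: "text" | "image" | "audio" | "video", content: string) {
  const res = await fetch("https://api.veracityapi.com/v1/analyze", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VERACITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ type, content }),
  });
  if (!res.ok) throw new Error(`VeracityAPI error: ${res.status}`);
  return res.json(); // expected to include recommended_action, primary_reason, evidence, ...
}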

Routing policy

Map each recommended_action to an agent behavior:

  • allow: continue the workflow.
  • revise: use evidence and recommended_fixes to rewrite, replace, or request better provenance.
  • human_review: queue with evidence, risk_level, confidence, limitations, and source metadata.
  • reject: discard, quarantine, or block according to local policy.

Pricing

  • Text Analyze only: $0.005 / 1k characters.
  • Text Analyze + revise: $0.010 / 1k characters with auto_revise:true.
  • Image URL analysis: $0.02/image.
  • Audio URL analysis: $0.01/request.
  • Private-beta video URL analysis: $0.05/successful request (video_v0).
  • Use GET /v1/balance before autonomous runs.
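
Before an unattended batch run, a small budget guard can stop early when credit is low, as sketched below. The response shape of GET /v1/balance is not documented on this page, so the balance field and the dollar threshold are assumptions to adapt.

// Assumed response shape: { balance: number } in USD; adjust to the real schema.
async function ensureBudget(minUsd = 1): Promise<void> {
  const res = await fetch("https://api.veracityapi.com/v1/balance", {
    headers: { Authorization: `Bearer ${process.env.VERACITY_API_KEY}` },
  });
  if (!res.ok) throw new Error(`VeracityAPI error: ${res.status}`);
  const { balance } = await res.json();
  if (balance < minUsd) {
    throw new Error(`Balance ${balance} is below ${minUsd}; pausing autonomous run.`);
  }
}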

Proof

Seed benchmark: 500 samples, 88.0% routing-action agreement, macro F1 0.871. See /evals. These figures measure routing-action quality, not authorship proof.

Limitations

VeracityAPI is a workflow-risk triage API. It does not prove that content is AI-generated, true, false, cloned, or legally attributable. The image, audio, and video v0.1 analyzers do not inspect EXIF/C2PA metadata, speaker identity, or frame-by-frame temporal consistency.

How it compares

  • Agent routing: use VeracityAPI when you need allow/revise/human_review/reject plus evidence; use detector/forensics vendors when you only need an authorship probability or an investigation workflow.
  • Pre-publish QA: use VeracityAPI when you want generic/slop/provenance checks before publication; use detector/forensics vendors when you need plagiarism databases or institution workflows.
  • Synthetic media: use VeracityAPI when you need async uploaded-media triage with clear limitations; use detector/forensics vendors when you need identity verification, courtroom evidence, or real-time fraud prevention.