Copy-paste VeracityAPI into your agent stack.
Use these wrappers as drop-in gates before agents publish, cite, train on, or moderate content. Each wrapper preserves risk_level, recommended_action, evidence, and limitations.
LangChain @tool + Python SDK
from langchain_core.tools import tool
from veracityapi import VeracityAPI
client = VeracityAPI()
@tool
def veracity_content_gate(text: str, intended_use: str = "publish") -> dict:
"""Score content trust risk before an agent acts."""
return client.analyze(
text,
context={"format": "article", "intended_use": intended_use},
)Vercel AI SDK tool
import { tool } from "ai";
import { z } from "zod";
import { VeracityAPI } from "@veracityapi/sdk";
const veracity = new VeracityAPI({ apiKey: process.env.VERACITY_API_KEY });
export const veracityContentGate = tool({
  description: "Route content by trust risk before publish/cite/train/moderate.",
  parameters: z.object({
    text: z.string(),
    intendedUse: z.string().default("publish"),
  }),
  execute: async ({ text, intendedUse }) =>
    veracity.analyzeText({
      text,
      context: { format: "article", intended_use: intendedUse },
    }),
});

LlamaIndex FunctionTool
from llama_index.core.tools import FunctionTool
from veracityapi import VeracityAPI
client = VeracityAPI()
def veracity_content_gate(text: str, intended_use: str = "publish"):
    return client.analyze(text, context={"format": "article", "intended_use": intended_use})

veracity_tool = FunctionTool.from_defaults(fn=veracity_content_gate)

LangGraph routing node
import { VeracityAPI } from "@veracityapi/sdk";
const veracity = new VeracityAPI({ apiKey: process.env.VERACITY_API_KEY });
async function veracityGate(state) {
  return {
    ...state,
    veracity: await veracity.analyzeText({
      text: state.draft,
      auto_revise: true,
      context: { format: "article", intended_use: "publish" },
    }),
  };
}

function routeByVeracity(state) {
  return state.veracity.recommended_action; // allow | revise | human_review | reject
}

MCP local client config
Use this for Claude Desktop, Cursor, and local MCP clients. Hosted remote MCP is live at https://api.veracityapi.com/mcp for compatible custom connectors.
{
  "mcpServers": {
    "veracityapi": {
      "command": "npx",
      "args": ["-y", "@veracityapi/mcp"],
      "env": { "VERACITY_API_KEY": "YOUR_API_KEY" }
    }
  }
}

Publishing pipeline quality gate
Stop generic pages before they hit production. Agents can score every popular-picks page, comparison page, travel guide, and scam-report page generated by a cron job or content pipeline, then route weak work back to draft instead of auto-publishing it.
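A minimal gate sketch for that routing step. Only the `recommended_action` values shown in the LangGraph snippet above (allow | revise | human_review | reject) come from this document; the pipeline step names (`publish`, `draft`, `discard`, `review_queue`) are hypothetical placeholders for your own pipeline.

```python
# Publishing gate: route a scored draft by recommended_action.
# The verdict dict uses the fields described above (risk_level,
# recommended_action, evidence); everything else is an illustrative sketch.

def gate_draft(page: dict, verdict: dict) -> str:
    """Return the pipeline step a scored page should go to."""
    action = verdict.get("recommended_action", "human_review")
    if action == "allow":
        return "publish"       # safe to auto-publish
    if action == "revise":
        return "draft"         # send back with evidence attached
    if action == "reject":
        return "discard"
    return "review_queue"      # human_review or any unrecognized action

step = gate_draft(
    {"slug": "best-travel-esims"},
    {"risk_level": "medium", "recommended_action": "revise",
     "evidence": ["generic phrasing in intro"]},
)
```

Defaulting unknown actions to a human review queue keeps the gate fail-safe if the action vocabulary ever grows.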
Social media caption pre-flight
Score reel, carousel, TikTok, Facebook, Pinterest, and YouTube Shorts captions before publishing. Agents can rewrite captions that read as generic, padded, or engagement-bait before algorithms and humans tune them out.
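A pre-flight pass might partition a caption batch into ready-to-post and needs-rewrite. This is a sketch around the response fields named above; the `risk_level` values ("low"/"medium"/"high") are illustrative assumptions, not a documented scale.

```python
# Caption pre-flight: split captions into (ready, needs_rewrite) by verdict.
# Field names follow the response shape described above; the risk_level
# vocabulary used here is an assumption for illustration.

def preflight(captions, verdicts, max_risk="low"):
    """Pair each caption with its verdict and partition the batch."""
    order = {"low": 0, "medium": 1, "high": 2}
    ready, rewrite = [], []
    for caption, v in zip(captions, verdicts):
        allowed = v.get("recommended_action") == "allow"
        risky = order.get(v.get("risk_level"), 2) > order[max_risk]
        if allowed and not risky:
            ready.append(caption)
        else:
            # keep the evidence so a rewrite agent knows what to fix
            rewrite.append((caption, v.get("evidence", [])))
    return ready, rewrite
```

The rewrite agent then gets each flagged caption together with its evidence list instead of a bare rejection.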
SEO helpful-content proxy
Use VeracityAPI as an automated proxy for helpfulness before search engines evaluate the page. Agents can catch generic, unspecific, unoriginal writing while it is still cheap to fix.
Reddit source validation for content sourcing
When agents mine Reddit for victim stories, scam reports, tips, or product feedback, VeracityAPI can flag suspiciously generic or weak-provenance posts before they become source material.
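For source filtering, one workable sketch keeps only posts whose verdicts clear the gate and carries the `limitations` field along as sourcing caveats. Post fields beyond the four documented response fields are hypothetical.

```python
# Source filter: keep Reddit posts whose verdicts pass, and attach the
# verdict's limitations as caveats for downstream writers. Post shape
# (id, etc.) is a hypothetical example.

def usable_sources(posts, verdicts):
    kept = []
    for post, v in zip(posts, verdicts):
        if v.get("recommended_action") in ("allow", "revise"):
            kept.append({**post, "caveats": v.get("limitations", [])})
    return kept
```

Carrying limitations forward means a weak-but-usable post is cited with its caveats rather than silently treated as solid evidence.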
Competitor content intelligence
Score competitor travel-safety, affiliate, and scam pages to identify where they rely on generic filler versus genuinely researched content. Agents can turn those gaps into a targeted content roadmap.
KDP book manuscript QA
Before publishing Amazon KDP guides, score chapters for generic filler, weak sourcing, and low-specificity advice. Agents can convert VeracityAPI evidence into an editorial punch list before bad reviews damage the book.
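Converting evidence into a punch list can be as simple as flattening per-chapter findings into tagged to-do items. The chapter-to-verdict mapping is a hypothetical input shape; only the `recommended_action` and `evidence` fields come from the response described above.

```python
# Editorial punch list: flatten per-chapter evidence into actionable items.
# Input shape ({chapter_title: verdict}) is an assumption for illustration.

def punch_list(chapter_verdicts):
    """Return '[Chapter] finding' strings for every non-allowed chapter."""
    items = []
    for title, v in chapter_verdicts.items():
        if v.get("recommended_action") == "allow":
            continue  # chapter cleared the gate, nothing to fix
        for finding in v.get("evidence", []):
            items.append(f"[{title}] {finding}")
    return items
```

The resulting list drops straight into an editor's task tracker, one line per fix, grouped by chapter.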