
Agentura – like pytest, but for AI agents (Free)

Hacker News AI Top · by SyntheticSynaptic · March 31, 2026 · 5 min read

Article URL: https://github.com/SyntheticSynaptic/agentura Comments URL: https://news.ycombinator.com/item?id=47594463 Points: 1 # Comments: 0

Make sure your AI agent still works after every change.

Agentura tests your agent on every pull request and tells you what broke before you merge. Like pytest, but for AI agents.

→ Try it live: playground.agentura.run

Run a real baseline vs branch comparison in your browser. No install. No account.

Try it in 60 seconds

No signup. No GitHub App. Runs entirely on your machine.

init generates an agentura.yaml config and a baseline snapshot. run --local scores your agent against expected outputs and shows you exactly what passed, what failed, and what regressed.
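The two commands above look like this in practice (a sketch assuming the `agentura` CLI is already on your PATH; the install step is not shown in this article):

```shell
agentura init         # writes agentura.yaml and records a baseline snapshot
agentura run --local  # scores the agent and reports what passed, failed, or regressed
```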

What problem does this solve?

You push a change. Your agent behaves differently. You find out from a user, not from a test.

Agentura catches this before merge:

  • You updated the system prompt — did accuracy drop?

  • Your model provider pushed a silent update — did tone shift?

  • You added a new tool — are the right ones being called?

  • You cut the system prompt to reduce costs — did safety regress?

A GitHub Action runs your tests. Agentura is the tests.

How it works

  1. Define expected behaviors in YAML

```yaml
suites:
  - name: accuracy
    type: golden_dataset
    dataset: ./evals/accuracy.jsonl
    scorer: semantic_similarity
    threshold: 0.85

  - name: quality
    type: llm_judge
    dataset: ./evals/quality.jsonl
    rubric: ./evals/rubric.md
    runs: 3

  - name: tool_use
    type: tool_use
    dataset: ./evals/tool_use.jsonl
    threshold: 0.8

  - name: performance
    type: performance
    max_p95_ms: 3000
    max_cost_per_call_usd: 0.01

ci:
  block_on_regression: false
  compare_to: main
  post_comment: true
```
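A golden dataset is a JSONL file pairing inputs with expected outputs. The field names below (`input`, `expected`) are assumptions for illustration; check the repo's example datasets for the actual schema:

```json
{"input": "How do I reset my password?", "expected": "Go to account settings and choose Reset password."}
{"input": "What file does Agentura write its config to?", "expected": "agentura.yaml"}
```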
  2. Run locally to set a baseline

```shell
agentura run --local
```

Agentura calls your agent, scores every case, and saves a baseline snapshot in .agentura/baseline.json.

  3. Every PR is compared to that baseline

```
Improvements (1 case flipped from fail to pass):
  ✓ case_12: "How do I reset my password?"

→ Merge blocked: accuracy suite below threshold
```

Results post directly to your pull request as a comment and GitHub Check Run.

Eval strategies

| Strategy | What it tests | Requires |
| --- | --- | --- |
| golden_dataset | Exact, fuzzy, or semantic match | Nothing (semantic needs API key) |
| llm_judge | Tone, helpfulness, quality | Any LLM API key |
| tool_use | Tool invocation and argument validation | Nothing |
| performance | Latency and cost guardrails | Nothing |
| Multi-turn | Conversational agent behavior across turns | Nothing |

LLM judge and semantic similarity auto-detect your provider: set ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY, or GROQ_API_KEY, or run Ollama locally with no API key at all.

Multi-turn eval

Most eval tools only test single questions. Agentura tests whether your agent behaves consistently across a full conversation.

This catches failures that single-turn evals miss — agents that drift from constraints established earlier in the conversation, or give generic answers when they should reference prior context.
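A multi-turn case might be expressed as a JSONL record carrying the whole conversation. This shape is hypothetical, sketched from the description above rather than taken from Agentura's docs:

```json
{"turns": [{"role": "user", "content": "Answer in one sentence from now on."}, {"role": "user", "content": "How do I reset my password?"}], "expected": "A single-sentence answer that still respects the constraint set in turn one."}
```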

Works with any agent

| Framework | Example |
| --- | --- |
| OpenAI Agents SDK | examples/openai-agent |
| Anthropic Claude | examples/anthropic-agent |
| LangChain | examples/langchain-agent |
| Any HTTP endpoint | examples/http-agent |

Your agent just needs to expose an HTTP endpoint. No SDK required.
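A minimal endpoint could be as small as the stdlib sketch below. The JSON contract it assumes (a POST body with an `input` field, a reply with an `output` field) is a guess for illustration; see examples/http-agent for the real shape:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_agent(user_input: str) -> str:
    """Stand-in for your real agent: call your model/tool pipeline here."""
    return f"echo: {user_input}"

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the agent on its "input" field.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"output": run_agent(payload.get("input", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8000) -> None:
    """Start listening; point your eval config at http://127.0.0.1:<port>."""
    HTTPServer(("127.0.0.1", port), AgentHandler).serve_forever()
```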

GitHub Actions

Full docs: docs/github-action.md
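Since the article describes a GitHub Action running the tests on every pull request, a workflow could look roughly like this. Everything here except `agentura run --local` is a placeholder; the documented setup lives in docs/github-action.md:

```yaml
name: agentura
on: pull_request
jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the agentura CLI here (install step not shown in this article).
      - run: agentura run --local
```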

Comparison

| Feature | Agentura | Braintrust | LangSmith | DeepEval |
| --- | --- | --- | --- | --- |
| Open source | ✅ MIT | ❌ | ❌ | ✅ |
| CI/CD native | ✅ | Partial | ❌ | Partial |
| Framework agnostic | ✅ | ✅ | LangChain-first | ✅ |
| Self-hostable | ✅ | ❌ | ❌ | ✅ |
| Local mode (no signup) | ✅ | ❌ | ❌ | Partial |
| Local inference (no API key) | ✅ via Ollama | ❌ | ❌ | Partial |
| Regression diff | ✅ | ❌ | ❌ | ❌ |
| Multi-turn eval | ✅ | Partial | Partial | ❌ |
| Tool-call validation | ✅ | ❌ | ❌ | Partial |
| Semantic similarity | ✅ | ✅ | ✅ | ✅ |
| Audit manifests | ✅ | ❌ | ❌ | ❌ |
| Locked dataset mode | ✅ | ❌ | ❌ | ❌ |

For regulated environments

Agentura includes a governance layer for teams building AI agents in healthcare, finance, or other regulated domains.

  • Audit manifests — every run writes dataset hashes, CLI version, git sha, and per-suite results to .agentura/manifest.json

  • Locked mode — exits 1 if any dataset changed since baseline, for environments requiring reproducible eval sets

  • Behavioral drift detection — compare against a frozen reference snapshot to detect gradual drift over time

  • Heterogeneous consensus — run the same query across multiple model families and require agreement before accepting an output

  • Clinical audit report — generate a single self-contained HTML artifact for CMIO review and FDA PCCP documentation

See docs/clinical-report.md.

Self-hosting

Agentura is fully open source. Run your own instance: docs/self-hosting.md

Contributing

See CONTRIBUTING.md. Good first issues are labeled in the issue tracker.

License

MIT
