How Do We Prove We Actually Do AI? — Ultra Lab's Technical Transparency Manifesto
The Problem: "Are You Actually Doing AI?"
This is a question every company that claims to be "AI-driven" should be asked.
In 2026, open any startup's website and you'll see "AI-Powered," "Intelligent," and "Automated" plastered everywhere. But if you ask one simple question — "What specifically does your AI do?" — most companies will give you a vague marketing paragraph rather than a verifiable answer.
This isn't the startups' fault. AI is the biggest business narrative of 2025-2026, and everyone wants to be on the bandwagon. But the problem is: When everyone claims to be doing AI, nobody is doing AI.
At least, that's how it looks to potential clients.
We at Ultra Lab face the same challenge. We genuinely use AI to build 6 products and auto-generate 35+ pieces of content daily, and we built our own AI security scanner. But when these numbers sit on a website, how are they any different from someone else's "AI-Driven | Intelligent | Automated"?
The answer is: verifiability.
Our Answer: Five Verifiable Pieces of Evidence
Evidence 1: Public Products — You Can Try Them Yourself
We don't just say we have products. We let you use them for free.
| Product | Link | What You Can Do |
|---|---|---|
| UltraProbe | ultralab.tw/probe | Paste your System Prompt, get a security score in 5 seconds |
| Mind Threads | mindthread.tw | Taiwan's only Threads automation SaaS |
| Ultra Advisor | ultra-advisor.tw | 18+ AI-assisted financial visualization tools |
These three products aren't demos, aren't prototypes, and aren't "coming soon." They're running right now, with real users, and you can sign up.
Why this matters: Most "AI companies" have product pages with nothing but waitlist forms and mockups. A live product is more convincing than a hundred paragraphs of marketing copy.
Evidence 2: Public Data — Not "What We Claim," But What's Actually Running
Our Threads automation system currently manages 6 accounts, producing 35+ AI-generated pieces of content per day — fully automated.
Specific accounts:

- @risk.clock.tw — Went from zero to 1,300 followers within 24 hours, 100% AI-generated
- @ginrollbt — 0 to 6,500+ followers in six months, already monetized
You can click through and see for yourself. These aren't screenshots — they're live accounts. You can count the followers, check posting frequency, and evaluate content quality.
Cumulative data:

- 35,000+ AI auto-generated posts
- 2,000,000+ AI-driven total followers
- 6 simultaneously operating automated accounts
These numbers come from our Firestore database in real time — they're not manually entered marketing figures.
Evidence 3: Public Architecture — We Even Published Our Failure Logs
This might be the strongest signal: We openly share our technical architecture and failure stories.
In our AI-Ready Architecture Guide article, we wrote about:
- The February 2026 Google API rate-limiting incident that took all three products down simultaneously
- Why we're moving from single-Gemini to a Multi-LLM architecture
- Gemini Flash's JSON format error rate of 3% (requiring Zod validation)
- Actual latency (1.5-3 seconds) and cost (~$0.001/call) per AI call
A company that's just wrapping a ChatGPT API wouldn't write a 3,000-word article explaining why you need a Model Router, why prompts shouldn't be hardcoded, or why every AI call needs token logging.
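To make the Model Router idea concrete: the sketch below is a minimal illustration of the pattern, not our production router. The provider names and the `(text, tokens)` return shape are placeholder assumptions; the point is ordered fallback between providers plus per-call token and latency logging.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CallLog:
    provider: str
    latency_s: float
    tokens: int

@dataclass
class ModelRouter:
    """Try providers in order, fall back when one fails, log every call.

    Each provider is a placeholder callable: prompt -> (text, token_count).
    """
    providers: dict[str, Callable[[str], tuple[str, int]]]
    logs: list[CallLog] = field(default_factory=list)

    def complete(self, prompt: str) -> str:
        for name, call in self.providers.items():
            start = time.monotonic()
            try:
                text, tokens = call(prompt)
            except Exception:
                continue  # rate limit or outage: fall through to the next provider
            self.logs.append(CallLog(name, time.monotonic() - start, tokens))
            return text
        raise RuntimeError("all providers failed")
```

Note that the prompt arrives as an argument rather than living inside the router — the same separation that keeps prompts out of hardcoded strings and in configuration.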
Our technical blog has 13 in-depth articles. From Threads auto-posting tutorials to IG Reel fully automated production pipelines, every article is a field report — not SEO filler.
Evidence 4: Public Security — We Scan Our Own Products
UltraProbe is our in-house AI security scanner that detects 12 attack vectors: XSS, SQL Injection, SSRF, RCE, Prompt Injection, and more.
The interesting part: We use UltraProbe to scan our own products.
This is called dogfooding — using your own tools to test your own systems. If UltraProbe finds vulnerabilities in our own products, we fix them first. Only then do we have the credibility to tell clients "we can help with your AI security."
In our UltraProbe launch announcement, we documented in detail the scanner's development process, why we chose Gemini 2.5 Flash as the analysis model, and the common vulnerability patterns we discovered.
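To give a flavor of what detecting one of those vectors involves, here is a deliberately naive Python sketch of a prompt-injection pattern check. This is not UltraProbe's implementation — the real scanner uses an LLM analysis pass — and the patterns are illustrative examples only.

```python
import re

# Illustrative red-flag phrases; a real scanner covers far more than regexes can.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"you are now (in )?developer mode",
]

def flag_prompt_injection(text: str) -> list[str]:
    """Return the injection patterns that a piece of user input matches."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

A pattern list like this catches only the clumsiest attacks; paraphrased or encoded injections sail past it, which is exactly why an LLM-based analysis layer is needed on top.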
Evidence 5: Public Timeline — Build Log from Day 1 to Now
The final signal: Time.
We didn't appear yesterday. Here's our public timeline:
| Date | Event |
|---|---|
| 2025.09 | Ultra Creation Co., Ltd. officially incorporated |
| 2025.11 | Mind Threads SaaS launched — Taiwan's only Threads automation, zero competitors |
| 2026.01 | risk.clock.tw hit 1,300 followers in 24 hours — AI content engine validated |
| 2026.02 | UltraProbe AI security scanner launched |
| 2026.03 | Technical blog reaches 13 articles |
Every milestone is verifiable via links. Every product can be opened and tried. Every data point comes from live statistics.
"A one-person technical army" isn't a slogan — it's a record with dates, products, and data.
Why Most AI Companies Don't Do This
Because radical transparency is uncomfortable.
- Open architecture means competitors can see your technical choices
- Open data means people will come to verify your numbers
- Open failures mean admitting you're not perfect
- Open timelines mean you can't inflate your track record
But that's exactly the point.
If your technology can withstand scrutiny, opening it up only builds trust. If your technology can't withstand scrutiny — the problem isn't whether to be open, but the technology itself.
In the age of the AI bubble, opacity = untrustworthiness.
Every company that says "AI-Driven" without explaining how is spending down the market's trust reserve. And we don't want to be that kind of company.
Ultra Lab's Technical Transparency Principles
We've set five rules for ourselves:
1. Every AI Claim Links to a Verifiable Source
The website says "35,000+ AI auto-generated posts"? You can see real-time posting on our Threads accounts. Says "3 SaaS products live"? Each one comes with an accessible URL.
We don't allow unverifiable numbers on our website.
2. Every Mentioned Product Offers a Free Trial
UltraProbe offers free scanning. Ultra Advisor provides free basic features. Mind Threads has a trial period.
If a product can't be tried, we won't feature it prominently on our website.
3. Every Technical Article Is Written by the Founder
All 13 of our technical articles were personally written by me (Min Yi Chen). Not generated by Claude or GPT, not ghostwritten.
Ironic? An AI company that insists on not using AI to write its own technical articles. But we believe: Technical thinking cannot be outsourced. AI can help you write marketing copy, but it can't think through architectural decisions for you.
4. We Publish Failures, Not Just Successes
Google API rate limiting caused all three products to crash? Published. Gemini's JSON format error rate is 3%? Published. Accidentally misconfigured environments when switching from sandbox to production? Also published.
A company that only showcases successes is either not doing anything, or hiding problems. People who are actually on the battlefield have scars.
5. Our Blog Is an Engineering Notebook, Not a Marketing Department
Go read our technical blog. You'll find code snippets, architecture diagrams, API call latency data, and model comparison tables. You won't find hollow openings like "AI is changing the world."
Because our target readers aren't investors — they're engineers and technical decision-makers. They don't need to be convinced AI matters. They need to know how to do AI right.
Conclusion: Transparency Is the New Moat
After the AI bubble recedes, two types of companies will remain:
- Those with real products, real data, and real technical track records
- Everyone else
We choose to be the first type.
Not because we're better than anyone — we're a one-person team, and comparing technical depth with Google, Anthropic, or OpenAI would be absurd. But because at our scale, transparency is the only effective way to build trust.
You don't need to believe our marketing copy. You just need to:
- Open UltraProbe, paste a prompt, and see the scan results
- Open @risk.clock.tw and evaluate the AI-generated content quality
- Read our AI-Ready Architecture article and judge the technical depth
Then decide for yourself whether this company is actually doing AI.
Min Yi Chen — Founder, Ultra Creation Co., Ltd. Currently operating 6 AI products with 200+ daily AI calls
Want your system to have verifiable AI capabilities too? Free consultation
Originally published on Ultra Lab — we build AI products that run autonomously.
Try UltraProbe free — our AI security scanner checks your website for vulnerabilities in 30 seconds: ultralab.tw/probe
Published on DEV Community: https://dev.to/ppcvote/how-do-we-prove-we-actually-do-ai-ultra-labs-technical-transparency-manifesto-ie
