
It’s not easy to get depression-detecting AI through the FDA

The Verge AI · by Robert Hart · April 2, 2026

For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life beyond healthcare, like detecting deepfake audio.

Mental health assessments still largely rely on patient questionnaires and clinical interviews, rather than the lab tests or scans common in physical medicine. Instead of focusing on what someone is saying, Kintsugi’s software analyzes how they say it. The idea isn’t new — speech patterns like pauses, sentence structure, or speed are known indicators of various mental health issues — but Kintsugi says its AI can pick up subtle shifts that may be less obvious to human observers, though it has not publicly detailed exactly which features drive its models’ predictions. In peer-reviewed research, the company reported results from short speech samples broadly in line with established self-report screening tools for depression.
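
Kintsugi has not published its feature set, but the broad category of "how it's said" signals is well documented in speech research: pause lengths, speech-to-silence ratio, speaking rate. As a rough illustration (not Kintsugi's pipeline), features like these can be pulled from raw audio with an off-the-shelf library such as librosa; the sampling rate and silence threshold below are arbitrary assumptions.

```python
# Illustrative sketch only, not Kintsugi's pipeline: a few prosodic features
# (pauses, speaking-rate proxy) of the kind speech-screening research uses.
import numpy as np
import librosa  # pip install librosa

def prosodic_features(path: str, sr: int = 16_000, top_db: int = 30) -> dict:
    y, sr = librosa.load(path, sr=sr)                 # mono waveform
    voiced = librosa.effects.split(y, top_db=top_db)  # non-silent [start, end] sample indices
    total = len(y) / sr                               # clip length in seconds
    if len(voiced) == 0:                              # all silence
        return {"speech_ratio": 0.0, "mean_pause_s": 0.0,
                "max_pause_s": 0.0, "segments_per_min": 0.0}

    durations = (voiced[:, 1] - voiced[:, 0]) / sr    # seconds of speech per segment
    # Pauses are the gaps between consecutive voiced segments.
    gaps = (voiced[1:, 0] - voiced[:-1, 1]) / sr if len(voiced) > 1 else np.array([0.0])

    return {
        "speech_ratio": float(durations.sum() / total),  # fraction of clip spent speaking
        "mean_pause_s": float(gaps.mean()),
        "max_pause_s": float(gaps.max()),
        "segments_per_min": 60.0 * len(voiced) / total,  # crude speaking-rate proxy
    }
```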

The company pitched the technology as a complement — or potential alternative — to self-reported screening tools like the Patient Health Questionnaire-9, or PHQ-9, a staple of primary care and psychiatry. These tools are meant to be used alongside formal clinical assessment, and although they are widely validated, screening rates can be low, they depend on patients accurately describing their symptoms, and they may not capture the full range of symptoms associated with mental health disorders. Kintsugi argued its voice-based model could provide a more objective signal, expand screening to more patients, and be deployed at scale across health systems, insurers, and employer programs. Doing so, however, would require FDA clearance.
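
For context on what the PHQ-9 measures: nine self-report items, each scored 0 to 3, summed into a 0 to 27 total that maps onto standard severity bands. A minimal scoring sketch (the function name is mine; the cutoffs are the published standard ones):

```python
# PHQ-9 scoring: nine items rated 0-3, summed to a 0-27 total.
# Severity bands follow the standard published cutoffs (Kroenke et al., 2001).
def phq9_severity(item_scores: list[int]) -> tuple[int, str]:
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

# phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 1]) -> (9, "mild")
```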

Kintsugi had been seeking FDA clearance through the agency’s “De Novo” pathway, a route meant for novel, low-risk medical devices without an existing equivalent on the market. While intended to streamline approval for new kinds of products, the process can still require years of data collection and regulatory review; Kintsugi’s founder and CEO Grace Chang told The Verge that a lot of time was spent teaching the regulator about AI. The framework also fits AI poorly: much of it is designed with more traditional devices in mind — think hip implants, surgical tools, pacemakers — whose designs remain largely fixed once approved. For AI systems, that can mean locking a model that would otherwise continue to be optimized and updated over time.

Despite the Trump administration’s hard push to cut red tape and get AI products into the real world as soon as possible, Chang said regulatory experts tell her that “there’s nothing that helps them do that except loud yelling from the top.” The approval process was further slowed by federal government shutdowns. The startup ran out of funding waiting for its final submission.

Efforts to raise additional funds faltered as the company’s runway shortened. Rather than accept “predatory” short-term offers to meet payroll — Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity — the team decided to open-source most of its technology so others might continue the work. Investors were not happy.

Open-sourcing a mental health screening model also raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, such as by employers or insurers, without the safeguards typically required in healthcare. Obviously that shouldn’t happen, but once released publicly there is little to prevent the technology from being used in ways its creators did not intend.

There are other complications, too. Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King’s College London, told The Verge that open-source releases often lack the detailed “paper trail” regulators expect, including a clear record of how a model was trained, validated, and tested for safety. Without that, he said, bringing a product built on the technology through FDA approval could prove difficult.
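
What that "paper trail" might look like in practice: a structured record of how the model was trained, what it was validated against, and how it was tested. A hypothetical, minimal provenance record follows; the field names are illustrative, not an FDA-mandated schema.

```python
# Hypothetical example of the kind of provenance record Cummins describes.
# Field names are illustrative, not an FDA-mandated schema.
model_record = {
    "model_version": "1.4.2",
    "training_data": {
        "source": "consented clinical speech corpus",  # placeholder description
        "n_speakers": 4_800,
        "languages": ["en-US"],
    },
    "validation": {
        "held_out_split": 0.2,
        "reference_standard": "PHQ-9 self-report",
        "metrics": {"auroc": None},  # filled in from an actual evaluation run
    },
    "safety_testing": ["subgroup performance audit", "failure-mode review"],
    "change_log": [],  # every retraining event appended here
}
```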

More likely, Cummins suggested, companies would treat the model as a starting point and layer their own data and validation processes on top. Even then, he cautioned, voice-based systems remain imperfect and carry a “reasonable” risk of errors, especially for conditions like depression, which manifest differently across individuals, languages, and cultural contexts; how well a system handles that variation depends heavily on the diversity and structure of the speech data used in training.

Chang did not dismiss concerns about potential misuse, but said “it’s less of a concern in practice than it might appear in theory.” The organizations with the greatest incentives to abuse the technology, she argued, are also those that “face the highest barriers to actually deploying it.” In Chang’s view, “the more realistic risk is underuse, not misuse.”

While Kintsugi’s mental health screening technology has been open-sourced, Chang said not all of the company’s technology has been released publicly. In part, she said, that is due to security concerns; chief among the withheld work is technology that can detect synthetic or manipulated voices.

Chang said the capability emerged when the team experimented with AI-generated speech to strengthen its mental health models. The synthetic audio lacked the vocal signals the model was trained to recognize, which revealed that the model could also distinguish between human and AI-generated voices. Detecting such audio is a growing challenge given the proliferation of AI slop and fraudulent deepfakes, and one that has yet to be reliably solved. It is also a potentially lucrative opportunity and, thankfully for Kintsugi, an area not subject to FDA oversight.
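
The article doesn't describe the mechanism beyond "missing vocal signals," but one simple way that observation becomes a detector is to score how far a sample's features sit from the distribution of verified human speech. A minimal sketch under that assumption; the threshold and the feature extractor are placeholders, not Kintsugi's system.

```python
# Illustrative sketch only, not Kintsugi's method: flag audio whose prosodic
# features fall far outside the distribution of verified human recordings.
import numpy as np

def fit_human_profile(human_features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """human_features: (n_samples, n_features) from known-human speech."""
    return human_features.mean(axis=0), human_features.std(axis=0) + 1e-8

def anomaly_score(sample: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Mean absolute z-score; high values mean the sample lacks the feature
    statistics seen in human speech (one hypothesis for why synthetic audio
    stood out to Kintsugi's model)."""
    return float(np.abs((sample - mean) / std).mean())

# Hypothetical usage, with feature vectors from an extractor like the earlier sketch:
#   mean, std = fit_human_profile(human_feature_matrix)
#   if anomaly_score(candidate, mean, std) > THRESHOLD:  # tuned on held-out data
#       print("possible synthetic audio")
```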

Chang declined to speculate on her next move or whether Kintsugi’s security-focused technology might resurface, but she said she wishes someone else would build on the company’s work and carry it through the final stages of the FDA process. But without broader changes, Kintsugi’s shutdown is unlikely to be the last example of startup timelines clashing with medical regulation, and Chang said she hopes that reality doesn’t deter other founders from trying.
