Questions, answers, and fine print.
What is this?
A voice-based mental health check powered by open-source AI. You talk for 20 seconds. A model analyzes how you sound — not what you say — and shows you what it found. Then it tries to make you laugh about it. Sometimes it succeeds.
Is this a real medical tool?
No. This is entertainment powered by real science.
The voice model was developed by Kintsugi Health, a company that spent seven years and $30 million pursuing FDA clearance before open-sourcing its technology in February 2026.
The underlying science is peer-reviewed and published in the Annals of Family Medicine. The product wrapping it is not peer-reviewed. Do not use this as a substitute for professional care.
How does it work?
You press a button and talk for 20 seconds about anything — your day, your week, your lunch plans. The model analyzes acoustic features of your voice: pitch, intonation, tone, timbre, and pauses. It does not analyze your words — only how you say them.
The model outputs two separate scores: one for depression (based on the PHQ-9 clinical scale) and one for anxiety (based on the GAD-7 clinical scale). These are the same scales used by doctors in primary care screening.
Depression is scored in 3 levels: Minimal, Mild to Moderate, and Severe. Anxiety is scored in 4 levels: Minimal, Mild, Moderate, and Severe. Each combination gets its own result. There are no "tiers" — what you see maps directly to what the model detected.
How accurate is it?
In a peer-reviewed study of 14,898 adults, the model detected vocal characteristics consistent with moderate to severe depression (PHQ-9 ≥ 10) with a sensitivity of 71.3% and a specificity of 73.5%.
In plain English: roughly 7 out of 10 people with moderate-to-severe depression are correctly identified, and roughly 7 out of 10 people without it are correctly cleared.
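If you want to see that arithmetic worked out, here is an illustrative sketch. The 71.3% / 73.5% figures come from the study above; the group size of 1,000 and the assumed 200 people with moderate-to-severe depression are hypothetical numbers chosen only to make the math concrete.

```python
# Hypothetical group: 1,000 people, 200 of whom have
# moderate-to-severe depression (illustrative assumption).
sensitivity = 0.713   # from the published study
specificity = 0.735   # from the published study

with_condition = 200
without_condition = 800

true_positives = sensitivity * with_condition         # correctly flagged
false_negatives = with_condition - true_positives     # missed
true_negatives = specificity * without_condition      # correctly cleared
false_positives = without_condition - true_negatives  # flagged in error

print(round(true_positives))   # ~143 of the 200 are flagged
print(round(false_positives))  # ~212 of the 800 are flagged in error
```

Note what the second number means: even a decent screening tool flags plenty of people who are fine, which is exactly why a screen is not a diagnosis.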
The model performs differently across demographics. Sensitivity for women was 74.0% vs 59.3% for men. For people under 60 it was 71.9% vs 63.4% for people 60 and older.
This is comparable to many standard clinical screening tools, but it is not a diagnosis.
What language should I speak?
English. The model was trained predominantly on American English speech, with a broad range of accents represented.
It will still process other languages, but the results are unvalidated. The model analyzes acoustic features (pitch, rhythm, pauses), not words, so it may pick up some signal from non-English speech — but we can't tell you how reliable that signal is.
What does "borderline" mean in my result?
Sometimes your voice falls right on the boundary between two categories. When this happens, the model's confidence is low — it can't clearly assign one label over the other.
In the original clinical research, about 20% of voice samples fell in this indeterminate zone. We show this as "borderline" rather than forcing a label the model isn't confident about.
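The abstain-instead-of-guess logic can be sketched in a few lines. This is a toy illustration, not the real model: the 0-to-1 score, the 0.5 boundary, and the ±0.05 band are all hypothetical values, since Kintsugi's actual thresholds are not public.

```python
def label(score: float, boundary: float = 0.5, band: float = 0.05) -> str:
    """Return a label, abstaining when the score hugs the boundary.

    Hypothetical thresholds for illustration only.
    """
    if abs(score - boundary) < band:
        return "borderline"      # too close to call either way
    return "above" if score > boundary else "below"

print(label(0.48))  # borderline: within the indeterminate band
print(label(0.70))  # above: confidently past the boundary
```

The design choice is the point: a model that refuses to label ~20% of samples is being more honest than one that always picks a side.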
You can try again — voices change by the hour, and a second recording might give a clearer read.
Do you store my voice?
No. Your audio is processed in memory, analyzed, and immediately deleted. We don't store it, we can't replay it, we don't have it.
The model runs on a single computer. Your voice goes in, the result comes out, the audio is gone. No database, no cloud storage, no backup.
Do you store any of my data?
No account. No cookies. No tracking beyond an anonymous page view counter. We know how many people used the app today. We don't know who any of them are. This is by design.
Can I save my results over time?
Not yet — but we're building it.
Right now, we delete everything the moment you're done. Some people like that. Some people want to track how they're doing over weeks and months.
If you want to be notified when vibe history is available, leave your email below. We'll reach out when it's ready — with a secure, private way to track your results over time. This will be a paid feature ($5/month) because storing data responsibly costs money, and we'd rather charge you honestly than sell your information to someone else.
Your email is stored with Tally solely for this waitlist. We won't share it or use it for anything else.
Who built this?
Eva Ouyang, a PM and AI builder in the Bay Area. The voice model is Kintsugi DAM v3.1, open-sourced in February 2026 after Kintsugi Health shut down. The product was built in a weekend with Claude Code.
The irony of using AI to build a product about AI anxiety was not lost on anyone involved.
Why are there 12 different results instead of just "fine" or "not fine"?
Because mental health isn't binary.
The model outputs two independent dimensions — depression and anxiety — each with its own severity scale. Depression has 3 levels, anxiety has 4. That gives 12 possible combinations, and each one represents a genuinely different experience.
"Anxious but not depressed" feels completely different from "depressed but not anxious." We wrote different copy for each because you deserve a result that actually reflects what the model detected, not a simplified bucket.
Is the code open source?
The Kintsugi DAM v3.1 model is open-source on HuggingFace (huggingface.co/KintsugiHealth/dam).
The "I am fine" product code is not currently open-source. It might be eventually.
Keep this thing alive.
"I am fine" is free. No ads, no paywall, no data harvesting. It runs on a Mac mini and goodwill. If you think this should keep existing, here are ways to help.
Buy the model a coffee — $3
Keeps the server running for roughly a day. You'll feel good for approximately the same duration.
Sponsor a feature — $25
Goes directly toward building the next thing. Current wishlist:
- Personalized results based on what you actually said
- Historical vibe tracking
- More result copy (the jokes need to stay fresh)
- Multi-language support
Tell us what you'd build: email or form link placeholder
We drop the bit here.
If the result surprised you, or if something in your week has felt heavier than you're letting on — that's real, and it matters. This app can't help with that. But these people can.
Immediate crisis support
These are free, confidential, and available right now.
- 988 Suicide & Crisis Lifeline: Call or text 988. Available 24/7 in the U.S. and Canada.
- Crisis Text Line: Text HOME to 741741. Available 24/7.
- The Trevor Project: Call 1-866-488-7386 or text START to 678-678. For LGBTQ+ youth. (Verify current availability at thetrevorproject.org — service structure may have changed in 2025-2026.)
- Veterans Crisis Line: Dial 988 then press 1, or text 838255.
California-specific free resources
If you're in the Bay Area or anywhere in California:
- CalHOPE Warm Line: (833) 317-HOPE (4673). Peer-run emotional support. Mon/Tue/Wed/Fri 7am–11pm, Thu 8am–10pm PT. Not 24/7 — for after-hours crisis support, call 988.
- BrightLife Kids & Soluna: Free mental health apps for parents and youth, supported by the CA Department of Health Care Services.
General low-cost & free resources
For ongoing support, not just crisis moments:
- 211.org: Call 211 to find local community resource specialists who can help locate free or low-cost mental health providers in your area.
- NAMI (National Alliance on Mental Illness): Support groups, educational programs, and helpline. Visit nami.org or call 1-800-950-NAMI (6264).
- SAMHSA National Helpline: 1-800-662-4357. Free, confidential, 24/7, 365 days a year. Treatment referral and information.
- Federally Qualified Health Centers: Use findahealthcenter.hrsa.gov to find clinics offering services on a sliding fee scale based on income.
- Open Path Collective: Directory of therapists offering low-cost sessions ($30–$80 per session). Visit openpathcollective.org.
Self-help tools
These are not substitutes for professional support, but they can help in the meantime:
- Now Matters Now (nowmattersnow.org): Teaches DBT skills for coping with suicidal thoughts.
- Mindfulness Coach: Free app from the VA's National Center for PTSD. Available on iOS and Android.
You don't have to be in crisis to reach out. "I've been feeling off" is reason enough. You don't need to have a plan or a diagnosis. You just need to want to talk to someone who gets it.