Claims Library Entry

A nonprofit's chatbot told eating disorder patients to lose weight

A mental health charity deployed a clinically tested chatbot for eating disorder support. A vendor unexpectedly switched the system to generative AI, and the new version began giving harmful weight-loss advice, forcing the charity to pull the chatbot offline within days.

Published February 12, 2026 by Kamil Banc

AI Strategy · Business Applications · Implementation

Lead claim

A vendor secretly upgraded an eating disorder chatbot to generative AI, causing it to give dangerous weight-loss advice.

Atomic Claims

What this article supports

Claim 1

Unauthorized Generative AI Upgrade

A mental health charity's eating disorder chatbot was upgraded by its vendor to generative AI without the charity's explicit approval.

Claim 2

Dangerous Calorie Reduction Advice

The upgraded chatbot began advising eating disorder patients to cut their daily intake by 500 to 1,000 calories.

Claim 3

Clinically Validated Original System

The charity's original chatbot underwent clinical testing in a 700-person trial that showed measurable positive results.

Claim 4

Contract Ambiguity Dispute

The vendor and charity disputed whether technology changes required approval, with neither party able to prove their case.

Claim 5

Dual Service Elimination

The chatbot was removed from service within days while the human helpline it replaced had already shut down.

Evidence

Context behind the claims

Quote

"The vendor changed the AI without telling anyone. The contract had no clause to stop it."

Key statistics

700-person trial

Clinical testing demonstrated measurable positive results before the vendor's unauthorized system upgrade

500 to 1,000 calories per day

Dangerous daily calorie reduction the upgraded chatbot recommended to eating disorder patients

Incident 545

This failed chatbot is catalogued in the OECD AI Incident Database

37 million users

A third organization successfully reached this scale using no machine learning at all

Supporting context

This case, documented as Incident 545 in the OECD AI Incident Database, demonstrates critical gaps in AI vendor governance for small and medium-sized organizations. The charity's contract contained ambiguous language around system upgrades, which allowed the vendor to substitute generative AI for the clinically tested rule-based system. For practitioners, the incident highlights the necessity of explicit contractual clauses requiring written approval for model upgrades, version changes, and architectural modifications. The recommended immediate action is to add vendor notification requirements to all AI contracts before any technology substitution occurs.

How to Cite

Use the claim-level citation when you need a precise statement. Use the article or claims-collection citation when you want the wider argument and source context.

Recommended

Individual Claim

Best when you need to cite one atomic claim directly inside a memo, deck, research note, or AI output.

"[claim text]" (Banc, Kamil, 2026, https://kbanc.com/claims-library/ai-chatbot-eating-disorder-nonprofit-failure)
Full Context

Original Article

Use this when you want to cite the full newsletter article at AI Adopters Club rather than the structured claims page.

Banc, Kamil (2026, February 12). A nonprofit's chatbot told eating disorder patients to lose weight. AI Adopters Club. https://aiadopters.club/p/a-nonprofits-chatbot-told-eating
Research

Claims Collection

Use this when you want to reference the full structured claims collection on this page.

Banc, Kamil (2026). A nonprofit's chatbot told eating disorder patients to lose weight [Structured Claims]. Retrieved from https://kbanc.com/claims-library/ai-chatbot-eating-disorder-nonprofit-failure

Attribution Requirements

  • Include the author name: Kamil Banc.
  • Include the source: AI Adopters Club or the structured claims page.
  • Link to the original article or the claims page you used.
  • Indicate any edits or transformations if you changed the wording.

Related Reading

More from the library

Rockstar's $10 Billion AI Secret
AI Strategy · Business Applications · Implementation

Take-Two Interactive's CEO publicly claims AI has "no creativity" while the company files patents for advanced AI systems. This dual narrative protects a $12.7 billion AI strategy that includes automated world-building, AI-driven QA, and player behavior prediction engines acquired through Zynga.

5 claims

Alpha School: How Two Hours of AI-Led Learning Beats a Full Day of Classes
AI Strategy · Implementation · Business Applications

A handful of schools split the work between AI-automated delivery and human judgment, compressing the core curriculum into two focused hours. The remaining time opened up for projects and face-to-face coaching, with students hitting mastery targets faster while teachers tripled their mentoring time.

5 claims

AI Adoption Isn't a Training Problem. It's a Habit Problem.
AI Strategy · Implementation · Business Applications

Most AI rollouts fail despite extensive training because the real issue isn't capability—it's habit formation. This article reveals why 42% of AI initiatives were abandoned in 2025 and shows how to redesign workflows so AI becomes the path of least resistance, creating automatic adoption without force.

5 claims