AI Models Often Use Stigmatizing Language

A recent study from Mass General Brigham examined how large language models (LLMs), a form of artificial intelligence increasingly used in healthcare communication, respond to questions about addiction and substance use. The researchers found that many AI-generated answers contain language that could be harmful or stigmatizing toward people with alcohol- or drug-related conditions. This