Summary:
- Large language models (LLMs) like ChatGPT can sometimes recommend medical treatments that are not directly related to a patient's condition. This is because the models may factor in unrelated information when making recommendations.
- Researchers at MIT found that LLMs can be influenced by factors like a patient's age, gender, or race when suggesting treatments, even if those factors are not medically relevant.
- To address this issue, the researchers suggest that LLMs should be trained to focus solely on the medical information provided and avoid making recommendations based on irrelevant personal characteristics.