The World Health Organization (WHO) is advising caution in the use of large language model (LLM) tools like ChatGPT for health-related purposes, in order to safeguard human well-being, safety, and autonomy. While recognizing their potential to support health needs, WHO highlights the risks of errors, harm to patients, and erosion of trust in AI. The organization emphasizes transparency, inclusion, and expert supervision in LLM usage. Its concerns include biased training data, misinformation, lack of consent, and the risk of generating convincing disinformation. WHO recommends rigorous oversight, adherence to ethical principles, and clear evidence of benefit before LLMs are widely adopted in routine healthcare.