Federal Health Minister Karl Lauterbach sees opportunities in the use of AI chatbots such as ChatGPT in the healthcare system, but also warns of dangers such as incorrect diagnoses. “There will soon be programs in which a patient verbally explains symptoms, findings and previous treatments and then receives an assessment of their illness, and even possible therapy suggestions, from the AI,” the SPD politician told the newspapers of the Funke media group.
ChatGPT already provides such information in written dialogue. In March, a dog owner reported on Twitter that the program’s successor, GPT-4, had allegedly correctly diagnosed his dog’s disease on the basis of, among other things, laboratory values, a condition a veterinarian had previously failed to recognize. However, chatbots can also spread invented information without this being apparent at first glance.
Lauterbach called for the use of AI systems such as ChatGPT to be regulated in the health sector. “They must be checked and be reliable,” the minister told the newspapers. In addition, it must be ensured that the data cannot be misused.
In principle, however, he views the use of artificial intelligence (AI) in the healthcare system positively. In the future, chatbots could theoretically run through therapies and answer questions about the effectiveness of a medication for a specific patient. According to Lauterbach, AI can already sometimes outperform an experienced specialist. “However, the best results are achieved with a combination of artificial intelligence and a doctor.”
2023-04-23 08:41:04