Conversational AI, which draws on large amounts of online data to generate answers, faces the same legal issues as image-generating AI. The Privacy Guarantor, the Italian authority responsible for privacy protection, is concerned that OpenAI may be violating EU data protection law in how it processes data, and has ordered it to disable its ChatGPT service. The Guarantor further pointed out that there is no legal basis for OpenAI to collect such a large amount of data to train ChatGPT. At the same time, the incorrect answers that appear from time to time suggest that the generative AI does not process data correctly, and the leak of user information OpenAI suffered last week has also drawn the attention of regulators.
The Guarantor is also concerned that OpenAI does not do enough to protect children: even though ChatGPT's terms say the service is restricted to users aged 13 and over, there is no age-verification tool to prevent children from seeing "completely inappropriate" answers. The Guarantor is now giving OpenAI 20 days to address the issues or face a fine of up to 20 million euros or 4% of global revenue.
OpenAI has not yet responded to the incident, but ChatGPT's privacy policy clearly states that trainers may use conversation data to improve the AI, while emphasizing that the content is kept as anonymized statistics. As for children, the policy states that the service prohibits use by people under the age of 13 and that OpenAI will not "knowingly" collect their information.
The Italian authority's move follows a complaint filed with the US FTC by a non-profit research group asking that OpenAI be barred from releasing new versions of ChatGPT until the service's transparency, fairness, and clarity comply with regulations. Separately, Elon Musk, a former major backer of OpenAI, and a group of industry leaders jointly signed an open letter calling for a pause on the development of more powerful AI in order to focus on ethical issues.