ChatGPT is available again to our users in Italy. We are excited to welcome them back, and we remain dedicated to protecting their privacy. We have addressed or clarified the issues raised by the Garante, including:
- A new help center article on how we collect and use training data.
- Continuing to offer our existing process for responding to privacy requests via email, as well as a new form for EU users to exercise their right to object to our use of personal data to train our models.
- A tool to verify users’ ages in Italy upon sign-up.
We appreciate the Garante for being collaborative, and we look forward to ongoing constructive discussions.
In the help center article, OpenAI admits it processed personal data to train ChatGPT, while trying to claim that it didn’t really intend to do so; the data was simply lying around out there on the internet. Or, as it puts it: “A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don’t actively seek out personal information to train our models.” Which reads like an attempt to dodge the GDPR’s requirement that it have a valid legal basis to process the personal data it happened to find.

OpenAI expands on its defence in a section (affirmatively) entitled “how does the development of ChatGPT comply with privacy laws?”, in which it suggests it has used people’s data lawfully because A) it intended its chatbot to be beneficial; B) it had no choice, as lots of data was required to build the AI tech; and C) it claims it did not mean to negatively impact individuals.

“For these reasons, we base our collection and use of personal information that is included in training information on legitimate interests according to privacy laws like the GDPR,” it writes, adding: “To fulfill our compliance obligations, we have also completed a data protection impact assessment to help ensure we are collecting and using this information legally and responsibly.”

So, again, OpenAI’s defence to an accusation of data protection law-breaking essentially boils down to: ‘But we didn’t mean anything bad, officer!’

Its explainer also offers some bolded text to emphasize a claim that it is not using this data to build profiles about individuals, contact them, advertise to them, or try to sell them anything. None of which is relevant to the question of whether its data processing activities have breached the GDPR. The Italian DPA confirmed to us that its investigation of that salient issue continues.
In its update, the Garante also notes that it expects OpenAI to comply with additional requests laid down in its April 11 order, flagging the requirement to implement an age verification system (to more robustly prevent minors from accessing the service) and to conduct a local information campaign to inform Italians of how it has been processing their data and of their right to opt out of the processing of their personal data for training its algorithms.

“The Italian SA [supervisory authority] acknowledges the steps forward made by OpenAI to reconcile technological advancements with respect for the rights of individuals and it hopes that the company will continue in its efforts to comply with European data protection legislation,” it adds, before underlining that this is just the first pass in this regulatory dance. Ergo, all of OpenAI’s various claims to be 100% bona fide remain to be robustly tested.

ChatGPT resumes service in Italy after adding privacy disclosures and controls by Natasha Lomas originally published on TechCrunch