ChatGPT resumes service in Italy after adding controls and privacy disclosures


A few days after OpenAI announced a set of privacy controls for its generative AI chatbot, ChatGPT, the service has been made available again to users in Italy, resolving (for now) an early regulatory suspension in one of the European Union's 27 Member States, even as a local probe of its compliance with the region's data protection rules continues.

As of this writing, web users browsing ChatGPT from an Italian IP address no longer receive a notification that the service is “disabled for users in Italy”. Instead, they come across a note saying that OpenAI is “delighted to resume offering ChatGPT in Italy.”

The popup goes on to stipulate that users must confirm they are over 18, or over 13 with the consent of a parent or guardian, to use the service, by clicking a button that says "I meet the OpenAI age requirements."

The notification text also draws attention to OpenAI’s Privacy Policy and links to a help center article where the company says it provides information on “how we develop and train ChatGPT.”

The changes to the way OpenAI presents ChatGPT to users in Italy are intended to satisfy an initial set of conditions set by the local data protection authority (DPA) so that it can resume service with managed regulatory risk.

Quick backstory: at the end of last month, Italy's Garante issued a stop-processing order against ChatGPT, saying it was concerned the service was breaching EU data protection law. It also opened an investigation into suspected breaches of the General Data Protection Regulation (GDPR).

OpenAI responded quickly to the intervention by geo-blocking users with Italian IP addresses at the start of this month.

A couple of weeks later, the Garante followed up by issuing a list of measures it said OpenAI must implement for the suspension order to be lifted by the end of April, including adding age restrictions to prevent minors from accessing the service and amending the claimed legal basis for processing local users' data.

The regulator faced some political criticism in Italy and elsewhere in Europe for the intervention, although it is not the only data protection authority raising concerns; earlier this month, the bloc's regulators agreed to launch a working group focused on ChatGPT, with the aim of supporting investigations and cooperation on any enforcement.

In a press release issued today announcing the resumption of service in Italy, the Garante said OpenAI sent it a letter detailing the measures implemented in response to the earlier order, writing: "OpenAI explained that it had expanded the information to European users and non-users, that it had amended and clarified several mechanisms and deployed workable solutions to enable users and non-users to exercise their rights. Based on these improvements, OpenAI reinstated access to ChatGPT for Italian users."

Expanding on the steps taken by OpenAI in more detail, the DPA says OpenAI expanded its privacy policy and provided users and non-users with more information about the personal data being processed to train its algorithms, including stipulating that everyone has the right to opt out of such processing — which suggests the company is now relying on a claim of legitimate interests as the legal basis for processing data to train its algorithms (since that basis requires it to offer an opt-out).

Furthermore, the Garante reveals that OpenAI has taken steps to provide a way for Europeans to request that their data not be used to train the AI (requests can be made via an online form), and to provide them with "mechanisms" to have their data deleted.

OpenAI also told the regulator that it cannot currently fix the flaw of chatbots making up false information about named individuals; hence the introduction of "mechanisms that allow interested parties to obtain the suppression of information that is considered inaccurate."

European users who wish to opt out of the processing of their personal data to train its AI can also do so through a form OpenAI has made available, which the DPA says "will thus filter out their chats and chat history from the data used for training algorithms."

So the intervention of the Italian DPA has resulted in some notable changes to the level of control that ChatGPT offers to Europeans.

That said, it’s still unclear whether the tweaks OpenAI rushed to implement will (or can) go far enough to address all of the GDPR concerns being raised.

For example, it is not clear whether Italians' personal data that was used to train its GPT models historically, i.e. when it scraped public data off the internet, was processed on a valid legal basis, or, indeed, whether data used to train models previously will be deleted, or can be deleted, if users request their data be deleted now.

The big question remains what legal basis OpenAI had for processing people's information in the first place, back when the company was not so open about the data it was using.

The US company appears to be hoping to head off the objections raised about what it has been doing with Europeans' information by providing some limited controls now, applied to new incoming personal data, in the hope that this fuzzes the wider issue of all the regional personal data processing it has done historically.

When asked about the implemented changes, an OpenAI spokesperson emailed TechCrunch this summary statement:

ChatGPT is available again for our users in Italy. We are excited to welcome them back and remain dedicated to protecting their privacy. We have addressed or clarified the issues raised by the Garante, including:

We thank the Garante for its collaboration and look forward to ongoing constructive discussions.

In the help center article, OpenAI admits it processed personal data to train ChatGPT, while trying to claim it didn't really mean to — the stuff was just lying around on the internet. As it puts it: "A large amount of data on the internet relates to people, so our training information does incidentally include personal information. We don't actively seek out personal information to train our models."

Which reads like a neat attempt to dodge the GDPR's requirement that you have a valid legal basis for processing this personal data you happened to find.

OpenAI further expands its defense in a section (affirmatively) titled "How does the development of ChatGPT comply with privacy laws?", in which it suggests it has used people's data lawfully because A) it intended its chatbot to be beneficial; B) it had no choice, since a lot of data was required to build the AI technology; and C) it claims it did not mean to negatively impact individuals.

"For these reasons, we base our collection and use of personal information included in training information on legitimate interests in accordance with privacy laws such as the GDPR," it writes, adding: "To fulfil our compliance obligations, we have also completed a data protection impact assessment to help ensure we are collecting and using this information legally and responsibly."

So again, OpenAI’s defense to a data protection law violation charge essentially boils down to: “But we didn’t mean anything bad, officer!”

Its explainer also offers some bolded text to emphasize the claim that it is not using this data to build profiles of individuals; to contact or advertise to them; or to try to sell them anything. None of which is relevant to the question of whether its data processing activities have breached the GDPR or not.

The Italian DPA confirmed to us that its investigation of that salient issue continues.

In its update, the Garante also notes that it expects OpenAI to comply with the additional requests laid out in its April 11 order, flagging the requirement that it implement an age verification system (to more robustly prevent minors from accessing the service) and conduct a local information campaign to inform Italians about how it has been processing their data and of their right to opt out of the processing of their personal data for training its algorithms.

"The Italian SA [supervisory authority] acknowledges the steps forward made by OpenAI to reconcile technological advancements with respect for individuals' rights, and hopes the company will continue its efforts to comply with European data protection legislation," it adds, before stressing that this is just the first step in this regulatory dance.

Ergo, all of OpenAI’s various claims to be 100% bona fide have yet to be solidly proven.
