Google’s AI model under EU regulator’s scanner; probe launched over privacy concerns. All you need to know

European Union regulators said Thursday they are investigating Google’s artificial intelligence (AI) model, Pathways Language Model 2 (PaLM2), to determine if it complies with the region’s stringent data privacy laws.

The inquiry, initiated by Ireland’s Data Protection Commission (DPC), forms part of broader efforts by national regulators across the EU to assess how AI systems manage personal data.

As Google’s European operations are headquartered in Dublin, the Irish DPC serves as the tech giant’s lead authority under the General Data Protection Regulation (GDPR), the EU’s strict data privacy framework. The commission said the investigation will examine whether Google carried out an adequate assessment of whether PaLM2’s data processing is likely to pose a “high risk to the rights and freedoms of individuals” in the EU.

PaLM2 is a large language model that underpins various AI-powered services provided by Google, such as email summarisation and other generative AI functions. These models rely on vast amounts of data to perform tasks, raising concerns about how they handle and process personal information. When approached, Google declined to comment on the ongoing inquiry.

This investigation into PaLM2 reflects a wider trend in Europe, where regulators are increasingly scrutinising the practices of major tech companies regarding AI and data privacy. Earlier this month, the Irish DPC announced that Elon Musk’s social media platform X had agreed to cease using user data to train its AI chatbot, Grok. This decision came after the DPC took legal action, filing an urgent High Court application to prevent X from processing user data contained in public posts without consent.

Similarly, Meta Platforms, the parent company of Facebook and Instagram, suspended plans to use content from European users to train its latest language model. This move followed extensive discussions with Irish regulators, highlighting the pressure tech giants face in ensuring compliance with EU data laws.

Other countries in the EU have also taken action. Italy’s data privacy regulator temporarily banned ChatGPT in 2023 over privacy violations, allowing its return only after OpenAI, the company behind ChatGPT, agreed to implement measures addressing the regulator’s concerns.
