Major Tech Giants Fail AI Privacy Test

A comprehensive new study has revealed significant privacy concerns with artificial intelligence platforms from major technology companies, with Meta AI ranking as the most privacy-invasive.

The research, conducted by privacy company Incogni from May 25 to 27, 2025, evaluated nine popular AI platforms against 11 privacy criteria to determine which large language models and generative AI tools pose the greatest risks to user data.

The researchers analyzed each platform's privacy policy, examined mobile app data practices on both iOS and Android, and tested transparency measures across each service. The findings highlight a stark divide between the privacy practices of major tech corporations and those of smaller, specialized AI companies.

The study found that AI platforms developed by the world's largest technology companies consistently performed poorly in privacy assessments. Meta AI ranked as the worst performer, followed by Google's Gemini and Microsoft's Copilot. These platforms were criticized for complex privacy policies, extensive data collection practices, and limited user control over personal information.

"Platforms developed by the biggest tech companies turned out to be the most privacy invasive," the study noted, pointing to concerning practices around data sharing with third parties and unclear opt-out mechanisms.

Several major platforms, including Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to let users opt out of having their prompts used to train AI models, a significant concern for anyone seeking to maintain control over their data.

French AI Company Takes Top Privacy Spot

In contrast, French company Mistral AI's Le Chat earned the highest privacy ranking, praised for its limited data collection and strong performance on AI-specific privacy criteria. The platform scored well despite some transparency limitations, showing that a smaller player can outperform the tech giants on privacy.

OpenAI's ChatGPT claimed second place, earning recognition for having the clearest privacy policy and most transparent approach to explaining how user data is handled. The platform performed particularly well in allowing users to understand what happens to their information and providing clear opt-out options for model training.

xAI's Grok rounded out the top three, though researchers noted concerns about transparency and data collection practices. Anthropic's Claude, while performing similarly to Grok in overall scoring, raised additional concerns about how its models interact with user data.

The evaluation drew on privacy policies and related legal documents, the data practice disclosures in Apple's App Store and Google Play Store listings, and each platform's transparency measures. Researchers also investigated whether AI companies respect robots.txt files, the technical signals website owners use to prevent automated data collection.
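As background, a robots.txt file is a plain-text file served at a site's root whose directives tell automated crawlers which paths they may visit. Below is a minimal sketch, using only Python's standard library, of how such directives can be written and checked; GPTBot, Google-Extended, and ClaudeBot are the user-agent tokens OpenAI, Google, and Anthropic publicly document for their crawlers, and example.com is a placeholder domain.

    from urllib import robotparser

    # A sample robots.txt of the kind a site owner might publish to keep
    # AI training crawlers out while leaving ordinary traffic alone.
    sample_lines = [
        "User-agent: GPTBot",
        "Disallow: /",
        "",
        "User-agent: Google-Extended",
        "Disallow: /",
        "",
        "User-agent: ClaudeBot",
        "Disallow: /",
    ]

    parser = robotparser.RobotFileParser()
    parser.parse(sample_lines)

    # The AI crawler tokens are blocked; a generic browser agent is not.
    for agent in ("GPTBot", "Google-Extended", "ClaudeBot", "Mozilla/5.0"):
        verdict = "allowed" if parser.can_fetch(agent, "https://example.com/") else "disallowed"
        print(f"{agent}: {verdict}")

Note that robots.txt is purely advisory: nothing technically stops a crawler from ignoring it, which is why compliance depends entirely on the crawler operator's good faith.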

Mobile Apps Present Additional Privacy Risks

The study also examined the privacy practices of mobile applications, finding that smartphone apps often collect and share more personal data than their web counterparts. Meta AI's mobile app was flagged for collecting "essentially all described data points" and sharing significant amounts with third parties.

Notably, several AI apps collect sensitive location data and phone numbers, with some platforms sharing photos and app interaction data with third-party companies. Le Chat again performed best in mobile privacy protection, followed by Pi AI and ChatGPT.

Complex Privacy Policies Hinder Informed Consent

One of the study's key findings was a widespread lack of transparency in AI privacy practices. Researchers noted that many platforms make it unnecessarily difficult for users to understand how their data is being used.

"All the analyzed privacy policies require a college-graduate level of reading ability to understand," the study found, highlighting a significant barrier to informed user consent.

The research revealed that companies with multiple products, such as Google, Microsoft, and Meta, often use a single, sprawling privacy policy to cover all of their services, making it nearly impossible for users to pin down AI-specific privacy practices.
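The study does not say which readability formula underpins its college-graduate finding, but grade-level estimates like it are commonly computed with measures such as Flesch-Kincaid. A minimal sketch, assuming the third-party textstat package and a hypothetical locally saved policy file:

    # Estimate the reading grade level of a privacy policy.
    # Assumes `pip install textstat`; Flesch-Kincaid is an illustrative
    # stand-in, since the study's actual metric is not specified.
    import textstat

    with open("privacy_policy.txt", encoding="utf-8") as f:  # hypothetical file
        policy_text = f.read()

    grade = textstat.flesch_kincaid_grade(policy_text)
    print(f"Flesch-Kincaid grade level: {grade:.1f}")
    # Scores of roughly 13 and above correspond to college-level reading.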

Perhaps most concerning, the study found that all investigated AI platforms collect user data from "publicly accessible sources," which could include personal information scraped from the internet. This practice raises questions about consent and user awareness, as individuals may unknowingly have their personal data incorporated into AI training datasets.

The research also uncovered reports that several major AI companies, including OpenAI, Google, and Anthropic, have failed to honor the crawling restrictions website owners publish in robots.txt files.

Based on the findings, privacy experts recommend that users seeking to minimize privacy risks consider smaller, specialized AI platforms over those offered by major tech companies. The study particularly highlighted the importance of clear, searchable support documentation that lets users easily find answers to privacy-related questions.

For users who choose to stay with platforms from major tech companies, the research emphasizes the importance of reviewing privacy settings and opting out of having their data used for model training wherever possible.

The full study and detailed privacy rankings can be found at: https://blog.incogni.com/ai-llm-privacy-ranking-2025/