Today: Google, Microsoft, OpenAI & Anthropic have partnered
Plus: case study on TikTok's privacy UX shortfalls
👋 Hi, Luiza Jarovsky here. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
This week's newsletter is sponsored by Didomi:
Take your game monetization to new heights! Safeguard your AdMob revenue while adhering to privacy laws with Didomi for Gaming. As a Google-certified Consent Management Platform (CMP) that supports the Unity SDK, Didomi empowers game developers to display ads in compliance with Google's new guidelines for using its products. Request a demo.
🔥 Google, Microsoft, OpenAI & Anthropic have partnered
Today I learned from Lila Ibrahim - Chief Operating Officer at Google DeepMind - that Google has partnered with Anthropic, Microsoft, and OpenAI to launch the Frontier Model Forum, a “new industry body (that) will focus on the safe and responsible development of frontier AI models.” OpenAI and Google have also posted about it.
In Google's official blog post on the topic, they state that the objectives of the Forum are:
“Advancing AI safety research
Identifying best practices for the responsible development and deployment of frontier models
Collaborating with policymakers, academics, civil society, and companies
Supporting efforts to develop applications that can help meet society’s greatest challenges”
They state that this Forum is an attempt to support multilateral initiatives such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council, as well as the Partnership on AI and MLCommons.
The goals and the partnership seem positive, socially beneficial, and a step in the right direction. However, two aspects of this partnership catch my attention:
1. There is an extreme concentration of power and wealth among these companies, each of which has made multibillion-dollar investments in AI and has a direct interest in multiplying those investments. It is not clear how the governance of this Forum will work in practice (including transparency, participation, accountability, and oversight). It also remains to be seen how these companies will align their corporate incentive to generate profit with their publicized goal of serving the public interest.
2. This initiative follows a familiar pattern in which companies strengthen their lobbying efforts (as well as their marketing and product strategies) through “self-regulation” initiatives that let them set their own rules. Establishing best practices and helping to develop standards in this growing consumer-facing AI industry might be welcome at this point, but it is not enough: there must also be strong laws, regulations, fines, oversight, and enforcement. It's too early for any conclusions, but no self-regulatory initiative should be allowed to undermine legislative and regulatory efforts.
🔥 Meta is fined $20 million in Australia: avoid their mistake
Today, two of Meta's subsidiaries, Facebook Israel and Onavo Inc (developer of the now-discontinued Onavo Protect VPN app), were ordered by the Australian Federal Court to pay $10 million each, following proceedings instituted by the Australian Competition and Consumer Commission (ACCC). According to its media release:
“The Court declared that the two companies engaged in conduct liable to mislead the public in promotions for the Onavo Protect app by failing to adequately disclose that users’ data would be used for purposes other than providing Onavo Protect, including Meta’s commercial purposes.”
The language used to advertise the free VPN service Onavo Protect suggested that it would safeguard users' personal information (e.g., “use a free, fast and secure VPN to protect personal information”). In reality, the data it collected was used to benefit Meta's commercial activities.
According to ABC News, disclosures about how consumer data was being used were present in the Terms of Service and Privacy Policy; however, the way the product was marketed and the information presented directly to customers in Apple's and Google's app stores did not reflect those practices.
This case aligns with others I've discussed in this newsletter, such as the BetterHelp case ($7.8 million fine), in which the FTC used screenshots to show that the way a product was advertised - the message conveyed directly to the consumer - was inconsistent with the data practices occurring in the background.
What every company should learn here is that the UX and the language used to market a product, communicate with customers, and promote a service are themselves part of privacy compliance. A long privacy policy written by a team of lawyers for other lawyers is not enough: privacy culture must go beyond the legal department.
*On that topic, if you want to dive deeper into avoiding dark patterns and improving your company's privacy UX to prevent compliance issues, I am giving a Privacy UX masterclass in September; register here (limited seats - the July session is sold out).
🔥 The Brazilian Data Protection Authority investigates Threads
The Brazilian Data Protection Authority (Autoridade Nacional de Proteção de Dados) announced this week that, after a preliminary evaluation of Threads’ data practices in light of the Brazilian data protection law (LGPD), the Board of Directors decided that further investigation is needed.
They held a meeting with Meta's representatives, who clarified that, so far, there is no behavioral advertising within Threads (of the kind that happens on Instagram and Facebook). Meta's representatives also clarified that there was no active block by the Irish Data Protection Commission; rather, Meta opted not to launch in the EU due to concerns about compliance with the Digital Services Act (DSA) and the Digital Markets Act (DMA), as well as recent decisions by the Irish DPC and judgments of the Court of Justice of the European Union.
My view is that the Brazilian Authority is concerned with the internal data sharing between Threads, Facebook, and Instagram, and with Threads' unavailability in the European Union. Brazil's data protection law (LGPD) is largely inspired by the GDPR, and the fact that Threads is not available in the EU has set off alarm bells for the Brazilian authorities.
On the topic of Meta's privacy practices, make sure to listen to my 80-minute conversation with Max Schrems last week - more than 6,600 people have already watched it on LinkedIn, YouTube, or my podcast.
🔥 Case study: TikTok's privacy UX shortfalls
This week I analyze some of TikTok's user experience (UX) practices that have privacy relevance, showing that many companies still ignore data protection principles such as fairness, choice, and transparency. I start with UX features that affect transparency and user autonomy and then move to issues more typically related to privacy compliance.