👋 Hi, Luiza Jarovsky here. Welcome to the 76th edition of this newsletter, and thank you to the 80,000+ people who follow my work across platforms. Read about my work, invite me to speak, tell me what you've been working on, or just say hi here.
📢 I am happy to announce that The Privacy Whisperer is now Luiza's Newsletter. It's the same newsletter, and I will keep discussing the same topics, which sometimes go beyond privacy. The new name reflects this broader scope. I hope you continue with me on this journey of reimagining technology and building a more transparent, ethical, and fair future.
✍️ This newsletter is fully written by a human (me), and illustrations are AI-generated.
This week's edition is sponsored by Didomi:
Explore the impact of Privacy UX on analytics with Didomi’s upcoming webinar on November 7 at 5pm CET. Hosted by Dana DiTomaso and Jeff Wheeler, this discussion will give you a better understanding of the evolving digital landscape, the progression of data privacy regulations, and their impact on analytics and user experience. Secure your spot.
💭 Reimagining social networks
This week, dozens of US states sued Meta over kids’ mental health, privacy & consumer law issues. This is a very important lawsuit: to my knowledge, some of the issues raised are new to US courts and could lead to game-changing outcomes.
From page 11 to page 80, the lawsuit raises issues that are mostly related to children's well-being, and each of them deserves the reader's attention. I find it fascinating that the lawsuit tackles issues such as:
the prioritization of engagement over safety
algorithms that encourage compulsive use
dark patterns that are harmful to kids’ well-being (such as “likes” and haptic notifications)
visual filters that promote eating disorders and body dysmorphia
features that foster addiction
To be fair, most social networks today have similar structures, UX designs, algorithmic patterns, and features, so this lawsuit is really about the type of social networks we want and what should and shouldn't be allowed on these platforms.
In my TikTok article, I discussed many of these issues (which, from my point of view, are even worse on TikTok), and we can comfortably say that most of them have been ubiquitous for at least 10 years.
Until now, there had been no comprehensive lawsuit forcing social networks to rethink their models, protect young people, or consider how their algorithms impact mental health and well-being - regardless of how effective those algorithms are at increasing engagement, time spent on the platform, and advertising revenue.
From my perspective, one of the most important changes needed to make social networks better and healthier is to stop optimizing for engagement. Why? Because optimizing for engagement usually brings downsides such as the following (I sketch the underlying mechanism in code after the list):
using UX dark patterns to keep people glued to the screen and obsessed with social validation (likes, comments, shares);
using algorithmic feeds to push viral content from accounts people don't follow - content that is sometimes sensationalist and harmful;
creating a sense of urgency and competition (for engagement) among users;
taking a “soft approach” to disinformation, misinformation, and harmful content (so that more content can spread faster);
and so on.
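To make this concrete, here is a deliberately simplified, hypothetical Python sketch of the mechanism. All of the names, signals, numbers, and weights below are my own invention for illustration - this is not Meta's or any platform's actual ranking code. It shows how a feed scored purely on engagement surfaces sensationalist viral content first, and how adding even a crude well-being penalty - one possible meaning of “recalibrating the algorithm” - changes what users see:

```python
# Hypothetical, deliberately simplified sketch - NOT any platform's real
# ranking code. It only illustrates the argument: when the feed's objective
# is pure engagement, sensationalist viral content rises to the top; adding
# a well-being penalty changes what gets shown first.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float      # illustrative engagement signal
    predicted_watch_time: float  # illustrative, in minutes
    harm_risk: float             # 0..1, assumed output of a safety classifier

def engagement_score(post: Post) -> float:
    """The objective the lawsuit criticizes: attention, and nothing else."""
    return post.predicted_clicks + 2.0 * post.predicted_watch_time

def recalibrated_score(post: Post, harm_weight: float = 150.0) -> float:
    """Same engagement signals, but harmful content is heavily penalized,
    so virality alone can no longer carry a risky post to the top."""
    return engagement_score(post) - harm_weight * post.harm_risk

posts = [
    Post("Sensationalist viral clip", predicted_clicks=90.0,
         predicted_watch_time=8.0, harm_risk=0.8),
    Post("Friend's vacation photos", predicted_clicks=20.0,
         predicted_watch_time=2.0, harm_risk=0.0),
]

print(max(posts, key=engagement_score).title)    # Sensationalist viral clip
print(max(posts, key=recalibrated_score).title)  # Friend's vacation photos
```

Real ranking systems use thousands of signals, and the hard part is defining and measuring “harm” reliably - but the structural point stands: the objective function is a design choice, and it is exactly the kind of thing regulation can constrain.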
On the other hand, moving away from engagement optimization is not easy, as most social networks follow the “free with ads” model: advertisers pay the bill, and advertisers want people paying attention to and clicking on their ads. Possible solutions to the “optimizing for engagement” problem could be:
changing how the “free with ads” model works so that ad placement does not depend on growing engagement, allowing platforms to impose stronger safeguards
replacing the “free with ads” model with something else
Realistically, the first option above looks easier to implement than the second, especially if regulations become more specific and stricter about algorithmic transparency, accountability, and externalities.
Allowing social networks to make massive profits - mostly from their ad networks - regardless of their impact on mental health and well-being is a regulatory choice. Optimizing for engagement has a price, and I think we have already realized that, as a society, we should not accept it. Social networks should bear the burden and be forced to recalibrate their algorithms and redesign their interfaces. That's why regulation exists, and we should regulate social networks much more strictly. I hope that this lawsuit against Meta will finally turn the tide in favor of safer and healthier alternatives.
For those who think it won't happen: in the field of data protection, the GDPR caused a major paradigm shift with stricter requirements and obligations. The same could happen in the context of algorithmic accountability and the protection of mental health and well-being, especially kids'.
📌 Job Opportunities
Looking for a job in privacy? Check out our privacy job board and sign up for the biweekly alert.
🖥️ Privacy & AI in-depth
Yesterday, 1,636 people joined my session with Katharine Jarmul about privacy engineering. Every month, I host a live conversation with a global expert - I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
🎓 New Masterclasses coming up next year
Do you want to dive deeper into dark patterns, privacy design, AI manipulation, deepfakes, disinformation, and other topics in privacy, tech & AI? Get ready for our 2024 Masterclasses: join the waitlist and be notified when they launch.
📖 Join our AI Book Club
We are now reading “Atlas of AI” by Kate Crawford, and the next AI book club meeting will be on December 14, with six book commentators. To participate, register here.
🤖 ChatGPT-powered robot dog
Are you familiar with the ChatGPT-powered robot-dog tour guide? The future is here. And the time to implement AI safeguards is now.