📈 7 Reasons why social media disinformation is on the rise
Plus: case study on the DSA's approach to disinformation
👋 Hi, Luiza Jarovsky here. Welcome to the 75th edition of The Privacy Whisperer, and thank you to 80,000+ followers on various platforms. Read about my work, invite me to speak or join your panel, tell me what you've been working on, or just say hi here.
🌎 If you enjoy The Privacy Whisperer and think that others might find it helpful, recommend it broadly and join the leaderboard. Thank you!
✍️ This newsletter is fully written by a human (me), and illustrations are AI-generated.
This week's edition is sponsored by Neutronian:
Is your brand or client advertising on sites that do not disclose if they collect or use location data? Are they doing the bare minimum when it comes to consumer privacy laws? What about the various data partners you work with? Get a quick gut check on your partners’ data privacy compliance with Neutronian Data Privacy Scores. Get Started Today.
📈 7 Reasons why disinformation is on the rise
Disinformation is not a new phenomenon, but it became far more prominent with the popularization of social media.
I've been noticing changes in how false information spreads in recent months, and since the war broke out last week, it has become clear to me that something has changed: misinformation today travels faster, goes viral more easily, and there are additional incentives for the people who fabricate it.
Below are 7 factors that are making disinformation worse:
Algorithmic feeds are the default
Chronological feeds show posts from people you follow, newest first, while algorithmic feeds show algorithmically curated posts that the social network believes will capture your attention.
The default on most social networks today is the algorithmic feed; the chronological option is either unavailable or deliberately de-emphasized. Algorithmic feeds end up fostering filter bubbles and algorithm-led personalization: you see not what you want to see but what the social network wants you to see.
Algorithmic feeds help disinformation spread faster: fake news, especially alarming and sensationalist stories, quickly goes viral and, through algorithmic feeds, reaches more people, even those who do not follow the accounts that initially spread it.
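To make the contrast concrete, here is a toy sketch (hypothetical posts and a made-up engagement score, not any platform's real ranking algorithm) of how a chronological feed and an engagement-ranked feed order the same content:

```python
from datetime import datetime, timedelta

now = datetime(2023, 10, 16, 12, 0)

# Hypothetical posts: whether you follow the author, the platform's
# predicted engagement for you, and when the post was published.
posts = [
    {"author": "friend",        "followed": True,  "engagement": 0.02, "posted": now - timedelta(hours=1)},
    {"author": "news_outlet",   "followed": True,  "engagement": 0.10, "posted": now - timedelta(hours=5)},
    {"author": "unknown_viral", "followed": False, "engagement": 0.95, "posted": now - timedelta(hours=3)},
]

# Chronological feed: only accounts you follow, newest first.
chronological = sorted(
    (p for p in posts if p["followed"]),
    key=lambda p: p["posted"],
    reverse=True,
)

# Algorithmic feed: any post the platform can show, ranked by predicted
# engagement -- a sensational post from an account you don't follow
# jumps straight to the top.
algorithmic = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["author"] for p in chronological])  # ['friend', 'news_outlet']
print([p["author"] for p in algorithmic])    # ['unknown_viral', 'news_outlet', 'friend']
```

The only difference between the two feeds is the sort key, yet it changes both who reaches you and whether following someone matters at all.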
Since TikTok's global rise, many people have questioned the "secret algorithmic sauce" that makes people - especially the young - addicted to it (read my full TikTok analysis).
Part of TikTok's appeal is that anyone can suddenly go viral, even with a low follower count. There are rumors that other social networks, such as X (Twitter), have been experimenting with "TikTok-like" algorithms to stimulate virality. Personally, in the last few months, I have been shown viral posts on X (Twitter) whose authors and topics are unrelated to my preferences (which the platform can infer from my behavior), and this seems connected to those efforts.
This is another incentive to post fake news: alarming and blatantly false information captures viewers' attention and goes viral. With a virality-fostering feed, it can reach an enormous number of users, even those whose interests are totally unrelated.
Lack of algorithmic transparency
A well-known problem observed by those who study social networks is the lack of algorithmic transparency and accountability. Through algorithmic feeds, we are shown the posts that the social network's algorithm chooses, and research has shown that this algorithmic "social engineering" can lead to increased polarization, hate speech, and all sorts of real-world harm.
Despite algorithms' impact on individuals and societies, we have no transparency into how they work or which posts are being targeted at us. We cannot decide that we do not want to be exposed to certain types of content, and we have no granular control over what we see online.
This lack of algorithmic transparency also incentivizes disinformation and misinformation. Fake news travels freely, and people are exposed to it, as there is no transparency and no mechanism for users to proactively filter it out.
Lack of accountability
People who carelessly post misinformation, and bad actors who purposefully spread fake news, have little to worry about, as there are barely any accountability mechanisms in place.
People are not asked or forced to edit or delete their posts when they are proven fake or misleading. Media channels are not required to issue an apology when they fail to fact-check information and publish it anyway.
AI-generated deepfakes
In recent years, the AI-deepfake problem has become much worse, especially with the current generative AI boom, as I have been discussing recently.
Today it is easy, cheap, and fast to create a high-quality AI-based deepfake. Bad actors can promptly create a fake image or video supporting whatever false claim they want to spread, making it even more credible to broader audiences.
We still do not have widespread mechanisms to detect and tag AI-generated fake content, and building them will be an essential step in containing the spread of disinformation.
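One family of mechanisms being explored is cryptographic provenance: the tool that generates the content attaches a verifiable tag, and platforms check the tag before labeling the content. The sketch below is only an illustration of the shape of the idea, using a shared-secret HMAC from Python's standard library; real proposals (such as the C2PA content-credentials standard) use public-key signatures and structured metadata, and the key and content here are entirely hypothetical:

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # hypothetical; real systems use asymmetric keys

def tag_ai_content(content: bytes) -> str:
    """The generator attaches a provenance tag when it emits AI content."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, tag: str) -> bool:
    """The platform checks the tag before labeling content as AI-generated."""
    expected = hmac.new(SECRET, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

image = b"...synthetic image bytes..."
tag = tag_ai_content(image)
print(verify_tag(image, tag))         # True  -> can be labeled as AI-generated
print(verify_tag(image + b"x", tag))  # False -> content was altered, tag no longer matches
```

The hard part in practice is not the cryptography but adoption: tags only help if generators attach them and platforms verify them at scale.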
Monetization of user content
Various social networks have monetization programs that let users make money from the content they post. The payout is usually proportional to the number of impressions or the watch time their content receives, that is, roughly, how many times it was seen.
These seemingly beneficial programs, which compensate users and creators for their work, can also backfire. Users will chase viral content to get more views and make more money. As there are no checks or controls on falsehoods, users are incentivized to create and post alarming or shocking fake news for the sake of virality and money.
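A back-of-the-envelope calculation shows the incentive structure (the revenue-per-mille rate here is hypothetical, not any platform's actual program):

```python
def payout(impressions: int, rpm: float = 0.50) -> float:
    """Creator payout at a hypothetical revenue rate per 1,000 impressions."""
    return impressions / 1000 * rpm

# A carefully researched post vs. a sensational fabricated one:
print(payout(20_000))     # 10.0   -- modest reach, modest payout
print(payout(5_000_000))  # 2500.0 -- viral reach, 250x the payout
```

Because the payout tracks reach and nothing else, a fabricated post that goes viral out-earns an accurate one by orders of magnitude.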
Lack of a “fact-check culture” on social media
Social media's metrics and incentives today are follower counts, likes, comments, and shares. Sharing relevant, newsworthy, and legitimate content may matter too, but the whole system is built around the metrics above.
As a consequence:
whatever someone with a large following posts will resonate much more than if it were posted by someone with a low follower count (even if the first is posting fake news and the second is posting a great research paper);
whatever receives thousands of likes or reposts will be seen by many people as “true or legitimate,” even if it's a lie. Whatever has no likes and reposts gets no visibility.
This is how social networks are built, and increasingly my conclusion is that these are toxic, algorithm-driven systems whose structure and incentives foster fake news and harm individuals and society.
We must remember that social media disinformation has very high stakes, especially in the context of wars and social conflicts, where all sorts of real-world harm can happen as a consequence.
As I wrote on LinkedIn this week, it's more important than ever to:
only follow, read, and share posts from people whose credibility you trust;
even if you trust the person/organization's credibility, double or triple-check before posting, commenting, or sharing.
In this week's case study (below), I discuss the Digital Services Act (DSA)'s approach to disinformation and why it will probably be insufficient.
📌 Job Opportunities
Looking for a job in privacy? Check out our privacy job board and sign up for the biweekly alert.
🖥️ Privacy & AI in-depth
Every month, I host a live conversation with a global expert. I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
🎓 Last AI & Privacy Masterclass of 2023 [Register here]
Our 90-minute live Masterclass will help you navigate current challenges in the context of privacy & AI. I'm the facilitator, and I hope to make it an interactive opportunity to discuss risks, unsolved privacy issues, and regulation. After we finish the live session, you'll receive a quiz, additional reading material, and a certificate. Most people get reimbursed by their companies; let me know if you need assistance with that. Read more and register here. Looking forward to meeting you on October 23 at 5pm UK time.
📖 Join our AI Book Club
🔎 Case study on the DSA's approach to disinformation
This week, I discuss the Digital Services Act (DSA)'s role in dealing with disinformation and why it might be insufficient to tackle the problem.
On October 10, Thierry Breton, the EU Commissioner for Internal Market, sent the following letter to Elon Musk: