AI has changed the internet we used to know
The most important topics in privacy & AI | The Privacy Whisperer #51
Welcome to the 51st edition of The Privacy Whisperer, helping you stay up to date with the most important topics in privacy & AI.
Today's newsletter is sponsored by Ubiscore:
Do you want to ensure that your business partners respect privacy regulations? Look no further than Ubiscore's public privacy database, which offers independent compliance ratings for over 5,000 startups. The database is expanding to include the entire DACH region, Italy, and other GDPR countries, providing a comprehensive view of how companies handle personal data. With just a few clicks, you can determine if your potential business partners or vendors comply with privacy regulations. Instead of taking their word for it, consider our evidence. Take action and assess your business’ or competitors’ privacy score now!
🔥 What is happening in privacy & AI
Below is my curation of and commentary on the four most relevant privacy & AI developments from the last few days:
#1. Smart cars have serious privacy issues
As the market for “smart cars” continues to expand, privacy and security issues are to be expected. A car that is fully connected to the internet and uploads real-time location, internal and external footage, and other sensitive data requires strong privacy by design and security measures. Recently, a Reuters report based on interviews with nine former Tesla workers showed that Tesla employees shared videos and images recorded by Tesla’s car cameras in an internal messaging system. The images captured mundane incidents as well as sensitive and violent episodes. Last week, it was Toyota's turn: Toyota Japan apologized for leaving the data of 2.15 million customers exposed on the internet for a decade. I recently wrote about Tesla's privacy issues. For additional resources on privacy in the context of cars, check out Andrea Amico's work.
#2. The AI Act is coming, and there are more AI systems banned
Last week, the EU's Internal Market Committee and the Civil Liberties Committee approved the Artificial Intelligence Act (AI Act). The next step is Plenary adoption, expected to happen in June. For those who are not familiar with the topic, the AI Act takes a risk-based approach, with four risk tiers: unacceptable risk (forbidden AI practices), high risk, limited risk, and minimal risk. According to the European Parliament's press release, the members of the EU Parliament substantially amended the list of banned AI systems. We expect the AI Act to trigger waves of global regulatory change similar to what happened in privacy with the arrival of the General Data Protection Regulation (GDPR), with other countries following suit and adopting similar approaches. Anyone developing AI or working in policy, compliance, or governance should keep track of the AI Act's latest amendments and make sure that their business practices are not banned and that they follow all rules and obligations.
#3. OpenAI's ChatGPT plugins
Last Friday, OpenAI announced that ChatGPT Plus users would have access to 70+ plugins, including the ability to retrieve content from the internet (not only from the dataset used to train ChatGPT). Users will be able to install as many plugins as they want but use only three at a time, and the plugins will help people execute more varied tasks in a more specialized way. This Mashable article announced that “the internet is not ours anymore,” to the point that we might start navigating a “meta-web in which you rarely actually browse the web; instead, you talk to a bot which goes to the web to fetch the things you need.” It looks like there will be massive changes to the way the internet works, but I am not sure they will be positive. To illustrate, if a chatbot can analyze the internet in real time and output the “right answer,” it is not clear to me: a) what the incentives for authentic content production will be; b) how information will be fact-checked; c) how we will distinguish and credit original sources; d) how authors and creators will be compensated; e) how we will be able to distinguish human-made from “AI-made” answers, and so on. My forecast: misinformation, polarization, online hate, scams, and the many other problems that got worse with social media will get even worse now.
#4. Google's Bard has finally arrived at the AI party
If you have not tried it yet, say hello to Bard - Google's equivalent of ChatGPT. Last week, at Google I/O - its yearly conference - the company announced its new technologies and products. As expected, the main topic was… AI (and I cannot deny that this was the funniest meme last week). Google announced its new large language model (LLM), PaLM 2, trained on more than 100 languages, with improved logic, common-sense reasoning, mathematics, and coding skills. Since Microsoft has integrated AI into Bing, one of the most anticipated announcements concerned how Google plans to integrate AI into its own search engine. They announced that they are experimenting with an AI-based “search generative experience”: when the user asks a question, the answer will be an AI-powered snapshot of “key information to consider,” with further links on the right side to dig deeper (which I imagine is where the information comes from). The AI race has definitely begun, and for Google, it might mean the disruption of its search-powered advertising business - its main source of revenue. It will have to fight hard against OpenAI/Microsoft and any other strong new competitors. The concerns I expressed above about this “new internet” also apply here, as do other concerns I have raised in recent months involving the replication of bias, reputational harm, privacy by design, and various other privacy issues.
I hope you enjoyed this new format of The Privacy Whisperer. If you have any suggestions, get in touch.
Best regards, Luiza Jarovsky