GPT-5 And Privacy by Design: Does OpenAI Care?
Most people missed it, but there was a shocking disregard for privacy by design in yesterday's GPT-5 launch (which suggests that perhaps OpenAI really doesn't care) | Edition #224
👋 Hi everyone, Luiza Jarovsky here. Welcome to our 224th edition, now reaching over 72,100 subscribers in 170 countries. It's great to have you on board! To upskill and advance your career:
AI Governance Training: Apply for a discounted seat here
Learning Center: Receive free AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read in AI and beyond
🎓 Join the 24th cohort in October
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, I invite you to join the 24th cohort of my 15-hour live online AI Governance Training, starting in October.
Cohorts are limited to 30 people (the September cohort sold out a month early), and over 1,300 professionals have already participated. Many described the experience as transformative and an important step in their career growth. [Apply for a discounted seat here].
GPT-5 And Privacy by Design: Does OpenAI Care?
Yesterday, OpenAI launched the much-anticipated GPT-5.
According to the company, it is the fastest and most useful AI model it has ever developed, leading to the deprecation of all previous models and becoming the new default in ChatGPT (now used by over 700 million users worldwide).
Many are calling it OpenAI's iPhone moment, in which the company is redefining the AI industry and setting new standards.
I watched the entire live stream of the model launch, and I have a lot to say about it, particularly regarding privacy, ethics, agency, and legal challenges in general. I will publish another edition about the topic next week (stay tuned!).
Today, I want to focus on one of the announced features and the company's disregard for privacy by design.
-
While announcing GPT-5's new agentic capabilities, Christina (screenshot above) said that OpenAI's aspiration is to let ChatGPT get to know users more and more over time and understand what is meaningful to each user.
She said that starting next week, some users will be able to give ChatGPT access to their Gmail and Google Calendar, and then asked ChatGPT to "help her plan her schedule for tomorrow."
Pay attention to the details: she mentioned that she has been using this feature every day to help get her life together, showcasing an extremely risky use case live to millions of people.
She said she had already given ChatGPT access to her Gmail account and calendar, and showed on screen her private information, including the need to confirm a dentist appointment and an unanswered email.
Now, my comments:
First, this is a great example of AI agents' privacy risks, as every new permission you grant to an agent (in this case, permission to access your calendar and email) exponentially increases your privacy risk. (You can read my summary of privacy and security risks here).
Second, this is a risky use case, and it was irresponsible for OpenAI to broadcast it this way, as if it were the typical or desired type of behavior.
If OpenAI wants its alleged mission of “ensuring that AGI benefits all of humanity” to sound credible, it must start with the basics: ensuring that all its AI models respect ethics and law, including privacy.
Christina's schedule was tailored for the live stream. However, other people's schedules are real, and there are real risks of privacy leaks, including location, financial information, and other sensitive data.
Third, OpenAI was extremely inconsistent. Less than a month ago, when announcing ChatGPT's agentic capabilities, Sam Altman posted on X:
"There is more risk in tasks like 'Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow up questions'. This could lead to untrusted content from a malicious email tricking the model into leaking your data.
We think it’s important to begin learning from contact with reality, and that people adopt these tools carefully and slowly as we better quantify and mitigate the potential risks involved. As with other new levels of capability, society, the technology, and the risk mitigation strategy will need to co-evolve."
Well, yesterday, OpenAI did the exact opposite. It announced GPT-5's agentic capabilities by promoting a risky agentic use case. The same one Sam Altman told people to avoid.
When the CEO says he cares about privacy in a PR-tailored post, then launches a product promoting the precise risky use case he told people to avoid, it becomes clear that what he writes online is scripted or legally reviewed for PR purposes, but does not necessarily reflect OpenAI's plans.
It makes me wonder: what are OpenAI's plans? Can we trust OpenAI?
I will write more about that and GPT-5's launch next week.
I saw the saw thing with Perplexity’s “Comet” which kept asking for access to my Gmail and contacts. A grizzled old YouTuber I was watching pointed out this is part of the company’s strategy to get access to your “everything,” so they can “sell your data.” They will charge you $200 a month for an agent and turn around and sell off all that data and information about your private life.
I can plan my own dental appointments. Seriously. It’s not rocket science.
Great points and 💯 agree it’s gonna be fun in the future #not