Most people missed it, but there was a shocking disregard for privacy by design in yesterday's GPT-5 launch (which suggests that perhaps OpenAI really doesn't care) | Edition #224
I saw the saw thing with Perplexity’s “Comet” which kept asking for access to my Gmail and contacts. A grizzled old YouTuber I was watching pointed out this is part of the company’s strategy to get access to your “everything,” so they can “sell your data.” They will charge you $200 a month for an agent and turn around and sell off all that data and information about your private life.
I can plan my own dental appointments. Seriously. It’s not rocket science.
There is an egregious security concern that is 100 times worse than the very real privacy concern. AI that is reaching out to the open Internet to conduct searches & transactions on your behalf have no ability to handle fake websites that appear to be for example a shopping or information service. If you have an AI agent booking a restaurant table, or checking on your dry cleaning it has no way of discerning the bona fides of the site that it connects to. A site can easily serve a drive-by exploit to an AI agent that is running on your machine and take it over. As a software engineer, I am just waiting for security researchers to demonstrate this. To me it’s extremely obvious. But no one will listen until it’s a reality and then it will be too late. The argument will no doubt me that a human browsing the web can be similarly fooled. The reality is that a compromised AI agent could drain your bank account, install malware, and permanently harm your digital identity all in seconds. At least a human being generally needs to enter a password, or get their credit card out of their wallet. An AI agent with root level access to your life that has been compromised remotely could do untold harm before you have a chance to intervene.
I understand that this kind of overvigilance is a constitutive part of being a watchdog, but come on:
>to give ChatGPT access to their Gmail and Google Calendar, and then asked ChatGPT to "help her plan her schedule for tomorrow."
and
>'Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow up questions'.
Are not the same use case. It implies different permission rights and outputs, so this characterisation: “the precise risky use case he told people to avoid” is literally false.
I potentially can think of charitable ways of reading it, but for this we would need actual explanation of mechanisms describing what can go wrong, under what conditions and how likely the different scenarios are to happen.
This would allow to make an educated practical judgement on the risks associated with AI use. Otherwise it’s fearmongenring that prevents people of forming educated opinions.
We've all accepted having a gmail account as the norm, giving Google our data. People took to Alexa and Siri, letting them own our voices. How is this different? We are given a choice. They're slowly pushing the line, people adopt it, and we normalize it. When you say ethics, what do you want people to understand by that? OpenAI has only one goal and ethics is not it. Most people when they hear ethics, they glaze over. I think we need to be specific and we need a discourse shift. Please read a couple of my posts.
Isn't this entirely consistent with what we've seen from Sam Altman, Tech Bros and Silicon Valley lately? Say one thing, do whatever you need to win, capture an audience and so on.
The addiction, regret and enshittification comes later.
I think the technology has enormous potential, but system running it is begging for a scene out of Robocop...
As you pointed out with Meta, we are in the times of "go fast and break things". The path to these features is all predictable, from classical iterative development methodologies tackling low-hanging fruits to competitive drive. The "openness" of Gmail is far from new, OpenAI is only using existing connection endpoints, the same ones Google's Gemini already uses, the same ones Claude could already access to, and OpenAI was already doing through Microsoft with Copilot...
Yes, what they can do with these accesses, and the data that goes with it, is highly questionable. And I don't even think that this will be that useful - Copilot still is a huge disappointment...
One in which social media posts and interviews by executives portray a sense of security and privacy forward thinking.
And the other that's all glam and glory around a certain tech while completely ignoring the security and privacy aspects. "Oh my look at these new features that will benefit you in this and that way!"
Both can exist but unfortunately the glam and glory reality seems to not only take precedence over the other, but is actually used to distract from the lack of privacy.
Claude and Gemini can do similar through integrations and workspace, respectively. Are the security concerns the same for those or are they handling it differently? My understanding is that they are, but the legalese of privacy policies is mind boggling.
I saw the saw thing with Perplexity’s “Comet” which kept asking for access to my Gmail and contacts. A grizzled old YouTuber I was watching pointed out this is part of the company’s strategy to get access to your “everything,” so they can “sell your data.” They will charge you $200 a month for an agent and turn around and sell off all that data and information about your private life.
I can plan my own dental appointments. Seriously. It’s not rocket science.
Great points and 💯 agree it’s gonna be fun in the future #not
There is an egregious security concern that is 100 times worse than the very real privacy concern. AI that is reaching out to the open Internet to conduct searches & transactions on your behalf have no ability to handle fake websites that appear to be for example a shopping or information service. If you have an AI agent booking a restaurant table, or checking on your dry cleaning it has no way of discerning the bona fides of the site that it connects to. A site can easily serve a drive-by exploit to an AI agent that is running on your machine and take it over. As a software engineer, I am just waiting for security researchers to demonstrate this. To me it’s extremely obvious. But no one will listen until it’s a reality and then it will be too late. The argument will no doubt me that a human browsing the web can be similarly fooled. The reality is that a compromised AI agent could drain your bank account, install malware, and permanently harm your digital identity all in seconds. At least a human being generally needs to enter a password, or get their credit card out of their wallet. An AI agent with root level access to your life that has been compromised remotely could do untold harm before you have a chance to intervene.
I understand that this kind of overvigilance is a constitutive part of being a watchdog, but come on:
>to give ChatGPT access to their Gmail and Google Calendar, and then asked ChatGPT to "help her plan her schedule for tomorrow."
and
>'Look at my emails that came in overnight and do whatever you need to do to address them, don’t ask any follow up questions'.
Are not the same use case. It implies different permission rights and outputs, so this characterisation: “the precise risky use case he told people to avoid” is literally false.
I potentially can think of charitable ways of reading it, but for this we would need actual explanation of mechanisms describing what can go wrong, under what conditions and how likely the different scenarios are to happen.
This would allow to make an educated practical judgement on the risks associated with AI use. Otherwise it’s fearmongenring that prevents people of forming educated opinions.
We've all accepted having a gmail account as the norm, giving Google our data. People took to Alexa and Siri, letting them own our voices. How is this different? We are given a choice. They're slowly pushing the line, people adopt it, and we normalize it. When you say ethics, what do you want people to understand by that? OpenAI has only one goal and ethics is not it. Most people when they hear ethics, they glaze over. I think we need to be specific and we need a discourse shift. Please read a couple of my posts.
Isn't this entirely consistent with what we've seen from Sam Altman, Tech Bros and Silicon Valley lately? Say one thing, do whatever you need to win, capture an audience and so on.
The addiction, regret and enshittification comes later.
I think the technology has enormous potential, but system running it is begging for a scene out of Robocop...
As you pointed out with Meta, we are in the times of "go fast and break things". The path to these features is all predictable, from classical iterative development methodologies tackling low-hanging fruits to competitive drive. The "openness" of Gmail is far from new, OpenAI is only using existing connection endpoints, the same ones Google's Gemini already uses, the same ones Claude could already access to, and OpenAI was already doing through Microsoft with Copilot...
Yes, what they can do with these accesses, and the data that goes with it, is highly questionable. And I don't even think that this will be that useful - Copilot still is a huge disappointment...
It's like we live in two realities.
One in which social media posts and interviews by executives portray a sense of security and privacy forward thinking.
And the other that's all glam and glory around a certain tech while completely ignoring the security and privacy aspects. "Oh my look at these new features that will benefit you in this and that way!"
Both can exist but unfortunately the glam and glory reality seems to not only take precedence over the other, but is actually used to distract from the lack of privacy.
Claude and Gemini can do similar through integrations and workspace, respectively. Are the security concerns the same for those or are they handling it differently? My understanding is that they are, but the legalese of privacy policies is mind boggling.