Amazon needs a new privacy culture
Plus: brain implants & brain privacy
🔥 Brain implants & brain privacy: Neuralink got FDA approval
For those not familiar, Neuralink is one of Elon Musk's companies. Its stated mission is to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.” As described on their website, a special surgical robot inserts a “cosmetically invisible” implant in the patient. This implant is a brain-computer interface that allows the patient to control a computer or mobile device solely with their thoughts. There are various applications for this kind of device, such as helping paralyzed people recover some communication skills, restoring motor, sensory, and visual functions, and treating neurological disorders. You can read more about the science behind it here.

But not only that: according to Musk, “it’s kind of like a Fitbit in your skull with tiny wires,” and it could give people “superpowers” such as replaying memories, learning more quickly, and controlling technology with the brain. So it looks like their plan is to eventually make this device for everyone, and I can see technophiles lining up to get an implant to play video games with their thoughts. Musk has already said that he will get an implant himself when it is available.

After earlier rejections, last week the US Food and Drug Administration (FDA) gave Neuralink official authorization to start human trials, so it may not take long for these devices to become commercially available. On the topic of neurotech devices like this one, Prof. Farahany's book on brain privacy and cognitive liberty is a must-read (my live session with her on cognitive liberty and privacy was watched by 1,500+ people; you can watch or listen to it).

I can see how brain-computer devices may revolutionize the lives of people who have health issues, and we should absolutely celebrate innovations like this.
At the same time, it looks like there are bigger plans to make it a new trend for those who can afford it: get an implant, connect your brain to a computer, and boost memory, learning, and other cognitive skills. In short, become a superhuman. In any scenario, current advancements in neurotechnology make clear that brain privacy and cognitive liberty, as raised by Prof. Farahany, are of utmost importance. Whenever a neurotech device is officially authorized, there should be a robust regulatory framework to make sure that, from the early stages of research and trials onward, on an ongoing basis: a) data protection laws and data subjects' rights are being observed; b) ethics, human rights, and all applicable legal frameworks are being respected; c) everyone exposed to the technology is aware, informed, and has their autonomy respected.
🔥 Regulate us, but not really: my impressions on today's OpenAI event
Today I attended an in-person event with Sam Altman and Ilya Sutskever (OpenAI's CEO & Chief Scientist, respectively) at Tel Aviv University, and these were my impressions:

a) a disproportionate amount of time was spent talking about the risk of a super-powerful and perhaps dangerous Artificial General Intelligence (AGI). It felt like part of a PR move to increase the hype and interest in AI-based applications;

b) there was no mention of OpenAI's role or plans in mitigating existing AI-related problems, such as bias, disinformation, discrimination, deepfakes, non-compliance with data protection rules, etc. It looked like talking about a future AGI was a way to distract from reality;

c) I was in disbelief when Sam Altman mentioned that he uses ChatGPT as a “replacement for Wikipedia,” researching topics that he is interested in (in the context of praising ChatGPT's educational potential). ChatGPT is not a search engine: it provides no sources, context, or accountability; it serves information digested from the internet, including bias and misinformation (with no promotion of fact-checking). In my view, it is irresponsible to encourage people to use it as a research tool;

d) when mentioning regulation, Sam focused on the claim that it would be bad to slow down innovation. My personal opinion is that they are heavily lobbying against the current draft of the European Union's AI Act, as it does not fit their agenda. Their "wish for regulation" is probably regulation on their terms, which, in my guess, would not effectively regulate their current activities or protect against current harms;

e) the conversation felt sensationalist to me. They know people are in awe of AI and see them as rock stars, so answers were tailored to feed the hype;

f) I missed concrete signs of accountability; of social, ethical, and legal awareness; and of a genuine & realistic desire to behave responsibly and preventively now;

g) Sam and Ilya are business-savvy and well-connected computer scientists who speak optimistically, have ambitious plans for the future, use grandiose words, and behave fearlessly. They are good at pitching OpenAI. At the same time, they hold enormous power & influence, which can be used in negative ways;

h) I was hoping that the next generation of tech leaders (after Zuckerberg) would behave in a way that is much more accountable, responsible, and socially/ethically/legally grounded. It looks like I was wrong.
🔥 New article about Privacy Enhancing Technologies (PETs)
I have recently come across Katharine Jarmul's excellent article for Martin Fowler's website: Privacy Enhancing Technologies: An Introduction for Technologists. She covers differential privacy, distributed & federated analysis and learning, and encrypted computation. For most privacy professionals who do not have an engineering background, PETs are a lesser-known field that deserves much more attention and integration into existing privacy compliance frameworks. I like how Katharine's work is accessible, multi-disciplinary, and realistic about what engineering can accomplish. In her own words: “privacy is much more than technology. It's personal, social, cultural, and political. Applying technology to societal problems is often naive and even dangerous. Privacy technology is one tool of many to help address real inequalities in access to privacy and power in the world. It cannot and does not address the core issues of data access, surveillance, and inequalities reproduced or deepened by data systems. These problems are multi-disciplinary in nature and require much expertise outside of our technical realm.” Well said. And now, I will advocate for the other side: legal and policy professionals should become much more familiar with PETs and with how engineering can be integrated into, and support, compliance efforts. I very much like it when professionals, whether from the legal or the tech side, build bridges to communicate and integrate privacy with the other side. Privacy compliance must involve an interdisciplinary set of efforts from legal, social, and technical points of view. The fewer silos and barriers we have between these fields and professionals, the better.
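To make one of these PETs concrete for readers on the legal side: differential privacy, which Katharine's article covers, works by adding carefully calibrated random noise to a query result so that no individual's presence in the dataset can be inferred. Below is a minimal illustrative sketch of the classic Laplace mechanism for a count query; the function names and toy data are my own example, not taken from her article, and a real deployment would use a vetted library rather than hand-rolled code.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Return a differentially private count of records matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes
    the result by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count people over 40 in a toy dataset, with a privacy budget
# of epsilon = 1.0. The published figure is the noisy count, not the truth.
ages = [23, 45, 67, 34, 52, 41, 29, 60, 38, 55]
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

The key compliance-relevant idea is the trade-off knob: a smaller epsilon means more noise and stronger individual protection, but less accurate statistics, which is exactly the kind of decision that needs lawyers and engineers at the same table.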
🔥 Amazon needs a new privacy culture
Last week, the FTC published two decisions against Amazon. In the first, Amazon is charged with violating the Children's Online Privacy Protection Act (COPPA) by keeping and using Alexa voice recordings and geolocation data for years. In the second, the FTC charged Ring (owned by Amazon) with compromising customers’ privacy by allowing illegal surveillance by employees. In my view, there will probably be more and bigger fines soon, and Amazon needs a new privacy culture. Let me explain: