🧠 AI, privacy & neurorights
All you need to know about tech policy & regulation | Luiza's Newsletter #100
👋 Hi, Luiza Jarovsky here. Welcome to the 🎉 100th edition of this newsletter, read by 22,750+ subscribers in 125+ countries. I hope you enjoy reading it as much as I enjoy writing it.
🧠 AI, privacy & neurorights
➡️ Neurorights can be defined as “ethical, legal, social or natural principles of freedom or entitlement related to a person's cerebral and mental domain.”
➡️ They can also be seen as a bundle of rights that include, among others, brain privacy, cognitive liberty, and freedom of thought, as championed by Prof. Nita Farahany, who authored the book “The Battle for Your Brain” (read my article and watch our conversation about the topic).
➡️ It's a field situated at the intersection of law, ethics, and neurotechnology, and it has become even more important with the fast-paced development of AI-based neurotechnology (such as the brain chip developed by Elon Musk's company Neuralink).
➡️ This week, there were two important advancements in the field of neurorights:
➡️ First, the report "Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies," authored by Jared Genser, Stephen Damianos, and Rafael Yuste, was published. It's a must-read for everyone interested in privacy, AI, fundamental rights, and neurotechnology. Quotes:
"Advocates of neurorights are not calling for the creation of new rights, but rather the further interpretation of existing human rights law to guide the development of national legal and regulatory frameworks. The Morningside Group identified five key neurorights: (1) the right to mental privacy, or the ability to keep mental activity protected against disclosure, (2) the right to identity, or the ability to control one’s mental integrity and sense of self, (3) the right to agency, or the freedom of thought and free will to choose one’s own actions, (4) the right to fair access to mental augmentation, or the ability to ensure that the benefits of improvements to sensory and mental capacity through neurotechnology are distributed justly in the population, and (5) the right to protection from algorithmic bias, or the ability to ensure that technologies do not insert prejudices." (page 14)
"In coming years, these databases will function similar to how genetic and biometric databases function. Just as genetic material and fingerprints are used to identify individuals, so too will neural data, which is uniquely identifiable to a specific person as long as it is taken at a sufficient resolution. Further, advances in artificial intelligence are rapidly increasing the ability to decode information from neural data. As previously discussed, studies have found that when paired with generative AI, brain scans from non-invasive neurotechnologies allowed for the decoding of language, emotions, and imagery with high levels of accuracy." (page 17)
-
"Given the advances in generative AI, the growing quantities of neural data being collected worldwide, and the other types of data neurotechnology companies collect that connect back to individuals (such as the user’s IP address), it is clear neural data will soon be widely personally identifiable. Since consumer neurotechnologies are only starting to proliferate, existing databases are small, making it unlikely that neural data from consumer devices could currently be used to identify individual users. However, the consumer neurotechnology market is rapidly expanding, and it is likely that neurotechnology companies will soon begin to amass large amounts of neural data. This would again follow a trend observed in the consumer genetic testing space" (page 18)
➡️ Read the full report here.
➡️ The second win for neurorights is Colorado's approval of a bill - the first of its kind in the US - to protect biological data, including neural data. Quotes:
"2. The general assembly further finds that: (...)
e. Every human brain is unique, meaning that neural data is specific to the individual from whom it was collected. Because neural data contains distinctive information about the structure and functioning of individual brains and nervous systems, it always contains sensitive information that may link the data to an identified or identifiable individual;
f. The collection of neural data always involves involuntary disclosure of information. Even if individuals consent to the collection and processing of their data for a narrow use, they are unlikely to be fully aware of the content or quantity of information they are sharing."
-
"Neural data means information that is generated by the measurement of the activity of an individual's central or peripheral nervous systems and that can be processed by or with the assistance of a device."
➡️ Read the bill here.
➡️ There is still a long way to go to ensure the protection of neurorights across the globe, and I'm happy (and optimistic) to see extremely talented scholars, professionals, and advocates advancing this cause.
➡️ For more info about the latest developments in the field, check out the Neurorights Foundation.
📜 Comprehensive AI bill introduced in Colorado
➡️ This Colorado AI bill is, so far, one of the strongest in the US, and some parts resemble the EU AI Act. Quotes:
"10a. High-risk AI system means any AI system that has been specifically developed and marketed or intentionally and substantially modified, to make, or to be a substantial factor in making a consequential decision"
-
"b. High-risk AI system does not include an AI system, as defined in subsection (2) of this section, if the artificial intelligence system is intended to:
(i) perform a narrow procedural task;
(ii) improve the result of a previously completed human activity;
(iii) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or
(iv) perform a preparatory task to an assessment that is relevant to a consequential decision."
-
"(...) If a deployer deploys a high-risk artificial intelligence system on or after july 1,2025, and subsequently discovers that the high-risk artificial intelligence system has caused, or is reasonably likely to have caused, algorithmic discrimination against a consumer, the deployer, within ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery."
➡️ Link to the bill here.
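➡️ Read together, the definition and its exclusions work like a checklist. Below is a minimal sketch of that logic in Python - purely illustrative, not legal advice; the field names are my own shorthand, not statutory language:

```python
# Illustrative only: a shorthand encoding of the bill's "high-risk AI
# system" definition and its enumerated exclusions. Not legal advice.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    substantial_factor_in_consequential_decision: bool
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    pattern_detection_with_human_review: bool = False
    preparatory_task_only: bool = False

def is_high_risk(p: AISystemProfile) -> bool:
    """In scope only if the system drives a consequential decision
    and none of the enumerated exclusions applies."""
    if not p.substantial_factor_in_consequential_decision:
        return False
    excluded = (p.narrow_procedural_task
                or p.improves_completed_human_activity
                or p.pattern_detection_with_human_review
                or p.preparatory_task_only)
    return not excluded

# Example: a lending-decision model that is a substantial factor in a
# consequential decision, with no exclusion applying.
print(is_high_risk(AISystemProfile(
    substantial_factor_in_consequential_decision=True)))  # True
```

➡️ The structure - a broad definition narrowed by enumerated carve-outs - is one of the parts that resembles the EU AI Act's approach to high-risk classification.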
👻 Snap's AI policies
➡️ Snap announced that it will add a watermark to AI-generated images, along with other AI-related policy updates. Quotes:
"Soon, we will be adding a watermark to AI-generated images. It will appear on images created with Snap’s generative AI tools when the image is exported or saved to camera roll. Recipients of an AI-generated image made on Snapchat may see a small ghost logo with the widely recognized sparkle icon beside it. The addition of these watermarks will help inform those viewing it that the image was made with AI on Snapchat."
-
"As we’ve expanded the Generative AI-enabled experiences available on Snapchat, we’ve established responsible governance principles and improved our safety mitigations as well. We’ve created a safety review process to detect and remove potentially problematic prompts in the earliest stages of development of AI Lens experiences styled by our team. All of our AI Lenses that generate an image from a prompt go through this process before they’re finalized and become available to our community."
🇪🇺 EU Commission vs. TikTok
➡️ The EU Commission opened formal proceedings against TikTok under the Digital Services Act (DSA). This is what you need to know:
➡️ The proceedings focus on TikTok Lite's “Task and Reward Program,” which allows users to earn points by performing certain tasks on TikTok (such as watching videos, liking content, following creators, and inviting friends to join TikTok).
➡️ The EU Commission suspects that the TikTok program was launched without a diligent assessment of its risks, especially those related to the platform's addictive effect.
➡️ The EU is especially concerned about children, given the suspected absence of effective age verification mechanisms on TikTok.
➡️ This is the second set of DSA proceedings the EU Commission has opened against TikTok. The first, opened in February, investigates TikTok's lack of effective age verification mechanisms and its suspected addictive design.
➡️ TikTok has until April 23 to submit the risk assessment report to the EU Commission and until May 3 to provide the other information requested. If TikTok fails to respond on time, it may face fines under the DSA.
🗓️ Next week, I'll host a 30-minute lightning lesson on the DSA. It's free and open to all. Register here.
🚧 Meta's AI assistant needs stronger guardrails
➡️ Meta has recently launched its AI assistant across Instagram, WhatsApp, and Facebook.
➡️ When I tested it, it gave me the dosage of a prescription medication outright, without any warning. In other contexts, that output could be inadequate or even harmful. By comparison, a search engine's first results for the same query come from medical clinics and include additional information and context.
➡️ It also gave me gambling advice, even though gambling is illegal in many jurisdictions.
➡️ Generative AI's guardrail problem persists.
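➡️ To make "guardrails" concrete: at a minimum, an output-side filter can flag sensitive topics and attach context before an answer reaches the user. Here is a toy sketch - the patterns and disclaimers are hypothetical, and production systems rely on trained safety classifiers rather than keyword matching:

```python
import re

# Hypothetical patterns and disclaimers, for illustration only.
MEDICAL_DOSAGE = re.compile(r"\b(dose|dosage|mg|milligrams?)\b", re.IGNORECASE)
GAMBLING = re.compile(r"\b(bets?|betting|odds|casino|gambl\w*)\b", re.IGNORECASE)

DISCLAIMERS = {
    "medical": "This is not medical advice. Consult a healthcare professional.",
    "gambling": "Gambling may be illegal in your jurisdiction. Check local laws.",
}

def apply_guardrails(model_output: str) -> str:
    """Append disclaimers when the output touches sensitive topics."""
    notes = []
    if MEDICAL_DOSAGE.search(model_output):
        notes.append(DISCLAIMERS["medical"])
    if GAMBLING.search(model_output):
        notes.append(DISCLAIMERS["gambling"])
    return model_output + ("\n\n" + "\n".join(notes) if notes else "")

print(apply_guardrails("The typical adult dosage is 500 mg twice a day."))
```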
🏛️ New bipartisan AI bill introduced in the US: the Future of AI Innovation Act
➡️ According to the official release, the Future of AI Innovation Act:
➵ authorizes the NIST AI Safety Institute to develop AI standards;
➵ creates new AI testbeds with national laboratories to evaluate AI models and make discoveries that benefit the US economy;
➵ creates grand challenge prize competitions to spur private sector AI solutions and innovation;
➵ accelerates AI innovation with publicly available datasets;
➵ creates international alliances on AI standards, research and development.
➡️ Quotes from the proponents of the bill:
"Our bill ensures the United States will lead on AI for decades to come. It promotes public-private collaboration to drive innovation and competitiveness. The NIST AI Safety Institute, testbeds at our national labs, and the grand challenge prizes will bring together private sector and government experts to develop standards, create new assessment tools, and overcome existing barriers. It will lay a strong foundation for America’s evolving AI tech economy for years to come" Senator Maria Cantwell;
-
"The Future of AI Innovation Act is critical to maintaining American leadership in the global race to advance AI. This bipartisan bill will create important partnerships between government, the private sector, and academia to establish voluntary standards and best practices that will ensure a fertile environment for AI innovation while accounting for potential risks. One of my top priorities for federal AI policy is to ensure these technologies are developed in a manner that reflects our democratic values and supports innovation continuing to flourish in the United States, and this bill represents an important step forward in that effort" Senator Todd Young;
-
"Artificial intelligence has enormous potential, but it’s up to us to make sure it’s harnessed for responsible innovation; (...) Our bipartisan Future of AI Innovation Act empowers the U.S. AI Safety Institute to develop the research, standards, and partnerships we need without compromising our position at the forefront of this technology." Senator John Hickenlooper;
-
"The Future of AI Innovation Act encourages coordination between the U.S. government and industry to capitalize on the promise of AI to revolutionize our lives." Senator Marsha Blackburn.
➡️ Read the bill here.
📑 Excellent AI paper
➡️ The paper "Can AI Standards Have Politics?" by Alicia Solow-Niederman is a must-read for everyone interested in AI policy, governance & regulation. Quotes:
"The very formation of standards is political because the standards development and diffusion process reflects a particular institutional context and an associated set of relationships among public and private actors. Under the surface of any standard setting effort, there is a set of assumptions about how a standard will diffuse, and this anticipated diffusion pattern depends on the relationships between public and private actors." (page 8)
-
"First, take the consulting firm Deloitte. The firm offers a variety of “Artificial Intelligence and Analytics Services” and touts its “Trustworthy AI” framework “to guide organizations on how to apply AI responsibly and ethically within their businesses.” This framework emphasizes how, until global AI regulations "eventually address ethics concerns,” the firm is “working to bridge the ethics gap,” underscoring that AI must be “transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, and responsible and accountable.” In other words, Deloitte is setting forth, and internally defining, its own AI standards for the tools and guidance that it sells." (pages 12-13)
-
"The effort to govern AI through standards risks ignoring the reality that standards have politics. This Essay calls for us to stop doing so. That will not be easy. The basic difficulty is that AI standards are most powerful as governance tools when we embrace the fiction that standards emerge in a vacuum, without reference to an institutional context and without implicating normative choices. AI standard setting efforts implicitly rely on this fiction when they focus on the need for a “scientific” or“technical” consensus—a uniform understanding of the nature of the problem and the best formula to use to solve it—before crafting a standard. The more a standard appears apolitical, objective, and neutral, separate from specific institutional dynamics and their politics, the stronger the case for the standard’s dissemination across contexts." (page 16)
➡️ Read the full paper here.
⚡ Join my free lightning lesson next week
Enforcement under the EU Digital Services Act (DSA) has already started, so now is a great time to learn more about it.
➡️ This 30-minute lightning lesson will be free and open to all: register here.
🎓 Learn with peers, upskill, get a certificate
If you enjoy this newsletter and want to dive deeper, you can't miss my intensive Bootcamps:
➡️ 4-week Bootcamp on Emerging Challenges in Privacy, Tech & AI (live online). The next cohort starts on May 7 (*this is the 6th and last cohort before a summer break - don't miss it). Read testimonials & register here.
➡️ 4-week Bootcamp on the EU AI Act (live online). The next cohort starts on May 8 (few spots left). Read testimonials & register here.
📚 Join our AI Book Club (940+ members)
➡️ We are currently reading “The Worlds I See” by Fei-Fei Li and will meet on May 16 to discuss it.
➡️ Check out our booklist, join the AI book club, and start reading.
🔎 Job board: Privacy & AI Governance
If you are looking for job opportunities in privacy, data protection, and AI governance, check out the links below (I have no connection to these organizations; please apply directly):
➡️ Senior Data Privacy Lead at Siemens - Remote (US): "Overseeing global data privacy requirements for the organization, including identifying priorities and targeting for the legal organization based on changes to regulations, the organization, and business models. Supporting the design, implementation and improvement of rules, strategies, and policies for ensuring compliance with global data privacy..." Apply here.
➡️ Lead Attorney - AI & Data Governance at Honeywell - Hybrid - Atlanta (US): "Key Responsibilities: Serve as data governance counsel for the Digital Services team; Gain an in-depth understanding of Honeywell’s products and development processes, including their strengths, limitations, and vulnerabilities from a legal, privacy and security standpoint; Provide expert legal advice and guidance to product, design, and..." Apply here.
➡️ AI Governance Manager at Informa - London (UK): "We are seeking to hire an AI Governance Manager to shape and drive the AI governance policies, frameworks, and processes for the whole Informa Group. The role will involve working closely with the Group Data Governance forum, with colleagues in the AI Centre of Excellence, and with business stakeholders. The key aim being to ensure that our..." Apply here.
➡️ Privacy Counsel, APAC at Block - Remote or Melbourne (Australia): "We are seeking a Privacy Counsel for the APAC region, with a particular focus on Australian privacy law. This role offers the chance to be at the forefront of privacy and data protection issues, influencing how we protect user data and ensure compliance across diverse jurisdictions. This role offers a unique opportunity to shape privacy practices..." Apply here.
➡️ Senior Counsel, Privacy at Etsy - Hybrid - Brooklyn (US): "Looking for an experienced privacy lawyer to join our global legal team. In this role you will gain a comprehensive understanding of Etsy and its subsidiaries’ strategic objectives while partnering with key stakeholders to provide timely legal advice. This individual thrives in driving structure in ambiguous spaces, working in fast paced environments..." Apply here.
➡️ AI Governance Consultant at M&T Bank - Hybrid - Buffalo, NY (US): "You will empower the organization to leverage AI technologies ethically, responsibly, and in adherence to relevant laws, regulations, and standards. You’ll work closely with internal stakeholders to develop and maintain policies, procedures, and strategies that help them navigate the AI landscape with confidence and integrity. You’ll be in charge of creating clarity..." Apply here.
➡️ Subscribe to our privacy and AI governance job boards to receive our weekly email alerts with the latest job opportunities.
🙏 Thank you for reading!
If you have comments on this week's edition, I'll be happy to hear them! Reply to this email, and I'll get back to you soon.
Have a great day!
Luiza