@Luiza Jarovsky, PhD a great interview and fascinating paper. As a therapist, someone trained in human rights law, and someone who has worked inside institutions from UNICEF to child welfare systems, I want to name what the paper misses: the body.
Hartzog and Silbey argue AI undermines expertise, short-circuits decision-making, and isolates humans. All three are correct. But these aren't just institutional dynamics. They are nervous system events. Institutions don't just "think." They regulate. They set the physiological conditions under which people can tolerate complexity and stay present through the friction that adaptation requires.
When AI replaces friction with smoothness, it trains the nervous systems inside the institution to stop tolerating difficulty. Skill atrophy isn't just cognitive. It's somatic. The capacity to sit with ambiguity, hold a hard conversation without reaching for a clean answer, tolerate the silence after impact: these are built through the body, in relationship, over time.
The paper's strongest insight is that AI isolates humans by displacing connection. Clinically, what it displaces is co-regulation. We regulate each other's nervous systems through tone, rhythm, presence. AI simulates connection's content. It cannot provide co-regulation. Without it, the institutional container loses its capacity to hold the very contestation the authors identify as essential.
I've written about this in The Splitting Machine: AI and the Failure of Integration https://yauguru.substack.com/p/the-splitting-machine-ai-and-the?r=217mr3
and
The Attention Wound: What the Attention Economy Extracts and What the Body Cannot Surrender https://open.substack.com/pub/yauguru/p/the-attention-wound?utm_campaign=post-expanded-share&utm_medium=web
Good discussion with points raised that are not talked about often enough.
Thank you—valuable paper and discussion
This paper identifies the symptoms, but not the cause. AI is showing how hollow many institutions already are.
What's described here (degraded expertise, shortcut decision-making, weakened trust) didn't start with AI. It started when organisations replaced judgment with process, accountability with compliance, and understanding with metrics. AI just accelerates the consequences of those choices, something I've been writing about in my Substack.
Blaming AI avoids the question that people don't want asked: why do so many institutions no longer know their own purpose? AI didn't cause that. But it is exposing it.
If institutions are failing under AI pressure, it means they need redesigning. But that's what no one wants to admit. It's far easier, and in my opinion lazier, to blame a tool like AI.
TL;DR: I find AI useful in providing librarian-type services to assist me in understanding a problem domain and in leveraging Internet-accessible information much better than a search engine. Based on my 1970s university experience, AI would have greatly accelerated my learning if used to build my knowledge/skills/understanding, rather than "answer the question/cheat on the exam" thinking.
I could not get past the first six pages of the paper. All claims of damage to institutions are made without precisely explaining the problem or proposing resolutions. Far too typical of academic papers I've attempted to read over the years.
I earned a Physics BSc through a co-op work-term program, graduating in '78. What I credit my university education (U of Waterloo, Canada) with is learning how to think, learn, and apply that learning, which has served me well, even though much of the in-class subject matter has been irrelevant to my career. Being in person with classmates, in and out of the classroom, taught me the value of bringing multiple mindsets to a problem: collectively discovering how to think about it from multiple perspectives, using animated, constructive discussion to gain in-depth understanding of a topic, and applying knowledge and problem-solving skills to question at the edge of our understanding (at the time).
What frustrated my classmates and me was the dry, academic presentation in textbooks written by professors, including some who taught the courses: they seemed designed to impress the reader with the author's intelligence rather than to aid understanding of a complex topic, provide learning aids (including supplemental material), or show how to solve problems in that domain.
I fully agree that asking AI, via chat or other interfaces, to solve a problem for you such that you can just copy/paste the answer to pass a course is a complete waste of time. That's cheating - both on the course and on yourself, since it builds no skills you can apply to new problems.
I have learned skills in the last 9 months by treating AI as a tutor: asking it to teach me how to perceive the problem I'm trying to solve (which is new territory, on Digital Personal Profiles and Privacy), how to ask productive questions, which questions I should be asking, and what my questions reveal I'm misunderstanding.
In other words, it's a useful, interactive grad-student/advisor, professional colleague, or informed librarian that allows me to fully exploit what is known (and unknown) about the problem I'm attempting to solve. In some cases it suggests solutions, but those are typically simplistic and miss the larger picture.
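Here's a minimal sketch of that "tutor, not answer machine" prompting style - assuming the official OpenAI Python client purely for illustration (I'm not naming it as the tool I use; any chat interface supports the same pattern, and the model name and prompt wording are hypothetical examples):

```python
# Minimal sketch of the "AI as tutor" pattern described above.
# Assumes the official OpenAI Python SDK (openai>=1.0) only for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instead of asking for an answer to copy/paste, ask the model to coach you:
tutor_prompt = (
    "I'm trying to understand digital personal profiles and privacy. "
    "Do not solve the problem for me. Instead: (1) explain how to frame it, "
    "(2) list the questions I should be asking, and (3) point out what my "
    "questions so far suggest I'm misunderstanding."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": tutor_prompt}],
)
print(response.choices[0].message.content)
```

The point is the prompt, not the code: asking for framing, questions, and blind spots keeps the learning on my side instead of outsourcing the answer.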
In 2026, many of us have infrequent interaction with our colleagues (at best, bi-weekly meetings) to discuss and test ideas, and none of us has an encyclopedic understanding of everything available online.
AI is useful in that capacity: as a knowledgeable librarian/researcher to bounce ideas off and ask - what questions should I be asking? What do I not understand? - in a way that few of us have the luxury of in the disconnected personal-interaction era of 2025+.
The Murder of an OpenAI Top Engineer and the True Dangers of Artificial Intelligence:
On November 22, 2024, 26-year-old former OpenAI engineer Suchir Balaji was brutally murdered in his San Francisco apartment.
Authorities ruled his death a suicide.
Suchir Balaji was a brilliant American IT engineer of Indian descent.
At the age of 22, he was hired by OpenAI as a top talent and played a key role in the development of ChatGPT.
In addition to his exceptional intelligence, he possessed a strong sense of justice and unwavering ethical principles.
It is therefore not surprising that he disagreed with the behavior of his boss, Sam Altman, and with OpenAI's business practices, and developed an increasingly critical attitude toward management.
Sam Altman is notorious within the company for his lies and power plays. Suchir Balaji had no tolerance for this and was ultimately disgusted by his behavior.
He also witnessed OpenAI's transformation from a non-profit, open-source project into a for-profit, closed-source company.
It's important to understand that the development of ChatGPT was only possible by feeding and training the AI with gigantic amounts of data, including vast quantities of copyrighted material.
OpenAI was only able to use this data free of charge and without the permission of the copyright holders because the company presented itself as a non-profit project.
The use of copyrighted material is considered permissible if it is a research project that does not generate profits and serves the public good.
In retrospect, it is clear that OpenAI deliberately exploited this situation. The billions in profits the company now generates are largely due to OpenAI's free access to this data during its non-profit phase.
For Suchir Balaji, this practice was completely unacceptable.
Suchir left the company in the summer of 2024, having made crucial contributions to the development of ChatGPT during his four years there.
In the months leading up to his violent death, he was preparing to launch his own startup and wrote a scientific paper on the future of large language models (LLMs) like ChatGPT.
In this work, which unfortunately remained unfinished, he refuted the so-called scaling hypothesis, championed by OpenAI and most other AI companies.
This hypothesis states that the intelligence of AI models can be developed indefinitely as long as they are fed enough data. It forms the basis for the grandiose promises of AI companies.
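For readers unfamiliar with the term, the scaling hypothesis is usually stated as an empirical power law relating a model's loss to its size and training data. A common formulation (the widely cited Chinchilla-style law, given here only as an illustration, not as Balaji's own notation) is:

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where L is the model's prediction loss, N the number of parameters, D the number of training tokens, and E, A, B, \alpha, \beta are fitted constants. The hypothesis is that capability keeps improving as long as N and D keep growing; the data-efficiency objection attributed to Suchir below amounts to the claim that D cannot keep growing fast enough.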
The imminent arrival of artificial general intelligence (AGI) has been announced for years.
AI models are supposedly about to develop superhuman intelligence (ASI = Artificial Super Intelligence), replace all kinds of jobs, cure diseases, create wealth for everyone, and so on.
In his unfinished essay, Suchir Balaji demonstrated in an impressive yet easily understandable way that, contrary to the claims of AI companies, large language models can never reach the level of human-like general intelligence (AGI).
He predicted that the fundamentally limited, abysmal data efficiency of this technology will inevitably slow down the further development of AI models and bring them to a standstill long before AGI is achieved.
This is an inconvenient truth for the AI industry, one it is trying to conceal to protect its business model.
Suchir Balaji was also slated to testify as a key witness in a lawsuit against OpenAI, which involved, among other things, massive copyright infringements.
In the months leading up to his death, Suchir was in good spirits and looking forward to launching his own AI company.
On November 22, 2024, he had just returned from a short vacation with his closest friends.
According to the investigation by a private investigator hired by Suchir's parents, Suchir had ordered food that evening, listened to music, and worked on his laptop. According to the investigator's reconstruction, he ...
Read the full article for free on Substack:
https://open.substack.com/pub/truthwillhealyoulea/p/the-murder-of-an-openai-top-engineer?utm_source=share&utm_medium=android&r=4a0c9v
I keep checking back on this story to see if any truth will come of it. Horrible story.