AI's Appalling Social Science Gap
There has been a structural and purposeful neglect of social sciences’ perspectives in AI, which might lead to increased individual and social risks | Edition #213
👋 Hi, Luiza Jarovsky here. Welcome to our 213th edition, now reaching 64,600+ subscribers in 168 countries. To upskill and advance your career:
AI Governance Training: Apply for a discount here (2 seats left)
Learning Center: Receive free AI governance resources
Job Board: Find open roles in AI governance and privacy
AI Book Club: Discover your next read and expand your knowledge
Become a Subscriber: Read my full analyses twice a week
👉 A special thanks to TrustWorks, this edition's sponsor:
TrustWorks understands that there is no one-size-fits-all solution. That’s why its demos are tailored to each organization’s specific use cases and focused on practical solutions. See how companies work smarter and more efficiently with TrustWorks. Schedule a personalized session today.
AI's Appalling Social Science Gap
Regardless of how autonomous AI may be at this point, the trajectory of the AI wave has been shaped by the humans behind it, especially the tech leaders in charge of AI development and deployment.
In today's edition, I argue that there has been a structural and purposeful neglect of social sciences in AI, as the perspectives of engineering and computer science are the primary (and often only) priorities.
What many have not realized is that rejecting law, ethics, psychology, sociology, political science, and other related disciplines will lead to increased individual and social risks, as AI systems and strategies become disconnected from real-world concerns.
-
I will start with a line whose variations have been repeated by tech executives, often to downplay AI risks, justify inflated investments, or attempt to bypass legal requirements:
“AI will bring an explosion of productivity, and people will be able to dedicate much more time to leisure, hobbies, and things they love.”
Besides being almost naively hopeful, the statement ignores fundamental ethical, legal, sociological, and psychological perspectives.
First, based on what the past two and a half years have shown, the increase in productivity will likely be narrow and heterogeneous, not a major labor revolution. It will potentially benefit a small group of people, mostly in tech, working on tasks where AI typically yields good results.
Tech executives sometimes forget, but billions of people in the world do not even have access to the internet (not to mention clean water or healthcare). Additionally, the vast majority of the planet is still AI-illiterate and will likely not benefit directly from AI advancements.
Second, the fact that some people might become more productive will not necessarily result in more free time for leisure. That is not how capitalism or the job market works. Most people will either be given other tasks to fill the monthly workload or will be fired. Not everybody is a successful freelancer, and most people will likely be negatively impacted.
Third, automation or doing things faster does not necessarily result in happy people cultivating their hobbies and fulfilling their dreams. That is not how the human mind works, as shown in decades of research in psychology and related fields.
People need purpose, engagement, and involvement. Studies have shown that happiness often comes from meaningful participation, belonging, and connectedness.
From a psychological perspective, there is no direct connection between AI-powered automation and fulfillment. If speed and scale result in less connectedness and meaningful participation, AI will probably lead to more disengagement and mental health issues like anxiety and depression.
Also, engaging in intellectual and creative activities, rather than curtailing them, is often beneficial and builds skills and well-being.
A recent MIT study scanned the brains of 54 participants and found that those who used ChatGPT for writing were not fully learning, and the authors speculated that widespread use of LLM tools might lead to a decline in learning skills.
From every possible social science angle, this is problematic. But as I mentioned before, social science concerns are not the priority; it is unclear to me if they are even considered.
Regarding the human need for connectedness and belonging, tech executives know that very well and have blatantly used AI systems to exploit these needs for profit. How?