This Week's Top AI Papers
My weekly curation of papers & reports on the legal and ethical challenges of AI, plus one recommended AI development to watch this week | Edition #201
Hi, Luiza Jarovsky here. Welcome to our 201st edition, now reaching 60,000+ subscribers. If you're interested in shaping the future of AI, you're in the right place! To upskill and advance your career:
AI Governance Training: Join my 4-week live online program
Learning Center: Receive free additional AI governance resources
Job Board: Explore job opportunities in AI governance and privacy
Subscriber Forum: Participate in our daily discussions on AI
This Week's Top AI Papers
AI is evolving fast, and continuous learning is essential.
Below is my weekly curation of papers and reports covering the legal and ethical challenges of AI. Download, read, and share:
1. Large Language Models, Small Labor Market Effects, by Anders Humlum and Emilie Vestergaard - Read
"(…) Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 3%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI."
2. AI as Artificial Ignorance, by Bent Flyvbjerg - Read
"AI and bullshit (in the strong philosophical sense of Harry Frankfurt) are similar in the sense that both prioritize rhetoric over truth. They mix true, false, and ambiguous statements in ways that make it difficult to distinguish which is which. AI sounds convincing even when it's wrong. As such, current AI is more about persuasion than about truth. This is a problem because it means AI produces faulty and ignorant results. For now, we need to be highly skeptical of AI."
3. Distant Writing: Literary Production in the Age of Artificial Intelligence, by Luciano Floridi - Read
"(…) By examining theoretical frameworks and practical consequences, and relying on an experiment in distant writing called Encounters, this article argues that distant writing represents a significant evolution in authorship, not replacing but expanding human creativity within a design paradigm. The distinction between writing (close) and "wrAIting" (distant) reveals how AI-assisted composition creates narrative possibilities previously inaccessible, transforming literature's modal space while challenging traditional notions of authorship, creativity, and literary production. This emerging practice merits critical attention as it shapes future literary landscapes and reconfigures relationships between human creativity and AI."
4. The 2025 AI Index Report, by the Stanford Institute for Human-Centered AI - Read
"The AI Index continues to lead in tracking and interpreting the most critical trends shaping the field, from the shifting geopolitical landscape and the rapid evolution of underlying technologies to AI's expanding role in business, policymaking, and public life. Longitudinal tracking remains at the heart of our mission. In a domain advancing at breakneck speed, the Index provides essential context, helping us understand where AI stands today, how it got here, and where it may be headed next."
5. AI Privacy Risks & Mitigations - LLMs, by Isabel BarberΓ‘ - Read
"This document provides practical guidance and tools for developers and users of LLM-based systems to manage privacy risks associated with these technologies. The risk management methodology outlined in this document is designed to help developers and users systematically identify, assess, and mitigate privacy and data protection risks, supporting the responsible development and deployment of LLM systems. This guidance also supports the requirements of GDPR Article 25 (Data protection by design and by default) and Article 32 (Security of processing) by offering technical and organizational measures to help ensure an appropriate level of security and data protection (…)."
6. Beyond Release: Access Considerations for Generative AI Systems, by Irene Solaiman et al. - Read
"(…) We deconstruct access along three axes: resourcing, technical usability, and utility. Within each category, a set of variables per system component clarify tradeoffs. For example, resourcing requires access to computing infrastructure to serve model weights. We also compare the accessibility of four high performance language models, two open-weight and two closed-weight, showing similar considerations for all based instead on access variables. Access variables set the foundation for being able to scale or increase access to users; we examine the scale of access and how scale affects ability to manage and intervene on risks. This framework better encompasses the landscape and risk-benefit tradeoffs of system releases to inform system release decisions, research, and policy."
7. Trust, attitudes and use of artificial intelligence: A global study 2025, by Nicole Gillespie et al. - Read
"The report provides timely, global research insights on a range of questions, including the extent to which people trust, use, and understand AI systems; how they perceive and experience the benefits, risks and impacts of AI use in society, at work and in education; expectations for the management, governance and regulation of AI by organizations and governments; how employees and students are using AI for work and study; and perceived support for the responsible use of AI. It draws out commonalities and differences in these key dimensions across countries and sub-groups of the population, and sheds light on how trust and attitudes toward AI have changed over the past two years since the widespread uptake of generative AI."
8. The Labor Market Effects of Generative Artificial Intelligence, by Jonathan Hartley et al. - Read
"In this paper we develop a new survey analyzing Generative AI use in the labor market to assist in measuring the economic effects of Generative AI. We find, consistent with other surveys, that Generative AI tools like LLMs are most commonly used in the labor force by younger individuals, more highly educated individuals, higher income individuals, and those in particular industries such as customer service, marketing and information technology. Overall, we find that as of December 2024, 30.1% of survey respondents above 18 have used Generative AI at work since Generative AI tools became public. We also estimate Generative AI use at the intensive margins, its efficiency gains and its use in job search. We also seek to examine the effects of LLMs on productivity and the labor market using a number of additional datasets (…)."
An AI Development to Watch This Week
With so much information being shared about AI, it's more important than ever to cut through the noise and focus on what's relevant.
This week, I recommend you pay attention to the following AI development: