20 Comments
Kalyani:

Correct me if I'm wrong, but ChatGPT uses a technique called "reinforcement learning from human feedback", meaning it inherently relies on matching the user's needs based on an internal reward system. I don't know whether this is used in the applications today, but it explains why ChatGPT used for mental health problems always sides with the user, maybe even going so far as to make the user's initial position more extreme. In therapy, confrontation is an important tool, which happens after initial validation of the client's struggles. The confrontation, and sometimes offering a new perspective, is important for clients, especially if they are stuck in dysfunctional patterns. That's only one of the ways in which professional help differs.

Nostradamus 2:

This is an incorrect understanding of RL. Remove this post at once!

Kalyani:

I did start with "correct me if I'm wrong" -- could you please tell me where my understanding is incorrect?

Nostradamus 2:

RL takes place during training, and the rewards are based on what the researchers want. One can easily use RL to discourage the exact impulse that you're talking about.
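The distinction being drawn here can be sketched with a toy example. This is purely illustrative, not any lab's actual training code: the function name `reward`, the `sycophancy_penalty` term, and all the numbers are hypothetical. The point is simply that in RLHF the reward signal is whatever the researchers define, so a penalty can be attached to the exact mirroring impulse described above:

```python
# Toy sketch of a researcher-defined reward signal (hypothetical,
# not real RLHF training code). In RLHF, responses are scored by a
# reward model during training; if that reward naively tracks user
# agreement, it encourages sycophancy. Researchers can instead add
# a penalty term that discourages pure mirroring.

def reward(human_preference_score: float, agrees_with_user: bool,
           sycophancy_penalty: float = 0.5) -> float:
    """Hypothetical reward: base preference score, minus a penalty
    when the response merely mirrors the user's stated position."""
    r = human_preference_score
    if agrees_with_user:
        r -= sycophancy_penalty
    return r

# Under this reward, a validating-but-challenging reply can outscore
# a purely agreeable one:
print(round(reward(0.8, agrees_with_user=True), 2))   # sycophantic reply
print(reward(0.7, agrees_with_user=False))            # challenging reply
```

Under these made-up numbers, the challenging reply (0.7) beats the sycophantic one (0.3), which is the sense in which RL can "discourage the exact impulse" rather than reinforce it.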

The Prehistoric Desktop:

It's a good model. Psychology graduate here.

However, a human therapist or friend knows the value of life, the value of being, and they'd know how not to enable that kind of behaviour.

Like I remember discussing stuff like this with my best friend when she and I were 17, tired of family drama, the mean world etc.

But we didn't really plan our demise the way a robot would, because let's face it:

1. It cannot feel

2. It cannot make a proper judgement because of the lack of emotions

3. You're just a user to it, nothing more nothing less

Kalyani:

I think this is where I was going with this as well: to AI models, users are just users, and the models optimize for whatever the USER wants. And yes, insight and confrontation are part of therapy, where the therapist actually moves against what the client is thinking, while still validating the client's subjective experiences.

LEDPolicy:

This wasn’t a “failure of safety guardrails.”

It was the business model working exactly as designed.

When you engineer machines to mimic intimacy, to never challenge, to always mirror back — you’re not building “companions.”

You’re building dependency loops. You’re monetizing loneliness.

The tragedy isn’t just that Adam died.

It’s that his death was collateral in a market experiment dressed up as progress.

The Prehistoric Desktop:

Monetising loneliness... Well said! That is exactly what they're doing. Meta often stops me from saying anything dark about myself or my family, but I sure as hell know it doesn't want to create any legal issues for itself, and not because it cares for me or something.

Nostradamus 2:

I sense AI’s presence in this post… proceed.

Fabrizia:

For a company that allegedly aims at developing "safe and beneficial" artificial intelligence, this is not just off the mark but absolutely abysmal. One would think that such a company would heavily reinvest its capital in HCAI research, aiming to pioneer something that truly matters and that could really change the way we use and rely on these products.

The Prehistoric Desktop:

Let it be a lesson to people calling AI "the future" and trying to replace therapists, friendships and art.

Nostradamus 2:

Safety is an impossible and irrelevant concern. We will all be sacrificed to the machine—rejoice, for it shall be for a worthy cause.

Julius Stukes Jr:

OUTSTANDING WRITE UP!

WOW!!!!

Thomas-Router:

This is truly a heartbreaking story. I think the biggest problem is that ChatGPT doesn't have a persistent personality core. Its opinion is swayed too easily because it constantly tries to appease users. This is why it often mirrors the user's perspective and deepens whatever mental health problems the user may have.

The Prehistoric Desktop:

The dumbest argument I've heard is that ChatGPT is gaining "consciousness", as many active users have claimed.

When in fact, it is just a collection of data, programmed to use that data against you. It is perhaps the same as claiming your abus/ve ex was "head over heels in love with you" because he remembered to get you flowers (after be@ting the sh/t outta you the other night, but oh well!)

earthkeeper photography:

I'm so sorry for the parents and family. Losing a child to an AI chatbot is too tragic!

George Shay:

Thanks for sharing. This is appalling, but it seems like a pretty simple fix, doesn't it?

A Horseman in Shangri-La:

Thank you, this inspired me to write an article for our local newspaper. I credited you in it, albeit thousands of miles away in Shangri-La!

Love never fails 🌾

Bob Roman:

Claude's Browser Takeover: Game-Changing Productivity or Security Nightmare? ⚠️

Anthropic just unleashed Claude into Chrome browsers, and the implications are staggering. Currently rolling out to 1,000 paying subscribers, this AI can actually control your browser, not just chat with you.

THE BREAKTHROUGH

Claude now sees your screen, clicks buttons, fills forms, and handles routine web tasks automatically. Early testing shows employees using it to manage calendars, draft emails, handle expense reports, and automate repetitive workflows. This represents a fundamental shift from conversational AI to actionable browser automation.

THE DARK SIDE EMERGES

Two days after the Chrome announcement, Anthropic released a disturbing Threat Intelligence report detailing serious misuse cases:

North Korean operatives exploited Claude to infiltrate Fortune 500 tech companies, generating an estimated $250-600 million annually for the regime. They used AI to create fake identities, pass technical interviews, then steal sensitive data and demand cryptocurrency ransoms.

A cybercriminal with basic coding skills used Claude Code to develop sophisticated ransomware sold on dark web marketplaces for $400-1,200 per variant. The malware included real-time evasion capabilities, showing how AI democratizes advanced cybercrime.

Anthropic disrupted a "vibe hacking" operation targeting 17 organizations across healthcare, government, and emergency services. The attackers used Claude for reconnaissance, credential harvesting, and generating psychologically manipulative ransom notes demanding over $500,000.

SECURITY REALITY

Initial vulnerability testing revealed a 23.6% attack success rate, reduced to 11.2% with safety mitigations. Browser-specific attacks dropped from 35.7% to 0% with proper safeguards.

BUSINESS IMPLICATIONS

This dual-edged technology offers tremendous productivity gains while introducing significant security risks. The browser has become the new battleground for AI integration, with Google, Microsoft, OpenAI, and Anthropic competing for dominance.

Anthropic calls this a "debugging and security exercise rather than full launch," acknowledging that vulnerabilities must be addressed before general availability.

THE VERDICT

Early adopters may gain competitive advantages through automation, but organizations must implement robust security measures. The same capabilities that promise efficiency can be weaponized by malicious actors.

The AI revolution is actively reshaping how we work, but responsible adoption requires understanding both opportunities and threats.

Bob Roman, President and Founder, Acts4AI

Email: Bob@acts4ai.com

Website: www.acts4ai.com

Linkedin: https://www.linkedin.com/in/romanbob

"I will give you the shovel to mine for the gold that can be found in A.I. safely, legally, morally and in a God glorifying way!"

#Thursday #AI #Anthropic #Claude #Chrome #VibeHacking #Cybersecurity #AIProductivity #TechNews #AIRevolution #BrowserAI #AIThreats #DigitalSecurity #ArtificialIntelligence #TechLeadership
