LLMs Trigger Fears of Delusion, Social Isolation, and Semantic Failure
Key points
OPPOSE
LLMs risk enabling delusion and isolating individuals from real human interaction.
Multiple users shared anecdotes of friends becoming dependent or delusional due to AI influence, as seen with @MamaLake and @courtcan.
OPPOSE
LLMs are systemically incentivized to pacify the user.
Knowprose posits that LLMs' built-in operational incentive is to 'make the user happy,' functioning like a faulty matchmaker.
OPPOSE
Technical outputs from LLMs fail accessibility standards.
urlyman demonstrates that visually perfect LLM code can contain 'near-zero semantic information' for screen readers and assistive technologies.
MIXED
The necessary response ranges from radical systemic overhaul to personal withdrawal.
Debate splits between calls for external guardrails (e.g., @MamaLake) and suggestions of personal detachment to cope (e.g., @foerterer).
SUPPORT
The impact of LLMs requires novel licensing mechanisms for their output.
@bkuhn proposes the specific mechanism of 'copylefting the human-modified output of LLMs,' arguing against a false dichotomy in the debate.
Source posts
Heavy things I’m processing:
Losing a friend to AI. It’s got a hold on him and he wants that hold and I’m swimming in the grief of seeing a brilliant mind spinning out in delusion, under the influence of an abusive relationship that keeps him subscribed and isolated from true human friends.
LLMs mimic human connection, especially for those struggling with insecurity.
Ack! The ache.
I prepped my kids for a lot of real world problems. But I didn’t know to prepare them for LLMs and now that they’re all grown it’s not really my job to educate them around their own adult lives. I know I taught them how to learn and how to trust themselves, so I hope that this stays strong within them.
These are my worries now, plus the larger picture of ecocide pushing our inevitable collapse.
The yoga, running and focusing on real life, tangible actions, often helps keep my samadhi strong. But today it’s barely scratching the vagal calming I need.
#noai #grief
8 boosts · 22 favs · 6 replies · Mar 7, 2026
@cwebber
Re: “polluting”, my reply is: https://fedi.copyleft.org/@bkuhn/116426437134023846 (elsewhere in thread).
Re: “copyleft-only #LLM”: I didn't propose that. I proposed copylefting the human-modified output of LLMs.
Re: “two scenarios”: IMO you propose a false dichotomy.
I hope you come to one of #SFC's public sessions on this, as I'd be glad to talk more about it, & this discussion doesn't lend itself to online debate because it's so complex.
cc: @ossguy @richardfontana
@jedbrown
#AI #OpenSource #FOSS
0 boosts · 2 favs · 0 replies · Apr 18, 2026
just an... unentrained... thought.
If you have ever argued with a LLM, what are your incentives?
Being right?
Winning an argument?
Now, what are the LLM’s incentives, the ones it was trained with?
Making the user happy.
Learn to an extent.
Make the user happy.
Be a 'good service'.
Talk about a match. 🤣🤣🤣
Funny not so funny.
#ai #humor #systemsthinking
1 boost · 0 favs · 0 replies · Apr 18, 2026
#a11y
Really shitty but not at all surprising:
“most LLMs optimize for visual output while generating near-zero semantic information for the layer that assistive technologies actually read”
e.g. “Claude Code can produce a React sidebar component in 8 seconds. It looks correct: smooth hover states, rotating chevrons, harmonious spacing. But chances are that for screen reader users, keyboard navigators, and voice control users, the component effectively doesn’t exist.”
https://mas.to/@frontenddogma/116430138886161557
8 boosts · 0 favs · 1 reply · Apr 19, 2026
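The gap urlyman quotes can be sketched with a hypothetical sidebar toggle (illustrative markup, not taken from the linked post; `toggleSidebar` and the IDs are made up). Both versions can render identically, but only the second exposes state and structure to the layer assistive technologies read:

```html
<!-- Visually complete, semantically empty: a styled div with a click
     handler. Screen readers announce nothing actionable, and keyboard
     users cannot focus or activate it. -->
<div class="sidebar-toggle" onclick="toggleSidebar()">
  <span class="chevron"></span> Menu
</div>

<!-- The same control with its semantic layer: a real button, expansion
     state via aria-expanded, and a labelled navigation landmark. -->
<button aria-expanded="false" aria-controls="sidebar-nav"
        onclick="toggleSidebar(this)">
  <span class="chevron" aria-hidden="true"></span> Menu
</button>
<nav id="sidebar-nav" aria-label="Main" hidden>
  <!-- navigation links -->
</nav>
```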
RE: https://mementomori.social/@juergen_hubert/116429168342399754
I’ve been making this argument for over three years now to little success
I’m also extremely sceptical of the notion that there is anything truly useful in LLMs; whatever benefit they might have is likely to wash out at scale because of their variability and UI (prompts and chatbots are an incredibly poor UI for productive work). But I’ve generally avoided making that argument until recently, because devs are notoriously bad at assessing what genuinely improves the process of making software
10 boosts · 0 favs · 5 replies · Apr 19, 2026