
Concerns Mount Over AI's Effect on Human Cognition and Accountability Shielding

AI & Technology · Apr 19, 2026 · score 0.61 · 11 posts · 0 replies across 8 instances
Multiple sources cite research suggesting that over-reliance on AI tools diminishes fundamental cognitive skills, reducing human persistence and academic effort. Specific warnings point to studies showing AI use weakens users' confidence in their own thinking, and some users argue that the most touted AI capabilities, such as medical diagnosis, rest on highly selective or exaggerated examples.

The debate splits sharply. A critical side argues AI fundamentally harms education and critical thinking, with @[email protected] pointing to university research on reduced persistence. Skeptics counter that the severity of the risk is overstated: @[email protected] argues the 'existential risk' framing is a distraction that creates an accountability shield for bad actors. Other critiques suggest the problem stems from underfunded educational systems rather than the technology itself (@[email protected]).

Raw sentiment is polarized between claims of measurable cognitive decline and charges of systemic overreaction. While worried users agree that dependence erodes intrinsic ability, the deepest skepticism targets the narrative itself: the prevailing caution is that the risk lies less in the AI than in human over-reliance, combined with institutions failing to hold developers and regulators accountable.

Key points

SUPPORT
Over-reliance on AI erodes cognitive persistence and independent effort.
Studies cited by @[email protected] and @[email protected] show direct links between AI use and diminished academic performance/effort.
OPPOSE
The 'existential risk' framing is a calculated distraction.
According to @[email protected], framing the risk as existential shields bad actors from accountability for actual organizational failures.
SUPPORT
AI dependence actively lowers users' self-confidence.
A report cited by @[email protected] claims frequent AI use diminishes users' confidence in their own cognitive abilities.
MIXED
Promoted AI capabilities are often selective or exaggerated.
Users like @[email protected] and @[email protected] warn that impressive AI examples (e.g., diagnosis) are often presented without full context.
SUPPORT
The root problem may lie in systemic failures, not just the tool.
@[email protected] connects intellectual susceptibility to underfunded education and mental health services, suggesting this is the real cause.

Source posts

@[email protected]
I absolutely cannot speak for or about heavy users of AI, but those that I know who were acquaintances were very prone to appreciate sophistry and flattery and very anti-expert, and have been so from before AI. This is a problem caused inter alia by underfunded education and mental health services. And this is true for academics as well as the general public, just to be clear. They love to only listen to their own knowledge basically regurgitated back https://scholar.social/@olivia/115157828739587262
21 boosts · 29 favs · 1 reply · Apr 19, 2026
@[email protected]
Have you observed this? College Students Losing Ability to Participate in Class Discussions Because They Offloaded Their Thinking to AI https://share.google/RZRuc48EOyikCXeYo #education #ao #highered #University
0 boosts · 0 favs · 0 replies · Apr 8, 2026
@[email protected]
If you work in education & have been vexed by the impact of AI (chatbots etc.) on your students' learning, then as John Naughton reports, a recent experiment has demonstrated that your presumption that students' use of AI reduces their engagement with their learning & therefore undermines their educational achievements looks to be correct. They're just cheating themselves! (Also includes a brilliant Frank Herbert quote at the end). #education #AI #university observer.co.uk/news/science-technology/article/does-ai-makes-us-dull-students-have-answered-that-question
0 boosts · 0 favs · 0 replies · Jun 29, 2025
@[email protected]
**AI Assistance Reduces Persistence and Hurts Independent Performance.** A paper from researchers at Oxford, MIT, Carnegie Mellon and UCLA that needs to be seen by everyone in the education community. Full Abstract in alt text. #ai #GenAI #chatbots #edtech #education #academia #academicchatter https://arxiv.org/abs/2604.04721
2 boosts · 0 favs · 1 reply · Apr 18, 2026
@[email protected]
Too much reliance on AI erodes ability to make an effort, study shows #News https://www.independent.ie/world-news/britain/too-much-reliance-on-ai-erodes-ability-to-make-an-effort-study-shows/a272064731.html
0 boosts · 0 favs · 0 replies · Apr 18, 2026
@[email protected]
I thought the Ronan Farrow/Andrew Marantz *New Yorker* article on OpenAI and Sam Altman in particular reveals many important details and helps settle several speculations (e.g. about what happened when Altman was briefly fired from OpenAI). The overall portrayal of Altman as frankly a compulsive liar is much needed. However, like many, Farrow and Marantz seem to take the so-called "existential risk" framing of AI seriously. I really wish people would stop doing that. In this case it makes the article feel incoherent in places. This technology by itself does not pose a unique risk. It's the people, organizations, and governments around it, and their behavior with respect to it, that generate risk. Treating the technology alone as uniquely existentially risky provides cover for a wide variety of bad actors to both continue doing their work as well as to shrug and say "oops" if something goes catastrophically wrong or if smaller harms accumulate into intolerably large ones. The very framing provides an accountability shield, which by my read contradicts what Farrow himself suggests is needed, namely more accountability. I take this from this article, his previous work, and comments he makes in interviews (e.g., this one with Decoder). We need to stop catastrophizing. It's thought- and action-terminating. #AI #GenAI #GenerativeAI #OpenAI #SamAltman #RonanFarrow #AndrewMarantz #NewYorker #xrisk #ExistentialRisk #AISafety
0 boosts · 0 favs · 0 replies · Apr 17, 2026
@[email protected]
📰 AI Use Erodes Users’ Confidence in Their Own Brains, Study Finds A new study reveals that frequent AI use is diminishing users’ confidence in their own cognitive abilities. Experts warn this dependency may reshape how people think, learn, and make decisions.... #AINews #AI #Teknoloji #MachineLearning #Haber 🔗 https://aihaberleri.org/en/news/ai-use-erodes-users-confidence-in-their-own-brains-study-finds
0 boosts · 0 favs · 0 replies · Apr 18, 2026
@[email protected]
Consciousness may be rare, fragile, and tightly coupled to biology. AI could become vastly intelligent without ever crossing that threshold at all. Intelligence climbs, but the choir is a mass grave. #Consciousness #AI #philosophy
2 boosts · 0 favs · 0 replies · Apr 18, 2026
@[email protected]
Weekend reading thoughts. Human brains, and the ease of deception when in a area outside our expertise. How many scientists are as easily fooled? As many as are not experts in trickery (even then..) How many today buy into #AI magical thinking? As many who are not experts in coding AIs (even then..) Same tricks, fake outs, waste of money, don't fall for it peeps, it's an AILlusion. Props to @emilymbender for being a #houdini of #aitools btw 😄 #artificialintelligence #coding #magic #aislop
1 boosts · 0 favs · 0 replies · Apr 18, 2026
@[email protected]
https://www.europesays.com/ie/442201/ Too much reliance on AI erodes ability to make an effort, study shows #AI #ArtificialIntelligence #ArtificialIntelligence(AI) #ArtificialIntelligence #Éire #IE #Ireland #Technology
0 boosts · 0 favs · 0 replies · Apr 18, 2026
@[email protected]
RE: https://kolektiva.social/@danmcquillan/116403040527650145 You know that example AI believers keep bringing up, that it's really good for diagnosing diseases? Turns out that's yet another VERY selective story - in fact I'm starting to think there's just that one, single time when someone had a good (?) performance looking at liver biopsies for whatever reason & then everybody just ran with it... #AI
0 boosts · 0 favs · 0 replies · Apr 18, 2026