I’ve become so reliant on a GPS that using maps to direct myself feels like a foreign concept. Google Maps, Waze, whatever, if it's outside of my neighbourhood, I’m punching in the address before I head out. Sometimes I notice the GPS taking slower routes or sending me the wrong way as I get out of a parking lot, but regardless, I just follow its directions, because I don’t have to think. Though I know, without this convenient tool, I’d be lost (literally).

Today, AI is making more and more decisions for us (Microsoft, 2024). The more it develops, the more we use it and trust the information it provides us. While it saves us time, it also changes the way we think. The more we trust AI to do the hard work — whether it be writing us a report, diagnosing diseases, or finding a bug in our code — the less we engage our critical thinking skills. Many of us use AI, in small ways or large, and it's important we know what psychological effects it's having on our brains. Just like relying on a GPS can make us less confident navigating on our own, depending too much on AI to decide for us can weaken our ability to think critically.

Two psychological factors drive this effect: automation bias and cognitive offloading. Both subtly shape how we process information and make decisions.

The Erosion of Critical Thinking

Critical thinking isn’t just using our brain; it’s the process we use to think rationally, to understand logical connections between ideas, to evaluate arguments, and to identify inconsistencies in reasoning. It’s crucial for effective problem-solving, making informed decisions, and acquiring new knowledge (Gerlich, 2025, p. 1). As AI continues to embed itself into our daily lives, we’re increasingly deferring to its outputs without scrutiny. This is a form of automation bias: favouring automated solutions over our own judgement, potentially ignoring contradictory information or failing to consider alternative options (Spatola, 2024, p. 2).

Automation Bias and “Blind Trust”

Whether it’s getting take-out instead of cooking at home or taking the escalator instead of the stairs, it’s human nature to love the easy route. What’s efficient and convenient is typically the path we take. We take this path when we prompt AIs too, preferring direct answers over understanding the underlying reasoning. Researchers call this the efficiency/accountability trade-off, where the drive for efficiency may undermine accountability and critical evaluation (Spatola, 2024, p. 2). This tendency to rely on AI for quick answers aligns with automation complacency, where we stop critically evaluating solutions. Automation bias boils down to one thing: trust. Studies show that when AI provides us with explanations, it can lead to “blind trust”, where we follow its advice without questioning whether it’s reliable (Schemmer et al., 2022, p. 2).

By failing to cross-check AI decisions against alternative sources or human expertise, we are failing to ask ourselves, “Is this correct?” Instead, we are accepting an answer we can’t say for certain is the truth. AI makes it easy to trust by displaying confidence in its answers. Pawitan and Holmes (2024) found that in formal fallacy tasks, the GPT-4o and GPT-4 Turbo models gave a confidence score of 100% on all of their answers, a clear sign of overconfidence. By blindly trusting, we aren’t critically thinking. Despite the benefits of quick information, there is also cause for concern that AI tools might inadvertently reinforce biases and limit exposure to diverse perspectives (Gerlich, 2025, p. 5). We not only become complacent but also expose ourselves to the dangers of blindly following flawed advice. And as we grow more dependent on AI for answers, we start to offload mental tasks, which has issues of its own.

Cognitive Offloading

Whenever I do simple math nowadays, I find myself jumping to a calculator instead of adding or multiplying in my head. Over time, I might begin to lose practice with basic arithmetic because I’ve become so accustomed to letting a tool do the work for me. This is a perfect example of a behavioural strategy humans adopt to reduce mental effort (lazy us!) called cognitive offloading. It occurs when individuals delegate tasks to external aids such as AI, which reduces their engagement in deep, reflective thinking (Gerlich, 2025, p. 2). This strategy inhibits our ability to thoroughly engage with problems and to learn and improve skills. Humans learn by doing. We have to stop and think things through, actively do them, and then reflect on them. But by letting AI take the reins and make these decisions for us, what are we really learning?

This reliance on AI decision-making can lead to a superficial understanding of information and reduce our capacity for critical analysis (Gerlich, 2025, p. 5). Another concern is that by constantly offloading our mental tasks to AI, we can develop what some researchers call “cognitive laziness”, a condition that might diminish the inclination to engage in deep, critical thinking (Gerlich, 2025, p. 5). Is there a task you’ve relied on AI for so often that you hardly think about doing it yourself anymore? One thing I’ve noticed is that I’m increasingly reliant on AI-powered auto-enhance tools for editing my photos. I’ve done this so much that if I now try without AI, I struggle to remember which settings to adjust to get the look I want.

Studies suggest that long-term reliance on AI for cognitive offloading could also erode essential cognitive skills like memory, attention, analytical thinking, and problem-solving (Gerlich, 2025, p. 6). Over-reliance can cause these abilities to atrophy, leading to diminished long-term memory and cognitive health issues (Gerlich, 2025, p. 6). Sparrow et al. (2011) found that frequently using search engines reduced participants’ likelihood of remembering information independently; individuals instead focused on where to find the information. Apply this to AI, and we face the dilemma where our brains only think to turn to AI for answers, unable to come up with them ourselves.

AI Impacts in the Workplace

On December 11th, 2024, ChatGPT went down for a few hours (Ancell, 2025). None other than OpenAI CEO Sam Altman admitted he had forgotten how to work without his own service. “For the first time in a while, I had to work without it for four hours,” he said the day after the outage. “And I had kind of forgotten how to do that. And it really did make me think, like, man, we’re going to be relying on these systems more and more.” (OfficeChai, 2024).

Sam Altman, among many others, is part of the ever-growing demographic of individuals consistently using generative AI and AI-assisted tools to automate work tasks. In a Microsoft (2024) study, around 75% of surveyed workers were already using AI in the workplace, and nearly half of those (46%) had begun doing so within the previous six months. In professional and everyday scenarios alike, the use of AI tools for decision-making and problem-solving can influence cognitive processes.

As a software developer, I’ve noticed this trend among junior developers. Their coding output has increased significantly, but they’re missing an in-depth understanding of what they’re coding: a trade-off of understanding for quick fixes. This extends to other industries as well. In healthcare and finance, for instance, automated systems can speed up processes and improve efficiency. However, they may also make doctors and financial experts rely more on the technology, reducing the need for them to think critically on their own (Gerlich, 2025, p. 2). Workers who rely too much on AI for decision-making may struggle to make independent choices when AI isn’t available, which can lead to performance issues, especially in situations that require quick thinking and problem-solving without technology (Gerlich, 2025, p. 6). Without hands-on experience in the workplace, employees will struggle to develop the skills that help them grow. This overreliance on AI not only hinders employees’ ability to make independent decisions but also impacts their ability to learn and develop crucial skills.

The Decline of Critical Thinking in Learning

A big question I’ve been curious about is how AI will impact future generations. For my generation, the internet was the defining force that transformed our lives in ways we could scarcely have imagined; it’s now so embedded in daily life that it’s difficult to imagine living without it. AI is poised to play a comparable role for the next generation, and that role starts in education. I’ve seen arguments that AI can provide personalized, structured learning plans for students at all educational levels, letting them learn in the way they understand best. But while AI’s benefits can be rewarding for students, we also have to look at the risks.

Findings show that AI is a double-edged sword for students. On one hand, AI-powered tools give students enhanced writing proficiency and the ability to streamline research tasks. On the other, they introduce risks such as diminished creativity, over-reliance, and ethical concerns like plagiarism and data bias (Zhai et al., 2024, p. 15). Researchers also observed diminished critical thinking and independent judgement skills (Zhai et al., 2024, p. 16). In one example, Malik et al. (2023) investigated how students perceive the integration of AI into their writing processes. The study reports a potential reduction in critical thinking skills, the risk of excessive reliance, and the prevalence of misinformation and inaccuracies. What could this mean for the future of education, given the rapidly evolving nature of AI?

Human mentorship has long been at the core of effective education. A mentor doesn’t just impart knowledge; they guide and support the learner through personal interactions that promote critical thinking. AI-driven learning systems can do this too, if we use them correctly. Unfortunately, many people are using AI learning tools to replace their learning rather than to enhance it by having the AI walk them through the process.

How can we know if we are over-reliant on AI?

Ask yourself what you are using AI for. Are you using it to assist your learning journey, or as a tool for direct answers, trusting its outputs without question? We’ve seen that over-reliance on AI has real psychological effects on how we think. It comes from placing excessive trust in AI systems, diminishing the role of human judgement, critical thinking, and oversight, and it has reduced, and will continue to reduce, our ability to think critically.

How to avoid being over-reliant

One of the simplest ways to avoid over-reliance on AI decision-making is to take everything it says with a grain of salt. Doubt it a bit. Double-check its answers against what you can find elsewhere. Blindly trusting AI is the easiest way to fall into the trap of depending on it. In the workplace, one study finds that encouraging workers to develop expertise in their field and feel more confident in their abilities can help them think more critically when using AI tools (Lee et al., 2025, p. 14). The less confident you are in performing a task, the more likely you are to rely on AI to do it all for you. In education, think of your prompts as a way to get AI to walk you through the process rather than just give you a straight answer, and alter your prompts to best suit your style of learning. Lastly, be diligent and conscious of how you use AI, staying aware of its potential impact and limitations.

As we move forward, it’s crucial that we strike a balance: leveraging the benefits AI has to offer while using it carefully enough to retain our capacity for independent thought and decision-making. This means actively questioning AI-generated outputs, staying informed about their limitations, and making a conscious effort to engage in critical thinking. Take the time to verify information, challenge assumptions, and develop your own reasoning, because the ability to think for yourself is too valuable to outsource.

References

Ancell, N. (2025). ChatGPT outage left users furious: “Sam Altman is not a good person”. https://cybernews.com/news/openai-chatgpt-outage/

Cummings, M. (2004). Automation bias in intelligent time critical decision support systems. In AIAA 1st Intelligent Systems Technical Conference (pp. 1–6). https://doi.org/10.2514/6.2004-6313

Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Lee, H. P. H., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.

Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., & Darwis, A. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5, 100296. https://doi.org/10.1016/j.ijedro.2023.100296

Microsoft. (2024). AI at Work is Here. Now Comes the Hard Part. https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part

OfficeChai. (2024). I’ve Forgotten How to Work Without ChatGPT: OpenAI CEO Sam Altman. https://officechai.com/stories/ive-forgotten-how-to-work-without-chatgpt-openai-ceo-sam-altman/

Pawitan, Y., & Holmes, C. (2024). Confidence in the reasoning of large language models. arXiv preprint arXiv:2412.15296. https://doi.org/10.1162/99608f92.b033a087

Schemmer, M., Kühl, N., Benz, C., & Satzger, G. (2022). On the influence of explainable AI on automation bias. arXiv preprint arXiv:2204.08859.

Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745

Spatola, N. (2024). The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance. Computers in Human Behavior: Artificial Humans, 2(2), 100099. https://doi.org/10.1016/j.chbah.2024.100099

Vered, M., Livni, T., Howe, P. D. L., Miller, T., & Sonenberg, L. (2023). The effects of explanations on automation bias. Artificial Intelligence, 322, Article 103952. https://doi.org/10.1016/j.artint.2023.103952

Zhai, C., Wibowo, S. & Li, L.D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11, 28. https://doi.org/10.1186/s40561-024-00316-7

Comments

“I’ve become so reliant on a GPS that using maps to direct myself feels like a foreign concept. [...] Though I know, without this convenient tool, I’d be lost (literally).”

If you recognize this problem, why not stop using a GPS? Navigating without a GPS is not difficult. You could regain this skill easily. What’s stopping you?

“You could regain this skill easily. What’s stopping you?”

To answer your question more directly:

In almost all cases, I don’t care enough about the random patch of land on the way to and around my destination to build up a mental map of it before setting out.

A while back, I was driving to a friend's house every few months to hang out.

The first time, of course, I used a GPS to direct me there. Had this happened in the early 2000s, I would have printed out Google Maps turn-by-turn directions.

After a few times, I tried not using the GPS to direct me there, although I screwed up the final turns a bit and might have turned on the GPS to direct me around the twisty maze of curved streets and cul-de-sacs.

I wouldn’t have done that kind of thing if I had an appointment that I didn’t want to be late to.

Also, using a GPS insulates you a bit from surprise traffic/blockages that you might not know about beforehand — it can either just not direct you that way in the first place, or it can suggest an alternate route.

AI has been the opposite for me. By doing all the easy stuff, it’s allowed me to spend most of my time solving hard problems and being creative instead of doing gruntwork. If anything, I’m sharper than I used to be. It’s a double-edged sword.