Is AI bad for critical thinking? It depends on when you use it

If used later in the process of writing an essay, chatbots can help writers include more perspectives.

People who used an AI chatbot after partially working through a problem improved their critical thinking skills, a small study suggests. Thai Liang Lim/E+/Getty Images

By Aaron Brooks, 7 hours ago

Next time you're about to ask an AI chatbot to help you solve a hard problem, you might want to slow your roll. People who waited to consult an AI chatbot until they had partially worked through a problem on their own performed better on a critical thinking task than those who used the chatbot from the start, researchers reported April 14 at the 2026 CHI Conference on Human Factors in Computing Systems in Barcelona. Under tight deadlines, though, using AI early in the process did provide a boost, highlighting a trade-off between speed and independent reasoning and raising questions about how and when we should use chatbots.

In the study, computer scientist Mina Lee of the University of Chicago and colleagues randomly assigned 393 people to one of eight groups. First, participants were split into two large groups: those given sufficient time (30 minutes) or insufficient time (10 minutes). Then they were divided into smaller groups based on when, or whether, they could use OpenAI's GPT-4o chatbot: early, continuous, late or no access. Each group had roughly 40 to 50 participants. Participants were then instructed to play the role of a city council member and decide, using a set of seven documents, whether to accept or reject a company's proposal to mitigate a water contamination problem.
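For readers who think in code, the study's two-by-two-by-four structure boils down to crossing two time conditions with four chatbot-access conditions and dealing participants into the resulting eight cells. The sketch below is purely illustrative, not the researchers' actual procedure; the condition labels and the balanced round-robin assignment are assumptions for demonstration.

```python
import random

# Illustrative sketch (not the study authors' code) of a 2 x 4 design:
# two time conditions crossed with four chatbot-access conditions.
TIME_CONDITIONS = ["sufficient_30min", "insufficient_10min"]
AI_CONDITIONS = ["early", "continuous", "late", "no_access"]


def assign_participants(n=393, seed=0):
    """Shuffle n participant IDs, then deal them round-robin into the
    eight condition cells so group sizes stay nearly equal."""
    rng = random.Random(seed)
    participants = list(range(n))
    rng.shuffle(participants)
    conditions = [(t, a) for t in TIME_CONDITIONS for a in AI_CONDITIONS]
    assignments = {c: [] for c in conditions}
    for i, pid in enumerate(participants):
        assignments[conditions[i % len(conditions)]].append(pid)
    return assignments


groups = assign_participants()
# With 393 people spread across 8 cells, each cell ends up with 49 or 50
# participants, consistent with the "roughly 40 to 50" the study reports.
print({cond: len(members) for cond, members in groups.items()})
```

A balanced deal like this keeps cell sizes nearly equal; a simple per-person coin flip over the eight cells would also work but produces more variable group sizes.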
Each participant had to write an essay explaining their decision. The researchers scored the essays based, in part, on how many valid arguments and textual references they contained. Participants given 30 minutes performed better across the board than those given only 10 minutes, and the highest essay scores came from participants who had enough time to complete the task and gained access to the chatbot late in the process. When the researchers tested how well participants remembered information in the provided documents, the most successful group was the one that had sufficient time and never had access to the chatbot. The researchers also scored myside bias, the tendency to weigh only arguments that support one's own position, by measuring how many perspectives participants incorporated into their arguments. Here too, the group with sufficient time and late chatbot access did best.

The results align with research on two kinds of learning: one based on slow, effortful reasoning and another based on fast, automatic thinking, says Barbara Oakley, a systems engineer and education expert at Oakland University in Rochester Hills, Mich. Slow learning involves carefully building an understanding of the problem and weighing options, while fast learning relies on habits and quick judgments with little reflection. Participants who had time to reason through the material on their own before using AI did best because they had already engaged in that slower, more deliberate learning, she says.

Of course, in the real world, people often have to complete critical thinking tasks under time pressure. Of the four groups in the "insufficient time" category, the one with early access to the chatbot scored the highest on the essays. That doesn't mean
