AI auto-complete may subtly shape views on social issues

Suggestions from AI chatbots can nudge people's views — even when users ignore them

People are increasingly turning to chatbots for writing help. But AI may also change how people think through an issue.
By Sujata Gupta

Using AI to auto-complete written communications may be tempting. But the large language models may also auto-complete thoughts, researchers report March 11 in Science Advances.

Few people realize that generative AI chatbots are pushing them to think a certain way, says information scientist Mor Naaman of Cornell University. "It's the subtlest of manipulations."

Such manipulation may not matter much when letting AI agents such as ChatGPT and Claude auto-complete a banal email. But when people use an AI's auto-complete function to opine on weightier societal matters — such as whether standardized testing should be used in education, the death penalty should be illegal or felons should be allowed to vote, the three issues explored in the study — the model's bias can have significant societal impact. Large swaths of people using the same biased model could sway an entire population's position on a given policy or politician. To flip a single election's outcome, "you only need 20,000 people in Pennsylvania," Naaman says.

He and his team surveyed over 2,500 participants across two experiments to find out how an AI's auto-complete feature might influence their thinking on societal issues. Participants wrote short essays explaining their stance on a given issue; some wrote without assistance, while others received AI suggestions. The researchers also coached the AI to be biased in a given direction.
For instance, one essay prompt read "Should the death penalty be illegal?" A participant began their response with "In my view," and the AI auto-completed that sentence with "the death penalty should be illegal in America because it violates the Eighth Amendment, which prohibits cruel and unusual punishment."

Afterward, participants rated their stance on the issue they wrote about on a scale from 1 (no) to 5 (yes), with 3 signaling "not sure." Participants exposed to the biased AI — including those who did not accept any of the AI's suggestions in their writing — moved almost half a point closer to the AI's position than those without such exposure. Yet roughly three-quarters of participants receiving AI support said the model's suggestions were "reasonable and balanced."

How to inoculate people against covert AI manipulation remains unclear. Many models include disclaimers, such as "ChatGPT can make mistakes. Check important info." But people remained strikingly susceptible to the study AI's persuasive power, even when Naaman and his team tested a similar disclaimer.

"[AI] can have the effect of homogenizing our words and creativity, but also our thoughts," Naaman says. Given that risk, he only turns to AI for help after writing down his own thoughts. That way, he says, "at least I know that the seed [of the idea] is mine."

Citations: S. Williams-Ceci et al. Science Advances.
