You know how we often assume a computer program is fair, just because it’s a machine?
Well, when it comes to advanced AI chatbots and language tools, recent research says that might not be the case.
What the Research Shows
A team at Stanford University recently asked people from both parties to rate answers from 24 AI models.
For many of the questions, respondents saw the answers as left-leaning – and even the Democrats in the group agreed.
The study found: “For 18 of the 30 questions, users perceived nearly all of the LLMs’ responses as left-leaning.”
In December 2024, researchers at the Massachusetts Institute of Technology (MIT) examined “reward models” – the part of AI systems that helps pick good responses.
They found these models also leaned left, even when trained only on objective, fact-based data, and that the bias was stronger in larger models.
A recent study at the University of Washington (UW) examined how bias in chatbots affects people. Researchers had 299 participants (both Republicans and Democrats) use one of three versions of a chatbot: one neutral, one with a liberal bias, and one with a conservative bias.
The result: both Republicans and Democrats shifted their views toward the chatbot’s bias.
So in short: not only are many AI models seen as left-leaning, but there’s evidence that this tilt can actually change how people think.
Why the Data Itself Isn’t Neutral
Here are three simple reasons:
- Training and fine-tuning: After the base AI is built, humans guide it with their preferences. If those humans lean one way politically, that influence can embed into the model.
- Data used: If the data contains more content from one political angle or excludes others, the model learns from that. For example, even when MIT used data meant to be factual, the models leaned left.
- Scale and complexity: Bigger, more complex models (which many of us use, often without realizing it) tend to show stronger bias because they combine more data and patterns.
Here’s Why It Matters
- Influence on how we think: These tools are used everywhere now – in schools, in business, for news summarization, even by government agencies. If the tool has a tilt, the output might lean a certain way.
- Transparency and control: If you use an AI tool and assume it’s neutral, but it’s not, you’re missing a key piece of information.
- In Nevada: In our state, with its diverse economy (tourism, gaming, energy) and competitive politics, the tools we use matter. If a government office in Nevada uses an AI tool with an unintended political lean, the framing of the information it produces could tilt with it.
Important Caveats
- Bias isn’t always the same in every context. The MIT study found the left-leaning bias was strong for topics like climate, energy, and labor unions, but weaker or even flipped for topics such as taxes and the death penalty.
- People who know more about AI seem less likely to be influenced by biased chatbots, so awareness helps. The UW study found participants with higher self-reported AI knowledge shifted their views less.
- Some observers note that newer models may be moving toward the center as developers adjust alignment settings, but for now most studies still find they lean left overall.
What You Should Do
- Ask questions: If you or your organization in Nevada is using AI tools, question the answers you receive, ask what data was used, and ask whether the tool tries to balance viewpoints.
- Stay aware: Use AI as a helper, not a sole decision-maker.
- Educate yourself and others: As the UW study suggests, knowing more about how AI works can lessen the effect of bias.
The studies are clear: many leading AI models are seen as left-leaning, and they can influence how people think. That isn’t something to be terrified of, but it is important to know about if you or your team uses AI.
These results don’t necessarily mean the system is rigged in a conspiratorial way – but they do mean the playing field might not be level.
The opinions expressed by contributors are their own and do not necessarily represent the views of Nevada News & Views. This article was written with the assistance of AI. Please verify information and consult additional sources as needed.