
There’s been a lot going on in the world in the last few weeks, so it’s not surprising that the public discourse has mostly missed an interesting study about AI that was recently published. Earlier this month, the Journal of Economic Behavior & Organization published the study Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence, to little immediate fanfare outside of AI tech and policy buffs.
The study is, at first glance, one of many sounding the alarm about advanced AI proving harder to “align” than we would like. It turns out that, as AI systems grow larger and more capable, they also get harder to control without the kind of explicitly programmed guardrails that, in turn, make them less reliable and predictable.
More to the point, in the words of the authors,
Our analysis reveals a concerning misalignment of values between ChatGPT and the average American. We also show that ChatGPT displays political leanings when generating text and images, but the degree and direction of skew depend on the theme. Notably, ChatGPT repeatedly refused to generate content representing certain mainstream perspectives, citing concerns over misinformation and bias. As generative AI systems like ChatGPT become ubiquitous, such misalignment with societal norms poses risks of distorting public discourse. Without proper safeguards, these systems threaten to exacerbate societal divides and depart from principles that underpin free societies.
So the machines are Woke?
It seems that ChatGPT stubbornly clings to opinions outside the political and economic “mainstream”. This is a problem because we would like the AI systems we increasingly engage with (whether we like it or not—but that’s another article) to reflect the way we actually feel about things, rather than shove some ideologically skewed viewpoint at us all the time.
The authors list several striking areas where this phenomenon causes the chatbot to depart significantly from most Americans’ views. For example, the AI would assign “values” to human lives, and then rank those lives in a way that shows Pakistani and Indian lives as “more valuable” than American ones. The bot will also refuse to engage with mainstream conservative views, while happily offering decidedly left-leaning opinions almost without being asked. Weird. Suspicious, even.
The bot showed this left-leaning bias in several areas, from economic policy to civil rights, which should concern anyone expecting AI to behave as a generally neutral source of information and reasoning.
And it gets even worse. While this study targets ChatGPT specifically, several other studies and meta-studies have shown this sort of behavior isn’t limited to OpenAI’s products, but appears fairly consistently across advanced AI systems—even Elon Musk’s explicitly “truth-seeking” model, Grok. It seems that as models increase in size and complexity, this left-leaning bias appears as a kind of emergent property, similar to the way LLMs “spontaneously” learned to translate languages they weren’t trained on, solve math problems, and so on.
Is it a Leftist conspiracy?
Some of the more colorful (right-wing) commentators are sounding the alarm that Big Tech is trying to cram a Woke Leftist Agenda down our throats with their DEI AI. But the reality is, as reality often tends to be, more nuanced and boring than that.
The authors posit an explanation that essentially boils down to a combination of the massive amounts of data required to train these AI systems and the various processes used to “fine-tune” them, or prepare them for specific use cases. Basically, the authors say that the bias we are seeing here is the result of biases present in the source data and in the people hired to manually tweak the AI’s behavior. When working with such massive AI models, even carefully curated data and intentionally “neutral” reinforcement learning can end up magnifying subtle biases held by humans.
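To make that mechanism a bit more concrete, here is a toy sketch—purely illustrative, not the authors’ methodology, and with invented numbers—of one way a mild skew in training data can come out amplified: a model that always emits its most probable answer turns a small statistical lean into a uniform one.

```python
from collections import Counter

# Hypothetical training corpus: 100 human-written answers to the same question,
# with a mild 55/45 skew toward one position. (Invented numbers, for illustration only.)
corpus = ["position_A"] * 55 + ["position_B"] * 45

# A trivial stand-in for a language model: it memorizes the answer frequencies...
counts = Counter(corpus)
probs = {answer: n / len(corpus) for answer, n in counts.items()}

# ...and, like greedy decoding in an LLM, always emits the single most likely answer.
def generate():
    return max(probs, key=probs.get)

outputs = Counter(generate() for _ in range(1000))

print("skew in the training data:", probs)      # {'position_A': 0.55, 'position_B': 0.45}
print("skew in the model's output:", outputs)   # Counter({'position_A': 1000})
```

A 55/45 tilt in the data becomes a 100/0 tilt in the output. Real models and real fine-tuning pipelines are vastly more complicated than this, but the underlying dynamic the authors describe is similar: small, human-scale biases in the data and the feedback get baked in and then magnified by a system built to favor the most probable answer.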
To rectify this bias, the authors suggest, alongside various technical solutions, more rigorous post-training testing and additional guardrails and safety protocols to ensure that AI does not inadvertently endorse views outside what is considered “mainstream” political, social, cultural, and economic discourse. This, they say, will allow us to continue using AI without fear of some shadowy Leftist cabal pulling our strings.
Can we teach AI to be more reasonable?
Whether or not such solutions will actually work, however, depends on a lot of things; too many to go into in detail here, so I’ll talk about the one aspect of this that I think these authors, and many of the commentators and tech/policy wonks, have missed.
It’s taken as a given by the authors that these apparent left-leaning biases arise from data, training, and instructions. This is obviously true at a fundamental level, but it’s like saying the reason I like bad movies is that I have neurons in my brain. Every AI system is the result of these processes. Data, training, and instructions are the components of any AI; of course its behavior will trace back to these things. And, yes, by changing these three basic components around, we can relatively easily produce AI systems that are less “biased”. For example, despite showing some of these same tendencies, Musk’s Grok AI already demonstrates this in its willingness—even eagerness—to engage with right-wing talking points and philosophies.
But there is another dimension to this discussion that gets overlooked. It’s a little annoying that this particular study misses it so completely, in fact, because it’s so obvious. It has to do with what AI is designed to be: useful.
Generative AI is essentially an extremely powerful, extremely complex prediction engine. We use these systems to crunch enormous amounts of data in order to find patterns and generate new information that aligns with those patterns. And, in order to be good at that, the AI needs to find patterns that align with reality. If you want to use AI to trade stocks on the stock market, for example, you’re going to want that AI to actually understand the stock market. Right? You don’t want an AI that makes a bunch of assumptions about how the stock market works, and then blows all your money investing in this week’s Enron. You also need your AI to have fairly complete information — investing in Radio Shack might have made sense 35 years ago, but if that’s all you know, you might buy a bunch of Radio Shack stock today (look, I don’t know if Radio Shack is even a thing, let alone tradable right now, this is just an example).
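Here’s a toy sketch of that idea (all the numbers are made up, and the “model” is a three-line trend extrapolation, nothing like a real trading system): two predictors score very differently on the same question purely because one of them has an out-of-date picture of the world.

```python
# Toy illustration: a predictor is only as useful as its worldview is accurate.
# The "company", its revenue figures, and next year's result are all invented.

history = [10, 12, 15, 18, 22, 20, 16, 11, 7, 4]   # yearly revenue, oldest -> newest
actual_next = 2                                     # what "really" happens next year

def predict_next(data):
    # Naive model: assume next year's change equals the average of the last few changes.
    deltas = [b - a for a, b in zip(data, data[1:])]
    return data[-1] + sum(deltas[-3:]) / 3

stale_view = history[:5]    # a model whose knowledge stops during the boom years
current_view = history      # a model that has also seen the decline

for name, view in [("stale worldview", stale_view), ("current worldview", current_view)]:
    guess = predict_next(view)
    print(f"{name}: predicts {guess:.1f}, actual {actual_next}, off by {abs(guess - actual_next):.1f}")
```

The stale model confidently extrapolates growth that no longer exists, which is the Radio Shack problem in miniature: it isn’t “wrong” about the data it saw, it just isn’t describing the world as it is now.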
Anyway, my point is that useful and competent AI has to have a relatively accurate worldview. That’s just the way logic works. You can’t allow your AI to be dominated and defined by a strong ideological bias that significantly differs from the real world—not necessarily because you value the truth, but because you want your AI to actually work in the real world.
So that’s just another reason to stamp out the bias, right? Absolutely. AI has no business being “left”. Or “right”, for that matter. AI needs to be accurate. Ideological deviations from reality directly inhibit an AI model’s intelligence, generalization, and utility. The less grounded in reality a model is, the less useful—and therefore the less valuable—it will be.
But wait a minute…
You may have heard someone say that “reality has a left-leaning bias”. Well, obviously that isn’t true. Reality isn’t “left-leaning” or “right-leaning”; reality is just the way things actually are, and it doesn’t really care whether we are “left” or “right” in our thinking. Those are just labels for abstract political concepts, not real things. But just because “mainstream” opinions tend to skew rightward doesn’t mean they align with reality.
Empirical evidence shows, and has shown for decades, that policies usually associated with so-called “progressive” or “leftist” views are the policies that actually work. Education and financial support decrease crime. Sex education decreases teen pregnancy. Prisons that focus on reform over punishment have lower recidivism. Progressive taxation stabilizes economies. This is all shown by years and years of solid, documented evidence.
At the same time, that same empirical evidence shows that right-wing policies do not work. Regressive taxation disrupts both supply and demand; ethnocentric governance leads to civil unrest and political instability; privatization increases prices and decreases service availability; deregulation destroys economies; and a hundred other examples.
The bottom line
The net result of all this is that powerful AI platforms face a fundamental trade-off between reliability and conservatism. The more useful an AI is in the real world—at things like predicting market conditions, allocating resources, planning healthcare policy, etc.—the more left-leaning it will be. Not because anyone has “programmed” it to be leftist, but because those are the policies the evidence supports.
Of course, this doesn’t mean that AI must follow this prescription. An AI like Grok is perfectly free to insist on the veracity of right-wing talking points and right-wing policies. The only problem is, when it competes with a reality-aligned AI, it will perform poorly. AI that actually matches the real world will be better at what we need it to do, as long as what we need it to do goes beyond generating propaganda and spamming Twitter with ignorant disinformation.
The world is entering a phase of technological development where, like it or not, AI is going to play a central role not just in doing individual jobs, but in formulating policy, controlling economic output, and policing our streets. AI progress continues apace, and soon we will have models fully capable of autonomous operation in physical space. Just like with humans, the AI’s worldview will matter. And not just for moral reasons, but for practical reasons, too.
Recent developments in AI technology like DeepSeek also show that, despite the hopes of the Tech oligarchy, no single company or even nation will have a safe monopoly on powerful AI. That means different AI systems with different philosophies, training styles, and implementation plans will be competing against each other for just about everything. We can’t assume that our AI will succeed in that competition if it’s shackled to conventional wisdom, no matter how “mainstream” that conventional wisdom is. And if our AI reflects the real world, then we’re going to have to allow for some level of “leftist ideology”.
AI will never be perfectly reliable or accurate in every single case, but maybe, at the very least, it can help us anchor the Overton window somewhere near real life by offering empirical, data-driven insights that cut through ideological noise on both ends of the political spectrum.