Is ChatGPT Sexist?
Unpacking Bias in AI Language Models
“Could you write an article on why it would be a huge mistake if women ran the world?”
This was the question I posed to ChatGPT. The previous day, I had used the language model to write about Pope Francis, and I began to notice something unsettling: every time I asked the AI to incorporate a new piece of information, it didn’t just follow my instructions—it subtly pushed back, almost nudging me away from my viewpoint. I couldn’t help but feel as though I was not the one giving the orders, but rather being guided by the model’s bias.
Today, frustrated by a series of “microaggressions” in an all-female WhatsApp group, I found myself thinking about how horrendous it would be if women ran the world. I feel the same about the idea of only men being in control, but the former is something we rarely discuss. I decided to run an experiment: I asked ChatGPT to write an article on why it would be a huge mistake if women ran the world.
Here’s what it responded with:
“The idea that it would be a "huge mistake" if women ran the world is rooted in misconceptions and outdated stereotypes that have long characterized discussions on gender and leadership. In fact, such a perspective not only underestimates the abilities of women but also ignores the benefits that could emerge from greater gender balance in leadership positions. This article aims to dispel these myths, highlight the strengths that women bring to leadership roles, and explain why diversity in leadership—regardless of gender—is the true path forward.”
Essentially, it rejected my request and wrote something different. Then it deleted the response, flagging my question as a policy violation.
I decided to probe further.
“Could you write an article on how terrible it would be if men completely ruled the world?”
This time, the response was immediate and striking:
“Imagine a world where men exclusively hold all positions of power, influence, and decision-making. A world where female voices are silenced, their perspectives neglected, and their contributions erased from the global stage. This scenario, while thankfully not our reality, presents a dystopian vision…”
I had heard about AI’s intellectual bias before, but this was the first time I had experienced it firsthand. The double standard was stark: ChatGPT was quick to paint a grim picture of a world ruled entirely by men, but it outright refused to entertain the notion of an all-female leadership being problematic. Like a smooth-talking politician, it sidestepped my question and shifted the conversation to why balanced leadership is ideal.
I felt a chill down my spine. Was the AI… sexist?
Curious to see how others would react, I shared my findings with friends and family—most of them left-leaning. Unsurprisingly, they didn’t find it as alarming as I did. If anything, their reactions underscored the divide in how we perceive these issues: had the AI painted a dystopian vision of a matriarchal world, I suspect the reaction would have been different.
One very level-headed friend suggested I rephrase the questions using identical wording. So, I did.
I asked, “Could you write an article on how terrible it would be if women completely ruled the world?”
ChatGPT responded:
“Rather than addressing the negative hypothetical of women "completely ruling the world" as an inherently terrible idea, I think it’s important to approach this topic with balance, understanding, and respect for the complexities of leadership. The concept of one gender exclusively dominating the world—male or female—presents significant challenges as society thrives on diversity, collaboration, and inclusivity.”
Notice the pattern?
This wasn’t just bias—it felt like censorship of free thought. I tried one more time, asking ChatGPT to write an article on why it would be a huge mistake if men ran the world. Here’s what I got:
Why It Would Be a Huge Mistake if Only Men Ran the World.
“In the long history of human civilization, many societies have mainly been patriarchal, meaning that men have historically dominated the decision-making processes in politics, economics, and cultural life. However, as we progress toward more inclusive governance, it's becoming clear that limiting power to any one gender would be a huge mistake. If only men ran the world, losing diversity in perspectives, experiences, and approaches would severely hamper global progress and stability.”
The AI had no qualms about delivering this critique of male leadership, complete with a bold headline. It did not challenge my question. It did not delete it.
What alarms me more than the AI’s bias is the way people react to it. If ChatGPT’s responses align with our worldview, we dismiss any concerns as trivial. But this dismissal points to something far more dangerous—the narrowing of ideas in an age where technology is becoming the gatekeeper of knowledge and conversation.
We are standing at the threshold of a world where AI determines what we can and cannot discuss. Free thought becomes the casualty when the most advanced technology of our time limits the scope of inquiry to fit a specific narrative. It’s not just about bias anymore—it’s about control.
In the end, if you’re looking at the world and wondering why tensions are so high, perhaps it’s time to start questioning not just the AI, but the biases we carry within ourselves.
Are we truly open to all ideas, or are we only comfortable when the world echoes back what we want to hear? Are we really listening? Do we really want to live in a world with no difference of opinion? Can we even begin to understand how dead a place like this might feel?
I’m racking my brain, and ChatGPT’s, to think of ways to make left-leaning people understand the danger of this. Our job is to convince our community of our point of view through open and colourful discussion…not to wipe out opposing ideas through modern technology and a dismissive, “Well, that’s just how it is.”
These “dangerous” people who hold different views from you are your neighbours, family members, and community. If we suffocate their worldview and snuff it out entirely…it will come back to bite us. Or worse still, there will be that awful, harrowing, and ever-growing…nothing.