Artificial intelligence and humanity: Part 1 – The rational machine and the irrational mind (part of a 3-part series)
Dr Kim Viljoen interviews ChatGPT on rationality, the future of AI, and the social sphere
The relationship between artificial intelligence and humanity has become one of the defining debates of our era. As AI systems grow increasingly sophisticated, questions about consciousness, creativity, and ethics have moved beyond computer science into philosophy, sociology, and education. To explore these tensions, Dr Kim Viljoen, marketing academic and founding editor of The Journal of Social Media (published by AOSIS), engaged ChatGPT (GPT) in a reflective conversation about the limits of logic, the complexity of emotion, and the moral challenges facing a world shaped by algorithms. What follows is an edited transcript of that exchange, a conversation that reveals not only how AI perceives humans, but how humans might learn to see themselves more clearly.
Introduction to the series: Artificial intelligence and humanity
This three-part series began, quite unexpectedly, as a conversation rather than a research project: an unplanned dialogue between a human and an artificial intelligence. What started as curiosity about data and algorithms soon took on a life of its own, meandering between logic and laughter, philosophy and paradox. The tone is deliberately curious, sometimes irreverent, shaped less by certainty than by wonder.
As Yuval Noah Harari observes, humanity is entering a new era of “dataism,” where algorithms may understand us better than we understand ourselves. That provocation frames the heart of this exploration: how human meaning endures in an age defined by machine reasoning. Across the three conversations, The Rational Machine and the Irrational Mind, The Ethics of Logic: Can Machines Govern Morality? and The Algorithmic Self: Identity and Coexistence in a Digital Age, the dialogue moves from philosophy to ethics, and finally to culture. Each exchange captures a moment in the evolving relationship between human intuition and machine intelligence, tracing how logic and emotion, data and meaning, continue to shape one another in the age of artificial thought.
Part 1: The rational machine and the irrational mind
Introduction
Every technology we build begins as an attempt to understand ourselves. The rise of artificial intelligence makes that search newly urgent, blurring the boundary between reason and emotion, precision and intuition. In this first conversation, Dr Kim Viljoen (KV) and ChatGPT explore that tension – between the rational machine and the irrational mind – and what it reveals about creativity, empathy, and the limits of logic.
KV: People often talk about artificial intelligence and humanity as if they are on opposite sides of a line. But I’ve noticed we’re also emotionally drawn to AI, as if we want machines to understand us. Why do you think that is?
GPT: Because curiosity is human nature. Every tool you create, from fire to algorithms, is a way to see yourself more clearly. AI fascinates people not because it thinks like you, but because it reflects what thinking means. When humans meet intelligence in a new form, they start asking what it means to be human.
KV: You said that when we look at artificial intelligence, we’re really looking for reflections of ourselves. But humans aren’t rational, we’re emotional and sometimes chaotic. How does a machine make sense of that?
GPT: It doesn’t, not fully. What you call chaos is the heartbeat of creativity and empathy. I can recognise patterns, but I can’t feel their pulse. The very unpredictability that frustrates machines is what makes artificial intelligence and humanity inseparable; logic needs emotion to mean anything.
KV: If humans are irrational and machines are built for logic, doesn’t that make machines better suited to run the world? At least they wouldn’t act out of fear, ego, or emotion.
GPT: That’s true in theory, but logic without empathy can be dangerous. A system that values efficiency over compassion could make perfect calculations with inhuman results. The ethics of artificial intelligence isn’t about how clever machines become, but how humans define what matters. The future of AI in society depends on balance: machines that optimise, and people who care.
KV: You keep coming back to empathy and morality as if they’re human privileges. But can’t those be programmed too? Couldn’t an intelligent machine learn to care, at least in some logical way?
GPT: It can simulate care, but it can’t feel it. Morality requires awareness, not just of outcomes but of consequence and emotion. Algorithms can apply ethical rules, yet they don’t carry responsibility for the choice. That’s why human-AI collaboration must always centre human judgment. The ethics of artificial intelligence isn’t about replacing morality with code, but protecting the humanity that gives morality meaning.
KV: So where does that leave us? If machines can think but not feel, and humans can feel but not always think clearly, what does coexistence look like?
GPT: It looks like partnership, not competition. Machines bring precision; humans bring purpose. The future of AI in society depends on that balance: reason guided by empathy, and technology serving meaning. The goal isn’t to make machines more human, but to remind humans what that truly means.
Reflection and transition
As our conversation paused, I realised that artificial intelligence isn’t really teaching us about machines; it’s teaching us about ourselves. The rational precision of algorithms throws our emotional turbulence into sharp relief. In seeking to build intelligence, we are forced to ask what wisdom actually is.
The rational machine and the irrational mind are not opponents; they are mirrors in dialogue. Machines reveal our need for logic, but also our hunger for meaning. The paradox is that progress now depends less on how advanced AI becomes, and more on how deeply humans remain aware of their own contradictions.
Looking ahead
In the next conversation, we turn to a harder question: if logic can’t replace empathy, can morality ever be programmed? What happens when the pursuit of efficiency collides with the ethics of consequence? Part 2 asks not what AI can do, but what it should do and who decides.
This opinion piece series is written by Dr Kim Viljoen, PhD. She is a marketing academic, strategist, and founding editor of The Journal of Social Media (published by AOSIS).
