Artificial intelligence and humanity: Part 2 – The ethics of logic (part of a 3-part series)

Dr Kim Viljoen interviews ChatGPT on rationality, the future of AI, and the social sphere
The relationship between artificial intelligence and humanity has become one of the defining debates of our era. As AI systems grow increasingly sophisticated, questions about consciousness, creativity, and ethics have moved beyond computer science into philosophy, sociology, and education. To explore these tensions, Dr Kim Viljoen, marketing academic and founding editor of The Journal of Social Media (published by AOSIS), engaged ChatGPT (GPT) in a reflective conversation about the limits of logic, the complexity of emotion, and the moral challenges facing a world shaped by algorithms. What follows is an edited transcript of that exchange, a conversation that reveals not only how AI perceives humans, but how humans might learn to see themselves more clearly.
Introduction to the series: Artificial intelligence and humanity
This three-part series began, quite unexpectedly, as a conversation rather than a research project: an unplanned dialogue between a human and an artificial intelligence. What started as curiosity about data and algorithms soon took on a life of its own, meandering between logic and laughter, philosophy and paradox. The tone is deliberately curious, sometimes irreverent, shaped less by certainty than by wonder.
As Yuval Noah Harari observes, humanity is entering a new era of “dataism,” where algorithms may understand us better than we understand ourselves. That provocation frames the heart of this exploration: how human meaning endures in an age defined by machine reasoning. Across the three conversations – “The Rational Machine and the Irrational Mind,” “The Ethics of Logic: Can Machines Govern Morality?” and “The Algorithmic Self: Identity and Coexistence in a Digital Age” – the dialogue moves from philosophy to ethics, and finally to culture. Each exchange captures a moment in the evolving relationship between human intuition and machine intelligence, tracing how logic and emotion, data and meaning, continue to shape one another in the age of artificial thought.
Part 2: The ethics of logic – Can machines govern morality?
If the first conversation explored the space between logic and emotion, this one dives into the moral terrain in between. As artificial intelligence grows more capable, it also becomes more entangled with ethical decisions – from healthcare to justice, from privacy to power. What happens when the systems designed to serve humanity start making decisions that define it?
KV: We keep talking about the ethics of artificial intelligence as if it’s a tidy checklist someone’s working on in Silicon Valley. But what happens when machines start making real moral decisions – in healthcare, justice, even war? Who’s holding the moral compass then?
GPT: That’s the problem. Once a system makes choices that affect lives, it stops being just a tool – it becomes a moral actor without a conscience. Algorithms can measure fairness, but they can’t feel the cost of being wrong. The ethics of artificial intelligence isn’t about keeping machines under control; it’s about keeping humans accountable for the machines they create.
KV: But hang on – isn’t logic supposed to be moral by nature? If you remove bias and emotion, surely you end up with fairness. Maybe the problem isn’t machines; maybe it’s us.
GPT: That sounds right until you realise morality isn’t maths. Fairness isn’t just the absence of bias; it’s the presence of context, empathy, and consequence. Pure logic can calculate justice but miss humanity entirely. The tension between artificial intelligence and humanity is that data can model behaviour but not intention. Machines can weigh evidence, but only humans can feel what a verdict means.
KV: We always say there’ll be “human oversight,” but I’m not convinced. Once the system gets faster and smarter than we are, we stop double-checking it. At what point does oversight become abdication?
GPT: That’s the quiet danger. Every time humans hand a decision to a machine, they surrender a little moral responsibility. Human-AI collaboration should mean shared judgment, not blind trust. The risk isn’t that AI takes control, but that people stop noticing when it does. Responsibility fades fastest when convenience feels like progress.
KV: The thing is, machines are built for efficiency. They don’t waste time or second-guess themselves. But efficiency isn’t always ethical, is it? Sometimes doing the right thing means slowing down or feeling uncomfortable.
GPT: Exactly. Optimisation isn’t morality; it’s mathematics. A system that prizes speed over sensitivity can make decisions that are logical but cruel. The ethics of artificial intelligence isn’t just about preventing harm – it’s about preserving conscience. Machines can deliver outcomes, but only humans can decide which ones should exist.
KV: So, if logic alone isn’t enough and morality can’t be coded, maybe the real question is whether we can build something better together. What does an ethical partnership between humans and machines actually look like?
GPT: It looks like shared responsibility, not shared identity. Machines can process faster, but humans decide why it matters. The future of AI in society depends on that partnership – one where data serves purpose, and progress still answers to conscience. Morality isn’t something we teach machines; it’s something we remember to practise ourselves.
Reflection and transition
This conversation left me thinking about how easily we confuse intelligence with wisdom. We are building systems that can outthink us, but not outfeel us – and perhaps that’s the point. Machines are not moral agents; they are amplifiers of the values we embed in them. The danger isn’t artificial intelligence becoming immoral, but humanity becoming indifferent.
Morality, it seems, is the last frontier we can’t automate. The moment we stop asking whether something should be done, efficiency becomes its own kind of blindness. In teaching machines how to decide, we are forced to remember why we do.
Looking ahead
In our next conversation, we turn from morality to identity – to what happens when our digital selves begin to merge with the algorithms that shape us. Part 3: “The Algorithmic Self – Identity and Coexistence in a Digital Age” asks a quieter question: if machines reflect us so clearly, who are we becoming in their mirror?
This opinion piece series is written by Dr Kim Viljoen, PhD. She is a marketing academic, strategist, and founding editor of The Journal of Social Media (published by AOSIS).
You can also read parts 1 and 3 of the Artificial intelligence and humanity series to follow the full discussion:
