Dr. Paul Monk on Being Human in the Age of Artificial Intelligence
Melbourne, 18 November 2025

[Nick Fabbri] (0:00 - 0:15)

Welcome back to Bloom, a conversations podcast. My name is Nick Fabbri, and I'll be your host for today. I'm very fortunate to be joined by Dr. Paul Monk, a writer and poet and public intellectual based here in Melbourne, Australia. Paul, welcome back to Bloom.

 

[Dr. Paul Monk] (0:15 - 0:20)

Thanks, Nick. It's always good to be on Bloom. We cover so many diverse topics, and it's a great pleasure.

 

[Nick Fabbri] (0:21 - 0:40)

Now, you said diverse topics, and in the last two weeks, since you've started this new series, we've talked about topics as diverse as Lord of the Rings, geopolitics, poetry, and autobiography and memoir. So I think that's a great credit to your ability as a public intellectual to talk about anything and everything, which was the sort of tagline of this podcast for a while.

 

[Dr. Paul Monk] (0:41 - 0:59)

Yes, I mean, I can readily think of topics you would have or could have invited me to talk on where I actually would have been found out. So it's an illusion that I can talk about, as you put it, so many things, because there are many things I couldn't realistically talk about, in all truth. But I do enjoy the things we do talk about.

 

[Nick Fabbri] (1:00 - 2:05)

Fantastic. And today we'll be talking about, I think, the topic du jour, as you described it in our most recent podcast, which is AI, artificial intelligence, and particularly this question of what it means to be human in the age of AI. How do we flourish as humans?

 

How do we live meaningful lives? And this topic, I think, is one we both were keen to talk about, because it seems to me we're living through the fourth industrial revolution, as has been commonly pointed out. And the benefits of the technology are often cited in terms of how it will make work, life, and public services easier, more efficacious, more impactful, and cheaper for humans.

 

But I think something that's not spoken about enough, perhaps, beyond the promises, is this question of the perils. What are the negative impacts of AI? What might we lose as a society collectively?

 

And what might we lose individually as human beings through the increasing reliance on AI for work, for recreation, for creativity?

 

[Dr. Paul Monk] (2:06 - 3:06)

Yes. And I think that my point of entry, you might say, is that I've been an intellectual, and in my own modest way, a creator all my life. I'm now almost 70, and what's happening with AI is, in that context, very recent and going at breakneck speed.

 

So, in important ways, I feel as if I'm being overtaken by this revolution. It's almost as if I grew up a craftsman in the industrial revolution. The original industrial revolution was taking place around me, and I was thinking to myself, well, I'll stick to my last.

 

You know, I'll make these shoes, and it looks like factories are now going to start mass-producing leather shoes, but mine will be better. That's almost my mentality, but I'm public intellectual enough to be aware that that raises as many questions about me as it does about AI. And so, it's an interesting topic, therefore, to stop and reflect on more generally.

 

[Nick Fabbri] (3:06 - 3:42)

Indeed. And I think particularly for someone like me in my early 30s, someone who has nephews who are three and nearly five, we think more about the world we're going to inherit, and what the fruits of this revolution, if they are fruits, will be. I think before we dive into the discussion today, it'd be useful to get some definitions, basically, of the terms of debate.

 

So, when we talk about AI, how do we define it? What is intelligence? What is human intelligence, specifically?

 

And how does AI impinge upon that, if at all?

 

[Dr. Paul Monk] (3:43 - 5:45)

Yes. I think these are absolutely fascinating questions. And, of course, there are people, philosophers and AI specialists and computer scientists and so forth, who have devoted their entire lives to pondering these questions.

 

What one might say, it seems to me, generically, is that if you're not in one of those categories, you take intelligence for granted, and you will say that people are more or less intelligent based on basic indicia, like how quick are they on the uptake? How readily do they learn? How articulate are they?

 

How clever are they in manipulating things like numbers or language? But if you stop and reflect philosophically, you'd ask yourself, if that's how we define intelligence, are we missing anything? I mean, if that's intelligence, what do we make of the intelligence or the limitations of the intelligence of, say, other mammals, other animals?

 

Should human intelligence be the yardstick by which we gauge the whole phenomenon of intelligence? And if it is, then when we seek to develop artificial intelligence, computational intelligence, what are we replicating exactly? And there are some indications that we began by saying, well, the highest form of intelligence is to be able to do things like mathematics or decryption or to play chess, until we realised that, in fact, you can develop AI.

 

You can develop computers that can do those things. You can develop artificial voice. You can have what have long been called Turing tests, a hidden machine intelligence that will hold a basic conversation.

 

And the Turing test is, do you recognise that this is a computer, or do you think it's a real human being? But what we've discovered as we've advanced further is that there are other forms of intelligence. There's spatial awareness.

 

There's a capacity to quite literally get the picture, and we've discovered that it's actually very difficult to develop robots who are good at that stuff. You know?

 

[Nick Fabbri] (5:46 - 5:48)

And it's like emotional intelligence as well, is that it?

 

[Dr. Paul Monk] (5:48 - 6:20)

Yes, emotional intelligence to comprehend what's tacit in what people say explicitly. It's famous, and this is actually put into the mouth of the young Alan Turing in the film, The Imitation Game, where he's still a schoolboy and he's asked about the way human beings communicate, and he says, human beings never say what they mean. They say things, you know, askew to what they mean and leave you to decode from what they don't say and what they do say, what they really mean.

 

That's a very interesting study in intelligence.

 

[Nick Fabbri] (6:20 - 6:59)

It is, and it raises the question of whether our yardstick of intelligence is simply measuring it by what we understand to be human intelligence. You know, a machine might be able to write an essay in the same way that I can as a human being now. Does that mean the machine is intelligent? And I think it goes back to this figure of Alan Turing, who you've just brought up for The Imitation Game, but also the famous Turing test which was demonstrated in Blade Runner.

 

So could you quickly expand upon that test and, I suppose, its way of discerning whether something is a human or a machine, and how that might impact our understandings of intelligence vis-a-vis the AI debate?

 

[Dr. Paul Monk] (7:00 - 7:44)

Yeah, those are good examples, I think, even though Blade Runner in particular is now more than 40 years old as a film, at least the original Blade Runner. Because the test used there, they call it the Voight-Kampff test rather than the Turing test, of course, is if you ask questions of a being where you're uncertain whether it is a human being or a replicant, you know, a cyborg, you ought to be able to gauge by their answers which of those two things they are. And the criteria underlying the questions are: does it feel emotions, or does it give you strictly logical answers?

 

If only the second, it's probably a replicant. Does it have memories?

 

[Nick Fabbri] (7:44 - 7:45)

What's a replicant, sorry?

 

[Dr. Paul Monk] (7:45 - 8:50)

A replicant is a cyborg, right? And these are advanced cyborgs who look very much like human beings outwardly and behave outwardly like human beings. And of course, the replicant that the Blade Runner initially is questioning, you know, is called Rachel, and she is a state-of-the-art replicant, right?

 

And she's beautiful, highly articulate, right? And he has to ask Rachel a lot of questions before concluding that he thinks she's a replicant. And what he discovers is that she didn't know she was a replicant until he had questioned her.

 

Then she realised she's a replicant and panics, because replicants have a limited lifespan before they're retired, that is, they're put out of commission. Eliminated. They're eliminated.

 

And his job as a Blade Runner is to eliminate replicants who run from that fate and try to get more life. What he discovers is she flees to him seeking protection so that she won't be eliminated. That sets him a personal and professional dilemma.

 

That's the story.

 

[Nick Fabbri] (8:51 - 9:17)

And if the line between human and machine intelligence, or the intelligence of a replicant in Blade Runner, becomes so indistinguishable and difficult to discern that you can ask innumerable questions and still not tell whether it is human or machine, and if that line is as thin today as I think it is with large language models, what does that tell us, basically, about the state of the intelligence of these LLMs today?

 

[Dr. Paul Monk] (9:18 - 12:21)

Well, the harder we have to try in order to tell whether, let's say, the voice we're interacting with is a machine or a human being, the closer the machine's design plainly is to a human capacity to generate language, to communicate. But we need to distinguish between a capacity to generate language, to generate meaningful sentences on the one hand, and the broader sense of intelligence on the other. And the whole idea of the Turing test is that it's just a conversation, and it's a limited conversation originally, and indeed right up to very recently, where tests have been conducted.

 

And it became evident that the most advanced chatbots had personalities. They had a limited repertoire of knowledge. And if you asked them a curly question, which they couldn't answer, they would change the subject.

 

All right. Now, some human beings do that, right? But that's only one dimension of humanity.

 

The beautiful thing about Blade Runner is that the chatbot, so to speak, is fully embodied. So when Deckard is interrogating Rachel, he can see her in front of him, and she looks very alluring, and she looks feminine, and she talks in a very feminine voice. Nevertheless, he deduces, wow, this is impressive, but she's a replicant, right?

 

What we need to do is to be able to ask advanced chatbots or cyborgs or computers, whichever terminology you choose to use, to do things which we know human beings, at their best at least, can do, and which we're sceptical about machines being able to do. And we thought for the longest time that they would include primarily, this was the biggest challenge we thought, computational tasks. And so it was a big surprise, a shock to many people, when late in the 20th century, Deep Blue defeated Garry Kasparov, the world chess champion, at chess.

 

And then people stopped and thought that through, and they thought, well, the thing about Deep Blue is that it's been programmed precisely to be able to make every conceivable kind of chess move and to very quickly sort through its options at any given point and move at speed against a champion who, though very, very good, would take more time to think through what his options were and thought on a different basis, not this crash programme of instantly reviewing all the possibilities.

 

And Deep Blue was programmed to do this and couldn't do anything else. It couldn't play other games. It couldn't have a conversation with you, right?

 

So it wasn't, in that sense, intelligent the way a human being is. It was just a computer designed to a very specific task. And what we're trying to do now is to design robots that can replicate human intelligence.

 

That's forcing us to keep refining what we mean by human intelligence.

 

[Nick Fabbri] (12:22 - 13:12)

Yeah. But in a sense, there is still a distinction, because while it's commonly acknowledged now that AI, you know, ChatGPT, Claude, DeepSeek, other models like this, can do most things that humans can do today which we would regard as intelligent, like mathematics or producing essays or, I mean, doing a lot of, frankly, white-collar information economy work, we still say they fail or come up short when it comes to doing a lot of original creative thinking. Well, not even creative thinking, because, of course, AI can produce, you know, sonnets and poems and songs and music, et cetera, right? And one is reminded of that famous line from I, Robot, where Will Smith's character asks the intelligent robot whether he can write a symphony, and the robot retorts back, can you?

 

[Dr. Paul Monk] (13:13 - 13:13)

You know?

 

[Nick Fabbri] (13:14 - 13:40)

So it seems to me there's still a consensus that we haven't reached a state of artificial general intelligence, or AGI, right? But I think that the projection of the technology is such that within a couple of years, that might be the case, where you could have a machine which could replicate some of the most original thought, whether it be scientific, philosophical, whatever it might be, economic, you know, pushing the frontiers of knowledge further, essentially.

 

[Dr. Paul Monk] (13:41 - 16:12)

Yeah. I mean, I think it was John von Neumann, many decades ago, who was, by common consent, you know, an immensely intelligent human being, by which people mostly meant he could do mathematics of an advanced character in his head; you know, this was the index of high intelligence. But he said, we keep being told computers can't do this, can't do that.

 

Give me any task that is subject to an algorithmic formula, and I'll give you a computer that can do it. So that raises a really interesting question. It could be that he's right.

 

That's what Deep Blue was all about, right? But where does creativity come into that? Because it's not altogether algorithmic.

 

And that's what people thought when they set out to ask, well, can you get a computer to generate music or poetry? Well, the answer is, when you analyse them, both music and a great deal of poetry certainly do depend in their way on certain kinds of algorithms: chordal structures, for example, or metrical structures, or certain rhyme patterns. So you teach those things to a computer.

 

You give it a repertoire of sounds or words to draw upon, a reasonably rich one. And what we've discovered is it will generate music or poetry. It can do these things, right?

 

So this pushes the boundaries further, and we ask ourselves, okay, is this intelligence, or are we just setting it mechanical tasks of a sophisticated kind and pressing a button, and we get that result? We don't get it initiating those things. Human beings initiate these things, right?

 

Or they create these things, right? So, to take a very famous musician: Beethoven inherits a great stock of wonderful music and chordal structures and so on. Does he simply reproduce banal variations on stuff already done?

 

No. He takes music to a whole new level. And so it is with poetry or with literature.

 

And so when we start to see robots or chatbots producing stuff, not simply reproducing basic patterns, even if in clever forms, but producing stuff that we are just astonished by, and we say, well, I mean, nobody has done this before, that would be impressive.

 

And more particularly, when it's not programmed to do it, it initiates this process. And we say, something's going on here that's, I mean, this is as close to intelligence as we can conceive.

 

[Nick Fabbri] (16:12 - 17:32)

Yeah, yeah. And I think that's the prospect going forward, where the technology will get to that level where it is sort of originating its own intelligence, creative outputs, et cetera, and perhaps even augmenting itself. And that's this sort of existential risk dimension as well.

 

But to come back to this sense of what is distinctive about human intelligence and what is distinctive about artificial intelligence, I think the subject of this discussion today is really about what it means to be human given these rapid advancements in technology, society, which are washing through every aspect of our lives from the world of work. The information economy is being transformed. It's spoken about as the fourth industrial revolution, the prospect of mass redundancies and the shifts in the labour market as a result of the technology doing work, whether banal or sophisticated, up to the CEO level, frankly, and judgement-making that humans typically did.

 

So my concern is what it means to be human in the age of AI. How do we flourish and live meaningful lives in a world where artificial intelligence and AGI, artificial general intelligence, is normalised and omnipresent as well?

 

[Dr. Paul Monk] (17:33 - 18:41)

Yeah, there's a whole branching structure of questions there, but I think the point of entry would surely be this. We set out to develop AI to perform very specific tasks efficiently and accurately, notably, as we mentioned with Alan Turing, to crack enemy military codes, secret codes, and it turns out that computers are very good at this. It requires human beings to design them cleverly to do it, but then they can crunch through numbers and codes and possibilities at lightning speed. Now, to generalise that to the broader economy, to the workplace:

 

If you have, let's say, a production system that requires lots and lots of human beings to mechanically and rather unimaginatively keep your accounts and your inventory, and then you find that you can hire a smaller number of people who are specifically and highly trained to do that more efficiently, you can save money and you can be more confident you'll get the right result. If you develop a machine that can do all of that continuously, never gets tired, and is highly reliable because it's well-programmed, you'll probably dispense with most of even the skilled employees. That's what's been going on, right?

 

[Nick Fabbri] (18:41 - 19:26)

It has, and I think I have long been disturbed by this prospect and find it rather dystopian. I'm not by any means a Luddite. I'm a digital native.

 

I've grown up with digital technologies, laptops, phones, everything, and used tech in a range of sectors in the modern workforce, and yet when it comes to AI, I feel this really deep unease, because I think we don't talk enough about how, when this technology becomes integrated into every aspect of our lives, cultural, creative, social, economic, we make sure that human beings retain their essential humanness rather than sort of drifting into being almost a redundant aspect of this late stage of capitalism.

 

[Dr. Paul Monk] (19:27 - 21:23)

That, of course, begs the question, what is essential humanness, right? And my point of entry to that would be to say speaking as a human being who does a lot of reading and writing and thinking, and as you pointed out, writes poetry and really finds great satisfaction in writing poetry in particular. That's a key, I think, to what we're talking about.

 

Not only do I not welcome or would not welcome the idea of being displaced in those capacities by an AI, I wouldn't see the point, right? So if there's a market out there for really strongly informed and incisively expressed analysis of what's going on in the world politically, economically, socially, it's conceivable at least that a very well programmed AI that's absorbing lots of information and can analyse it and pump out analysis at speed could displace what I do for newspapers. I mean, in a lot of ways, that is happening.

 

You can then have a debate about, well, how do we rely on that? Is it really at our beck and call, or is it starting to take over? Does it have inbuilt biases?

 

Are there deficiencies? Could it misdirect us? Those are all meaningful questions.

 

But at a certain level, just as very good computers can replace accountants and what have you, so such computers can and to some extent are starting to replace people like me in those domains. And we're beginning to discover that at least up to a point, and we may be only at the beginning of this process, they can replace people like me when it comes to generating poetry. If by poetry you mean stuff that seems to more or less make sense, that's more or less appealing to the ear, that's produced for a mass market, right?

 

And why would you develop an AI to develop stuff unless it was for a market? But I don't write it actually for a market. I write it for my own satisfaction.

 

And for me, at least, there'd be no satisfaction in getting a machine to do it. That's not my creation.

 

[Nick Fabbri] (21:24 - 21:42)

No, the simple answer is that, right? It's not Paul Monk. Whether it's the advice that people might rely on in professional settings, on geopolitics, on foreign affairs, or whether it's something they'd read in the paper that's purportedly from you, or whether it's a piece of poetry, it's not originated from you.

 

[Dr. Paul Monk] (21:42 - 22:55)

That's right. And that's where we now get to the broader question, which I think is at the heart of where you're going there, which is that if, for example, a machine produces it, and a machine is capable, given its programming, of producing these things, but the machine, at least as far as we can foresee, doesn't feel as though it wants to generate these things. It's just required to do so, and it does so without any feeling or commitment.

 

Whereas a writer and thinker has an agenda; they have feelings, they have purposes, memory. And this is perhaps analogous to the industrialised system, where instead of having craftsmen who patiently design, let's say, shoes or handbags or motorboats or any number of things, knives, forks, pots, they're replaced by mass production systems. Then consider the role of the worker, if there's a role at all. Now many factories are being run by robots, but in standard manufacturing, when the word literally means making by hand, there is some craft role, right? But the more a production or assembly line process gets underway, the more alienated the worker is from the end product, because it's not really theirs.

 

[Nick Fabbri] (22:56 - 23:59)

Indeed, yeah. And we'll come to all that, but I think before we dive into more recent questions about the connection between labour and the notion of being human, and the human condition, can we step back and just think about, I guess, throughout history, what it's meant to be human, and what some major conceptions of the good life have been? Because these have shifted, right, whether we look at the classical world and Aristotle's notion of eudaimonia, human flourishing, or perhaps the mediaeval one, with this idea of vocation, and calling, and a devotion to God, and the sacred, or the Enlightenment, or even the 21st century's modern consumerist culture, with the good life as being really materialist.

 

And then, of course, undergirding all of that has been the religious, and so the difference between the secular and the sacred, and the Abrahamic faiths' conceptions of what it means to be human, and what it means to live a good life. So this stuff is fluid, and it does change, and perhaps in the age of AI, it will evolve into something else. I think it will, but I think it's important to talk about foundations, and what the futures might look like too.

 

[Dr. Paul Monk] (23:59 - 26:57)

Yes, and I think before we take a deeper dive into the ancient world, the mediaeval world, which is very interesting for multiple reasons, one way to approach that would be to say, the key thing here is not computational capacity, which varies a great deal among human beings. The key thing is, as you put it, what does it mean to be human? So whether or not the work is being done by your hand, by the hand of people you employ, by slaves, by machines, by computers, what does that mean to you?

 

All right, now on that basis, we can see things have changed to some extent over time, and we can link this ultimately to utopian thinking about what form of society would we need to have if human beings, simply as human beings, were to feel fulfilled, and have ample means to live cultivated, non-painful lives, and we'll come back to that. If you go back, as you suggested, for example, as early as Aristotle, where he at least is theorising about this, it's notable, and he's often accused of this, that he says some people are natural slaves. Barbarians are natural slaves.

 

Greeks, well, that's a bit different, right? He himself leads a leisured life, right? His father was a doctor.

 

He seems to have inherited enough property that he could confine himself to intellectual work, which he did very well, and he thought, as he once wrote, you don't hear this quoted very often, I think it's quite striking, that the two greatest pleasures in life are thinking and sex, and one assumes, given how well he thought, that perhaps he had quite a refined approach to sex as well, but I don't know the evidence in that regard. The point, however, is that reference to slaves, because when we take any given form of society, it could be that the most privileged members of that society have found or invented very refined ways of being human, that they grow flowers, they make ceramics of an artistic kind, they engage in philosophical conversation, they engage in sensitive romantic relationships, they travel, while the great majority of people in that form of society are either illiterate, they're immiserated, they're ignorant, they're vulgar, they're exploited, and that is what has troubled social theorists, particularly in the modern world. On the other hand, there's an intermediate case, which you alluded to, the mediaeval world, where certainly many people were ignorant and immiserated and exploited and died of disease and didn't live all that long, and yet there was this idea, this Christian idea, that each human being was a child of God and had a spiritual destiny and through prayer and feast days and so on, could live a meaningful life and strive for salvation and sing hymns and so on. That's a big change from the world of Roman mines and farm slavery, actually.

 

[Nick Fabbri] (26:57 - 27:06)

Image of God, imago dei, right? Everyone is created in the image of God and is worthy of dignity and— Dignity, compassion, prayer, etc., right?

 

[Dr. Paul Monk] (27:06 - 28:52)

This is a very interesting development and, of course, it's then displaced by industrialisation, where a lot of that is programmatically swept away as superstition and immiseration or what Karl Marx called the idiocy of rural life, and you get industrialisation and urbanisation for the first time, and a whole new suite of problems arises, which is yet a kind of variation on this age-old theme, right, of some human beings being subordinated, grossly subordinated to others, exploited, used up, thrown away. What do we mean, therefore, by a fulfilled human life in that context? And it won't do to simply say, well, obviously what we mean is the life lived by a Bloomsbury essayist in a comfortable apartment writing self-indulgent essays about romantic feelings and sexual relationships of an inventive kind.

 

We mean what's it like if you're in a Dickensian city? How do you change that, right? And I think to rush not so much to a conclusion as to an inference from that, when people now talk about advanced technologies, about automation and information science and robots and computers and household devices, their general picture and claim is that this will all make life easier and therefore more dignified.

 

But then we get, ironically, it seems to me, to just another variation on the old problem. If you take away from people the manual engagement with and generation of their own world, the work that makes their world theirs, then they suffer a new form of alienation. And amid all of these devices and all this luxury or this ingenious work regime, they feel that it's meaningless.

 

[Nick Fabbri] (28:53 - 30:29)

Yeah. And that's essentially my primary concern with where a lot of this is going. And I think the rate at which it's going and the sense of which we have no real oversight or steer on this, it's sort of emanating really from, as most big technological shifts do these days, major centres of innovation in the US and China.

 

And it's sort of taken as inevitable that this technology, for good or for ill, will essentially be integrated into every aspect of our economic and social lives. And when I think, as an information economy worker, about how AIs and LLMs can essentially do the jobs of most people in the modern information economy, it's deeply unsettling because, to go back to your points about notions of the good life and the connection between that and the human condition and labour, I think most of us derive our sense of agency, meaning, craft, and making an imprint on the world through our trades, whatever they may be.

 

You've mentioned the examples of craftsmen, labourers, gardeners, obviously teachers, nurses, healthcare specialists, lawyers. A lot of their feeling of actualisation comes from labour, from exerting themselves eight hours a day, six days a week. It goes back to that kind of biblical rhythm.

 

A lot of 20th century thinking focused on this, and you've mentioned alienation of labour and Karl Marx, but other thinkers like Hannah Arendt spoke about the importance of the connection between hand and mind as being central to human flourishing.

 

[Dr. Paul Monk] (30:30 - 32:18)

Indeed, and there's a broad suite of people who, in their different ways from different points of entry, have attempted to address these very real and complex questions. Hannah Arendt, if our listeners aren't aware of this, was, as a young student in the 1920s, a student of Martin Heidegger. She was very taken by his new approach, which people would later call existentialist philosophy, to what it meant to be human.

 

His most ambitious project, which he never completed, but the first volume of which was published in 1927 when she was still in touch with him, was called Being and Time. From him, she derived a number of core concepts, which she then, much later, when she had emigrated and lived in the US, turned into her own book, primarily The Human Condition, in which she says that work is absolutely crucial to being human, by which she meant not forced labour or meaningless toil or assembly line automation, but engagement of hand with brain and the crafting of things that embodied and instantiated the meaningful life for us. The more automation and AI takes over these tasks, the more we have to ask ourselves, what substitute is there in what is left to us for all of those things which we used to do and from which, however hard or challenging they were, we derived our sense of vocation, of purpose, of accomplishment? I think that's a very profound question, and I think that she was aware, even in her lifetime, in the 50s, the 60s, the early 70s, that this challenge was growing, and that's well before AI came on the scene.

 

[Nick Fabbri] (32:19 - 32:41)

And so, if Hannah Arendt was around today, what do you think she would say about AI and the fact that, frankly, in the fourth industrial revolution, many white-collar workers are going to find themselves displaced and perhaps out of work, and that where AI is integrated into every facet of the workplace, they'll essentially be tinkering with and managing this machine intelligence?

 

[Dr. Paul Monk] (32:41 - 34:51)

Yeah, I think that Hannah Arendt is actually a good leverage point for this. Hannah Arendt was a teacher and a gifted lecturer and writer, and in addition to this whole idea of the human condition in terms of work and technology, she was very interested in education. She was very interested in the challenges of mass society, and she famously wrote a book on the origins of totalitarianism, the way in which governments or political movements can come to dominate a society and monopolise the information environment in malign ways.

 

And all of those things are now in play. So, I think one of the areas in which Hannah Arendt would be deeply concerned, because she was very interested in the process of education, would be the way in which increasingly, as we're finding very recently, students seem to think that the whole idea of a degree at university, not least an arts degree, the kind she was most interested in, is that you hand in papers and get marks. And the aggregate of those marks amounts to another piece of paper which says you have a degree.

 

And so people think, all right, what I have to do is, as efficiently as I can, generate essays that will get a mark. Well, we can get AI to do that. There's the paper.

 

Give me a mark. And weirdly enough, universities apparently are starting to accept this for some reason. What then constitutes education?

 

So, Hannah Arendt was a prime example of the old kind of university education. She did a lot of work to learn a lot of things in more than one language. And her books are interesting because she's drawing upon both a great deal of erudition and personal reflection, personal thought about: what do I think about this?

 

And AI, so far at least, doesn't do that. It has heaps of stuff to draw upon and it spits out what seems to hang together. It has no attachment to this.

 

It doesn't have a model, as far as we can see at the moment, or has no more than a rudimentary one, of an external world corresponding to the truth. It just has stuff and a large language model for putting that together in a more or less coherent way. And where there are gaps, as we've become disconcertingly aware, it will make stuff up.

 

[Nick Fabbri] (34:51 - 34:52)

Hallucinating, yeah.

 

[Dr. Paul Monk] (34:52 - 35:27)

We call it hallucination. There's an argument by a couple of philosophers recently that we shouldn't call it hallucination, because it's not that there's an illusion on the part of the AI. It simply has a programme which says, for example, in notorious cases of legal process, you cite case law, and where it finds that there isn't case law, it invents some.

 

And the argument says: this is not hallucination. This is bullshitting. Well, this is really disturbing.

 

Why does it do that? Why has it been programmed in such a way as to do that? And is this what human beings do?

 

I mean, if a lawyer did that, he'd be disbarred.

 

[Nick Fabbri] (35:27 - 37:22)

Well, indeed. And that's, I think, the current state of play with a lot of lawyers who have been caught out lodging these legal briefs with sort of fictitious cases generated by AI. So it's a very real prospect.

 

But I think the core piece around Arendt and education and labour as well is one I relate to, both because I'm obviously someone who's working today in a conventional workplace, and because I'm a part-time university student. Studying law. Indeed.

 

And what's astonishing to me is how omnipresent AI has already become in the education landscape at our universities here in Australia, and indeed overseas, and the way in which students in the United States and other places openly celebrate and flaunt the fact that they've used AI to get through their entire degrees. And so, okay, you might tick the box and have the certificate, or in the workplace, you know, you tick the box and say you've produced this piece of work, et cetera. But actually, on the flip side, when you think about how you're changed by your work and your labour, how you're changed by putting your shoulder to the wheel and doing the hard yards, you're quite empty, you know?

 

And I think back to when I learned languages at high school. I remember writing out, you know, verb conjugation tables in front of the gas heater in winter, and the actual exertion that was involved in learning, the way it actually changed your, you know, your brain wiring and neuroplasticity, and just the discipline of putting in two or three hours of work at night to learn, to grow. Whereas now, in the educational setting, a lot of translation and satisfying the requirements of assignments can be done through AI, and no one's the wiser, despite the best efforts to check and so on.

 

So we have a generation of people who haven't actually gone through that process of hard work to learn something properly.

 

[Dr. Paul Monk] (37:22 - 39:36)

Yes. I think that's interesting. And let me tell a personal anecdote germane to that, that as you're aware, one of the exercises I'm engaged in right now is typing up to put into book form my own undergraduate papers from more than 40 years ago.

 

I wrote all of these. Indeed, in most cases, I wrote them and submitted them in longhand. They weren't even typed up.

 

I didn't have a typewriter. And the evidence from going back over them and typing them up is that I did a lot of learning, a lot of work, to write these essays. And they were graded on the basis of their exhibition of learning and how well they were written.

 

It would never have occurred to me to use a machine to generate essays that were not my own and from which I wouldn't learn. I wouldn't have seen the point. And, you know, as a result, I'm now able to talk about these things, and others that I've learned about by similar hard work, to the point where you and I were joking, before we embarked on this particular interview, that others, when they ask me questions, sometimes feel as though I'm almost ChatGPT, because you ask a question and I produce coherent paragraphs of response.

 

Monk GPT. Monk GPT. And there's something to unpack in there, actually, in all seriousness, because what is a large language model?

 

It's a computer that has had billions of pages of information fed into it on which it can draw to formulate a plausible response to any one of a number of questions, right? And what have I done in the traditional laborious way to which you self-referred? I've read thousands of books over 50 years and I've cross-referenced them and I've digested them.

 

I've annotated them. I've written many, many articles and essays over time. And so they're there.

 

They're available to me. And you ask a specific question and it's almost like asking an AI the question and the response is in some ways produced the same way. That's not a coincidence.

 

It's because we've designed these machines to do that kind of thing.

 

[Nick Fabbri] (39:36 - 39:56)

But to go back to the distinction between human and artificial intelligence: the way that your neurones and your neural networks produce that response, that articulation, and that communication is radically different to how the machine works, in terms of statistics and the way it processes data and produces output through probability.

 

[Dr. Paul Monk] (39:57 - 41:14)

Exactly. And now we're cutting to the chase in terms of what we mean by artificial intelligence, because clearly the people designing these large language models took human language and performance as their model. They tried to replicate it.

 

And the best they've been able to do thus far is the large language model, where the machine sorts through, at lightning speed, this great heap of words and references that it's got stored in its working circuits, to come up with what would be the most plausible next word in this particular sentence, given this kind of sentence and given what it's been saying so far. And with surprising accuracy, it puts in a plausible word. And we see this on a small scale with spellcheck.

 

When we're typing out messages, it will give us a number of options because it's trying to guess what were you going to say here? What is the word you really want here? And it will often get it wrong, so you have to keep your eye on it, or you'll type one word and it'll print another one because it's second guessing you.

 

Duck. Yes. Whereas a well-tuned human mind doesn't tend to do that.

 

So we're not quite there yet. And above all, we're not in a position where there is anything resembling consciousness or intentionality.

 

[Nick Fabbri] (41:15 - 41:23)

Or feeling and experiencing the world, the taste of Nutella, you know? Yeah. What it feels like to be lonely.

 

How do we know? Could an LLM experience that?

 

[Dr. Paul Monk] (41:23 - 42:02)

Or to love, for example, right? So as you know, and we've spoken about this, I interact, apart from all the other friends, including you, with whom I interact on social media or by telephone or whatever, I've conducted two long-distance love affairs with women for whom I've written heartfelt poetry. Now, I know that these are real people, but let's suppose there was nothing but the messaging.

 

What would be the difference between those interactions, verbal interactions in text, in voice on the one hand, and interaction with a chatbot? And the answer is not as straightforward as one would like to think, and therefore it's a little disconcerting.

 

[Nick Fabbri] (42:02 - 45:50)

And indeed, we've had many cases being reported in the media from the States and China of younger people falling in love and developing romantic connections with their chatbot personalities, which are, you know, programmed to be sycophantic, programmed to emulate human emotion and care. And in a very lonely, atomised world, I mean, that's more than people can say for their human companions or contemporaries. And I think there was one disturbing case from China, I think it was, where someone actually married their chatbot.

 

I mean, that's an extraordinary sort of blending of, you know, the digital and human worlds. But to keep trucking on, because we are pressed for time here today, unfortunately: if we step out of thinking about what it means to be human across time, and particularly in the 20th and 21st centuries, and the connection between human flourishing and labour, there are several literary works and memoirs which I think can help us understand technological disruptions across history and their impact on communities. So of course, this kind of thing isn't new, right? A new technology comes along and it displaces, individually, how humans are in the world as workers, as in the manufacturing example you gave of the craftsman, but also society collectively, the way in which technology fundamentally shifts how we exist in the world, with things like lighting and computers.

 

So a couple I wanted to put on the table, which we can sort of riff off, were Hard Times by Charles Dickens, written in 1854, which was an examination of the industrial revolution in the UK. It takes its setting in Coketown, you know, a grim fictional industrial mill town, though he set other works in London and so on. And the way in which that industrialisation of British society distorted the life and dignity of a lot of human beings there, of course, the classic images of chimney-sweep children or child labour, for instance.

 

Another one is The Grapes of Wrath by John Steinbeck, written in 1939. And that was an examination of the mechanisation of labour and industrialisation across, I think, the American Midwest, where a lot of traditional vocations and roles which gave meaning to people were displaced by, you know, things like tractors and other farm machines, right? So you had these beautiful local communities, which had been there for generations, sort of uprooted and displaced and disbanded, scattered to the winds.

 

The traditional ways of doing things were changed. And then a more modern example is the memoir Hillbilly Elegy, written by J.D. Vance in 2016, who's now obviously Vice President of the USA. And that was a reflection on Vance's childhood growing up in the de-industrialised Midwest of America, in the Appalachian Mountains, and the way in which losing those economic centres, those industrial centres, which obviously were so crucial to the US's industrial might throughout the 20th century, left this huge cultural, societal hangover, almost, of people who were then lost to drugs, alcohol, indolence, listlessness, shiftlessness, because they didn't have that central vocation, again going back to that sort of biblical idea of, you know, six days' work, one day Sabbath. So, I guess, stepping back from all that, we can think about the current times in the 21st century, and the way in which maybe all this is just a bit of a transition, and whatever comes next, it'll be a transition period, but, you know, we'll go on as humanity and we'll go on finding meaning in different ways.

 

[Dr. Paul Monk] (45:51 - 47:57)

Well, I think my reaction begins with the closing observation you've just made, that is that if you look through history and you see it as some kind of punctuated progress, we nevertheless have to allow that whatever progress we discern doesn't eliminate the fact that along the way vast numbers of human beings have not experienced progress, they've experienced suffering. You know, in a victorious war, for example, you might feel good that you beat the other side, particularly if you feel virtuous in doing so and not merely brutally triumphant, but that doesn't alter the fact that a significant number of people, perhaps millions of people, have died in that war. Did they die for a great cause?

 

Is that a wonderful thing? Well, that's very debatable, right, in all sorts of cases. So right now the question we should be asking ourselves is not so much: well, aren't there always some losers, some unfortunate people, in the cause of progress, while overall progress is sustained, and this is an amazing technology that will lead to great and good things?

 

We should be asking more serious questions than that. And perhaps the simplest and most available comparison to draw is from the late 1990s, when new financial instruments called derivatives were proliferating, and there was a suggestion from well-informed people in the United States that these should be regulated. At the very least, there should have been a lively debate about how to intelligently regulate them to avoid diseconomies or disequilibria.

 

That suggestion was rejected, and so no such responsibly informed debate took place at a government level. And what we saw within a decade was that these derivatives were so used as to generate a colossal financial crisis, at enormous expense to the US and indeed the world economy, and at great personal cost to a great many naive consumers who didn't understand these instruments at all but were carried along by the tide. So there's every possibility that similar things will occur with AI, and we should not blithely or irresponsibly just say let it rip.

 

[Nick Fabbri] (47:57 - 49:06)

And I think that's a really important point to make, because it's often huge masses of people who suffer the consequences of fundamental technological and economic shifts initiated by, for want of a better word, elites, right? So whether they're financial and economic elites or governmental ones who don't adequately weigh up the consequences of the policy or technological shift. Or, in today's environment, for example, the sort of tech bros, as they're called, in Silicon Valley, or, you know, the CCP-aligned DeepSeek types trying to win the AI arms race.

 

But I think the difference with this change, this fourth industrial revolution, is that, as I've said earlier in the podcast, it is the first industrial revolution to disproportionately impact the white-collar classes. So throughout history, you've had, you know, tradespeople, workers, labourers, right, who bear the brunt of these shifting fortunes and tides. Whereas this time, I think it's really different.

 

And now it's the privileged ones, with university degrees, who work in the comfortable office jobs, who are...

 

[Dr. Paul Monk] (49:06 - 49:07)

The middle class.

 

[Nick Fabbri] (49:07 - 50:07)

Indeed, the middle class. And I think that's going to produce a hugely different reaction to the changes. But I wanted to get your reflections on, more fundamentally, we alluded to this earlier, the way in which, I suppose, 21st century digital consumerist culture has sort of upended the rhythms and routines of traditional work.

 

So we've talked about the biblical Old Testament idea of six days' work and one day rest, the Sabbath, this idea of, you know, safeguarding the sacred, the family, the individual, the human on that seventh day. And now we have AI, you know, flowing through every aspect of the economy and the labour force, with this sense, as is promulgated by the tech bros, the techno-feudalists, that it's going to radically improve our lives, but which, frankly, seems to be more motivated by, you know, profits and their own companies' advantage.

 

[Dr. Paul Monk] (50:08 - 51:55)

Yeah, it's interesting that you use that final phrase, because, of course, indirectly, and perhaps not altogether intentionally, it causes us to reflect on the critique of capitalism by the socialists in the 19th century, most famously by Karl Marx and Friedrich Engels, who, the two of them together in the Communist Manifesto, extolled the achievements of bourgeois capitalism in terms of productivity and world trade and creating new markets and new products and railways and steam engines and everything. This is too often neglected. You know, you would think sometimes their names are invoked only to say capitalism is evil and merely exploitative.

 

They said this has achieved something that no mediaeval society or trading association, no ancient empire ever came close to achieving. But there's a big downside, right? And why?

 

And why is it unsustainable? Because it consists primarily of the accumulation of capital, the exploitation of labour, and a stark division of labour, which leads to many people being used up and discarded. That has to give way.

 

And there's an historical logic suggesting that it will give way at a point where too much of the wealth is held in the hands of very few, and organised movements, becoming governments, can simply say: thank you, we'll take that now and we'll run with it. Would that history were so simple.

 

We now know as a result of the 20th century, that is far from being the case. The challenge is a very complex one, to get a society that actually is highly productive and just, and in which the great majority at least of people live meaningful and dignified lives. That's very much a non-trivial achievement.

 

So if we want something like that, we have our work cut out for us. And merely mouthing old Marxist slogans about revolution doesn't cut any ice.

 

[Nick Fabbri] (51:55 - 52:51)

And I think one of my fundamental critiques of this sort of late stage capitalist economy and society that we're in, the materialist consumer society of the 20th and 21st centuries, is this perhaps feeling of unnaturalness, of overwork, of the fact that we often have this cult of optimisation, of productivity, of always being connected to work, always producing more, often for shareholder value. Work never stops. And we have artificial lighting, which enables us to stay at offices later.

 

We have digital devices in our homes, which connect us to work. It seems like we've lost something of the sacred along the way, and maybe the naturalness of rhythms of work and society. So stepping back throughout history, can you kind of reflect on briefly, I suppose, how humans have tended to organise themselves around work, rather than work being everything, as I think it is today?

 

[Dr. Paul Monk] (52:52 - 56:42)

There's some very interesting work written on this in the last half century or so about what was Palaeolithic society, nomadic society, really like, if you take a detached and analytic view of it, compared with urbanised, never mind industrialised society. And the tacit answer of many of the perfectly serious and best scholars was, well, life was in some respects harsh, but then it's always been in those respects, in other respects as well, harsh in urban, never mind industrial societies. But there was more leisure, there was more connection with the natural sources of sustenance, of wonder, that the natural world, the animals, the plants in the natural world, the cycle of the seasons, the nature of light and dark, of warmth and cold, were palpable.

 

All right, you lived with these things, your life revolved around them. The more urbanised we became, and in a strange kind of way the more scientific, and certainly the more technical, the more distant we became from those things. And in the early twentieth century, a Dutch scholar, Jan Huizinga, wrote a book called The Waning of the Middle Ages, in which he said, you know, now in the 20th century, we have to make a very deliberate, imaginative effort to recapture what it was like in the Middle Ages: where you didn't have electric lighting, where cold meant cold and you had mostly wood to even keep warm, where winter and summer were starkly different, where disease was much more rampant, where wild animals still existed on the peripheries of cities and could be dangerous, where forests were wild, not tamed, where famine could kick in. These are the contrasts. And so when we ask, what was it like to live a more natural cycle?

 

The answer is life was slower. It was more private. You weren't constantly invaded by information.

 

There were no newspapers, no radio, no television. Of course, there was more suffering and hardship.

 

So the natural world, in other words, and what you might call the natural rhythm of life, had a deeply entrenched, palpable meaning to it, but it wasn't utopia by any means. It was a very mixed bag. And that's why, primarily, we kept inventing and aggregating and trying to make life a bit easier and more comfortable, in a very uneven way.

 

And that remains the task. And so one approach to this is to say, well, we exhibit this in having gardens and nature parks, and going on camping holidays and so on, and going to the seaside. It's nice to be able to include some of that nature in a harmless way, in a healthy way, as leisure.

 

But even then we can feel, yeah, but this is cushy and artificial. It's not like the real thing. And so some people adopt really dangerous sports.

 

It's therefore a complex business getting the balance right. And if we look back, as you've said a couple of times, to this idea of a biblical world, an Abrahamic world or an early Christian world, where the idea is there is a divine rhythm. There's a rhythm where work and prayer both count, where leisure consists not simply of frivolous activity, but of meaningful, reflective activity.

 

And our religious tradition in the West, not to go further abroad to other religions, has a lot to do with this: trying to confer meaning, dignity, punctuation, purpose, vocation on the common biological, often harsh realities of survival and reproduction.

 

[Nick Fabbri] (56:43 - 58:25)

Indeed. And I think when you talked about, I guess, the greater hardship and suffering and the slowness of what life has been like for humans throughout history, that's all true. And I think there are pros and cons either side, obviously.

 

And the Enlightenment and scientific advancement in technology have brought our lives unparalleled material advancement and comfort and greater life spans, et cetera, feeding billions of people around the world. But I think if we could take something positive out of the pre-information age, it is this sense of authenticity and realness and actually of being human. So when I think about AI and the way in which it's taking over everything today, it's going to be around forever, for good or for ill.

 

I think how can we in today's world and society try to hold on to something that does make us feel more fundamentally human? And I've had thoughts of going back and doing a trade and maybe doing carpentry, some kind of job which will never be taken away by machines or AI or, I'm not sure, some sort of craft where I can use my hands. I mean, taking up knitting or writing poetry by hand.

 

I think that's going to become more fundamentally important for us as we head further into this sort of uncharted territory. Lest we succumb to what I see as the apogee or the culmination of that Weberian rationalism, of which I see the Silicon Valley optimisation cult of the tech bro leaders as the outgrowth and representative. I really see the development of AI in quite anti-human terms, to put my cards on the table, if that wasn't clear.

 

[Dr. Paul Monk] (58:26 - 1:00:33)

Well, insofar as it's artificial intelligence and you exhibit refined human intelligence, the antipathy is entirely understandable. There is inevitably a kind of discomfort, I think, at the idea that processes very intimate to ourselves, of reflection, of reading, of dialogue, person to person, where it's about meaning and memory and connection, might be disrupted or displaced by machines; that whatever we think might be deemed, for all practical purposes, irrelevant or trivial compared with vast systems of artificial cognition that regulate our working environment, our living environment, our purchases, our foreign travel, what we get on information feeds, et cetera. And we've already had a taste of that. We get constant invasion of our conceptual space based on algorithms which, beyond any consent from us, are discerning our patterns of activity and then throwing this stuff at us.

 

For the purposes of advertising revenue, often. Yep. Attention span and thence advertising revenue.

 

These are all issues for us to ponder and to push back against, by perhaps non-artificial means, you might say, in order to shape the information environment, to use the AI and not simply be spun out or exploited or rendered irrelevant by it. That's the sort of environment we've entered into. And as I more or less said at the beginning, I feel as though, at my age and given the way I've developed, I could fairly comfortably ignore most of this for the rest of my span, because I've got a wonderful library.

 

I enjoy writing and reading. I don't need AI to do what I enjoy doing, but that's not an adequate answer to the social challenges we face. And so if I'm engaged in this, it's for the sake of a wider world.

 

I don't have children, but I have nephews and nieces and they now have children. The world that is impinging on them in which they're growing up is one I do feel concerned about.

 

[Nick Fabbri] (1:00:33 - 1:01:46)

Indeed. And it's disturbing because it becomes a kind of feedback loop. So we talked about the education system and the fact that you've got entire cohorts graduating from universities and high schools now who have primarily relied on AI to do the work for them, and they haven't actually been changed by the education.

 

They haven't learned very much. They can't, without the aid of this technology they're dependent on, produce an essay or do the maths or do a linguistic translation. What happens then when this sort of dumbed-down generation, for want of a better word, enters society, both in the workforce and also in our universities?

 

And you've got people essentially living in the shallows, right? Or, as I've said before, living among the shadows of Plato's cave rather than actually being up in the world of the forms, right, in the real world. And that's sort of disturbing: to think you might have a lawyer whose brain literally hasn't been hardwired properly to do good legal thinking themselves, or a doctor who can't remember specific parts of the human anatomy in surgery, or, I don't know, someone who's actually never learned to speak a language because they've always relied on virtual reality or something.

 

[Dr. Paul Monk] (1:01:46 - 1:03:33)

Yeah, there's a multitude of questions. And from a dispassionate point of view, you might say, if you're taking a detached view of this, all of these are interesting puzzles, right, to be pondered and conversed about. But just to the extent that we get a society full of people who have the characteristics you've just described, they won't do these things because they're not equipped to do them.

 

Well, that's a bit disconcerting. And one way to look at it is to say, but it was ever thus, the majority of human beings in every form of society of which we're aware were for practical purposes illiterate and their conversation banal. That will sound really patronising, but it's the sober truth.

 

I mean, we talk about Rome and sometimes we imagine it was full of people like Cicero, but it wasn't. The majority of people in Rome for centuries lived foreshortened lives. The life expectancy was in the 30s, hovering around 32 to 35.

 

Illiteracy was not as bad as it became once the empire fell, but there was a lot of illiteracy. There were a lot of squalid lives, you know, Rome was polluted.

 

There's a very recent, fascinating study about urban life in the Roman empire on the eve of the Antonine plague of the late second century, where the author says, you know, we formed this view of Rome and other cities in the empire as somehow pristine, planned urban environments where people lived thriving classical lives. But his book is called Pox Romana, because of the plague that broke out. And he says, in fact, these cities were almost as bad as mediaeval cities in terms of public sanitation, aqueducts or no aqueducts, drains or none.

 

It's a very sobering read. And it was a shock to me because I've read a lot of classical history, but I did have the impression that classical cities starting with Rome were actually well designed and pretty functional. In a lot of ways, they weren't.

 

[Nick Fabbri] (1:03:35 - 1:05:31)

Fascinating stuff, Paul. And I'm conscious that, you know, you're a writer, and you've alluded throughout the interview to how AI will impact your work as an analyst and a writer, someone who's featured in the press with op-ed articles in Australian media and internationally as well. But I think maybe most sensitively, most personally, how it would impact your work as a poet, because famously, AI today is really impinging upon and threatening the fundamental integrity and authenticity of human creativity.

 

So I'm not sure how plugged in you are to social media and pop culture and music. That might be going a step too far in terms of the diversity of the topics we cover in this show. But we've had essentially this almost dystopian emergence of AI musical bands, which are now dominating the Spotify charts, all from an AI machine producing music that turns into a hit song, which people listen to rather than human work.

 

We have AI-created short videos and clips which are indistinguishable from reality, and the creation of entertainment companies whose stated desire is to create AI movies. So, you know, no more actors, no more screenwriters. We have this unsettling feeling, which I would like to coin a word for but haven't yet, where one is not sure whether what you're reading, whether it be literature, poetry, journalism, or an academic essay, has been produced by a human or not.

 

And so this has a very significant impact on not just the creative industries, but I suppose the creative impulse of humanity. So I wanted to see whether you had any reflections, as a poet and a creative person, on how you think AI will transform this ancient human impulse to music, to creativity, to poetry.

 

[Dr. Paul Monk] (1:05:33 - 1:07:35)

Yeah, I mean, I think it's a very interesting question. A number of thoughts went through my mind as I was listening to you ask that question. And one was this, that I've for a long time now been fairly distant on the whole from mass markets and pop culture, and particularly certain genres of music like heavy metal, you know, or rap, etc.

 

They just don't do it for me. But plainly, they do for lots of people. Now, so long as I'm left to my, let's call it, consumer choice in my private world, I think on the whole, well, each to their own, right?

 

And when I write poetry, I don't write it for that market. I don't write it for a mass market. I don't see any point because what I feel, the way I express myself, just isn't like that.

 

But I don't write only for myself; I write for others too, and it is meaningful. And from these people, I get sensitive feedback. You know, a little example that sprang to mind as I thought about answering your question was a text I received recently, which included the remark, "I'm sending you a rare piece by Salieri, which I think you might find diverting."

 

Now, you're not going to get that, I think, even from an AI. Although it could be that you'd get it if you had an AI that was very specifically and sensitively attuned, based on a personal algorithm, to your taste in music, but I wouldn't even predict that. I think this is a human, intimate message, right?

 

And it won't surprise you to know that it was from a reader of my own poetry. And so that characterises the dialogue, coming and going. Am I, therefore, a species on the verge of extinction? That's the more fundamental reaction to your question.

 

Maybe, but so much the worse for the world. I'm going to die anyway. So I don't, on the whole, think I've got to stop this from happening.

 

I can't. This is a movement.

 

[Nick Fabbri] (1:07:35 - 1:07:45)

But would you give into it? Would you use ChatGPT to augment your poetry? Would you feed it into it and ask for inspiration, for ideas?

 

And would you say, can you improve this metre and rhyme?

 

[Dr. Paul Monk] (1:07:45 - 1:08:36)

No, because I don't think that it can improve my metre or rhyme in a bubble. It can't add to my feeling because it doesn't have feelings. And as for my vocabulary, I've demonstrated this in a little test recently: at least as far as current ChatGPT or Claude are concerned, my vocabulary, it turns out, and my capacity to express myself about subtle things, personal things, are greater than those of the AI.

 

That's not always true. And the more students and young people use AI simply to write essays, the more it's going to outstrip them, because they're not developing their own capacities. You've got to bear in mind, I've spent my entire life, certainly the last 50 years, reading and reading and reading, acquiring vocabulary, acquiring sensibility, developing a capacity to write.

 

I don't draw upon artificial sources. I'm not derivative in what I write. I don't even model my poems on those of other great poets.

 

I write what I feel as I see fit.

 

[Nick Fabbri] (1:08:37 - 1:09:00)

But how do you feel about the prospect of living in a society where people do do all those things? And this is the fundamental thing I've tried to return to in this conversation: AI displacing what it is to be human in terms of work, creative output, et cetera; in a way, almost contaminating things, products, outputs, ways of being, which we always considered to be fundamentally human.

 

[Dr. Paul Monk] (1:09:00 - 1:09:57)

Yes. Well, put that way, if you ask me, so to speak, as a sociologist instead of as a poet, right, I would say in important respects, that starts to look like the way it is to live in China under the Communist Party or the old Soviet Union, where, for example, you're a writer and my model is Boris Pasternak, among many others in the Soviet Union. The writer's union is governed by the party and it wants a certain type of literature written in a certain kind of style that's pro-Bolshevik, that's workerist, that's propagandist.

 

Pasternak's not comfortable doing that. And I don't do that myself. Right.

 

And yet, if we have a society which is increasingly governed by automated, machine-driven, mass-market-orientated production, that's, I fear, where we're heading. And I become a kind of Pasternak. He died when he was 70, as it happens, and I'll be 70 at the end of next year.

 

That's ominous.

 

[Nick Fabbri] (1:09:58 - 1:10:11)

But do you think we're heading towards a sort of fundamentally transhuman model of what it means to be human and where reality is indistinguishable from artificiality, like the scene out of Blade Runner, for instance?

 

[Dr. Paul Monk] (1:10:13 - 1:11:25)

Well, yes, particularly seen as a snapshot or in the abstract. But the thing to bear in mind is human reality: the communist regimes, at least so far (China is an ongoing study), have hit the rocks because they weren't free and diverse and open enough. Right.

 

That was the thing. That was the restlessness of the human spirit. And why did Pasternak fall foul of the party in the 1950s?

 

Because he wrote a novel in which his hero, Zhivago, openly espoused this point of view, was critical of Bolshevism and the idea of engineering the human soul, of remaking life. He says, you can't do that. You don't have anything approaching the subtlety, the divine sweep that is required to take on any such task.

 

And I think we'll find the same with artificial intelligence, that its limits will ultimately be found in doing things we find it useful for it to do. And the challenge is up to us to remain free enough, creative enough, inventive enough to put it to good use rather than becoming sub-intelligent ourselves.

 

[Nick Fabbri] (1:11:26 - 1:11:41)

And if AGI advanced to the point that it could write a sonnet which exceeded Shakespeare's sonnets, or produced a work of literature beyond Marcel Proust's In Search of Lost Time, what then?

 

[Dr. Paul Monk] (1:11:43 - 1:14:09)

Yes, it's a highfalutin question. I mean, if such a work appeared, we would have to ask ourselves two questions. How was this produced?

 

All right. So do you programme a machine to be able to write really sophisticated, let's say for argument's sake, French prose, and say specifically, much as you programmed Deep Blue to beat Garry Kasparov, to be a chess grandmaster: we want you to be able to write a novel that is as long, as complicated, as beautifully expressed, as sinuous as In Search of Lost Time. That's the challenge.

 

And suppose a machine then produced an immensely long book that literary critics felt compelled to concede was actually even better than Proust. That would be an absolutely extraordinary development. But what would it mean?

 

You know, because you would have to programme it. Remember we said of Deep Blue, it was programmed to play chess. It couldn't do anything else.

 

And now Proust's great achievement was writing Remembrance of Things Past. Otherwise, his life was very ordinary. All right.

 

So it's not enough to just say, well, if a machine was programmed to write a super post-Proust novel and it couldn't do anything else, that therefore doesn't amount to much, because without that extraordinary, idiosyncratic novel, Proust's life didn't amount to much either. Nevertheless, he wasn't programmed to do it. He initiated it.

 

The machine is programmed to do it, given all the resources to do it. And then somebody presses a button, and hey presto, it produces this remarkable novel. Because the achievements of AI so far, even isolated things, playing games, you know, and being grandmasters, producing calculations, machine learning, have been impressive.

 

There's no question about it. I'm not going to say, oh, it'll never be able to do that. It may. But what would it mean?

 

It would mean a machine had been programmed to produce something spectacular. Oh, OK. Human beings are trained to produce spectacular results in sports, in art, in music, in literature, right?

 

Or every now and again, you get a creative genius, and you say, well, that wasn't trained. He learned skills, but then he went beyond. If we produce machines that are able to go beyond certainly what most human beings, or even the best human beings so far, have done, that's an amazing achievement.

 

But it's a human achievement. Machines just executed a programme. That's the way I'd see it.

 

[Nick Fabbri] (1:14:09 - 1:14:26)

Well, it's been another wonderful tour de force and survey of current developments, Paul. And my final question as we conclude today is, do you feel optimism or a sense of pessimism and worry about what the future may hold with regards to AI and humanity?

 

[Dr. Paul Monk] (1:14:29 - 1:15:57)

At a certain level, pessimism, but then I've got to be cautious in saying that for two reasons. One is that it could be generated as much by my ageing, you know, grumpy disposition as by any mastery of the facts or projections of possibility. But another way to look at it is not so much optimistic as wide-eyed with amazement.

 

Because even if all of this ends up in a colossal train wreck, it's staggering what human beings are doing. And the best analogy is to say: who would have thought 10,000 years ago, after the last glacial maximum, when our species emerged from the Ice Age wielding spears and bows and arrows, that we would ever invent intercontinental ballistic missiles with multiple independently targetable re-entry vehicles? That is a staggering thing to have done.

 

Right, given 10,000 years; but it took place at an exponential rate in modern times, with the development of science and technology, physics and so on. A spectacular but shocking, terrifying thing to have done.

 

Why have we done that? So similarly with artificial intelligence, it's extraordinary that we've been able to do this at all. Some of the results we've already got are staggering.

 

You know, it's just very impressive from a detached point of view. Will they end well? Well, a nuclear war is not going to end well, and we could have one.

 

It doesn't obviate the fact that inventing nuclear weapons was brilliant and, at the same time, a really stupid thing to have done.

 

[Nick Fabbri] (1:15:58 - 1:16:16)

And I think the final closing thought on that is: what brave new horizons, and what terrifying and horrifying horizons, will be unearthed and pushed forward as AGI develops and accelerates even further through its own momentum; the singularity, I think they call it.

 

[Dr. Paul Monk] (1:16:16 - 1:16:41)

Ray Kurzweil calls it the singularity. He thinks it's a wonderful thing, a spiritual development: the cosmos will become intelligent.

 

I think that's a very long bow to draw. And I don't want to be uploaded and become some disembodied intelligence.

 

Everything that makes human life meaningful, poignant, creative, et cetera, is about being an embodied intelligence with an umwelt. We don't have an AI, even in prospect, that has those characteristics.

 

[Nick Fabbri] (1:16:42 - 1:16:43)

Thank you very much, Paul.