Bedtime Doom

Lying in bed, I was too tired to read but not yet asleep. “Can you read me a bedtime story?” I said, half jokingly. “Sure,” my partner replied, and began to read aloud. Drifting off to sleep, I suddenly heard a word I never hear in normal life — relinquishment. “Huh, nobody uses that word.” My partner laughed, as if it were really funny. “Is this a Deep Adaptation story?” Rolling over, I looked towards her phone and saw the text on the screen, which looked like the output of a chatbot. “What was your prompt?” She scrolled back up to show me: “Write a short story that Professor Jem Bendell would enjoy.”

Continue reading “Bedtime Doom”

After the Alarm: Artificial Intelligence, metacrisis, and societal collapse.

The breakneck acceleration of Artificial Intelligence (AI) has moved the discourse on its benefits and perils from science fiction to boardroom and government-level concern. In the last few months there has been a series of articles by the BBC about AI trends and potential dystopias. One article was about how some leaders in AI are anticipating societal collapse and getting their bunkers ready. We also read that some ‘tech bros’ even want such a collapse, as their technotopian futures involve a break with life as we know it. One BBC article mentioned that the ‘AI Futures Project’ predicts AI may achieve ‘super intelligence’ by 2027 and that human extinction, or something like it, would then occur within 5 years, via an AI deliberately engineering superbugs. Supposedly, it would do that after deciding that humans are a major problem with no remedy other than mass murder. I haven’t seen the authors of that study receive the kind of aggro I have received since 2018 for predicting societal collapse due to climate change. Maybe that’s because we are used to sci-fi dramas where robots kill nearly everyone. But their prediction might be part of a ‘wake up call’ for wider societal engagement and responses to AI, so we might head off the worst scenarios. Maybe I’m naive, but these dystopias certainly woke me up a bit, and so here I am writing about AI and collapse. After the jolt, I read into the nature and scale of some risks, with the aim of exploring how people who want to behave well in these times of societal disruption and collapse — including myself — could use AI responsibly. That exploration is still ongoing.

Continue reading “After the Alarm: Artificial Intelligence, metacrisis, and societal collapse.”

Engage the book “Breaking Together” with an AI chatbot

Chatbots that use artificial intelligence (AI) are changing the way some people research and write. I have not yet used a chatbot to help me write any of my scholarly texts, which is probably why I remain rather verbose! The tech took off too late to affect my research process for Breaking Together, although I squeezed in a quote from a dialogue on freedom that my colleague Matthew had with ChatGPT. But there is an interesting new way that such chatbots can be used – as interfaces with specific publications, or collections of works. For instance, ChatPDF has been launched so that people can interrogate academic articles with a chatbot. Some publishers are now looking at providing chatbot interfaces to some of their books. So when I heard that the awesome nonprofit Servicespace.org is helping to create chatbots for some authors, I decided to create one for people to engage with my new book. Consequently, JemBot was ‘born’.
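
For readers curious about how such book-grounded chatbots work under the hood, here is a minimal sketch in Python. It assumes the book text sits in a local plain-text file (book.txt is a hypothetical name) and that an OpenAI API key is set in the environment; it simply pulls the passages most relevant to a question into the prompt. It is not how JemBot itself is built, just an illustration of the general approach.

# Minimal sketch of a book-grounded chatbot (illustrative only, not JemBot's actual code).
# Assumes: book.txt (hypothetical file) holds the book text, and OPENAI_API_KEY is set.
import re
from openai import OpenAI

def chunk_text(text, size=1500):
    """Split the book into roughly size-character chunks on paragraph breaks."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > size:
            chunks.append(current)
            current = ""
        current += paragraph + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def top_chunks(question, chunks, k=3):
    """Rank chunks by naive word overlap with the question and keep the best k."""
    q_words = set(re.findall(r"\w+", question.lower()))
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(re.findall(r"\w+", c.lower()))),
        reverse=True,
    )
    return ranked[:k]

def ask(question, book_path="book.txt"):
    """Answer a question using only the most relevant excerpts from the book."""
    with open(book_path, encoding="utf-8") as f:
        chunks = chunk_text(f.read())
    context = "\n---\n".join(top_chunks(question, chunks))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the excerpts provided."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(ask("What does the author mean by a freedom-loving response to collapse?"))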

News of JemBot within the Deep Adaptation Facebook group generated a range of reactions. Some people see AI as the latest creation of a doomed techno-obsessed culture. Some see it as endangering societal systems. They might be right, but that doesn’t mean we shouldn’t deploy it for straightforward and positive reasons. As with all technology, the key issues are ownership, intention, use and governance.

Continue reading “Engage the book “Breaking Together” with an AI chatbot”

Atlas Mugged

The following is an extract from the book ‘Breaking Together: a freedom-loving response to collapse’, where I discuss the potential meanings of the ‘Kintsugi Atlas’ image on the book’s cover.

The matter of the collapse of industrial consumer societies is not only extremely inconvenient for those of us who are enjoying its conveniences, but also deeply challenging philosophically and spiritually. After a few years of soul searching about aspects of our culture that are implicated in this tragic situation, I arrived at the paradox of our desire to ‘be someone’ and help each other. One way of describing this paradox is with Greek myth. The image on the cover of this book is an adaptation of the oldest surviving statue of Atlas, a character from Greek mythology. Dating from the 2nd century before the Christian Era, it depicts him straining to hold up an orb, which in the contemporary era has been widely misunderstood to represent planet Earth. That misunderstanding may have begun in the year 1585 with the use of the word Atlas by Flemish cartographer Gerhardus Mercator to describe his collection of maps of the world. On the inside cover of his book, there was a drawing of Atlas having removed the orb from his shoulders and mapping it in his hands.[1] With her famous book ‘Atlas Shrugged’, Ayn Rand may have misconstrued the orb as representing our world, and therefore used it to symbolise the weight of the world’s problems (such as parasitic bureaucrats) on otherwise strong and free people.[2]

Continue reading “Atlas Mugged”

ChatGPT can’t pass an experiential knowledge exam

Because artificial intelligence software does not have real-world life experiences to draw from, there should be no worry about its implications for academic assessment.

I see from my LinkedIn network that many academics are discussing what the implications of artificial intelligence could be for assessing their students. ChatGPT has even passed an MBA exam! Reading about this, I was entirely unconcerned. Should I be? My lack of concern stems from how I have been designing courses and setting assignments for nearly 20 years. But rather than assume that my assessments are immune from the misuse of artificial intelligence, I thought I would write up my approach and see whether fellow academics can spot any potential problems. If not, then hopefully sharing my approach will be of use to others.

Continue reading “ChatGPT can’t pass an experiential knowledge exam”