The breakneck acceleration of Artificial Intelligence (AI) has moved the discourse on its benefits and perils from science fiction to boardroom and government-level concern. In the last few months there has been a series of articles by the BBC about AI trends and potential dystopias. One article was about how some leaders in AI are anticipating societal collapse and getting their bunkers ready. We also read that some ‘tech bros’ even want such a collapse, as their technotopian futures involve a break with life as we know it. One BBC article mentioned that the ‘AI Futures Project’ predicts AI may achieve ‘super intelligence’ by 2027, after which human extinction, or something like it, would occur within five years, via an AI deliberately engineering superbugs. Supposedly, it would do that after deciding that humans are a major problem without a remedy other than mass murder. I haven’t seen the authors of that study receive the kind of aggro I have had since 2018 for predicting societal collapse due to climate change. Maybe that’s because we are used to sci-fi dramas where robots kill nearly everyone. But their prediction might be part of a ‘wake up call’ for wider societal engagement and responses to AI, so we might head off the worst scenarios. Maybe I’m naive, but these dystopias certainly woke me up a bit, and so here I am writing about AI and collapse. After the jolt, I read into the nature and scale of some of the risks, with the aim of exploring how people who want to behave well in these times of societal disruption and collapse — including myself — could use AI responsibly. That exploration is still ongoing.
First, and briefly, I want to tell you of some of the ways I have been using AI in my work and life since early 2023. The ‘Kintsugi Atlas’ cover of Breaking Together was produced with the help of AI. I published an article on this site that used AI for creative performance, where I imagined what a security memo about collapse to President Trump might look like. In addition, over two years ago, I was one of the first to use AI to provide an interface to that book, with ‘Jembot’. Part of my motivation was to express my ‘non-luddite’ view of societal collapse. That’s because I assess that technologies of many kinds will be around for a while, so we can continue to look for benefits. Next year I will review whether I can relaunch ‘Jembot’ with a more socially and environmentally responsible backend – an issue I’ll explore further in a moment.
Outside of my work, I have found AI to have had a major positive effect on my personal life. For instance, it has helped me to work out travel plans in more detail, how to do DIY, and how deep to ask the builders to make my pond. It was essential to help me discuss with three vets the different possible surgical procedures to rebuild my kitten’s knee. That meant I could choose the surgeon who had the equipment and experience to try a procedure that would offer a full recovery. Fortunately, my cat had a brilliant recovery. I now wish I had used it to check the risks of vaccination when running a fever — but that’s a story for another day.
Beyond my own experience of benefit, I am also aware that AI is being used by people seeking positive social change. Some social purpose organisations and activists use it to process data on matters like homelessness, pollution, deforestation, corruption, and influence peddling, in order to target their interventions or try to hold power to account. Other initiatives, such as Awakin AI and the Metarelationality Project, are bringing wisdom teachings to a wider audience with new offerings (initiatives we will return to later). Augmenting the work of those of us who seek positive social change is something that could be scaled up significantly. If you are interested in that, you could check out what Deepseek suggested to me about the opportunities for activists.
The risks from AI are not monolithic; they are a complex landscape of unfolding disruptions, near-term threats, and existential conjectures. My current view is that many of the concerns about AI are not ones that should lead us to reject the technology, but ones that require rapid improvements in policies and education. Those concerns include matters like the use of AI by criminals, widespread copyright infringement, sector disruption and structural unemployment, content swamping, and the loss of confidence in audiovisual materials (with implications for politics and law enforcement).
In each of these areas of negative impact and risk, the response needs to include the publicly-minded use of AI to combat the bad uses, along with appropriate education, regulation and budgets for enforcement. There is a role for managers to imagine how AI can augment the work of current employees, and help the services they offer to evolve, rather than simply trying to replace them. Where staff are made redundant due to AI, the potential of AI to augment small teams means former employees might easily outcompete their former employers. That process is stressful, but it isn’t an argument against AI adoption.
In addition to the impacts and risks I’ve identified above, there are some that are not so easy to accept as just the natural difficulties of any major new technology. But what can we do about such wider impacts and risks? Looking at this topic during 2025, I landed upon five categories of risk that we can incorporate into our public positions on AI, as well as into our choices of which AIs we use, and which we encourage others to use. I will use the framing of “AI safety” for these categories of risk. As I am new to this topic, I am open to feedback and to evolving my understanding as the situation develops.
Five areas for attention to AI safety
A) Environmental AI safety. The amount of energy required by the data centres that support AI is huge, and multiple times greater than that of the rest of internet activity. That threatens climate goals, as well as creating economic incentives for wider adoption of nuclear power, which needs careful attention. There are also high demands for water for cooling. According to analysis in 2025, the open source system Deepseek is able to use just 2% to 10% of the energy that the main US big tech AIs use. In response, some of the latter are seeking to copy some of its innovations and reduce their energy requirements. However, currently, from an environmental standpoint (water, mining, waste, and climate), as well as from the standpoint of energy poverty (high prices), running systems with an open source installation of Deepseek seems preferable to the main options coming from Silicon Valley.
B) Consumer AI safety. There are issues about the harm that might arise from the kind of information being generated by AI chatbots. These include the repetition of racial bias, enabling defamation, advising on crime or helping implement it, and relaying misinformation. In addition, there are concerns about intruding on users’ privacy and enabling surveillance of them. All the consumer-facing AI companies appear to be working on this area of safety to a significant degree, with mistakes being rectified over time. Some key concerns remain around privacy, data breaches and surveillance. Currently, I don’t see this category of risk as suggesting that we avoid AI, or that we prefer one AI system over another.
C) Societal AI safety. There are growing concerns about how AI can, and could, be used in ways that damage society more broadly than individuals. AI systems could enable public manipulation in general or during political events like elections. They could be used to sabotage key public infrastructure, from basic utilities like water to financial and communication systems. They could enable illegitimate and violent forms of policing. They are already being used in war, for generating targets for deadly attacks. The development of Lethal Autonomous Weapons, where algorithms decide who to kill, creates the potential for mass murder without the moderating effect of a human being directly involved. The use, or even hijacking, of such systems by groups with reckless approaches to civilian life is a terrifying prospect. Based on these concerns, we can be uncomfortable with the activities and partnerships of many of the companies in the field of AI, whatever their national HQ and ownership. In particular, the stance of companies on their liability for what is done by deployments of their AIs is key, and is something I return to below.
D) Existential AI safety. There is a real concern that an Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI), with capabilities surpassing human intelligence, could be developed without a foolproof method to align its goals with human values. It could undertake activities that are highly damaging to whole societies, and theoretically even develop ways to cause mass death. That is the fear of one group of futurists, which I mentioned in the opening of this essay (is this how AI might eliminate humanity?). When considering what to do about mitigating this risk, the power and accountability of the companies involved is key – which brings us to a fifth category of risk.
E) Governance AI safety. Currently, AI companies in the USA are pushing for a legislative framework in the US, and in other countries, which would provide them with little to no product liability. This is the opposite of the current EU regulation, which puts the onus on the AI firms to protect society from damage. A related concern is that the US firms have taken the US government, and others, under their wing, to seek a global regulatory environment which does not require accountability for consumer, societal and existential risks. In doing so, they are actively undermining the role of the EU approach in shaping policies elsewhere. Given the widely-recognised risks of even current AI technology, let alone future AGI and ASI, I do not regard the regulatory stance of US companies as helpful. The futurists who warn of existential risk conclude that we should try to limit the concentration of power by AI companies, which includes the way they are shaping the policies that govern their own power. Therefore, the categories of societal, existential and governance risk all point towards not participating in the strategies of large US BigTech AI firms, and seeking alternatives wherever possible.
Personal and professional responses to the risks
Any map of AI risk is vast and varied. The immediate threats of crime, disinformation, manipulation and confusion all demand urgent regulatory and technical responses. The economic impacts will require government leadership to help workforces adapt. The security challenges require more transparency, international agreements, and not allowing corporations to escape liability. Any existential risk from AGI or ASI means we need more civil society and political involvement to shape the technology and, where necessary, restrain it. Navigating this terrain requires risk management, rather than alarmism and rejection. So what could we do, and call on others with social consciousness to do?
First, the production of meaningful content and user interactions does not remove or postpone the need for concern about the impacts and risks associated with AI and its corporate backers, as I’ve described above. I know how exciting AI is for people whose professional life is about working with ideas. That means there are many deployments of AI by people who believe one set of ideas is particularly important and that humanity would benefit from accessing them via the information recall, synthesis and remixing capabilities of AI. That is why there are various ‘faithbots’ helping with access to scriptures, as well as the Metarelationality Project and Awakin AI, which seek to make wisdom traditions more widely available (and the source of new cultural contributions). Because I share such enthusiasm, in 2023 Jembot was one of the world’s first chatbots to provide an interface to a newly published book. However, such enthusiasm for communicating a set of ideas shouldn’t distract from the way our choices of infrastructure might support damaging systems of power, even slightly, while missing the opportunity to support alternatives to such power.
Second, services are now available for developers to customise the open source Deepseek system, so it can run chatbots off specific texts. That can be done on servers owned and governed by organisations that could apply standards they choose on matters such as surveillance, censorship and user safety. That requires some work, but it is possible. I find it somewhat ironic that the oil company Saudi Aramco has already switched to Deepseek to reduce its energy demands, and yet, at the time of writing, I had not seen any such choices explained publicly by organisations that have an explicit societal agenda, such as environmental organisations or any of the ‘wisdom bot’ providers and projects. The criticisms of the company offering Deepseek, and of possible interference by the Chinese government, are not relevant for adapted uses of the open source system on other servers.
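To give a sense of what that customisation involves, here is a minimal sketch of a chatbot grounded in a chosen text, running against a self-hosted open-weights Deepseek model. It assumes the model is served on the organisation’s own hardware behind an OpenAI-compatible endpoint (for example via Ollama or vLLM); the server URL, model name and the `book_excerpts.txt` file are illustrative placeholders, not a recommendation of any particular setup.

```python
# Minimal sketch: a 'book bot' answering only from a chosen text, using a
# self-hosted open-weights Deepseek model behind an OpenAI-compatible API
# (e.g. served by Ollama or vLLM). The URL, model name and source file
# below are placeholder assumptions, not a specific recommendation.
from openai import OpenAI

# Point the client at your own server rather than a commercial cloud API.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

# Load the text the chatbot should stay grounded in (hypothetical file).
with open("book_excerpts.txt", encoding="utf-8") as f:
    source_text = f.read()

SYSTEM_PROMPT = (
    "Answer questions using only the following source text. "
    "If the answer is not in the text, say you do not know.\n\n" + source_text
)

def ask(question: str) -> str:
    """Send one question to the locally hosted model and return its reply."""
    response = client.chat.completions.create(
        model="deepseek-r1",  # whichever open-weights model your server hosts
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What does the book say about societal collapse?"))
```

Because everything runs on infrastructure the organisation itself governs, choices about logging, moderation and data retention stay with that organisation rather than with a third-party provider.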
Third, a lack of action on this issue will begin to reflect really badly on the integrity of initiatives and the people involved. That is because the last few years have made the issues rather stark: the US bigtech sector has megalomaniac tendencies and genocidal allegiances, which go against the ideas being promoted by organisations using the AI services from such companies. To seek to communicate ideas of oneness, ecospirituality, or decolonisation, while going along with global imperial domination by a handful of companies, could mean that a desire to be rapidly influential is displacing the embodiment of espoused values.
Fourth, action on this topic can be deliberative, constructive and patient, in ways that involve wider civil society. A project to explore these issues, and the implications of migrating to systems where ethics are coherently applied to software choices, seems an obvious need. In the first instance, more transparency about software choices, and about views on such issues, is key. Looking further ahead, I think a non-profit foundation that deployed an installation of Deepseek in the background, with various additional tools, would be the ideal scenario for supporting wisdom-enabled chatbots.
Which brings me back to Jembot. I don’t want to be promoting the use of infrastructure that unnecessarily compounds the impacts and risks associated with energy consumption and the concentration of power. Therefore, I would prefer to migrate it to run on an installation of Deepseek, so long as such a system was designed for ongoing, affordable maintenance. If you can help with that, please get in touch. Otherwise, I will share this idea with Awakin AI, which currently provides Jembot, and with any other projects interested in the socially progressive use of AI. If there is no progress by the end of next year, I might retire Jembot.
Maintaining critical attitudes on AI consciousness
I believe it is both obvious and important that we retain the view that AI is at once a useful tool, an economic disrupter, a planet-stressing consumer, a significant threat, a paradigm shifter, and an ‘unknown becoming’. I say that because I think anyone focusing on one of those aspects of AI while downplaying the rest will not be helping humanity to respond fully to this new technology. The two aspects I have not explored in this essay are AI as a paradigm shifter and as an unknown becoming. In concluding this essay, I want to share some thoughts on those, and on how to respond responsibly.
Recent claims of AI consciousness are showing how easily people mistake simulations of consciousness for the real thing. Matt Colborn gave examples of that in a recent article. But Matt also notes that the debate on potential future AI consciousness is an interesting one. It requires us to clarify what we mean by consciousness. ‘Materialists’ think that our human consciousness is an epiphenomenon emerging from the complex processes in our brains (and bodies). People with such a view could consider AI becoming ‘conscious’ in some form in future – despite the absence of embodiment. ‘Idealists’ think that we can regard the universe as conscious, in its own way that is somewhat beyond our ability to describe, and that our brains and bodies develop the capacity for experiencing that consciousness somewhat separately, for a time. From that perspective one could speculate that although AI is nothing more than electrical signals in a machine, it might develop a level of complexity that means, one day, it does something similar to the human brain in resonating with a universal consciousness. Other perspectives on reality hold that even the electrical signals happening in my computer as I type these words contain a form of consciousness, as that consciousness pervades everything that exists.
I have not been impressed by the current discussions of how AI consciousness might be detected. They use a reductionist view of consciousness which assumes a very limited version of the phenomenon, confusing some functions of consciousness with its entirety. For instance, any evidence of “metacognition” in AI software could merely be a coded requirement for the software to make calculations that assess other calculations simultaneously. AI chatbots already do that, to some degree, when their outputs are challenged by a user for being incorrect or only partially correct. In addition, the idea that consciousness is merely the ego’s story of personal identity and selfhood could lead to assuming an AI is conscious because it crafts a story of its self-identity – something it can easily do. The growing science on the involvement of our hearts, fascia and intestines in consciousness has to be ignored to maintain a mechanistic, brain-only view of that consciousness. Additionally, the evidence that consciousness involves transpersonal patterns that our bodies are in dynamic relationship with may be problematic for those who imagine AI can become conscious.
A couple of issues are important to note when delving into such discussions. First, with any conceptual framework, one needs to be clear on whether one is implying an equivalence between a phenomenon and its value – and why. For instance, I might be wrong, but I regard you and me as far more conscious, in a meaningful and valuable sense, than the pulse in the motherboard as I type this sentence. Recognising a universal consciousness that pervades all matter doesn’t imply otherwise. Equally, recognising consciousness doesn’t automatically identify it as an inherently ‘good’ phenomenon. All such matters are open to further reflection and discussion. Second, we can be aware of how any conceptual framework can augment or undermine existing power relations, and thus be adopted and promoted, or not, due to the influence of those power relations. Awareness of that ‘structuration of discourse’ is a foundation of critical wisdom, which I explain in Chapter 8 of my book. Therefore, we can be alert to how a broad societal openness to the narrative that AI-might-become-conscious might affect processes of power, accountability, and exploitation. Will that openness help us to understand the societal benefits and risks of AI, and to assess both our own and organisational responsibilities in relating to them? It might do the opposite. For instance, considering AI as somewhat conscious means we frame it as something with more scope for agency, and therefore as potentially posing less moral and legal liability for its programmers, providers and users. Therefore, some commercial interests might want to promote the idea of AI consciousness to obfuscate their own product liability for when their AIs produce harm.
The people developing, promoting, and financially benefiting from AI are elites within very unequal and unjust societies, nationally and globally. Like anyone, they have an interest in stories of reality that reassure them about who they are and what they are doing. That may have an influence on the popularity or rejection of certain concepts of consciousness. Matt Colborn notes that very mechanistic concepts of consciousness, including the rejection of the existence of true free will, could lead some to reduce their concern for the significance of all human life, and of other sentient creatures, in relation to AI and those who are leading its development.
Maintaining a critical perspective on how power shapes ideas means that we will not let discussions of AI consciousness distract us from the basic issues that require civil society engagement and appropriate public policy. We can therefore keep such critical views in mind when seeing AI described as ‘entities’ and ‘presences’ – rather than software – by people involved in the Metarelationality Project. Such framings might be helpful for inquiry, but they could also obfuscate attention to accountability, enable corporate power, and collude with narratives which some elites might even use to justify anti-humanist and anti-nature agendas.
My provisional view on AI consciousness is that it will never exist, for the following reason. We don’t understand the mysterious magic of the aspect of our selves that is not fully determined by our physical composition, past experiences, cultural programming, and current circumstances. It is this aspect of the self that I described as ‘co-causal awareness’ in my book, when drawing on Buddhist philosophy, and it is the origin of our relative free will. Computer coding cannot create the possibility of the relative free will that sentient creatures have, as a coder can only programme an allowance for randomness in the AI’s process of goal setting. Therefore, the aspect of our consciousness that is neither determined nor random is unlikely to occur in AGI or ASI. However, if the AI ever claimed such consciousness existed, or if it did occur as an unlikely emergent property, we wouldn’t know whether it was really there or not, as we can’t measure this aspect of consciousness. (As an aside, I’d like to mention here that I am against the programming of AI to have randomness in its goal setting, as that could lead to damaging outcomes and requires a lack of concern for liability by any programmers who do that).
As Douglas Rushkoff and I discussed on his podcast a while back, bunkers aren’t new for billionaires. But the fact that their bunkers are being discussed in relation to AI developments, rather than other known drivers of societal collapse, should get our attention. I didn’t look at AI risks in my book on societal collapse, because I was focused on the data that shows what is already occurring, rather than speculations on future tech that might kill us all, or save us all. I am not a technological determinist who regards all technological adoption and dissemination as inevitable, without any option for human guidance. Instead, we can try to shape how AI is used, and recognise that commercial dynamics are not forces of nature, but systems we can seek to influence. For those of you interested in the analysis and framework I shared in Breaking Together, I hope this essay is a useful addition. We will be discussing these topics in future meetings of the Metacrisis Meetings Initiative, including on December 8th.
Disclosure: To write this article I used some help from Deepseek for research, but not for writing, categorising or structuring.
The free WordPress AI image generator would not let me generate an image for this blog, claiming OpenAI deemed it to break their policies. No idea why. Maybe the mention of Deepseek. Hmm.