The Two Biggest Threats: Climate Change And AI


by Malcolm Murray

In the 21st century, only two risks matter: climate change and advanced AI. It is easy to lose sight of the bigger picture and get lost in the maelstrom of “news” hitting our screens, with a plethora of low-level events constantly vying for our attention. As a risk consultant and superforecaster, I try to pay disproportionate attention to the bigger picture in order to separate the signal from the noise. With that bigger picture in mind, I would argue that climate change and advanced AI are the only two risks whose probability curves include singularities: points where change is so fundamental that we cannot forecast beyond them. All other risks are, to a greater or lesser extent, a continuation of the status quo.

The interesting thing is that these two risks are not only joined at the hip, towering over this century like the Norns of Norse mythology; they also share many similarities, both in the nature of the risks themselves and in how they can be mitigated. It can therefore be instructive to study these similarities to see what we can learn and apply across the two risk areas. The risk equation (think of Lady Justice with her two competing scales) has the nature of the risk on one side and the effectiveness of its mitigation on the other. No risk goes fully unmitigated (except perhaps in a Don’t Look Up scenario). Subtracting one from the other leaves us with residual risk, which is what matters at the end of the day. Starting with the nature of the risk, two similarities stand out.
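In symbols, a minimal sketch of that equation (my notation; the author states it only in words) would be:

```latex
% Residual risk as inherent risk net of mitigation effectiveness
R_{\mathrm{residual}} = R_{\mathrm{inherent}} - M_{\mathrm{effectiveness}}
```

The rest of the essay is, in effect, an argument about why both terms on the right-hand side behave unusually for these two risks.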

Through a Screen, Darkly

The shapes of the curves, or, to be more technical, the probability density functions of the two risks, are similar. They are characterized, dominated even, by their fat tails. As Nassim Nicholas Taleb would say, they are part of Extremistan, where what matters are the outcomes in the tails rather than the average outcome. For both risks, there are events in the middle of the curves that are more or less certain to occur. With extremely high likelihood, we will see changes to society from advanced AI and climate change. But these are smaller changes, such as increased automation of work tasks and water scarcity in warmer regions of the world.
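To make the Extremistan point concrete, here is a minimal sketch (my illustration, not the article's) comparing how much probability a thin-tailed and a fat-tailed distribution each put on extreme outcomes:

```python
# Compare tail probabilities of a thin-tailed distribution (standard normal)
# with a fat-tailed one (Student-t with 2 degrees of freedom, a common
# stand-in for Extremistan-style risks).
from scipy import stats

thin = stats.norm(loc=0, scale=1)
fat = stats.t(df=2)

for k in [2, 4, 6]:
    # sf(k) is the survival function, P(X > k)
    print(f"P(X > {k}): normal = {thin.sf(k):.1e}, t(2) = {fat.sf(k):.1e}")

# At k = 6, the normal assigns roughly 1e-9 to the tail while the t(2)
# still assigns roughly 1e-2, seven orders of magnitude more. In fat-tailed
# domains, the tail events dominate the expected outcome.
```

The two distributions can look almost identical around the middle, which is why arguing only about the median scenario for either risk misses the point.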

However, what is more foreboding is that the tails of both risks arguably contain singularities. These are singularities with a small s, i.e. not “The Singularity”, but singularities in the sense that forecasting breaks down and we cannot see beyond these points. For advanced AI, it is the oft-discussed point of AGI. The term has somewhat lost its meaning, as it is defined in too many different ways by too many people. But using one definition of AGI, as the point where an AI agent can autonomously, and over time, effect changes in society comparable to those a human can, there can be no doubt that this is a (small-s) singularity. Everything in society is shaped by humans as intelligent agents. The day we have new intelligent agents (and likely in very large numbers, given the low cost of duplication), our ability to model and foresee what will happen breaks down. For climate change, it is the extreme scenarios. Climate models are noted for their high levels of uncertainty even in the middle of the curve, but toward the tails, that uncertainty becomes pure coin-tossing. Climate scientists suspect that emergent systemic effects will appear at global average temperature increases above 4 degrees Celsius, which could result in unpredictable feedback loops. However, there is complete uncertainty as to how these would play out.

Tragedy of the (ML) Commons

A second similarity between the two risks is that their effects are dominated by externalities: the adverse effects do not fall on the initiator of the risk, nor are they priced in. With climate change, it has long been clear that the producers of fossil fuels will largely not bear the costs of the risk. For advanced AI, the situation is similar. OpenAI, which could become the ExxonMobil of AI risk, looks set to unleash advanced AI on society without being responsible for the outcomes (although some regulation currently in the works, such as SB-1047, could have a positive influence here).

Even though climate change has a natural unit of measurement, tons of carbon emissions, it has proven fiendishly difficult to price it into consumer decisions. With AI, this looks to be an even more uphill battle, since the risks are more diverse and there is no clear unit for measuring adverse outcomes. Measuring “inference units” is less attractive, since most use of AI is dual-use.
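As a concrete illustration of what “pricing in” would even mean, here is a hypothetical sketch (my numbers and function names, not the article's):

```python
# Hypothetical illustration of internalizing a climate externality at the
# point of sale. The carbon price is an assumed figure for illustration only.
SOCIAL_COST_PER_TON_CO2 = 185.0  # USD per ton of CO2 (assumed)

def price_with_carbon(base_price: float, tons_co2: float) -> float:
    """Return a price that includes the climate externality."""
    return base_price + tons_co2 * SOCIAL_COST_PER_TON_CO2

# E.g. a short-haul flight at ~0.3 tons of CO2 per passenger (rough guess):
print(price_with_carbon(base_price=120.0, tons_co2=0.3))  # 175.5

# For AI, nothing can play the role of tons_co2 here: there is no agreed
# unit that maps usage to expected harm, which is the article's point.
```

Even this simple arithmetic has proven politically hard for carbon; for AI, we cannot yet even write the second argument of the function.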

In addition, it is the Global South that seems likely to, once again, disproportionately suffer the consequences without reaping the benefits. With climate change, the Global South came too late to the fossil fuel party to be able to burn its share without suffering the consequences. Similarly, the advent of advanced AI will likely prevent countries in the Global South from following the industrialization playbook that the rich countries followed, since there will be less need for human workers for low-level tasks. It’s hard enough to set up a universal basic income in rich countries; it’s even harder in countries where the wealth and the infrastructure have not yet been put in place.

Gradiently, then Suddenly

On the risk mitigation side, i.e. how we need to tackle these risks, there are also two instructive similarities. First, both risks have tipping points. As we know from both Gladwell and Hemingway, tipping points can be very impactful. For climate change, an often-discussed tipping point is the melting of the Arctic, since the loss of sea ice reduces the albedo effect while thawing permafrost releases stored methane.

For AI, a tipping point will be when AI becomes capable of recursive self-improvement, i.e. when an AI can itself do AI research. That could be the point where timelines suddenly become highly compressed. This is something we inched closer to with the launch this week of OpenAI o1/Strawberry/Q-star, which is able to reason through more complex tasks at a sometimes superhuman level. The two risks also share their accumulative effects. Just as CO2 built up largely unnoticed in the atmosphere during the 20th century, the groundwork for the AI revolution was laid during the AI winter, when the cloud led to unprecedentedly large data centers being built and the internet began generating previously unimaginable amounts of data. Only the third piece of the triad, the algorithms, needed to be added. With the enormous data centers now being built and planned, this hardware overhang will continue to contribute to the risk.
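To see why recursive self-improvement behaves like a tipping point rather than ordinary progress, consider this toy model (my construction, with made-up parameters, not anything from the article):

```python
# Toy model: capability grows at a fixed, human-driven rate until it crosses
# a threshold where the AI itself contributes to AI research. Past that
# point, growth is proportional to current capability and compounds.
def capability_trajectory(steps: int, human_rate: float = 1.0,
                          ai_feedback: float = 0.05,
                          threshold: float = 50.0) -> list[float]:
    capability = [1.0]
    for _ in range(steps):
        c = capability[-1]
        ai_contribution = ai_feedback * c if c >= threshold else 0.0
        capability.append(c + human_rate + ai_contribution)
    return capability

traj = capability_trajectory(steps=120)
print(f"step 40: {traj[40]:.0f}")    # ~41: steady, linear progress
print(f"step 60: {traj[60]:.0f}")    # ~100: the feedback has kicked in
print(f"step 120: {traj[120]:.0f}")  # ~2200: timelines are now compressed
```

Linear progress before the threshold, exponential takeoff after it: the qualitative shape is what matters, not the made-up parameters.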

Global (Neural) Networks

Second, there is the interconnected nature of the two risks. Coal burnt in India will impact the temperature in Norway, since CO2 knows no national boundaries. Similarly, an AI agent built in China can wreak havoc on U.S. critical infrastructure, since the internet knows no national boundaries. This means both risks need global solutions, in the form of strong global governance. We have the beginnings of an IPCC for AI in the form of the State of the Science report (which I have been lucky enough to have some small involvement with). But we will also need an IAEA for AI and a WTO for AI. The odds of global solutions might be slim in today's geopolitical climate, but there may still be hope; even The Economist recently argued that Xi might be a doomer.
