It’s easy to see the future as unpredictable. Lately, I feel as if we are being blindsided again and again by crises. There was COVID-19, of course, but it’s only one of many events which have left the world reeling. In New Zealand, we’ve had extreme weather events such as the Auckland floods and Cyclone Gabrielle, and other countries have faced similar disasters. And then there are the various horrific wars going on – I feel as if Sudan deserves a special mention, because it gets overshadowed by Ukraine and Gaza, but the Sudanese people deserve to live in peace and security as much as anyone else.
Even though I feel blindsided by the crises of the last five years, I realise that this feeling is more shock than surprise. I have a long-standing interest in infectious diseases and can say that COVID-19 was neither unpredictable nor unpredicted. Nor can we really say that floods and cyclones were unpredicted in New Zealand in 2023, when we had been experiencing a long period of La Niña conditions. I’m less familiar with global conflicts, but I can’t say that recent events have been out of character for those responsible (here’s something which will help explain the awful situation in Sudan).
It’s one thing to predict another new disease, cyclone or war when we’ve had so many before. But knowing exactly what a new technology will do is a different kind of challenge. It’s something I’ve been thinking about lately, for a whole range of reasons. Partly, it’s because I’ve been thinking about disinformation and social media. Partly, it’s because I’m looking at artificial intelligence with a great degree of anxiety. Partly, it’s because I know that we need to have an urgent conversation about genetic technologies. And partly it’s because a chance conversation led to me learning some valuable information about lithium batteries, which I’ll be bringing you soon (I actually intended to write that article this week, but I got sidetracked).
I’ve been planning to write about genetic technologies and artificial intelligence for a while – I’ve already started the research and have some interviews. But I realised that if I wrote about another topic first, it would help to put these issues, and others, into context. And it’s a topic I find fascinating, since it’s been at the core of my work for the last 25 years. For once, it’s not some aspect of botany. It’s something more conceptual – risk.
In everyday conversation, risk can be either a noun referring to something harmful or dangerous, or a verb referring to the act of endangerment. But it’s also one of those confusing jargon words which has a very specific meaning, subtly different from the everyday usage, for those working in particular fields. In fact, during my career, I encountered three different definitions of the word risk, and I believe that there are more. I know little about financial risk, for example, but I do know that people who work in finance see things a little differently.
Jargon is often seen as a language of exclusion which prevents people from understanding important issues which might affect them. It’s true that it can be used in this way, and I’ve certainly encountered those who seemed to delight in making themselves seem more knowledgeable by creating confusion with their jargon. But jargon can also be precise and convey meanings that are lost in common language. I’ve written about scientific names, for example, which unlock a world of information about plants and animals.
Understanding the jargon meanings of risk gives us a way to think about what we might face with technologies such as artificial intelligence and genetic modification. Risk is more than simply danger. It can be broken into different parts which say different things about the danger. How many parts depends on who you ask, but I’m going to talk about five which cover the way I’ve looked at risk in my work.
The first part of risk I’ll talk about is the most obvious. It’s an event, something bad which might happen. For much of my career, I was looking at the possibility that a valued plant might get damaged by an insect or a disease. Sometimes, this could result in a crop losing its value or even being lost entirely. Sometimes plants could be severely damaged or killed, affecting other species which depended on them. Sometimes, the leaves or stems might have a few spots or blotches, but nothing much else was expected to happen.
These events differ depending on the kind of risk, but there’s always a logical cause and effect. An earthquake could damage buildings. A virus could make people sick, or even kill them. A severe storm could cause flooding. A clever idea could cause your foot to fall off… okay, that last one might not seem logical, but I can’t resist a reference to that masterwork of comedy, Blackadder. In series two, Nursie tells Queen Elizabeth that she is so clever, she should be careful her foot doesn’t fall off. Nursie goes on to explain that her brother once had the clever idea to cut his toenails with a scythe, and his foot fell off. Cause and effect, again.
The problem with this logic of cause and effect is that sometimes the effects are hard to predict. When the first cars were being manufactured, who could have foreseen that the combustion of petrol would warm the planet to a dangerous degree? In fact, that turns out to be a poor example, since a Swedish scientist, Svante Arrhenius, was beginning to join the dots between fossil fuels, carbon dioxide and global temperatures at the end of the nineteenth century. So perhaps there were a few people who could have predicted where we would be today. But it’s certainly true that when coal came into widespread use in the early days of the Industrial Revolution, nobody could have foreseen its role in climate change today.
The second part of risk I want to talk about is the size or scale of a predicted event. People who work in risk assessment often talk about this as the impact. A small, deep earthquake might rattle a few items on shelves and give people a bit of a fright. A large earthquake might cause widespread destruction. But here’s where things get complicated. How, exactly, should we describe the size of that event and the resulting harm? And who decides how we describe it?
The second question – who decides – really should come first, because who decides affects how we describe the size of the event. A medical doctor will describe it in terms of injury and death. An economist or insurer will want to put a dollar value on it. A marine biologist might point to the loss of coastal seaweed forests caused by the uplift of land. But there’s something else, too. Think of the small earthquake again. When I experience one of those, I look around and think oh, it’s an earthquake. If it’s especially sharp or long, I might wonder if I should get under my desk, but the chances are it will be over before I’ve done anything. Not everyone feels the same, though. Someone who lived in Christchurch in 2011 might have a very different experience.
Risk assessment is often presented as a scientific and objective process. It’s true that science is valuable for understanding cause and effect and making predictions. But it can’t tell the whole story. People are affected in different ways and have different views on what is important. It means that everyone looks at risk differently.
But there’s more to risk than just the size and scale of the events. It’s one thing to predict something terrible. When I was younger, I was very good at creating catastrophic scenarios in my mind, with the result that I could get very anxious. But, on the whole, those catastrophic scenarios weren’t particularly likely to happen. I later learned to manage my fear by reminding myself that my imagined catastrophes weren’t likely.
People who work in risk assessment sometimes term this likelihood, but in everyday language, we more often refer to chances or odds (as in the odds of getting tails when you flip a coin). It’s a crucial part of understanding risk. Some events are likely – if we drop a glass on a concrete floor, it’s almost certain to shatter. Some events, such as a volcano erupting in Auckland within the next year, are not. The chances aren’t zero, but they are small. On a longer time scale, though, the chances increase. On a time scale of 2000 years, a volcano erupting in Auckland is quite likely. Still, it’s a lot less than the chance of a major earthquake in Wellington or somewhere else along the Alpine Fault.
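To see how the time scale changes the odds, here’s a minimal sketch in Python. The annual probability is purely illustrative – a number I’ve made up for the sake of the example, not a real estimate for Auckland’s volcanic field – but the arithmetic shows how a small yearly chance becomes a near-certainty over centuries.

```python
# A minimal sketch: how a small annual chance grows over time.
# The annual probability here is illustrative only, NOT a real
# estimate for the Auckland volcanic field.

def chance_of_at_least_one(annual_probability: float, years: int) -> float:
    """Probability of at least one event over a number of years,
    assuming each year is independent of the others."""
    return 1 - (1 - annual_probability) ** years

p = 0.001  # made-up figure: a 0.1% chance of an eruption in any one year

for years in (1, 100, 2000):
    print(f"{years} years: {chance_of_at_least_one(p, years):.1%}")

# Prints roughly:
# 1 years: 0.1%
# 100 years: 9.5%
# 2000 years: 86.5%
```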
One of the most important ways we can understand likelihood is by looking at the frequency of events in the past. Staying with earthquakes for a moment, we have great data on earthquakes in New Zealand since 1960 (I’m not sure why this date is significant, but I assume it has something to do with systematic record collection). The data tell us that we can expect an earthquake of magnitude 7-7.9 on average every 4 years. But what about larger earthquakes? Since 1960, there have been none. That certainly doesn’t mean they don’t happen (I’ll return to this topic another time, because I’m keen to learn more about earthquakes).
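If we want to turn that average frequency into a probability, one common approach is to treat the earthquakes as a Poisson process – events happening at a steady average rate, independently of one another. That modelling assumption is mine for the sake of illustration; it’s a standard simplification, not a claim about how earthquakes really behave.

```python
import math

# A sketch of converting an average recurrence interval into a probability,
# assuming a Poisson process (a common simplification, not a statement
# about the real physics of earthquakes).

recurrence_years = 4          # roughly one magnitude 7-7.9 quake every 4 years
rate = 1 / recurrence_years   # expected events per year

def prob_at_least_one(rate: float, years: float) -> float:
    """Probability of at least one event within the given window."""
    return 1 - math.exp(-rate * years)

print(f"In any one year: {prob_at_least_one(rate, 1):.0%}")    # ~22%
print(f"Over a decade:   {prob_at_least_one(rate, 10):.0%}")   # ~92%
```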
Using the past as a guide, however, can only take us so far. What about events which have never happened before? A few weeks ago, I wrote about myrtle rust, the disease which has brought a number of native plants in both Australia and New Zealand close to extinction. In New Zealand, we had little past experience of diseases affecting native plants which might have indicated what myrtle rust would do. But we did have the experiences of Australia and Hawai’i to indicate what we could expect. This isn’t always the case, though. We don’t always have such clear evidence from other countries. One of the challenges with moving plants, animals and microbes from one part of the world to another is that we can say, with great certainty, that some of them will become serious problems, damaging crops, livelihoods and biodiversity. But it’s often difficult to know which species will cause problems and which will be innocuous or beneficial.
How certain we can be about risk is also important. We may be uncertain whether we are predicting the right events. We may be uncertain about the size of the predicted events, or we may be uncertain how likely they are to happen. This level of certainty is the fourth part of risk I want to talk about. Going back to my example of dropping a glass on a concrete floor – not only can we say that it’s very likely to break, but we can also be very certain about it. But for many risks we struggle to get clear estimates for how bad they will be and how likely they are to happen. If we are introducing new species to an area or unleashing new technologies on the world, we can do an assessment, but we may still be very uncertain. Or we might be very certain of some things and uncertain of others.
Reports such as those from the Intergovernmental Panel on Climate Change state how certain the experts are about the various aspects of climate change. So, for example, they state with high confidence that human activities, largely through releasing greenhouse gases, have warmed the climate by 1.1 degrees Celsius since the latter part of the 19th century. Most of the main points of science about climate change are reported with high confidence, as are many of the impacts, such as increasing drought and heatwaves. Unfortunately, confidence is also high that our present policies and mitigation efforts are insufficient. There is less confidence, however, in areas such as the degree of sea level rise in the longer term.
Understanding the risks we face is a process of working out what the events might be, then working out the size of the events, how likely they are to happen and how certain we can be. But there’s one more part of risk I want to talk about. It’s often missed by people who work on risk, but it’s important. This is how afraid a risk makes people feel.
Some years ago, researchers noted that there wasn’t a clear link between the degree of risk as defined by a scientist and how afraid people were. The best-known illustration of this is the widely quoted example of the safety of commercial air travel compared to travelling in a car. Even though I know I’m very safe flying, I can’t stop the thought of crashes popping into my mind when I get on a plane and at various times during the flight. This doesn’t happen when I’m driving. Nor does it happen when I take the overnight bus between Auckland and Wellington, even though I have no idea what the actual numbers say, and I can remember news stories about fatal bus crashes. My experience isn’t unique; in fact, many people are terrified of flying.
What the researchers found is that there are a number of factors which influence how people feel about the risks. These factors include how much control people have, whether the risks are voluntary or imposed upon them, and whether they can remember particularly horrific examples, but there are many more. Peter Sandman, the American risk communicator whose work had a big influence on me, calls these factors outrage. People differ in how afraid they are of different risks, just as they differ in how they define the level of impact. Overall, though, outrage is consistent and predictable.
I’ve written about outrage previously, although it was a long time ago – before there was a vaccine for COVID-19. At some point, I’ll give a fuller explanation of it, because I find it fascinating, and it’s an important part of how we make good decisions about risk. But I won’t do that now.
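To draw the five parts together, here’s a minimal sketch of how they might sit side by side in a simple risk register. The structure, wording and example entries are mine, for illustration only – real risk assessments are far richer and more contested than a few text fields.

```python
from dataclasses import dataclass

# A toy risk-register entry holding the five parts of risk discussed above.
# Fields and example values are illustrative only.

@dataclass
class Risk:
    event: str        # what might happen
    impact: str       # the size or scale of the harm (and who decides matters)
    likelihood: str   # how likely it is, e.g. judged from past frequency
    certainty: str    # how confident we are in the estimates above
    outrage: str      # how afraid it makes people feel

dropped_glass = Risk(
    event="a glass dropped on a concrete floor shatters",
    impact="minor: one broken glass, perhaps a small cut",
    likelihood="almost certain",
    certainty="very high",
    outrage="low: familiar, voluntary and easily controlled",
)

auckland_eruption = Risk(
    event="a volcanic eruption in Auckland",
    impact="major: widespread damage and disruption",
    likelihood="very low in any one year, quite likely over ~2000 years",
    certainty="moderate: inferred from the geological record",
    outrage="high: imposed, unfamiliar and potentially catastrophic",
)
```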
If we want to make good decisions about issues as diverse as climate change, artificial intelligence and genetic modification, we need to understand the risks. This is far from a rational and scientific process – everyone will see things differently. A wise invasive species expert once said to me that a pest is only a pest because someone perceives it as a pest and it’s affecting values they see as important. We could say this about risk too. But by breaking risk into different parts and understanding how they interact, we can take a step towards better conversations about the challenges we currently face.