At some point in June of this year, I realised that I had completely forgotten the birthday of The Turnstone. I published my first article in mid-May of 2020, just as New Zealand was emerging from the first COVID-19 lockdown. I knew that I wanted to write about science and the environment, but I wasn’t sure of the details. However, since plagues were on my mind, that’s mostly what I wrote about. COVID-19 featured, of course, but I also looked at other plagues, such as Ebola and locusts. I wrote about what interested me, but I didn’t have a strong sense of purpose in my writing.
Within a few months, though, I found myself more and more drawn to one specific area – vaccines. Vaccines were being developed against COVID-19, but they were already a contentious issue for many people. Outrage was smouldering away, and I could see people pouring on the fuel. Some of that fuel came from vested interests overseas and the deep divisions in American politics. I recognised that there was nothing I could do about that. I was more troubled by the fuel added by people who should have been doing the opposite. I could see people acting like the scientist I wrote about last week, who wanted less regulation of gene technology, but spoke in a way likely to encourage more regulation. I saw people who supported vaccination talking about it in ways which would only undermine confidence in vaccines, and probably medicine in general.
Through my work, I’ve spent a lot of time talking with people about risk and looking at how people make decisions when faced with risk. I’ve looked closely at the research and practice of risk communication. My work has been mostly about invasive species, an area which includes contentious ways of controlling those species, especially with pesticides. In mid-2020, I wondered whether the lessons I had learned could help with conversations about COVID-19 vaccination. I started to look at what puts people off vaccination, even though it has saved an estimated 154 million lives over the last 50 years, according to the World Health Organisation.
I encountered a common belief among vaccine supporters that a fear of vaccines is something recent, a luxury of those privileged to live in countries where diseases such as diphtheria and polio have been tamed. I often came across the statement that in the 1950s, people lined up around the block to get their children vaccinated against polio, and that something has gone wrong since then. But as I looked into the history of vaccines and vaccination, I realised that the 1950s was an aberration. From the earliest days of smallpox vaccination, some people preferred to take their chances with a prevalent and deadly disease.
The reasons for this are complex, and I’m not going to go back over them here. If you are interested, you can go back to my series of articles from August 2020, The company of the dead, which looks at the history of vaccination, what affects confidence in vaccines and the role of good risk communication, as practised by people such as Peter Sandman and Heidi Larson. I also did another series in 2021, Confidence, complacency and convenience, which looks further at the barriers to vaccination.
Vaccines and gene technology are very different issues in many ways, not least in my opinions on them. I’m an enthusiastic supporter of vaccines and have had more of them than I can remember, including things like typhoid, rabies and yellow fever. I’m much more ambivalent about gene technology. But there are important lessons from vaccines which are relevant to gene technology. The things which upset people about vaccines are similar to the things that upset people about gene technology. We need to understand what these are in order to have a meaningful, constructive conversation about how we regulate the technology.
All of us make decisions about risk every day, and most of us don’t give them much thought. I just drove to the supermarket, and never considered the possibility of crashing my car. I use sharp knives, handle potting mix, clean up after my dog, spend time in the sun, cross roads, charge lithium batteries… all of these commonplace activities have risks, but I’m used to living with them.
We also live with risks which are out of our hands. I live on a steep hill in a shaky city, but I don’t lose sleep over earthquakes. There really isn’t anywhere in New Zealand which isn’t vulnerable to one or more natural disasters, but we get on with our lives.
On the whole, these kinds of risks don’t upset us. They don’t prompt what Peter Sandman calls outrage. Why are some risks more likely to upset people than others? And why do people have such different views about these risks?
If you’d asked me these questions thirty years ago, I would have struggled to answer them. I’m pretty sure I would have said that people would be more upset about risks that they thought were more dangerous. I’m certain I would have assumed that people had different views because they had different information about the risk.
In the late 1990s, a friend lent me Peter Sandman’s book about risk communication. When I read it, my views on risk were changed forever. The book explained so many things I had observed about the way people responded to invasive species and pesticides. It also explained what I observed within myself and among experts when people didn’t believe what those experts and I said about risks.
The book made a distinction between how likely a risk was to kill people, termed the hazard, and how much a risk would upset people, termed the outrage. Hazard and outrage are largely independent. People don’t get outraged over the risks of hang gliding or SCUBA diving or mountaineering – they either choose to do them or they don’t. Insurance companies, however, pay close attention if you decide to take them up, and may charge you more for insurance. Statistically, these are perilous pastimes.
On the other hand, there are things which are far less likely to kill us which tend to worry us more. I really don’t like nuclear power, and it seems to me like a bad idea in a country so prone to earthquakes. But I don’t worry about all the massive dams holding back huge quantities of water in our earthquake-prone hills, even though I know that statistics say that hydro power has killed many more people than nuclear power.
When I first saw these figures, there was a reference to a specific disaster, the catastrophic failure of the Banqiao Dam in China in 1975. The resulting floods killed 26,000 people directly, and nearly 150,000 more died from disease and famine triggered by the floods. I assumed that this horrific disaster skewed the statistics. But as I looked more closely, I realised that there have been many disasters and close calls involving hydro power, some relatively recently. In 2009, 75 people were killed at a Russian hydro station. In 2017, there was a near-disaster in the USA, which prompted the evacuation of more than 180,000 people. In April this year, seven workers at an Italian hydro station were killed.
For the people caught up in them, all of these disasters must have caused unimaginable suffering. But even though I recognise them as terrible, none of them evokes the horror of the Chernobyl nuclear disaster, even though it was nearly 40 years ago. This isn’t about the statistics – it’s about outrage.
My point here is not to discuss the relative merits of different forms of energy, although it’s a topic I’d like to explore at some stage. I wanted an example to illustrate how outrage works, using my own responses as an example. Far from being some kind of random, irrational response, outrage is something which is reasonably consistent and predictable, even though there are individual differences.
Nuclear power has a number of characteristics which make it more likely to provoke outrage than hydro power. The most obvious one, I believe, is that radiation is something we dread. My generation grew up with books such as Sadako and the Thousand Paper Cranes, about a girl from Hiroshima dying of leukaemia. We sang songs like What Have They Done to the Rain, a protest song against nuclear testing. We remember French nuclear testing at Moruroa, the protest flotilla, the bombing of the Rainbow Warrior. I even wrote to world leaders urging nuclear disarmament. Well before Chernobyl, everything I learned about nuclear weapons emphasised radiation as something insidious and deadly. The dread of radiation is magnified, too, because it is particularly harmful to children, and the harm can be passed on to future generations.
Whether a risk is dreaded or not is one of the documented factors which affect outrage. So too is whether we perceive something as familiar, memorable or industrial as compared to natural. I’m used to hydro power, I haven’t seen lots of memorable stories about hydro disasters, and it doesn’t seem unnatural, even if I rationally understand it is environmentally destructive.
Control is another factor which affects outrage. I have very little control over whether I am exposed to radiation or not, because I can’t see it. At least I would know if there was a flood happening around me. Control is also the main reason we don’t get outraged about our perilous pastimes, and why opposition to vaccines is increased by mandates.
Gene technology has some parallels with nuclear power. It’s unfamiliar to us, we can’t see it and it isn’t within our control. I don’t think we perceive it as natural. We have a body of literature, starting with Mary Shelley’s Frankenstein, which leads us to fear what we perceive as artificially created life. Because it involves changes to DNA, it connects to the fear of harm passed on to future generations.
There are other factors connected with outrage which are also worth closer examination. When specific decisions about risk are being made by regulators, companies or communities, the decision-making process has a major influence on outrage. Whether people are given the opportunity to contribute, whether people feel as if their concerns are heard, whether the experts on the topic behave in a trustworthy manner — all of these influence how we feel about a risk.
Fairness and moral relevance matter too. If a company wants to do something which makes it a profit, but it doesn’t offer wider benefits and a community bears the risk if something goes wrong, it feels unfair. When I consider how I feel about Roundup Ready soybeans as opposed to insulin produced by genetically modified bacteria, I can see the influence of fairness and moral relevance.
Does this mean I think we should base all our decisions about regulating risks on their capacity to provoke outrage? I don’t, but I don’t think we should base all our decisions purely on science or economic analysis either. Scientists and regulators need to recognise and acknowledge outrage, including their own, as a part of risk. In fact, that was one of the most important revelations I learned from Peter Sandman’s book — in the face of public outrage, scientists and regulators often respond with outrage of their own. Such a situation, with lots of upset people talking past each other about the risk, is unlikely to result in good decisions. Scientists and regulators can have blind spots too, and they need to have the input of a wide range of people for good decisions.
I also think it helps us if we understand that we have responses to risks which are conditioned by our culture and experience — just as I learned years ago that Europeans have a cultural perception of rats which is not shared by Māori. For me, this understanding has helped me to listen better to other perspectives.
Perhaps the most important lesson we can take from understanding outrage is that it tells us what we should expect from our scientists and regulators. Decisions about how to manage risks, whether those risks relate to electricity generation, invasive species, artificial intelligence or gene technology, are ultimately decisions about values. Because we have different values, everyone will have different views, and we won’t all agree. But we need to have processes which allow constructive, meaningful conversations about risk, and we need to hold our regulators to that.