Two-faced
How social media is being used to take, and save, lives (13 minute read)
When I’m walking my dog, Donna, one of her favourite spots is a small area of weedy ground bordered by a bus stop and a retaining wall. It seems to accumulate windblown leaves and rubbish, which is probably why she finds it so appealing. There’s always something interesting to smell there.
A few days ago, I was looking at the ground while Donna investigated her favourite spot, when I spied something camouflaged among the dead leaves. There were a couple of round shapes which I realised were mushrooms. I reached down and plucked one so I could see the underside, since the features that matter for identifying a mushroom are all underneath.
The top of the mushroom was an unassuming shade of pale brown, almost the exact colour of the dried pohutukawa leaves around it. But the underside was extraordinary – a vivid violet.
It made my day. I was as excited as Donna when she found the wrapper from someone’s fish and chips in the exact same spot. I’d never seen such a remarkable mushroom. I took a photo of it – here it is, so you can see that I’m not kidding.
Although I was sure I had never seen it before, I did think I’d seen pictures of it, and assumed it would be easy to identify. When I got home, I did a quick internet search on purple mushrooms. A few possibilities came up, but I don’t have the same skills with mushrooms as I have with plants. I wasn’t certain of what I was seeing. So, I decided to ask Facebook.
This might sound like a questionable way to identify something. After all, social media is full of dodgy information and people who are overconfident in their own skills. But there are some glimmers of good there too. I’ll get to that, but first, I want to look at the role of social media in spreading disinformation.
I’ve considered writing about social media before, but I’ve hesitated. I find the topic overwhelming. Even though I write about climate change and water quality, and these can be heavy, depressing topics, something about social media bothers me in a different way. There’s something so insidious about it. However, it’s an important topic related to disinformation, so I’ve decided to face up to it.
Disinformation was around long before social media. In Europe, from the 14th to the 17th century, tens of thousands of women, and some men, were killed following accusations of witchcraft. Among the wild claims, these women were accused of killing children to make magical ointments from their fat, eerily similar to some of the claims from QAnon conspiracy theorists. Lurid claims about the danger of vaccines date back to the widespread use of Jenner’s smallpox vaccine in the 1800s. And when I was a child, I believed that a people called Moriori were the original inhabitants of New Zealand, but were wiped out by Māori. The story makes for a convenient justification for colonialism, which perhaps explains why it persists, but the truth is much more complicated. I recommend this article by Maui Solomon for a very much not-extinct Moriori perspective.
Technology has long been the fuel of disinformation. European attacks on alleged witches were first ignited by the publication of a book called Malleus maleficarum. Without Gutenberg’s printing press, few people would have ever seen it, and the book would never have attained its bestseller status.
Books don’t frighten me; in fact, I adore them. My house has so many that I keep running out of bookcases. Yet books proved to be a lethal technology. People believed what was published and acted on it. Never mind that some of the Pope’s top theologians dismissed it; the Malleus maleficarum provided an appealing ideology, and a moral panic over witchcraft swept through north-western Europe.
In theory, then, I shouldn’t find social media so frightening. But I’m old enough to remember when it didn’t exist. Not only that, I remember quite precisely the moment that Facebook began trying to manipulate me. When I first used it, what I saw when I logged on was a list of the most recent posts from my friends, in reverse chronological order. One day, I noticed that the posts were in the wrong order. I looked more carefully, and I saw that there was a small box which indicated that I was seeing what Facebook deemed to be the “top posts”. But I still had the option to revert to the “most recent” posts. Every time I used Facebook, I made sure I was seeing the most recent posts. I didn’t trust them to choose for me.
Before long, though, that option disappeared. Soon, all I could see were the posts that Facebook wanted me to see, in the order they wanted me to see them. By that time, Facebook was a convenient way for me to keep in touch with friends and family, so I kept using it, even though I was unhappy about the way they were controlling what I saw. I pushed that discomfort to the back of my mind, as many of us did, I suspect.
At the time, I didn’t understand what was going on – all I knew was that I didn’t like it. Years later, I learned about how social media directs information to us, and all became clear. When Facebook (and Instagram) originally began, what we saw was governed by a subscription model: we saw what the people we subscribed to posted. Some types of social media, such as Twitter in its early days, introduced the ability for people to share posts from others, a network model. Before long, Facebook began doing this too. But, increasingly, what we see on every platform is governed by a set of rules which predicts what we will like, based on what we interact with on, and off, the site. That set of rules is called an algorithm. These days, when I open Facebook, it’s a stream of sponsored and recommended content I didn’t sign up to.
Social media companies are basically advertising companies. They deliver our attention to their customers. Their goal is to keep us online as long as possible. Material which triggers our emotions does that – the stronger the emotion, the better. Whether the material makes us happy or angry is irrelevant, as is whether that material is true or false. If disinformation which gets us upset keeps us online, then it’s good for the advertisers. And there is published evidence that highly partisan political statements and inaccurate information do get more attention than accurate or more measured statements.
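For readers who like to see the idea made concrete, here is a minimal, purely illustrative sketch of the difference between the old chronological feed and an engagement-ranked one. This is not Facebook’s actual system; the post fields, the scores and the function names are all invented for the example. The point it illustrates is simply that the ranked feed sorts by a predicted reaction, and nothing in it asks whether a post is true.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_at: float             # when it was posted (Unix timestamp)
    predicted_engagement: float  # hypothetical score from an engagement model

def chronological_feed(posts):
    """The old model: the most recent posts from people you follow, newest first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts):
    """The algorithmic model: whatever the platform predicts you will react to
    most strongly comes first, regardless of when it was posted."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The only difference between the two functions is the sort key, and that single change is the whole business model.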
The corrosive impact of a system which promotes emotive disinformation has never been more apparent than in Myanmar. It’s a country which seldom makes the news, but the situation there is dire. For most of the last fifty years, the country has been tightly controlled by the military – the most recent repression began with a coup in 2021. However, there was a brief period from around 2013 when Myanmar had something resembling democracy. It opened up its telecommunications and suddenly everyone had a smartphone. Facebook was one of the few companies to support Burmese text. Everyone in Myanmar loved Facebook.
In 2014, rioting erupted in the city of Mandalay. The violence was inflamed by the Buddhist nationalist organisation Ma Ba Tha, fronted by a monk who had been stoking anti-Muslim sentiment with racist disinformation for years. But the riots were sparked by a post on Facebook alleging an attack on a Buddhist woman by Muslim men. The government shut down Facebook, and an executive from the company travelled to Myanmar to discuss the situation. At that time, Facebook’s community standards, which set out what is not permitted on the platform (such as selling weapons and calling for people to be exterminated), were not even translated into Burmese, according to David Madden, a technology entrepreneur who was then working in Myanmar. He says that it took Facebook nearly a year after the Mandalay riots to translate them.
Facebook had no Burmese speakers monitoring content until 2015, and even then only a couple were employed. Madden was concerned enough that he travelled to Facebook’s headquarters in 2015 and spoke to Facebook executives. I’ve linked to a fascinating interview with him from 2018, which is well worth watching (there’s also a transcript).
David Madden | FRONTLINE (pbs.org)
In 2017, the Burmese military attacked the Rohingya in Rakhine State, killing thousands and driving nearly a million refugees into Bangladesh, in what the US Government has called genocide. In 2018, the role of Facebook came to light when a Reuters investigation found more than 1,000 posts attacking the Rohingya on Facebook, some dating back to 2013. Although people debate what exactly can be defined as hate speech, it’s hard to argue that statements such as “we need to destroy their race”, suggestions that Rohingya be fed to pigs, or sharing a photo of a journalist and urging people to kill him are anything else.
It’s difficult to prove how much of a role social media played in this conflict, or in other instances of polarisation. However, one thing is clear – there are those who seek to exploit the potential of social media to aggravate divisions. We’ve probably all heard claims about Russian efforts to influence the 2016 US election. But some of their other activities are less well known.
Russia Today is a news network owned by the Russian government. Government ownership in itself isn’t necessarily a problem – many countries around the world have state-owned media. However, the difference between the content in Russian and the content in English points to the network’s intent. In late 2021 and early 2022, Russian language news stories spoke of the seriousness of COVID-19, the importance of lockdowns and vaccination, and the safety of the Russian-made vaccine. During the same period, English language stories emphasised the ineffectiveness of vaccination, the dangers of vaccination and conspiratorial allegations against Pfizer and Moderna. Also disturbing is the huge spike in Russian disinformation in New Zealand during the same period – far greater than in Australia or the USA. We’re a long way from Russia, but social media brings us close.
There is so much more I could look at with social media and disinformation. But, instead, I want to go back to my stunning violet mushroom, and look at cases where social media is being used for good.
The group where I shared my picture of the mushroom is a Facebook group for identifying mushrooms found in New Zealand. I joined it more than four years ago, during New Zealand’s first COVID-19 lockdown. There were various mushrooms growing at the park across the road from me, and I was curious to learn more.
It wasn’t the first identification group I joined. I was already a member of a New Zealand plant identification group. I enjoy being a part of that group, but I have to be honest here. It’s the kind of group which values participation over ability. When someone posts a photograph, it seems as if everyone with an opinion chimes in. I’ve seen as many as four completely different answers for what to me seemed like a fairly simple query. A beginner, or someone in a panic because their child just stuck a random plant in their mouth, wouldn’t know who to believe.
The mushroom group is better than that. It’s rare to see many contradictory answers and there are some reliable identifiers there. I’ve learned a lot from the group. But I wouldn’t trust my life to it.
There are, however, Facebook groups that I would trust my life to. One of the most well-established is a US-based group called National Snakebite Support, which gives expert advice to snakebite victims or pet owners whose animals have been bitten. The group is private, so unless you’re a member, you can’t see what’s being discussed, but I joined because I wanted to understand what they were doing. Central to the group’s work is correcting widespread misinformation about treating snakebites. At least half of the people requesting help are doing things which are, at best, ineffective and, at worst, going to make the situation worse. Here, then, is a Facebook group that’s fighting the good fight against misinformation. (It’s worth joining if you are in a part of the USA where venomous snakes are common, but there’s not much point if you are in New Zealand).
Another potentially life-saving Facebook group is called Poisons Help: emergency identification for mushrooms and plants. It is for identifying unknown plants and mushrooms in emergency situations, usually when they have been eaten by a human or an animal. It is a public group, so you can see what is done there even if you aren’t a member of Facebook.
The group involves a few hundred experts from around the world, including professional scientists, peer-educated foragers, horticulturalists, and self-confessed plant and mushroom nerds. When someone’s child, or dog, or cat, or in one case a pig, eats a plant or mushroom, they can submit a photo to the group. As soon as someone submits a picture, a member of the triage team checks that the right information has been given and sends an alert to the relevant group of experts. Most of the time, the mystery plant or mushroom is identified within minutes.
The group is a good case study in managing the accuracy of information on social media. Some of the experts know each other in person, some from online identification groups. In most cases, new experts are added to the Poisons Help group when other experts vouch for their ability. Only the group’s experts are allowed to give an identification. Because alerts go to all the relevant experts (botanists for plants, mycologists for fungi), there are usually multiple people answering a query. The group works on consensus – ideally at least three people should be in agreement.
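Just to make that consensus rule concrete (the group itself runs on people, not code), here is a toy sketch of the “at least three in agreement” check. The function name, the threshold parameter and the example species are my own inventions, based only on the description above.

```python
from collections import Counter

def consensus_identification(expert_answers, minimum_agreement=3):
    """Return a species name only when enough vetted experts independently
    propose the same identification; otherwise there is no consensus yet."""
    if not expert_answers:
        return None
    species, votes = Counter(expert_answers).most_common(1)[0]
    return species if votes >= minimum_agreement else None

# Three experts agree and one dissents, so the identification stands.
print(consensus_identification(
    ["Amanita phalloides", "Amanita phalloides", "Amanita phalloides", "Amanita muscaria"]
))
```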
Some of the queries are relatively simple. Every Valentine’s Day and Mother’s Day there are numerous posts where a lily in someone’s bouquet has been chewed by a cat. It’s crucial to distinguish between the different types of lily, as there are some which can be deadly to cats (although they are harmless to dogs and humans). Fortunately, the deadly ones are almost always easy to distinguish for those who know what to look for, even from a few mangled petals, or the stems left behind when the flower withers. Sometimes the queries are more difficult. I’ve seen house plants from which every leaf has been chewed off, fragments of leaf or mushroom collected from a dog’s vomit, and photographs which looked like they were taken with Vaseline smeared on the camera lens. Sometimes there are behind-the-scenes discussions between the experts to try to get an answer, but it’s very rare that the group can’t work things out.
The group is valuable because, although many countries have phone lines which can provide advice in case of poisoning, those services can’t do much if they don’t know what the poison is. Emergency poison lines are staffed by health professionals, not plant and mushroom experts. So a group of experts who volunteer their time on social media fills that gap. If you want to read more, there’s an article on Vox and an interview with co-founder Kerry Woodfield on Science Friday which are worth looking at.
I sometimes despair about social media, and I do believe that it is dangerously under-regulated. It gives people who want to spread chaos and hatred a new weapon, and even without malicious intent it can cause significant harm. I don’t believe that tech companies can be trusted to do the right thing.
But then I remind myself that people have found a way to use a potentially dangerous weapon to do something good. We have to find a way to tip the balance in the right direction.