Blind Trust in Social Media Undermines Climate Action

Social media has the power to amplify misinformation to millions of users. What does this mean for climate activism?



As of 2023, an estimated 4.9 billion people worldwide use social media, a number projected to grow to around 5.85 billion by 2027. Unfortunately for climate action, these platforms have not proven to be beneficial.

As social media continues to grow, so does the need for climate action across the globe. We are seeing drastic changes around the world, with East Antarctica recording temperatures 70 degrees above normal and the Paris Agreement’s 1.5-degree global warming threshold fast approaching. According to the United Nations Our Common Agenda Policy Brief 8, released this June, use of the hashtag #climatescam on Twitter grew from fewer than “2700 posts per month in the first half of 2022” to “199,000 in January 2023”. This illustrates how platforms that inform the world of relevant news can also spread lies, enable hate, and undermine positive movements.

Like many tough-to-solve issues, people find it easier to place blame on individuals or on specific social media platforms, when in reality the responsibility lies somewhere in between. To adequately address the negative impact social media has on climate action, we need participation from both platforms and their users.

Misinformation and disinformation play large roles in this issue. Misinformation is false or inaccurate information spread regardless of intent, while disinformation is the deliberate spread of false information to push a certain agenda. Although disinformation may be the worse of the two, both can create negative consequences like climate skepticism, denial, and outright confusion for the public.

The reason social media has such a strong hand here is its ability to amplify misinformation so easily. With a click of a button, a post can travel from one user’s 900 followers to another user’s 10,000 followers and so on. For example, one study completed in 2021 found that climate misinformation posts on Facebook were receiving up to 1.36 million views per day.

Algorithms can also entrench misinformation. A study conducted in 2022 found that Facebook’s algorithm targets people who have interacted with climate-skeptical posts, then suggests groups and posts supporting that narrative. It’s fair to say this works the other way too, but fighting an algorithm that is helping to spread climate misinformation is no small task.

The information being spread also paves the way for harmful outreach and attacks. As misinformation spreads, it is often at the expense of legitimate science. Like the quick rise of the hashtag #climatescam, it can gain a following fast. That support can leave people who spread or believe the misinformation feeling empowered and in the right. Oftentimes it gives them the chutzpah to attack scientists or other individuals. In a 2022 Global Witness survey on online hate toward climate scientists, 183 of the 468 scientists who completed the survey (39.1%) reported experiencing online harassment or abuse, with negative effects on their health and productivity. The abuse worsened for scholars with higher research output and media exposure.

While it’s easy to see how online abuse and misinformation can have a negative impact, the circle is often larger than expected. For one, this affects environmental scientists’ willingness to seek media exposure. Receiving abuse simply for publishing facts, while trying to keep personal opinions out of it, is certainly demotivating.

Secondly, as a young person who is often on social media, I know this also discourages individuals from pursuing climate action. When people see the conflict and uncivil discussions that play out on social media around climate change, the topic becomes more intimidating to learn about and engage with.

One way to approach this growing issue is for people to stop blindly trusting social media when it comes to climate change. There are 95 million posts shared per day on Instagram, and they certainly are not all fact-checked. When people see information online, it’s important they take a minute to look a little deeper before sharing it with friends.

Additionally, newcomers to climate action should consider looking to other platforms to begin their journey. The sheer amount of information can be overwhelming, let alone determining what is true or not. This can be tough — a large reason people go to social media is for easy-to-follow headlines and quick access to information. Even so, finding pages or people who can be researched to confirm they post legitimate content could be a good bet.

Social media platforms also have a part to play. Changing algorithms like Facebook’s, which push misinformation to people already in that space, is a clear starting point. Currently, organizations like Meta use certain engagement signals to flag posts that might be misinformation, but flagged posts still have to be manually fact-checked before action is taken, which means they can reach a large viewership in the meantime.

One option to consider is placing automatic warnings on any climate-related post whose information hasn’t been fact-checked or could be untrue. Automatic warnings were already used during the COVID-19 pandemic, when platforms like Instagram placed an unremovable banner linking to agencies like the CDC, WHO, or local health ministries. Banners on posts about climate change could be very beneficial, cueing individuals and businesses not to blindly share and back the wrong information.

Many argue that any intervention from platforms infringes upon freedom of speech. While I obviously support climate science, I still share concerns about censorship, as I’m sure many do. Some platforms have already stepped back from the battle against misinformation. For example, in November of 2022, Twitter announced it would end its policy against COVID-19 misinformation, a move that sharply divided public opinion. That said, reminders to do further research on a subject wouldn’t cross the line into censorship. Platforms can also take different approaches depending on whether a post is misinformation or blatant harassment.

When it comes to intervention, another common question is who decides what counts as fact versus misinformation. There is a large spectrum between scientific fact and disinformation. Not everything has been peer-reviewed yet, but that doesn’t make it inaccurate, and not every “hot take” on climate change crosses the line into misinformation. Again, automatic warnings on posts mentioning climate change or other related terms could be the best of both worlds. This would take out the subjectivity, remind users that misinformation exists, and place the onus on individuals to take the issue seriously and find other sources to back the information they encounter on social media.

In short, combating misinformation is going to take effort from both individuals and social media platforms to have a chance at meaningful change. Without some form of collaboration, misinformation will continue to influence people’s perceptions, stimulate abuse and confusion, and potentially stop individuals from beginning their climate action journey.


Owen Reith
