A Look at Pinterest and LinkedIn’s Attempts to Stop Misinformation

Today I’ll be comparing the policies that Pinterest and LinkedIn have in place to protect their communities from misinformation. I chose to focus on two platforms that I personally use to gain information rather than use in a social aspect. I use Pinterest whenever I need inspiration, ideas, or infographics. I use LinkedIn when I’m looking for jobs, updating my work experience, and occasionally scrolling through the home feed.

I’ve always associated misinformation risks with platforms like Facebook, Instagram, and Twitter (X), but I hadn’t considered that I could run into misinformation on Pinterest or LinkedIn. This is why I chose to look into these platforms for this post.


Pinterest

Photo by Brett Jordan on Pexels.com

Pinterest is known for aesthetically pleasing images and the ability to create “Boards” that you can “Pin” these images to, creating a variety of digital mood and vision boards. By definition, Pinterest is a visual discovery engine: the algorithm chooses what you see on your main feed, and the site can also be used as a search engine to find specific results like “plants non toxic to cats”.

Pinterest has implemented more policies to curb dis- and misinformation than I had expected. The company focuses on civic misinformation, climate misinformation, conspiracy theories, health and medical misinformation, and misinformation sent in private messages between community members. These policies prohibit misleading content that may harm or deceive their members.

Here’s a brief breakdown of each policy within Pinterest’s community guidelines.

Civic misinformation: This policy protects the integrity of civic participation, such as voting in elections. It also prohibits sharing fabricated or manipulated content and using intimidation.

Climate misinformation: Pinterest was one of the first platforms to establish guidelines that prohibit and remove content that denies climate change, contains conspiracy theories, or misrepresents scientific data.

Conspiracy theories: Pinterest actively removes content that encourages harassment, violence, and hate speech.

Health and medical misinformation: Prohibits and removes content containing unsupported claims, fake cures, and anti-vaccination propaganda.

Private messages: Pinterest will warn or suspend accounts for violating any of the platform’s guidelines in messages sent between members.

Based on these policies, Pinterest seems to be ahead of other platforms in terms of curbing misinformation. The platform takes a proactive approach to content containing violations by deactivating the offending “Pins”. Pinterest uses three methods of moderating content: automated (machine learning/AI), manual (human-reviewed), and hybrid deactivations (combined human and AI review).

It’s difficult for me to find anything lacking in Pinterest’s misinformation policies; my only suggestion would be to keep refining methods that already work. The policies are very detailed, and the platform works hard to prevent users from ever seeing misinformation in the first place. One example is shown in the screenshot below: when I searched “covid vaccine” on Pinterest, the site displayed an information bubble at the top of the results page.

The results, as you can see here, contain a variety of infographics, the majority of which are verified content from the World Health Organization (WHO) rather than from just any content creator on the internet.

Screenshot I took from my Pinterest.com search for “covid vaccine”.

Pinterest’s commitment to combating misinformation through rigorous policies and enforcement should be the leading example for other platforms. With continuous refinement of its strategies, Pinterest can remain a leader in promoting accurate information online.


LinkedIn

Photo by Bastian Riccardi on Pexels.com

LinkedIn is a professional networking and job search platform for the global workforce. It’s popular for connecting with potential employers and other people in your industry and for learning new skills, and it has become a prime social platform for companies to share news and updates. It also serves as a search engine for job openings and a portfolio for your work and education experience.

You might not expect misinformation on a professional platform like LinkedIn, but the reality is that misinformation is everywhere in the digital world. Let’s dig into the policies that LinkedIn has implemented and see how the company is attempting to curb misinformation.

First, I looked at LinkedIn’s professional community policies and searched for any mention of the words misinformation or disinformation. To my surprise, there’s nothing. What the policy does say is “do not share false or misleading content”, which it specifies as content intended to deceive — exactly what should be classified as misinformation or disinformation. I find it strange that LinkedIn doesn’t explicitly say “we prohibit any content containing misinformation” on this main policy page.

Screenshot I took from LinkedIn’s Professional Community Policies.

After scouring LinkedIn’s help pages, I came across a page on misinformation and inauthentic behavior. This page briefly states that LinkedIn works with Microsoft (which owns LinkedIn) to provide its members with tools to be literate LinkedIn users, yet I couldn’t find the tools it’s referring to or any definitive information on this claim. Perhaps they mean the LinkedIn Learning courses offered in partnership with Microsoft, but no specific link is provided to confirm this. At the end of the page, LinkedIn lists three websites that users can turn to to improve their information literacy: the News Literacy Project, The Trust Project, and Verified.

I found another page within the help section, this time on false or misleading content. Finally, I see details on their policies against misinformation. Specifically, it says the platform removes misinformation about elections and emergencies, manipulated media, conspiracy theories and false claims, and health and medical claims, including attempts to contradict the WHO’s data.

LinkedIn removes this type of content through automated models and through users reporting and flagging content to be checked by its “review systems”. It’s not clear whether those review processes are also automated or handled manually by a team.

In comparison to Pinterest, the only policy LinkedIn is missing is one against climate change misinformation. I think LinkedIn needs to follow Pinterest’s lead here, since many environment-focused companies are on LinkedIn, as are people who work in the science and sustainability industries.

Below is a screenshot of a chart covering July–December 2023 that shows the number of content violations removed from LinkedIn’s platform.

We can see in this screenshot that LinkedIn removed over 50,000 pieces of content containing misinformation during those six months.

Screenshot I took from LinkedIn’s Transparency Community Report.

Lastly, I would like to see LinkedIn improve its policies for job listings to prevent fake and spam job posts. Here is an article I found on how to spot fake jobs on LinkedIn. It only instructs the user on how to spot a fake job; there’s no mention of what LinkedIn does to prevent fake jobs from being posted in the first place. This is incredibly disappointing to me. Why do I, the job seeker, have to weed through listings and figure out which are legitimate? LinkedIn should have automated technology that does this for me.
