An update on how we keep members safe on LinkedIn
On LinkedIn, real identity matters. We require every profile to represent a real person, every member to represent themselves accurately, and everyone to contribute to the community authentically. As the professional community where 810 million people come to share and gain expertise, hire, learn new skills, and connect to economic opportunity, it’s critical that LinkedIn remains a place for authentic conversations with people you can trust.
Our policies prohibit fake profiles, and our Trust and Safety teams work every day to identify and restrict inauthentic activity. We regularly roll out scalable technologies, such as machine learning models, to keep our platform safe. As inauthentic behavior grows more sophisticated, so do our detection methods. Here are some of the latest actions we’ve taken on fake profiles to help keep you safe while engaging in our community:
Over 97% of the fake accounts we remove from the platform are caught and removed by our automated defenses, and almost 98% of those are restricted proactively, before members report them. We also use member reporting to help us address the remaining fake accounts. Our Transparency Report expands on this data; the latest report covers the first half of 2021.
We have a dedicated team of data scientists, software engineers, machine learning engineers, and investigators who continuously analyze abusive behavior on the platform and improve the technology we use to combat it. The team builds automated defenses that analyze risk signals and patterns of abuse, take action automatically, and adapt as new threat patterns emerge.
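As a toy illustration of how signal-based automated defenses can work (the signal names, weights, and threshold below are hypothetical, not LinkedIn's actual model), weighted risk signals can be combined into a score that triggers an automated restriction once it crosses a threshold:

```python
# Toy sketch of signal-based abuse scoring.
# Signal names, weights, and the threshold are hypothetical.

RISK_WEIGHTS = {
    "disposable_email": 0.4,            # registration-time signal
    "burst_connection_requests": 0.3,   # behavioral signal
    "reused_profile_photo": 0.3,        # content signal
}
RESTRICT_THRESHOLD = 0.5


def risk_score(signals: dict) -> float:
    """Sum the weights of the risk signals that fired for an account."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))


def automated_action(signals: dict) -> str:
    """Restrict proactively when the combined score crosses the threshold."""
    return "restrict" if risk_score(signals) >= RESTRICT_THRESHOLD else "allow"
```

For example, an account that fires both the registration and behavioral signals above would score 0.7 and be restricted without waiting for a member report; real systems typically learn such weights from labeled data rather than hard-coding them.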
To keep pace with an ever-changing threat landscape, our team is investing in new technologies for combating inauthentic behavior on the platform. These include advanced network algorithms that detect communities of fake accounts through similarities in their content and behavior; computer vision and natural language processing models that detect AI-generated elements in fake profiles; anomaly detection of risky behaviors; and deep learning models that detect sequences of activity associated with abusive automation.
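The community-detection idea can be sketched in miniature: link accounts whose behavioral features overlap heavily, then treat connected components of that similarity graph as candidate fake-account clusters. The feature sets and similarity threshold here are illustrative only, not LinkedIn's actual algorithm:

```python
# Toy sketch of detecting communities of similar accounts.
# Feature sets and the similarity threshold are illustrative only.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two behavior-feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


def find_communities(accounts: dict, threshold: float = 0.6) -> list:
    """Link accounts whose behavior overlaps heavily, then return the
    connected components: clusters of likely coordinated accounts."""
    parent = {acct: acct for acct in accounts}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(accounts, 2):
        if jaccard(accounts[a], accounts[b]) >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for acct in accounts:
        groups.setdefault(find(acct), set()).add(acct)
    return list(groups.values())
```

Two accounts sharing the same templated headline, stock photo, and invite-burst pattern would land in one cluster, while an account with distinct behavior stays alone; production systems operate on far richer features and scale with approximate similarity search rather than all-pairs comparison.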
Our team is committed to ensuring the safety of the LinkedIn platform to help members connect to economic opportunity. Our work here is ongoing and we look forward to sharing more about our progress.