Facebook will use a new machine-learning algorithm to moderate content posted by users. The algorithm's first task will be to identify the most harmful posts for subsequent removal.
Image Source: Business Insider
Today, content that violates the social network's rules (spam, hate speech, incitement to violence, etc.) is usually flagged either by users themselves or by machine-learning algorithms. The system handles the most obvious cases automatically, for example by removing posts that violate the platform's rules or blocking the account of the user who published them. The remaining cases are queued for closer review by human moderators.
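The split described above, where clear-cut violations are handled automatically and everything else goes to a human queue, can be sketched roughly as follows. This is purely illustrative: the threshold, the `Post` structure, and the `triage` function are all assumptions, since the article gives no detail about Facebook's internal implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical confidence cutoff for fully automatic action; Facebook's
# real thresholds are not public.
AUTO_REMOVE_THRESHOLD = 0.95

@dataclass
class Post:
    post_id: int
    violation_score: float  # classifier's confidence that the post breaks a rule

def triage(posts: List[Post]) -> Tuple[List[int], List[int]]:
    """Split flagged posts: obvious violations are removed automatically,
    everything else is queued for human moderators."""
    removed, review_queue = [], []
    for post in posts:
        if post.violation_score >= AUTO_REMOVE_THRESHOLD:
            removed.append(post.post_id)
        else:
            review_queue.append(post.post_id)
    return removed, review_queue
```

Under this sketch, a post the classifier is almost certain about never reaches a moderator, while borderline cases always do.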
Facebook currently employs about 15,000 moderators around the world. The social network has repeatedly been criticized for not supporting them adequately and for not hiring more staff. The moderators' job is to sort through reported posts and decide whether they break the social network's rules.
Previously, moderators reviewed posts in the order they were published. Facebook has decided to change this approach and look first at posts that are gaining the most reach and are therefore capable of doing the most harm. The artificial-intelligence (AI) model will use three criteria to select the most harmful posts: virality, the post's storyline, and the likelihood that it breaks the rules. The AI will then flag such posts so that they rise higher in the moderators' review queue.
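One plausible way to combine the three signals named above into a single review ranking is a weighted score. The weights, the function names, and the assumption that each signal is normalized to [0, 1] are all hypothetical; the article does not say how Facebook's model actually combines them.

```python
def priority_score(virality: float, storyline_severity: float,
                   violation_likelihood: float) -> float:
    """Hypothetical weighted sum of the three signals the article names.
    All inputs are assumed to be normalized to the range [0, 1]."""
    return 0.4 * virality + 0.2 * storyline_severity + 0.4 * violation_likelihood

def rank_for_review(posts):
    """Sort flagged posts so the highest-scoring ones reach moderators first."""
    return sorted(posts, key=lambda p: priority_score(*p["signals"]), reverse=True)
```

With a scheme like this, a viral post that is very likely to break the rules jumps ahead of a low-reach borderline case, which matches the prioritization the article describes.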
“All content that violates social media rules will continue to be reviewed by people, but the new system will allow for more efficient prioritization in this process,” Facebook commented.
According to the social network, the new approach will help it respond faster to rule-breaking posts that have a wide reach.