Snitches get stitches
If you use Facebook, and especially if you administer a Facebook Group, you are probably well aware that users can flag one another’s posts for inappropriate or offensive content. When a member of a group flags someone else’s post, the post is sent to the group administrator, who then assesses it and decides whether it needs to be removed.
Pretty simple, right? But last month, quietly and without fanfare, Facebook began using an algorithm to automatically report suspicious content before it gets published. If you administer a Group, you may have already received posts “auto-reported by Facebook,” with the option to publish or delete the post, along with the option to block the group member who made the offending post. Facebook’s watchdog robot automatically detects inappropriate content, such as pornography, and reports it to group administrators.
Still in need of some tuning
At this time, it appears that the auto-reporting robot only works on Groups and not on the site in general or on Pages — but that could change. Some administrators are celebrating the feature, saying it will save them lots of moderating time, since they won’t have to go hunting for offensive posts. FBtutorial.com, which reported the change on its site after receiving auto-reported posts for its Facebook Group, says that auto-reporting is “definitely well needed in Facebook Groups and is a time-saver.”
Others, however, complain that the auto-report feature actually wastes their time, because Facebook often reports posts that are completely acceptable. After receiving many questions and complaints, Facebook added an explanation of the feature to its Help page and also attempted to fine-tune the algorithm.
Facebook still has the right to remove content
Facebook says that if administrators continue to approve similar auto-reported posts, the robot will eventually learn not to flag such posts. However, any posts approved by administrators must still comply with Facebook’s Community Standards, and the company retains the right to remove offensive content, even if group administrators approved it.
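Facebook hasn’t published how its algorithm works, but the behavior described above — approving similar auto-reported posts eventually teaches the robot to stop flagging them — can be sketched as a toy feedback loop. Everything here (the keyword weights, the threshold, the decay factor) is a hypothetical illustration, not Facebook’s actual method:

```python
# Hypothetical sketch of an auto-reporter that learns from admin feedback.
# Assumption: posts are scored by weighted keywords, and an admin approving
# a flagged post halves the weights of the words that triggered the flag,
# so similar posts pass the threshold in the future.

FLAG_THRESHOLD = 1.0  # assumed cutoff; scores at or above it get flagged

class AutoReporter:
    def __init__(self):
        # Assumed starting weights for "suspicious" keywords.
        self.weights = {"nsfw": 1.5, "xxx": 1.5, "spam": 1.2}

    def score(self, post: str) -> float:
        # Sum the weights of any suspicious keywords in the post.
        return sum(self.weights.get(w, 0.0) for w in post.lower().split())

    def should_flag(self, post: str) -> bool:
        return self.score(post) >= FLAG_THRESHOLD

    def admin_approved(self, post: str) -> None:
        # The admin published a flagged post: decay the weights of the
        # keywords it contained, so the filter stops flagging similar posts.
        for w in set(post.lower().split()):
            if w in self.weights:
                self.weights[w] *= 0.5

reporter = AutoReporter()
print(reporter.should_flag("totally xxx content"))  # flagged at first
reporter.admin_approved("totally xxx content")      # admin publishes it
print(reporter.should_flag("totally xxx content"))  # no longer flagged
```

In this toy version, one approval is enough to unflag similar posts; a real system would presumably need repeated signals, which matches Facebook’s wording that the robot “eventually” learns.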
Note from the Editor: We’ve seen several “auto-reported by Facebook” posts in various groups we run on Facebook, but many aren’t NSFW; they’re more in line with posts we’ve deleted in the past after members reported them to us as admins. It is our belief that the algorithm is smarter than just “oh, that’s porn,” and is based more on the administrators’ history of deleted and/or reported posts. This feature could save us a lot of time in our rowdier and heavily populated groups.
What about your Facebook Group? Have you received any auto-reported posts? Has it saved you time, or is it a nuisance?