Today I decided that this website will be my main publishing platform. Facebook and Instagram are where the audience is, but I hate being at the mercy of censorship decisions on a platform where enforcement of content guidelines is arbitrary and inconsistent. Wondering if my account will be suspended because I have posted a photo of people running around joyously, covered in mud (and somewhere under that mud is a female nipple) seems like a game I don’t want to play.
Facebook requires users to be at least 13 years old before they can create an account, but their censorship decisions regularly remove content that would be permissible under a G rating in Australia. I am not too keen on the idea of deciding that adults can’t do something harmless because children may have strayed into their space. Imagine that you are throwing a barbeque at your house for all of your friends. One of your friends decides to bring their kids along. Do you declare that none of your guests can enjoy a beer or two because children shouldn’t drink alcohol, or do you just not give any beer to the children (or…and I know this is a bit of a stretch…leave it up to the parents to be responsible for their own children)?
I see other people having problems with Facebook’s policies all the time: from the person who is unable to buy Facebook ads for their tantra workshops, to the friend with a swimwear label who recently had a photo of herself in a swimsuit removed.
But thinking of Facebook as if it were a single person with wildly inconsistent standards is the wrong way to look at the problem. Facebook content moderation is a complex system, composed of thousands of people, terabytes of data, and millions of lines of code. The system is steered by external incentives – from a vocal minority of religious conservatives, to the DMCA, to FOSTA/SESTA, to advertisers who love to use sex to sell products but are somehow allergic to their advertisements being shown anywhere near even mildly sexual content.
Trying to filter the firehose of images that are uploaded to a social media platform every minute is a difficult task that doesn’t scale well, and it is the kind of problem where, for simple cost reasons, involving a human decision-maker is a last resort.
Here is how content moderation might work on a social media platform:
- One of the most important foundations for content moderation is to have *vague* public guidelines, because having *clear* public guidelines draws you into a public debate both about the details of the guidelines and about whether a guideline has been applied correctly. Maybe this sounds a little cynical, but as a company there are no upsides to having clear public guidelines, only PR risks and staff costs. Obviously clear, detailed guidelines are still needed internally.
- Fingerprinting of images and comparison with a database of known-bad images. A blacklist keeps content from being posted, and can be applied to older posts to remove content recently identified as bad. A whitelist of images that have been flagged in the past but cleared by human review prevents that same content from being removed or reviewed again. (There is a rough sketch of this lookup after the list.)
- Automatic image analysis. I remember when the way a computer decided whether an image might contain nudity was to look for a large cluster of “skin tones” (funnily enough, the software didn’t pick up images of naked people with dark skin). These days this is done with Machine Learning (artificial intelligence). (The old heuristic is sketched after the list.)
- User flagging. If an image slips through the automated filters, it can always be reported by users. Except now there is the problem of abuse when people report content that meets the guidelines but that they just don’t like (or content which has been posted by someone they don’t like). Reported content probably gets a second pass through the automated filters, but wouldn’t get passed to a human moderator unless it passes a threshold of being reported x times. (A sketch of that triage appears after the list.)
- Human moderators. Humans are expensive, so a real person is only going to look at an image as a last resort. The economics of a social media platform probably break completely if 99.99% of content cannot be filtered automatically. To keep costs down, most of the content moderation work would be outsourced to a low-cost provider, probably in a low cost of living country, and staff would be managed the way you would expect in a digital sweatshop. Expecting someone from the Philippines who has a target of 600 image reviews per hour (5 seconds per image, with a 10-minute break every hour) to spend multiple minutes reading and understanding your reasoning for why your flagged photo actually complies with the vague content guidelines is not very realistic, so outcomes will be pretty arbitrary. It would be desirable to have some kind of escalation process, but undesirable (in a business sense) to make it too easy to escalate (time is money). Post-review, images are probably added to the training set for the Machine Learning algorithm, and added to the blacklist/whitelist database to prevent double-handling. (The last sketch after the list shows that feedback loop.)
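To make the fingerprinting step a little more concrete, here is a minimal sketch of what that lookup might look like. The difference-hash scheme, the in-memory sets, and the distance threshold are all my own assumptions for illustration; a real platform would use a purpose-built perceptual hash (something like PhotoDNA or PDQ) backed by a proper database.

```python
# A rough sketch of fingerprint-based moderation: an uploaded image is hashed
# and compared against known-bad (blacklist) and previously-cleared (whitelist)
# fingerprints before anything more expensive happens.
# The dHash scheme and the in-memory sets are illustrative assumptions only.
from PIL import Image  # pip install pillow

def dhash(path, size=8):
    """Difference hash: shrink to (size+1) x size grayscale, compare neighbouring pixels."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

BLACKLIST = set()  # fingerprints of known-bad images
WHITELIST = set()  # fingerprints already cleared by human review

def check_fingerprint(path, max_distance=4):
    """Return 'block', 'allow', or 'unknown' (falls through to the later checks)."""
    fp = dhash(path)
    if any(hamming(fp, bad) <= max_distance for bad in BLACKLIST):
        return "block"
    if any(hamming(fp, ok) <= max_distance for ok in WHITELIST):
        return "allow"
    return "unknown"
```

Matching on a small Hamming distance rather than exact equality is what would let the lookup catch resized or recompressed copies of an image, not just byte-for-byte duplicates.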
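And for what it’s worth, the old “cluster of skin tones” approach really was about that crude. The sketch below is a reconstruction of that style of check, not anyone’s actual code; the RGB range and the 30% cut-off are made-up numbers, and the narrowness of that range is exactly why filters like this missed darker skin.

```python
# A reconstruction of the old "cluster of skin tones" heuristic.
# The RGB range and the 30% threshold are illustrative guesses; the bias of the
# range towards light skin is why this style of filter missed dark skin.
# Modern platforms replace this with a trained image classifier.
from PIL import Image

def looks_like_skin(r, g, b):
    # A naive "skin tone" test; note how strongly it leans towards light skin.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

def skin_ratio(path):
    img = Image.open(path).convert("RGB").resize((128, 128))
    pixels = list(img.getdata())
    skin = sum(1 for (r, g, b) in pixels if looks_like_skin(r, g, b))
    return skin / len(pixels)

def crude_nudity_flag(path, threshold=0.30):
    # Flag the image if more than ~30% of its pixels fall in the "skin" range.
    return skin_ratio(path) > threshold
```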
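The report-threshold triage in the user-flagging step could, in outline, look something like this. The threshold of five distinct reporters and the rescan hook are assumptions of mine, not anything the platforms have published.

```python
# A sketch of how user reports might be triaged: re-run the automated checks
# first, and only queue the post for a human once enough distinct users have
# reported it. The threshold of 5 reports is an arbitrary illustrative choice.
from collections import defaultdict

REPORT_THRESHOLD = 5
reports = defaultdict(set)  # post_id -> set of user ids who reported it

def handle_report(post_id, reporter_id, rescan):
    """rescan(post_id) should return 'block', 'allow', or 'unknown' (the automated filters again)."""
    reports[post_id].add(reporter_id)    # count distinct reporters, not raw clicks
    verdict = rescan(post_id)
    if verdict in ("block", "allow"):
        return verdict                   # automation already has an answer
    if len(reports[post_id]) >= REPORT_THRESHOLD:
        return "queue_for_human_review"  # the expensive path, used sparingly
    return "no_action"
```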
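Finally, the feedback loop after a human decision might look roughly like the following, continuing on from the fingerprint sketch above. Every name here (record_human_decision, TRAINING_SET, and so on) is hypothetical.

```python
# A sketch of the feedback loop after human review (reuses dhash, BLACKLIST and
# WHITELIST from the fingerprint sketch above): the verdict updates the
# fingerprint lists so the same image never needs review twice, and the decision
# is kept as labelled data for retraining the classifier.
TRAINING_SET = []  # (image path, human verdict) pairs for the next model retrain

def record_human_decision(path, verdict):
    fp = dhash(path)
    if verdict == "remove":
        BLACKLIST.add(fp)    # future uploads of this image get blocked automatically
    elif verdict == "keep":
        WHITELIST.add(fp)    # future reports of this image get short-circuited
    TRAINING_SET.append((path, verdict))  # feed the decision back into the Machine Learning model
```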
Anyway, after rambling for 900 words about how social media feels like another step towards a boring dystopia, the upshot of all of this is that I am going to be spending some time and money making this website look a little more professional, I will be slowly publishing my back catalogue of images, and I will be spending a bit less time on social media. I will still be publishing some of my photos on Facebook and Instagram (and maybe Flickr), but if those accounts are suspended it is not a world-ending catastrophe for me. Please excuse the disruption while I revamp my website.