But they use that software technology, called PhotoDNA, only to identify images of child sexual exploitation. The database assigns a unique ID to each known image, and when technology firms find a match they are obligated to report it to the National Center for Missing & Exploited Children. (Google, for instance, recently used this technology to tip off law enforcement about a Houston man who had images of child abuse in his Gmail account.)
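PhotoDNA's actual fingerprinting is proprietary and is designed to survive resizing and recompression, so the sketch below illustrates only the matching step: it stands in an ordinary SHA-256 digest for the real perceptual hash and checks it against a hypothetical set of known IDs.

```python
import hashlib

# Hypothetical stand-in for a database of IDs of known abuse images.
# Real systems use robust perceptual hashes like PhotoDNA's, not SHA-256,
# which only matches byte-for-byte identical files.
KNOWN_IMAGE_IDS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def image_id(path):
    """Compute a fingerprint for an image file (SHA-256 as a placeholder)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def should_report(path):
    """True if the image's fingerprint matches a known ID in the database."""
    return image_id(path) in KNOWN_IMAGE_IDS
```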
For the human beings reviewing these images, making judgment calls about which pictures to remove or keep on the site quickly becomes tricky. Del Harvey, Twitter's vice president of trust and safety, noted in a TED talk earlier this year that even messages that look clearly like spam or abuse can turn out to be something different when you take a closer look.
"We don't want to gamble on potentially silencing that crucial speech by classifying it as spam and suspending it," she said. "That means we evaluate hundreds of parameters when looking at account behaviors, and even then, we can still get it wrong and have to reevaluate."
(The full talk is 10 minutes long, but lays out the basics of Twitter's philosophy pretty well.)
While companies continue to debate what they can and should do at the administrative level to stop certain kinds of images, there are some things that individual users can do to take action on their own accounts. On Facebook, for example, you can block users, apps or pages if you don't want to see the content they publish.
Twitter also lets users change their media settings themselves. Users who want to see images that could be considered "sensitive" without clicking through Twitter's warning message can opt to do so. Users can likewise mark their own media as sensitive by default, if they think their pictures could upset others.
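For developers, the same preference surfaces in Twitter's REST API, where tweets can carry a "possibly_sensitive" flag. Here is a minimal, hypothetical sketch of honoring that flag client-side; the tweets are plain dicts in the shape the API returns, and fetching and authentication are left out.

```python
def filter_sensitive(tweets, show_sensitive=False):
    """Return tweets, dropping any flagged possibly_sensitive unless the
    caller has opted in -- mirroring the account-level media setting."""
    if show_sensitive:
        return list(tweets)
    # The flag is omitted on tweets without media or links, so a
    # missing key is treated as "not sensitive."
    return [t for t in tweets if not t.get("possibly_sensitive", False)]


# Example: only the first tweet survives with the default setting.
timeline = [
    {"id": 1, "text": "harmless update"},
    {"id": 2, "text": "graphic image", "possibly_sensitive": True},
]
print(filter_sensitive(timeline))  # -> [{'id': 1, 'text': 'harmless update'}]
```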