Wednesday, 12 August 2020

Facebook bans ALL blackface images as firm cracks down on 'racial stereotypes' - after blaming coronavirus for significant drop in removal of suicide and self-injury posts

Facebook is to ban all blackface images as it cracks down on 'racial stereotypes' being shared across the site.
The social media giant announced yesterday it is introducing restrictions on caricatures of black people as well as dehumanising depictions of Jewish people.
It comes as the company blamed coronavirus for a significant drop in the number of suicide and self-injury posts it has removed in recent months. 
The new plans will largely hit Europe, and see images of Black Pete, a Dutch Christmas character, among those to be taken down, while pictures of English Morris dancers could also face the axe.
Pictures of Morris dancers could be removed from Facebook as part of a blanket ban on images that use blackface and 'stereotyping characteristics'
News of the ban, which comes into force later this month, has already caused a stir in the Netherlands due to the prominence of Black Pete.
Zwarte Piet, as he is known locally, is a sidekick of Sinterklaas, the Dutch version of St. Nicholas, a Santa-like character who brings children gifts in early December.
White people often don blackface makeup, red lipstick and curly black wigs to play Black Pete during street parties honoring Sinterklaas.
The character has been at the center of fierce and increasingly polarised debate in recent years between opponents who decry him as a racist caricature and supporters who defend him as an integral part of a cherished Dutch tradition. 
As a result, some towns and cities have phased out blackface at street parties.
An organisation called Netherlands Is Improving welcomed the news, saying: 'Aug. 11 is a happy day: From today, Black Pete is officially no longer welcome worldwide on Facebook and Instagram.'
Others were less inclined to celebrate. Populist lawmaker Geert Wilders tweeted a photo of a Black Pete shortly after the Facebook announcement accompanied by the text: 'Facebook and Instagram ban images of Zwarte Piet. The totalitarian state of the intolerant nagging left-wing anti-racists is getting closer.'
Images of Black Pete, a Dutch Christmas character used at street parties, are among those set to be removed under the new plans
Facebook is still deciding how its new rules will apply to English Morris dancers, who perform dances based on rhythmic stepping, wear bell pads on their shins, often wield implements such as sticks, swords or handkerchiefs, and have in the past blacked their faces.
However, a troupe leader told the Telegraph the prospect of the axe would not 'make a blind bit of difference' to whether they would continue the custom.  
Monika Bickert, Facebook's rulemaker-in-chief, said: 'Our policy is designed to stop people from using blackface to target or mock black people... [but] there could be circumstances where somebody might happen to be sharing images but they're not doing it for hateful reasons.
'Those are exactly the sorts of nuances, including the examples from the Netherlands and the UK, that we are looking at.'
The Joint Morris Organisation has previously pledged to distance itself from troupes that carry out the practice, which is most common along the border between England and Wales, saying that its history is irrelevant given its 'potential to cause deep hurt'.
Facebook is also looking to ban dehumanising depictions of Jewish people, including images that show them running the world or controlling major institutions such as media networks, the economy or the government.
It comes as the social networking giant has blamed coronavirus for hampering efforts to remove posts about suicide and self-injury from its platforms. 
The company revealed it took action on less material containing such content between April and June because fewer reviewers were working due to Covid-19.
Facebook sent moderators home in March to prevent the spread of the virus but Mark Zuckerberg warned enforcement requiring human intervention could be hit.
The firm says it has since brought 'many reviewers back online from home' and, where it is safe, a 'smaller number into the office' to work on moderation.
In its latest community standards report, Facebook said 911,000 pieces of suicide and self-injury content were actioned, compared to 1.7 million the previous quarter.
Meanwhile on Instagram, steps were taken against 275,000 posts between April and June, compared with 1.3 million between January and March.
Action on media featuring child nudity and sexual exploitation also fell on Instagram, from one million posts to 479,400, the company confirmed.
Facebook estimates that less than 0.05 per cent of views were of content that violated its standards against suicide and self-injury.
'Today's report shows the impact of Covid-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,' the company said.
'With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.
'Despite these decreases, we prioritised and took action on the most harmful content within these categories.
'Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.'

The tech giant's sixth report does suggest its automated technology is working to remove other violating posts, such as hate speech: removals on Facebook rose from 9.6 million pieces in the previous quarter to 22.5 million now.
Much of that material, 94.5 per cent, was detected by artificial intelligence before a user had a chance to report it.
Proactive detection for hate speech on Instagram increased from 45 per cent to 84 per cent, according to the quarterly report.
The data also suggests improvements on terrorism content, with action against 8.7 million pieces on Facebook this time, compared with 6.3 million in the previous quarter.
Facebook says only 0.4 per cent of this was reported by a user, while the vast bulk was picked up and removed automatically by the firm's detection systems.
'We've made progress in combating hate on our apps, but we know we have more to do to ensure everyone feels comfortable using our services,' Facebook said.
'That's why we've established new inclusive teams and task forces, including the Instagram Equity Team and the Facebook Inclusive Product Council, to help us build products that are deliberately fair and inclusive,' the firm explained.
The company said that is also why it is launching a Diversity Advisory Council, which will provide input based on lived experience on a variety of topics and issues.
'We're also updating our policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world.'
Children's charity the NSPCC said Facebook's 'inability to act against harmful content on their platforms is inexcusable'.
'The crisis has exposed how tech firms are unwilling to prioritise the safety of children and instead respond to harm after it's happened rather than design basic safety features into their sites to prevent it in the first place,' Dr Martha Kirby, child safety online policy manager at the NSPCC said.
'This is exactly why Government needs to urgently publish an Online Harms Bill that holds Silicon Valley directors criminally and financially accountable to UK law if they continue to put children at risk.'
Facebook has also revealed it removed more than seven million pieces of harmful coronavirus misinformation from Facebook and Instagram over the same period.