Spotify ‘Hate and Hateful’ Policy Crisis Is About Social Responsibility [Mark Mulligan]

After backlash over the removal of tracks by R. Kelly and XXXTentacion from official playlists, Spotify has been forced to rethink its new ‘hate speech and hateful behavior’ policy. But there are no easy choices. “It would be as wrong for Spotify to opt for the ‘neutral platform’ approach,” writes Mark Mulligan of MIDiA, “as it would be ‘arbitrary censorship’.”

________________________________________________

By Mark Mulligan of MIDiA, from his Music Industry Blog

Spotify has been forced into something of a rethink regarding its hate speech policy. The company announced it was removing from its official playlists music by artists who do not meet its new policy on hate speech and hateful behavior. R. Kelly, who faces allegations of sexual abuse, and XXXTentacion, who is charged with battering a pregnant woman, were two artists whose music was removed. Now Spotify is softening its stance following pushback both externally and internally, including from Troy Carter, who made it known that he was willing to walk away from the company if the policy remained unchanged. Spotify had good intentions but did not execute well. However, this forms part of a much bigger issue: the changing of the guard among media’s gatekeepers.

Facebook has been here before

Back in late 2016, Facebook faced widespread criticism for censoring a historic photograph from the Vietnam War in which a traumatized child is shown running, naked, away from a US napalm attack. Facebook soon backed down, but the episode got to the heart of why the “we’re just a platform” argument from the world’s new media gatekeepers was no longer fit for purpose. Indeed, by the end of the year, Zuckerberg had all but admitted that Facebook was now a media company. The gatekeepers may be changing, with newspaper editors, radio DJs, TV presenters, and music, film and TV critics giving way to platforms and algorithms, but gatekeepers they remain. And gatekeepers have a responsibility.

Social responsibility didn’t disappear with the internet

Part of the founding mythology of the internet was that the old rules no longer apply. Some don’t, but many do. Responsibilities to society still exist. Platforms are never neutral. The code upon which they are built carries the ideological and corporate DNA of its founders, sometimes as unconscious biases, though normally they are anything but unconscious. The new gatekeepers may rely on algorithms more than on human editors, but they still fundamentally have an editorial role to play, as the whole Russian election meddling debacle highlighted. Whether they do so of their own volition or because of legislative intervention, tech companies with media influence have an editorial responsibility. Spotify’s censorship crisis is just one part of this emerging narrative.

Editorial, not censorship

As with all such debates, language can distort the discussion. The term ‘censorship’ conjures up images of Goebbels, but swap it for ‘editorial decisions’ and the issue instantly assumes a different complexion. Spotify was trying to get ahead of the issue, showing it could police itself before there were calls for it to do so. Unfortunately, by making editorial decisions based upon accusations, Spotify made itself vulnerable to the charge of playing judge and jury for artists in countries whose legal systems presume innocence, not guilt. Also, by implementing the policy on a piecemeal rather than an exhaustive basis, it gave itself the appearance of selecting which artists’ misdemeanors were serious enough to act upon. Spotify ran the additional, highly sensitive, risk of appearing to be a largely white company deselecting largely black artists from playlists. Even if neither impression reflected its intent, the appearance of intent was incendiary.

Lyrics can be the decider

Now Spotify is having to rethink its approach. It would be as wrong for Spotify to opt for the ‘neutral platform’ approach as it would be to opt for ‘arbitrary censorship’. An editorial role is necessary. In just the same way that radio broadcasters are expected to filter out hate speech, tech companies have a proactive role to play. A safer route for Spotify, at least in the near term, would be to work with its Echo Nest division and a lyrics provider such as LyricFind to build technology, moderated by humans, that can identify hate speech within lyrics and song titles. It would not be an easy task, but it would certainly be an invaluable one, and one that would give Spotify a clear moral leadership role. In today’s world of media industry misogyny and mass shootings, there is no place for songs that incite hatred, racism, sexism or homophobia, or that glorify gun violence. Spotify can take the lead in ensuring that such songs do not get pushed to listeners, and thus start to break the cycle of hatred.
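To make that division of labour concrete, here is a minimal Python sketch of the kind of flag-then-review pipeline described above. Everything in it is an illustrative assumption: the pattern list, the Track and ReviewItem structures and the review queue are hypothetical, and do not represent any real Spotify, Echo Nest or LyricFind API.

    # Illustrative sketch only: an automated pass that FLAGS tracks for
    # human review rather than removing them. All names and patterns here
    # are hypothetical placeholders, not a real Spotify/Echo Nest/LyricFind API.
    import re
    from dataclasses import dataclass

    # Hypothetical, heavily simplified patterns; a real system would need
    # context-aware models and policy-team curation, not a keyword list.
    FLAG_PATTERNS = [
        re.compile(r"\bkill\s+all\s+\w+\b", re.IGNORECASE),
        re.compile(r"\bslur_placeholder\b", re.IGNORECASE),
    ]

    @dataclass
    class Track:
        title: str
        artist: str
        lyrics: str  # e.g. text licensed from a lyrics provider

    @dataclass
    class ReviewItem:
        track: Track
        matched: list  # which patterns fired, shown to the human moderator

    def screen(track: Track):
        """Automated pass: flag, never remove. Returns a ReviewItem or None."""
        text = f"{track.title}\n{track.lyrics}"
        hits = [p.pattern for p in FLAG_PATTERNS if p.search(text)]
        return ReviewItem(track, hits) if hits else None

    def build_review_queue(catalog):
        """Everything flagged here goes to human moderators for the final call."""
        return [item for t in catalog if (item := screen(t)) is not None]

The design point, as argued above, is that the automated pass only flags; the decision to demote or remove a track stays with human moderators.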
