12 June 2019 | By Michael Thaidigsmann
Clamping down on hate speech
Pressure on social media platforms to delete, or at least block, hate speech and defamatory statements is mounting, both from courts and governments. An important European court ruling is due soon. Tech giants are reacting with stricter user guidelines.
In a case before the European Court of Justice (ECJ) involving an Austrian politician who has sought an injunction against Facebook, Advocate General Maciej Szpunar has now published his opinion, which, although not binding on the judges, they often follow.
The case arose in April 2016. A Facebook user in Austria made a disparaging comment about then Green Party leader Eva Glawischnig, calling her a “corrupt oaf”, a member of a “fascist party” and a “lousy traitor of the people.”
After Glawischnig asked a local court to order the deletion of the post, Facebook blocked local access to the user’s comment but did not delete it. Nor did it block access for users based outside Austria.
The country’s Supreme Court was ultimately seised of the matter, and it asked the ECJ to determine whether under EU law a court order to remove content can be extended beyond the jurisdiction it covers, and whether identically worded statements, or largely similar language, posted either by the same user or by other users, also fall within the scope of an injunction.
According to the E-Commerce Directive of the European Union, an internet service provider is, in principle, not liable for the information stored on its servers by third parties if it is not aware of the illegal nature of that information. However, once made aware of its illegality, a provider must delete that information or block access to it, for instance only in a particular territory (“geo-blocking”).
The directive also provides that a host provider cannot be placed under a general obligation to monitor the information which it stores, nor a general obligation actively to seek facts or circumstances indicating illegal activity.
Because the public interest in access to information varies from one country to another, however, Advocate General Szpunar concluded that “there is a danger that [removal] will prevent persons established in states other than that of the court seized from having access to the information.” A fair balance therefore needed to be struck between the fundamental rights involved and the protection of freedom of expression, he wrote.
Advocate general: ‘Identical content must also be removed or blocked’
In the context of an injunction, an internet platform may also be ordered to seek and identify information equivalent to that characterised as illegal, but only among the information disseminated by the user who posted that illegal information. Szpunar concluded that no sophisticated techniques were required by companies such as Facebook to identify identical or equivalent content posted by the same user: “In view of the ease with which information can be reproduced in the internet environment, this approach is necessary in order to ensure the effective protection of privacy and personality rights,” he wrote.
A court adjudicating on the removal of such equivalent information had to ensure that the effects of its injunction were clear, precise, foreseeable and proportional, he held in his opinion.
However, the advocate general recommended that the EU’s top judges “limit the extraterritorial effects of its injunctions concerning harm to private life and personality rights” because “the implementation of a removal obligation should not go beyond what is necessary to achieve the protection of the injured person.”
In Szpunar’s opinion, an obligation to identify equivalent information originating from any other user of a platform would go too far and be both too onerous on the companies and too restrictive with respect to freedom of expression and information.
As the E-Commerce Directive did not regulate the territorial scope of an obligation to remove information disseminated via a social network, it did not preclude a host provider from being ordered to remove such information worldwide, Szpunar concluded.
Both the question of the extraterritorial effects of an injunction imposing a removal obligation and the question of the territorial scope of such an obligation should be analysed by reference to public and private international law.
A judgment by the ECJ is expected in the coming months.
YouTube tightens guidelines
Meanwhile, the world’s leading video platform YouTube, a subsidiary of Google, announced changes to its user guidelines on hate speech, which it called “one of the most complex and constantly evolving areas we deal with.” YouTube said its 2017 policy tightening the rules on racist videos had led to an 80 percent reduction in views of such content.
On 5 June 2019, the platform announced that, going forward, videos alleging that a group was superior in order to justify discrimination, segregation or exclusion based on age, gender, race, caste, religion, sexual orientation or veteran status would be banned. This would include videos that promote or glorify Nazi ideology. Content denying well-documented events such as the Holocaust would no longer be tolerated and would also be removed.
“As always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events,” YouTube announced. In addition, it will delete videos that are considered “borderline content” or “harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat.”
The company said its systems were getting smarter about what types of videos should get flagged and removed. “We’ll also start raising up more authoritative content in recommendations, building on the changes we made to news last year. For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the ‘watch next’ panel.”
The Google subsidiary also pledged to apply stricter criteria to user channels which “repeatedly brush up against our hate speech policies”. These would be suspended from the YouTube Partner Program and would no longer be able to run ads on their channels.
Voluntary codes of conduct
In May 2019, the world’s leading tech companies Amazon, Facebook, Google, Microsoft and Twitter pledged to intensify work with governments, NGOs and each other to combat violent extremism on the internet and to establish “crisis protocols” for responding to events such as mass shootings so information is shared and acted upon faster. The call to action was unveiled as part of a meeting of government and industry leaders in Paris.
In 2016, the tech giants signed a voluntary code of conduct in Brussels to tackle illegal hate speech online. The European Commission is monitoring their compliance on a regular basis. In February 2019, the fourth evaluation was published. It concluded that companies are now assessing 89 percent of flagged content within 24 hours while 72 percent of the content deemed to be illegal hate speech is removed. This is compared to 40 percent and 28 percent respectively when the code was introduced.
Germany went a step further and enacted the controversial Network Enforcement Act (“NetzDG”) in January 2018, which obliges social media and other larger online platforms to delete illegal content promptly from their websites or face fines of up to €50 million. While the law caused enormous outrage among the internet community, it has so far not led to significant change. In the first months of 2018, Facebook received only 600 takedown requests under the law, compared with 2.5 million content items the company removed for violating its own community rules.
In April, the government of then Chancellor Sebastian Kurz in Austria presented a bill which, if adopted, would require larger social media companies to verify the identity of their users in order to facilitate the prosecution of illegal hate speech. With the downfall of the Kurz government, it remains unclear whether that measure will proceed to the legislature and end up on the statute books.
Nevertheless, attempts to fight hate speech, defamatory statements and “fake news” on the internet through legislation and stricter law enforcement are likely to increase in Europe in the coming months and years.