In our latest issue, Gill Grassie's editorial reflects on the applicability of the online IP infringement framework to the policing of extreme comment and hate speech by social media platforms. What valuable lessons can the experience of enforcing IP online teach legislators? How relevant are they in the field of hate speech? Do technological advancements offer potential solutions? Should law and policy intervene in this context at all? Gill touches on all of these questions, suggesting that the online IP experience should inform policy in this new field.
Control of content on social media - technology as judge, jury and executioner?
Gill Grassie
Can the web regime for IP protection teach us how best to deal with extremist and hate crime materials posted online?
An enquiry by the UK House of Commons Home Affairs Committee at the end of April 2017 accused social media companies of putting “profits before safety” when it comes to extreme and hate crime materials posted online.
MPs called for content platforms such as Facebook, YouTube, Twitter and Google to actively remove content that may be considered extremist or hateful. This may not be as simple to achieve as it first appears. One of the principal challenges is deciding who should be responsible for identifying the posts to be removed and what parameters they should employ. And how can this be achieved without eroding freedom of speech on the web?
Forced removal of content by such platforms is not an unfamiliar issue in the IP sphere. Indeed, the slow pace at which extremist and hate crime material is removed by content platforms has been contrasted with the comparatively swift action taken to remove materials which infringe IP rights. However, removal in the latter case usually takes place only after the platform becomes aware of an alleged infringement further to a notice from the rightholder concerned. There is no active monitoring obligation.
Infringing content made available online by users has plagued the holders of IP rights for many years and has been a constant theme in courtrooms, in cases involving third-party platforms such as eBay, Napster, Facebook, Twitter and Google. As a result, there is now at least some clarity around the circumstances in which intermediaries like ISPs and platform providers are required to remove, and/or may be liable for, content uploaded by users to their platforms.
When it comes to online protection of IP, the main legal framework lies in several pieces of UK legislation, including the Electronic Commerce (EC Directive) Regulations 2002 (“the Regulations”), which implement the e-Commerce Directive. The Regulations place certain limitations on a provider’s liability for unlawful activities taking place on its platform. In essence, caching, hosting or acting as a mere conduit will provide a safe harbour defence. However, upon obtaining “actual knowledge” or awareness of relevant unlawful activities, the platform provider must act expeditiously to remove or disable access to the information concerned or face potential liability. The “actual knowledge” requirement stops short of the active monitoring that MPs are now calling for in relation to extremist speech online.
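In schematic terms, the hosting defence turns on two facts: whether the provider has actual knowledge of the unlawful content, and whether it has since removed or disabled access to it. The Python sketch below is a purely illustrative model of that test; the names are hypothetical, and it deliberately ignores the “expeditiously” timing element and every other legal nuance.

```python
# Toy model of the hosting safe-harbour logic described above.
# Illustrative only: names are hypothetical and no legal nuance is captured.

from dataclasses import dataclass, field


@dataclass
class HostedItem:
    item_id: str
    live: bool = True                             # content currently accessible
    notices: list = field(default_factory=list)   # takedown notices received


def receive_notice(item: HostedItem, notice: str) -> None:
    """A rightholder notice gives the host 'actual knowledge' of alleged unlawful content."""
    item.notices.append(notice)


def exposed_to_liability(item: HostedItem) -> bool:
    """Safe harbour holds while the provider lacks actual knowledge,
    or once it has removed/disabled access after being notified."""
    has_actual_knowledge = bool(item.notices)
    has_acted = not item.live
    return has_actual_knowledge and not has_acted


post = HostedItem("user-upload-42")
print(exposed_to_liability(post))   # False: no notice, safe harbour applies
receive_notice(post, "Notice of alleged copyright infringement")
print(exposed_to_liability(post))   # True: actual knowledge, content still live
post.live = False                   # provider removes or disables access
print(exposed_to_liability(post))   # False: the provider acted on the notice
```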
Can this kind of approach to IP infringement also be considered appropriate in the context of extreme comment or hate speech online? The UK Government appears not to think so. However, it could face difficult challenges if it tries to impose monitoring obligations on these platforms. Is it up to the platform itself to decide whether the content is sufficiently offensive to justify its removal? What parameters are to be applied? Can technical measures be meaningfully deployed to identify such material?
In some ways, deciding what constitutes an IP infringement might be considered a less subjective, and possibly less emotive, exercise than judging what is acceptable speech online. If control is to rest with the ISPs, they will hold significant power to decide whether material is extreme and should be removed, which is arguably a wholly subjective decision. If the IP model is to be followed, with a safe harbour-type defence and actual knowledge being required, then at least there might be some scope for debate on the issue. The notifying third parties might, for instance, be required to set out some objective reasons for the removal they demand.
Can technology resolve these issues? There are already examples of technical solutions, such as YouTube’s Content ID, an automated system that scans material uploaded to the site for IP infringement by comparing it against a database of reference works registered by rights holders. The next challenge may be how these types of systems can be harnessed by online platform providers to address extreme and hate crime content. Again, the dilemma for policy- and law-makers may be the extent to which they are prepared to cede control over content to technology companies, which will become judge, jury and executioner.
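At their core, systems of this kind reduce each upload to a set of compact fingerprints and compare them against fingerprints registered in advance by rights holders. The sketch below is a toy illustration of that idea in Python, not a description of how Content ID actually works: the names, parameters and naive hashing scheme are all assumptions, and real systems use robust perceptual fingerprints that survive re-encoding, cropping and time-shifting.

```python
# Minimal sketch of fingerprint-style content matching (illustrative only).

import hashlib


def fingerprint(data: bytes, window: int = 32, step: int = 16) -> set:
    """Hash overlapping windows of a byte stream into a set of 'shingles'."""
    return {
        hashlib.sha1(data[i:i + window]).hexdigest()
        for i in range(0, max(len(data) - window + 1, 1), step)
    }


def match_score(upload: bytes, reference: set) -> float:
    """Fraction of the upload's shingles that also appear in the reference."""
    shingles = fingerprint(upload)
    return len(shingles & reference) / len(shingles) if shingles else 0.0


# Hypothetical registry: rights holders submit reference material in advance,
# and each new upload is scored against it before or shortly after publication.
reference_content = b"Chorus and verse of a registered recording. " * 4
reference_db = {"registered_work_1": fingerprint(reference_content)}

upload = reference_content + b"plus commentary added by the uploading user."
for work_id, ref in reference_db.items():
    score = match_score(upload, ref)
    if score > 0.5:  # illustrative threshold, not a real policy
        print(f"Possible match with {work_id}: {score:.0%} overlap")
```

Even this toy version shows where the policy difficulty lies: the threshold at which a match triggers removal is a human choice, not a technical one.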
In addition, it is important to address the question of who should bear the cost of monitoring and removal. The vexed question of who should pay has been revisited in the context of IP blocking injunction cases, in which rights holders have successfully obtained injunctions against ISPs (though not specifically social media platforms) to block access to websites hosting infringing content. In Cartier International AG & Ors v British Sky Broadcasting Ltd & Ors [2016] EWCA Civ 658 the Court of Appeal concluded that it is entirely reasonable to expect ISPs to pay the costs associated with implementing mechanisms to block access to sites where infringing content has been made available. In the court’s view, the relevant safe harbour immunities from infringement support and benefit the businesses of the intermediaries, so the cost of implementing the order could be regarded as just another overhead of ISPs carrying on their business. While this case was limited to the much narrower context of the technical measures needed to comply with an IP blocking injunction - and remains the subject of an appeal to the UK Supreme Court - it may still offer some insight into the likely approach of the courts.
As pressure mounts on social media platforms to take rapid and effective action to remove inappropriate material, stakeholders will have to consider how comfortable they are with technology companies providing the solutions to these issues. Ultimately, when should law and policy intervene? Clearly there will be challenges for any legislator in getting the balance right, and the IP experience to date might usefully be borne in mind.
The author would like to thank Anoop Joshi for his valuable assistance in preparing this editorial.