This article was first published on Lexis®PSL IP & IT on 31 October 2016.

IP & IT analysis: A number of technology companies recently formed a ‘Partnership on AI’ which aims to deal
with questions of ethics and best practice surrounding developments in artificial intelligence (AI). Piers
Strickland, partner at Waterfront Solicitors, explains the new alliance and considers some of the issues of
regulation in this area.

What are the details of the partnership and how does it seek to promote collaboration and innovation in artificial intelligence (AI) technologies?
The partnership is at a nascent stage, so many of the details have yet to be worked out. The overall high-level objectives of the partnership, as stated on its website, are ‘to study and formulate best practices on AI technologies, to advance the public’s understanding of AI and to serve as an open platform for discussion and engagement about AI and its influences on people and society’.
To promote collaboration, information is to be shared among members under an open licence. The organisation’s founding members will also contribute financial and research resources. The leadership of the partnership is intended to be heterogeneous and not to be dominated by one specific type of interest group. To that end, it promises equal representation of corporate and non-corporate members on the board. It is currently seeking to involve academics and non-profit research groups, with further details promised soon.

What are the best ethical practice standards they seek to espouse?
The partnership does not set out any specific ethical standards at this stage. However, it recognises that AI is already starting to pose ethical issues for society, and those issues will inevitably become starker as AI develops.

What significance do you see best ethical practice and ‘AI advisory boards’ having in the future?
Whether the partnership will be effective in reaching its goals remains to be seen. It looks like an attempt by the industry to get ahead of a problem of which it is well aware.
Historically, the innovators of new technologies have also acted as their loudest cheerleaders, while other sections of society are left to raise concerns and push for legal protection and regulatory oversight. What is striking about AI is that some of its innovators are the ones issuing the sternest existential warnings. Elon Musk, one of the funders of OpenAI (a non-profit AI research company), was quoted in 2014 as saying:
‘I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.’ 

Other tech leaders, such as Microsoft’s Bill Gates, have also raised concerns about powerful AI.
In these circumstances, the partnership can be seen as an attempt by industry to collaborate in order to reduce the risk of something going wrong. After all, if the starkest existential warnings are valid, all of the major players have a shared interest in not making mistakes. Furthermore, while the general public is perhaps only dimly aware of the extent to which today’s relatively low-powered AI already plays a fundamental part in day-to-day life, there is likely to come a point (perhaps a specific accident or disaster) when public awareness is raised and concerns are voiced.
If industry manages to self-regulate effectively, by developing widely adopted ethical best practices at the outset, the risk of the harshest and most restrictive regulation being imposed on it should be lessened.

Are there any limitations to this partnership—the absence, at the time of writing, of Apple and Elon Musk for instance—and what are the legal barriers to sharing best practice and research?
Apple has a long history of doing things alone, so its absence is no surprise. As mentioned above, Elon Musk is already involved in OpenAI, which may explain why he is not currently involved with the partnership.
As for the legal and practical barriers to sharing best practice and research: in such a competitive field, with many large tech companies spending ever-increasing portions of their budgets on AI, a certain nervousness and reluctance about ‘over-sharing’ proprietary confidential information, which could then be copied by competitors, is to be expected.
To the extent that certain AI technologies have been patented (or are patent-pending), there should be few barriers to sharing that knowledge. The essence of the patent system is to mandate publication of sufficient details of an invention in return for a time-limited monopoly from the state, so those inventions will already have been ‘shared’ through publication on the relevant patent registers.
Patents are, however, generally less effective at protecting computer software than novel hardware, with many jurisdictions taking a very sceptical approach to granting software patent monopolies.
Software developers are often unable to obtain patent protection and therefore have to rely on a combination of copyright and confidential information to protect their proprietary software. Such rights can be difficult to enforce. In the context of a loose information-sharing partnership, this is not an ideal situation for encouraging liberal knowledge sharing.