The adoption of AI and related technological advances continues unabated, with rapid, far-reaching uptake across the globe. This growth, however, strains existing systems, as current rules and laws struggle to address emerging challenges. Governments worldwide are therefore moving quickly to ensure that existing laws, regulations, and legal constructs remain relevant in the face of technological change and can address the new, evolving challenges posed by AI.[1] In the previous article, we discussed the motivation and basics of AI. Building on that, let us now examine the legality of AI and its status as a person under constitutional law and patent law.
Constitutional Law and AI
The discourse on ethics for AI has already identified the many challenges that AI poses to fundamental rights and the rule of law. It is clear from this important body of work that AI cannot and will not serve the public good without strong rules in place. The potential capabilities of AI simply forbid repeating the risk-taking that led to lawlessness in the emergent internet age, as these capabilities can cause major and irreversible damage to society.
In a shrewd move, the interested corporations have begun financing multiple initiatives on the ethics of AI, thereby, while professing the best intentions, effectively delaying the debate on, and work towards, law for AI. Any effort to replace or avoid necessary AI law through ethics must be rejected, as it effectively cuts out the democratic process. It is also clear that the many conflicts of interest between corporations and the public at large regarding the development and deployment of AI cannot be resolved by unenforceable ethics codes or self-regulation.[2]
The design of AI for autonomous development, combined with its very widespread use, could cause far more catastrophic impacts than the unregulated internet has already produced. There is therefore a strong argument, based solely on the experience with the internet and on the potential capabilities and widespread use of AI, in favour of a preventative legal framework setting down the basic rules necessary to safeguard the public interest in the development and deployment of AI.[3]
Time and again, society has learned that law, and not the absence of law, relating to critical technology serves the interests of the general public. The work on ethics for AI has not only identified numerous important challenges AI poses for the rule of law, democracy, and individual rights, but has also produced numerous catalogues of ethics rules for AI and autonomous systems. The German Government, for instance, has appointed an independent committee to advise on data ethics.
There is thus no shortage of proposed ethical principles for AI, and the High-Level Group on AI, created by the European Commission in its Communication on Artificial Intelligence, has worked through this material and developed a catalogue of proposals by the end of 2018. With all this material now on the table, it is time to move on to the crucial question in a democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which must be addressed by rules that are enforceable and backed by the democratic process.
Another important factor in answering this question is whether, after the experience with the lawless internet, our democracies can once again afford the risk of a new, ubiquitous, and decisive technology that is unregulated and thus likely to produce substantial negative impacts, as the internet did before it.[4]
Unlike the internet, AI is not, from the outset, an infant innovation driven mainly by academics and idealists. It is largely developed and deployed under the control of the most powerful internet-technology corporations, and these corporations have already shown that they cannot be trusted to pursue the public interest at scale without the law, and its rigorous enforcement, setting boundaries and even giving direction and orientation to innovation in the public interest. In fact, some representatives of these corporations appear to have recently reached the same conclusion and have themselves called for legislation on AI.
Binding law also has practical advantages for dominant players: they are better positioned than others to influence its content, and binding law may allow them to keep free-riding competitors and new market entrants in check through a level playing field of common binding rules that are properly enforced against all.
Patent Law and AI: Inventorship/Ownership
Patenting an AI machine raises questions of inventorship and ownership. Patent systems worldwide recognise only individuals as inventors, not companies or machines, and inventorship is determined by conception. The use of AI, particularly deep machine learning or self-evolving, self-coding AI, raises the question of who (or what) conceived of the invention and should thus be named as an inventor.[5]
Indeed, AI has already advanced to the point where the AI itself generates new inventions, rather than a human programmer or developer. Recently, both Google and Facebook saw their respective AI systems develop new languages to perform assigned tasks, abandoning known human languages in favour of a more efficient means of communication. As the use of AI grows in medicine and the life sciences, it is increasingly likely that the AI will be the entity taking the inventive step, drawing new conclusions between the observed and the unknown, and creating new programming to further identify and exploit those connections.
As AI continues to advance, the PTO (Patent and Trademark Office) will receive more patent applications in which AI could be considered the inventor, or at least a co-inventor. The PTO and the courts will need to decide whether the present Patent Act encompasses computer-based inventors. Some have already advocated that computers should qualify as legal inventors; others have contended that AI will soon displace humans from the inventive process altogether, and that no patent protection should be granted unless a human provides a material contribution to an invention. Of note, in copyright law, regulations bar copyright protection for works produced solely by a machine "without any creative input or intervention from a human author." It remains to be seen whether the PTO will adopt this strict requirement of human intervention or collaboration.[6]
If the PTO and the courts establish that patent protection will not be granted to an AI, then who among the humans responsible for the AI's development should be considered an inventor? The list of possible human inventors includes the AI software and hardware developers, the medical professionals or experts who provided the data set with known values or otherwise contributed to the development of the AI, and/or those who reviewed the AI's results and recognised that an invention had been made.
The predictability of the inventive concept may also be a factor. If a programmer develops an AI with a certain goal in mind, and it was predictable that the AI would generate the result, then that person likely conceived the invention, using the AI merely as a tool to reduce the idea to practice. If the result was not predictable, the question remains whether it is sufficient for inventorship that a person recognised the significance of the result and identified it as novel and patentable.
Similarly, questions arise over the ownership of medical inventions generated by the AI itself. Patent ownership often turns on the question of inventorship and will thus be equally complicated when AI develops its own code and conceives its own inventions. One approach would be to designate the AI inventor as the first owner, requiring assignment and licensing of all inventions. Another would be to permit the computer's owner or the algorithm's owner to be the first owner, separating inventorship from ownership from the start.
Given that AI can continue to advance after its initial programming, questions of inventorship and ownership may have to be answered years after the system was first programmed. Development, assignment, and employment contracts will have to account for this possibility of continued, ongoing AI invention and the ownership it entails. In the future, as humans increasingly work alongside AI, the challenge is to anticipate any negative health and safety consequences, assess the risks, and share this knowledge for the benefit of the world at large.
[1] Jack Balkin, Rebecca Crootof, Bethany Hill, Anat Lior, & George Wang, Law and Artificial Intelligence, (September 29, 2020, 09:17 PM)
[2] Intelligent to a Fault: When AI Screws Up, You Might Still Be to Blame, (September 29, 2020, 09:23 PM), https://www.scientificamerican.com/article/intelligent-to-a-fault-when-ai-screws-up-you-might-still-be-to-blame1/
[3] ibid.
[4] James B. Kobak Jr., Artificial Intelligence and Constitutional Law, (September 29, 2020, 09:28 PM).
[5] Artificial Intelligence Collides with Patent Law, (September 29, 2020, 09:34 PM), http://www3.weforum.org/docs/WEF_48540_WP_End_of_Innovation_Protecting_Patent_Law.pdf
[6] Patenting Artificial Intelligence: Issues of Obviousness, Inventorship, and Patent Eligibility, (September 29, 2020, 09:38 PM), https://www.finnegan.com/images/content/1/9/v2/197825/PUBLISHED-The-Journal-of-Robotics-Artificial-Intelligence-L.pdf