Legal Personhood for Artificial Intelligences
There are innumerable examples of other ways in which information technology has caused changes in the existing legislative structures. The law is naturally elastic, and can be expanded or amended to adapt to the new circumstances created by technological advancement. The continued development of artificial intelligence, however, may challenge the expansive character of the law because it presents an entirely novel situation.
They kept hooking hardware into him – decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.
― The Moon is a Harsh Mistress, Robert A. Heinlein
At Google I/O, the company's annual developer conference, Google revealed its roadmap for highly intelligent conversational AI and a bot-powered platform. As artificial intelligence disrupts how we live, redefining how we interact with present and future technology by automating things in new ways, it seems inevitable that we will all have to imbibe the gospel of the automated life. One step into that life is to unify the scope of current technological advancements into a coherent framework of thought by exploring how existing law applies to the different sets of legal rights implicated by artificial intelligence.
Artificial intelligence may generally be defined as the intelligence possessed by machines, or by the software used to operate them. The term also refers to an academic field of study, a branch of computer science. The basic premise of this field is that scientists can engineer intelligent agents capable of making accurate perceptions of their environment and taking correct actions based on those perceptions. The discipline explores the possibility of passing on to machines traits that human beings possess as intelligent beings: knowledge, reasoning, the ability to learn and plan, perception, the manipulation of objects and communication through language. As an academic field it is interdisciplinary, combining sciences such as mathematics, computer science and neuroscience with disciplines such as linguistics, psychology and philosophy. Professionals involved in the development of artificial intelligence use a variety of tools to get machines to simulate characteristics of intelligence otherwise found only in humans.
But artificial intelligence only follows the lead of the already omnipresent challenges and changes to existing legal frameworks. The twenty-first century is undoubtedly the age of information and technology. Exciting scientific breakthroughs continue to arrive as innovators work to create better, more intelligent and more energy-efficient machines. Rapid development in information technology has posed challenges to several areas of law, both domestically and internationally. Many of these challenges have been discussed at length and continue to be addressed through reforms of existing laws.
The trend towards reform of law to keep up with the growth of technology can also be illustrated by observing the use of social media to generate content. As social media has continued to grow and influence the world, international media law has recognized citizen journalism. The traditional role of journalists has been to generate and disseminate information. As the world’s population has gained increased access to smart devices, ordinary people have been able to capture breaking stories that are then uploaded to the internet through several platforms. This has eroded the sharp distinction that previously existed between professional journalists and ordinary citizens, as the internet provides alternatives to traditional news media sources.
Artificial intelligence, however, may challenge the law's expansive character because it presents an entirely novel situation. To begin with, artificial intelligence raises philosophical questions concerning the nature of the human mind. These questions are connected to the legal and ethical issues of creating machines programmed to possess qualities that are innate and unique to human beings. If machines can be built to behave like humans, then they must be accorded some form of legal personality, similar to that which humans have. At the very least, the law must make provision for the changes that advanced artificial intelligence will cause in society through the introduction of a new species capable of rational, logical thought. Deriving general guidelines from the case law of the past should help lawmakers close the gap before the technological singularity arrives.
Legal personality endows its subjects with the capacity to have rights and obligations before the law. Without legal personality, there is no legal standing to conduct any binding transactions both domestically and internationally. Legal personality is divided into two categories. Human beings are regarded as natural or physical persons. The second category encompasses non-living legal subjects who are artificial but nonetheless treated as persons by the law. This is a fundamental concept in corporate law and international law. Corporations, states and international legal organizations are treated as persons before the law and are known as juridical persons. Without legal personality, there can be no basis upon which legal rights and duties can be established.
Natural persons have a wide array of rights that are recognized and protected by law. Civil and political rights protect an individual's freedoms of expression, assembly, information, property ownership and self-determination. Social and economic rights acknowledge the individual's fundamental needs for a dignified and productive life, including the rights to education, healthcare, adequate food, and decent housing and shelter. As artificial intelligence continues to develop and smarter machines are produced, it may become necessary to grant these machines legal personality.
This may seem like far-fetched science fiction, but it is closer to reality than the general population is aware. Computer scientists are at the front line of designing cutting-edge software and advanced robots that could revolutionize the way humans live. Just as Turing's machine accomplished feats that were impossible for human mathematicians, scientists and cryptologists during World War II, the robots of the future will be able to think and act autonomously. One positive implication of an increased capacity to produce artificial intelligence is the development of powerful machines that could solve many of the problems that continue to hinder human progress, such as disease, hunger, adverse weather and aging. The science of artificial intelligence would make it possible to program these machines to provide solutions to human problems, and their superior abilities would let them find those solutions in years rather than decades or centuries.
The current legal framework provides no underlying definition of what determines whether a given entity acquires legal rights, nor does it yet distinguish between the strong and weak forms of artificial intelligence that philosophy describes.
Weak artificial intelligence merely provides a tool for enhancing human technological abilities. A running application with artificial intelligence aspects, such as Siri, represents only a simulation of a cognitive process; it does not constitute a cognitive process itself. Strong artificial intelligence, on the other hand, suggests that a software application could in principle be designed to become aware of itself, become intelligent, understand, perceive the world, and present the cognitive states associated with the human mind.
The prospects for the development and use of artificial intelligence are exciting, but this narrative would be incomplete without mention of the possible dangers as well. Humans may retain some level of remote control, but the possibility that these created objects could rise to positions of dominance over human beings is certainly a great concern. With the use of machines and the continual improvement of existing technology, some scientists are convinced that it is only a matter of time before artificial intelligence surpasses human intelligence.
Secondly, ethicists and philosophers have questioned whether it is sound to pass the innate characteristics of human beings on to machines if this could ultimately mean that the human race becomes subject to those machines. Increased use of artificial intelligence may also dehumanize society, as functions previously carried out by people become mechanized. In the past, mechanization has resulted in the loss of jobs, since manpower is no longer required when machines can do the work. Reflection on history reveals that machines have helped humans make work easier, but the mere existence of machines has never delivered an idyllic existence.
Lastly, if this advanced software should fall into the hands of criminals, terrorist organizations or states set against peace and non-violence, the consequences would be dire. Criminal organizations could expand dangerous networks across the world using technological tools. Machines could be trained to kill or maim victims. Criminals could remotely control machines to commit crimes in different geographical areas. Software could be programmed to steal sensitive private information and facilitate corporate espionage.
The "singularity" is a term first coined by Vernor Vinge to describe a theoretical point at which machines created by humans develop intelligence equal or superior to the human mind and end the era of human dominance. The idea rests on the exponential growth of computing power, described by the law of accelerating returns, combined with a growing human understanding of the brain's complexity.
As highlighted earlier, strong artificial intelligence that matches or surpasses human intelligence has not yet been developed, although its development has been envisioned. Strong artificial intelligence is a prominent theme in science fiction movies, probably because the notion of a supercomputer able to outsmart humans is so compelling. In the meantime, before that science fiction dream can become reality, weak artificial intelligence has quietly become a commonplace part of everyday life. Search engines and smartphone apps are the most common examples. These programs are simple in design and mimic narrow aspects of human intelligence: Google, for instance, searches the web using key words or phrases entered by the user. The scenario of dominance by artificial intelligence seems a long way off from the current status quo. However, the launch of chatbots points toward the direction artificial intelligence will take in the near future using weak artificial intelligence.
Chatbots are the next link in the evolutionary chain of virtual personal assistants such as Siri. Siri is a shortened version of the Scandinavian name Sigrid, which means beauty or victory. It is a virtual personal assistant able to mimic human elements of interaction as it carries out its duties. The program's speech function lets it reply to queries and take audio instructions, so the user need not type commands. Siri can decode a verbal message, understand the instructions given and act on them. It can provide information on request, send text messages, organize personal schedules, book appointments and take note of important meetings on behalf of its user. Another impressive feature is its ability to collect information about the user: as more instructions are given, Siri stores the information and uses it to refine the services it offers. The excitement that greeted Siri's successful launch in the mass market is easy to imagine. After Siri came the chatbots. Chatbots are a type of conversational agent: software designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. The technology may be considered weak artificial intelligence, but the abilities these programs demonstrate offer a glimpse of what the future holds for artificial intelligence development. For legal regulators, the features of virtual personal assistants demand that existing structures be reviewed to accommodate the novel circumstances their use has introduced. As more programs like Siri continue to be commercialized, these legal grey areas will feature more often in mainstream debate. Intellectual property law and liability law will probably be the areas most affected by consumer uptake of chatbots.
Intellectual property law creates ownership rights for creators and inventors to protect their interests in the works they create. Copyright law in particular protects artistic creations by controlling the means by which they are distributed. Copyright owners are then able to use their artistic works to earn an income. Anyone else who wants to deal with the creative works, for profit or personal use, must get authorization from the copyright owner. Persons who infringe copyright are liable to civil suits, arrest and fines. In the case of chatbots, the ownership of the sounds produced by the program has not been clearly defined. It is quite likely that in the near future these sounds will become a lucrative form of creative work, and when that happens it will be imperative that the law define who owns them. Users can employ a chatbot's features to mix different sounds, including works protected by copyright, into new sounds. Here the law is unclear on whether such content would be considered new content or attributed to the original producers of the sound.
Another important question that would have to be addressed would be the issue of ownership between the creators of artificial intelligence programs, the users of such programs and those who utilize the output produced by the programs. A case could be made that the creators of the program are the original authors and are entitled to copyright the works that are produced using such a program. As artificial intelligence gains popularity within the society and more people have access to machines and programs like Siri, it is inevitable that conflicts of ownership will arise as different people battle to be recognized as the owner of the works produced. From the perspective of intellectual property, artificial intelligence cannot be left within the public domain. Due to its innate value and its capacity to generate new content, there will definitely be ownership wrangles. The law therefore needs to provide clarity and guidance on who has the right to claim ownership.
Law enforcement agents must constantly innovate in order to investigate crime successfully. Although the internet has made it easier to commit certain crimes, programs such as 'Sweetie', an avatar run by the Netherlands-based charity Terre des Hommes, illustrate how artificial intelligence can help to solve crime. The charity developed the Sweetie avatar to help investigate sex tourists who target children online. The offenders in such crimes engage in sexual acts with children from developing countries, luring them into the illicit practice with promises of payment for their participation. After making contact and confirming that the children are indeed underage, the offenders request that the children perform sexual acts in front of cameras, or perform sexual acts themselves and request that the children watch.
The offenders prey on vulnerable children, who often come from poor developing countries and who are physically and mentally exploited to gratify offenders from wealthy Western countries. In October 2014, the Sweetie project secured its first conviction of a sex predator: an Australian national named Scott Robert Hansen, who admitted that he had sent nude images of himself performing obscene acts to Sweetie. Hansen also pleaded guilty to possession of child pornography. Both offenses violated previous orders issued against him as a repeat sexual offender. Sweetie is an application able to mimic the movements of a real ten-year-old girl. The 3D model is very lifelike, and the application allows for natural interactions such as typing during chats and nodding in response to questions or comments. The operator can also move the 3D model from side to side in its seat. Hansen fell for the ploy and believed that Sweetie was a real child.
According to the court, it was immaterial that Sweetie did not exist: Hansen was guilty because he believed she was a real child and intended to perform obscene acts in front of her. Although Hansen was the only person convicted as a result of the Terre des Hommes project, the researchers behind it patrolled the internet for ten weeks, during which thousands of men got in touch with Sweetie. Terre des Hommes compiled a list of one thousand suspects, which was handed over to Interpol and state police agencies for further investigation. The Sweetie project illustrates that artificial intelligence can be used to investigate difficult crimes such as sex tourism. The project's biggest benefit was an avatar convincing enough to remove the need for real people in the undercover operation. It also offered an ideal way of collecting evidence, through a form of artificial intelligence that was very difficult to contradict. Thus, in a way, artificial intelligence provided grounds for challenging the already existing legal rights of the accused.
Presently, the law provides different standards of liability for those who break it. In criminal law, a person is liable for criminal activity if they demonstrate both a guilty mind (the settled intent to commit a crime) and a guilty act performed in line with that intent. In civil cases, liability for wrongdoing can be reduced by mitigating factors such as the contributory negligence of the other party. There is currently no explicit provision in law that allows defendants to escape liability by claiming they relied on incorrect advice from an intelligent machine. With increased reliance on artificial intelligence to guide basic daily tasks, however, the law will eventually have to address this question. If a user of artificial intelligence software makes a mistake while acting on information from the software, they may suffer losses or damages arising from the mistake. In such cases the developers of the software may be required to compensate the user or incur liability for the consequences of their software's failure. If machines can be built with the ability to make critical decisions, it is important to have a clear idea of who will be held accountable for the actions of the machine.
Autonomous driverless cars are an instructive early example of where such decisions will have to be made. Florida, Nevada, Michigan and the District of Columbia have passed laws allowing autonomous cars to drive on their streets in some capacity. How autonomous cars might change liability and ethical rights turns on the software's ethical settings, which might direct a self-driving vehicle to prioritize human lives over financial or property loss. Numerous ethical dilemmas could arise, such as an autonomous car choosing between saving its passengers and saving a child's life. Lawmakers, regulators and standards organizations should develop concise legal principles for addressing such ethical questions, beginning by defining a liable entity.
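One way to make such "ethical settings" concrete is to imagine them as an explicit severity ordering over predicted outcomes. The sketch below is purely hypothetical, describing no manufacturer's actual system; the outcome categories, the ranking and the function names are all assumptions for illustration:

```python
# Hypothetical severity ranking: higher numbers are worse outcomes.
# A rule like "prioritize human lives over financial or property loss"
# becomes an ordering the planner consults when choosing a maneuver.
SEVERITY = {"financial_loss": 1, "property_damage": 2, "human_injury": 3}

def choose_maneuver(options):
    """options: list of (maneuver, worst_predicted_outcome) pairs.
    Picks the maneuver whose worst predicted outcome is least severe."""
    return min(options, key=lambda opt: SEVERITY[opt[1]])[0]

# The vehicle prefers damaging property over risking a person.
best = choose_maneuver([("swerve_left", "human_injury"),
                        ("brake_hard", "property_damage")])
```

A real system would weigh probabilities rather than only worst cases, and the legal question raised above is precisely who is liable when an encoded ordering like this produces a harmful choice.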
Turing, one of the fathers of modern computer science and artificial intelligence, envisioned a world in which machines could be designed to think independently and solve problems. Modern scientists still share Turing's vision, and it is this vision that inspires countless mathematicians and developers around the world to keep designing better software with greater capabilities. The scientific community, and society at large, hold several positive expectations for artificial intelligence and the potential benefits humankind could reap from its development. Intelligent machines have the potential to make our daily lives easier and to unlock mysteries that human ingenuity cannot solve. They also have the potential to end the dominance of human beings on this planet. The need to reform the law with regard to artificial intelligence is apparent. As the world heads into the next scientific era with both excitement and fear, the law must find a way to adjust to the new circumstances created by machines that can think. As we involve artificial intelligence more in our lives and learn about its legal implications, changes will undoubtedly be needed.
Cortical Interface: ‘Conscious-Competence’ Model
The unmitigated accuracy in inputting and outputting data through different medium interfaces (as well as our own technological fluency in using and utilizing information resources in itself) signals the multiplicity of subjectivities we easily form, participate in and are subjected to in our everyday lives. Humanity is on the path to significantly accelerate the evolution of intelligent life beyond its current human form and human limitations.
"Tank: …now, we're supposed to start with these operation programs first, that's a major boring shit. Let's do something more fun. How about combat training.
Neo: Jujitsu? I'm going to learn Jujitsu?... Holy shit.
Tank: Hey Mikey, I think he likes it. How about some more?
Neo: Hell yes. Hell yeah."
― The Matrix
Kernel, IBM, Neuralink, Facebook—all are working to develop some kind of cortical interface by implanting microscopic brain electrodes that may one day upload and download thoughts to enhance human abilities. Even the smallest advancement of this technology would trigger the bio-technological enhancement of human beings in automation and cyberinteraction, enabling access to web networks and wireless communication in real time, directly from our minds.
However alarming the impact of these advancements on human consciousness may be, in the end all existing technologies work to fulfill an innate human desire: to stay closely connected and be part of something known and similar. The increased connectivity and time-space distanciation they provide, where for the first time we can be connected instantaneously to, and aware of, all other people at all times, have for years been shaping a "global neural net" along which ideas spread and come to fruition at previously unmatched rates. If human beings, looking to push connectivity to the furthest reaches of technological development, continue to make advances in interpersonal technology (and we will), then our social individuality neither starts nor ends at the boundary of our synthetic skin. It is not only through today's social networks, now the primary platforms of self-making and self-representation, that individuals in the data age sustain their integration into society, but also through smartphones, online banking systems, internet profiling and 'digital nomad' occupations. Considering that someone without a credit card can no longer even book a hotel room, and someone without a credit score cannot obtain a credit card, our subjective positioning within social, economic and political systems is now almost strictly digital. In entering this infinitesimally deep and complex rabbit hole, humanity is ready to seek and adopt a hybrid anatomy, artificial and organic, enabling far greater use of communication technologies than any modern human has achieved.
Given the significant difference in data-processing capability in our hybrid state, such technology inevitably leads to rapidly advancing artificial intelligence, possibly to a level where the difference between individual consciousness and artificial intelligence is blurred. Seeking the ability to enhance our parts implies that we will soon be offered options to alter our bodies with high-functioning prostheses. Whatever the initial reason, evening the odds with artificial intelligence, curing physical and mental disorders, or simply continuing our natural evolution, we are indeed moving toward a future of iteratively reproduced congregate bodies.
Technological advancement that brings human intelligence to a new level recalls a reversed 'conscious-competence' model of learning. The model represents the human desire to beat the "unknown unknowns", whatever the means. It views human learning along two dimensions, consciousness and competence, here moving in reverse through a four-stage progression. It begins at unconscious competence: the individual has had so much practice, or was born with a skill, that it has become second nature and can be performed easily, even while executing another task, and may be taught to others depending on how and when it was learned. Next is conscious competence: the individual knows how to do something, but demonstrating the skill requires concentration and heavy conscious involvement, and the skill may need to be broken down into steps. Then comes conscious incompetence: the individual does not yet know how to do something but recognizes the deficit and the value of a new skill in addressing it; making mistakes can be integral to learning at this stage. Finally there is unconscious incompetence: the individual does not know how to do something, does not recognize the deficit, may deny the usefulness of the skill, and may never recognize their own incompetence because the stimulus to learn is unknown and cognitively unreachable. But we will never accept our 'human' unconscious incompetence.
In a typically human way, the possibility of extending our thinking beyond a physical body has been around for a long while; it can already be found in the works of Aristotle, Plato and Descartes, who assumed that the rational self had an 'inner' relationship with the mind and an 'outer' relationship with the body. This ensured that the body was perceived as part of the environment and not as part of the individual self. Consequently, the ultimate dream of Cartesian dualism is disembodiment. Elon Musk's 'neural lace' would be the ultimate dream for Descartes: the possibility of escaping the body would pave the way for reasoning in entirely pure thought.
But however drastically technology progresses and alters socially, politically, economically and scientifically acceptable frameworks (human biology included), it is, in the end, consistent with the history and nature of society. It constitutes a simple supply of advancement (creating connection) that depends on human demand (being connected), as other advancements do.
Understanding the Theory of Embodied Cognition
Embodied cognition is a research theory holding that it makes a vast difference to cognition that the brain has an active body and is situated in a structured environment suited to the kinds of tasks it must perform to support adaptive success.
“We shape our tools and thereafter our tools shape us.”
― Marshall McLuhan
Artificial intelligence (AI) systems are generally designed to solve one traditional AI task. While such weak systems are undoubtedly useful as decision-aiding tools, future AI systems will be strong and general, consolidating common sense and general problem-solving capabilities (the a16z podcast "Brains, Bodies, Minds … and Techno-Religions" gives some great examples of what general artificial intelligence could be capable of). To achieve general intelligence, a human-like ability to use previous experience to solve newly arising problems, AI agents' "brains" would need to carry accumulated experience into a variety of new tasks, much as biological evolution does. This is where Universe comes in.
In December, OpenAI introduced Universe, a software platform for training an AI's general intelligence to become skilled at any task a human can do with a computer. Universe builds on OpenAI's Gym, a toolkit for developing and comparing reinforcement learning algorithms: the environment acts as the tutor, providing periodic feedback (a "reward") to an agent, which in turn encourages or discourages subsequent actions. The Universe software essentially allows any program to be turned into a Gym environment by launching it behind a virtual desktop, which removes the requirement that Universe have direct access to the program's source code and other protected internal data.
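The agent-environment loop that Gym standardizes can be sketched without the library itself. Below is a toy environment that follows Gym's reset/step convention, returning an observation, a reward, a done flag and an info dict; the guessing task, its reward rule and the sweep policy are all invented here for illustration:

```python
# A toy environment following Gym's reset()/step() convention. Everything
# here (the guessing task, the reward rule) is invented for illustration;
# real Gym environments expose the same interface over games and control tasks.
class ToyEnv:
    def __init__(self, target=7):
        self.target = target  # hidden value the agent must guess

    def reset(self):
        self.steps = 0
        return 0  # initial observation

    def step(self, action):
        # Tutor-style feedback: full reward on success, small penalty otherwise.
        self.steps += 1
        if action == self.target:
            return action, 1.0, True, {}
        reward = -abs(action - self.target) / 10.0
        done = self.steps >= 20  # give up after 20 guesses
        return action, reward, done, {}

def run_episode(env, policy):
    """Run one agent-environment loop and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _info = env.step(policy(obs))
        total += reward
    return total

# A trivial policy that sweeps upward until the environment signals success.
score = run_episode(ToyEnv(), lambda obs: obs + 1)
```

The same reward-driven loop is what Universe runs at scale, with real applications behind a virtual desktop standing in for the toy guessing game.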
OpenAI sees this interaction as a validation path for artificial intelligence: many applications are essentially micro virtual worlds, and exposing AI learning techniques to them will produce better-trained agents capable of tackling a diverse range of (game) problems quickly and well. Being able to master new, unfamiliar environments in this way is a first step toward general intelligence, allowing AI agents to "anticipate" rather than staying stuck forever in a single-task loop.
However, as much as Universe is a unique experience vessel for artificial intelligence, it is a uniquely visual one: the agent interacts with any external software application via pixels, using simulated keyboard and mouse input, with each application constituting a different HCI environment source. It is access to a vast digital universe full of varied visual training tasks.
But isn't it missing out on all the fun of full tactile experience? Shouldn't there be a digitized somatosensory training platform for AI agents, so they can recognize and interpret the myriad tactile stimuli and grasp the experience of a physical world? The somatosensory system is the part of the central nervous system involved in decoding a wide range of tactile stimuli, including object recognition, texture discrimination, sensory-motor feedback and, eventually, inter-social communication exchange. It underlies our perception of, and reaction to, stimuli originating outside and inside the body, and our perception and control of body position and balance. One of the more essential aspects of general intelligence, one that gives us a common-sense understanding of the world, is being placed in an environment and being able to interact with the things in it: embedded in all of us is the instinctual ability to tell apart mechanical forces upon the skin by temperature, texture and intensity.
Our brain is indeed the core of all human thought and memory, constantly organizing, identifying, and perceiving the environment that surrounds us and interpreting it through our senses as a flow of data. And yet, studies have taught us that multiple senses stimulate the central nervous system. An estimated 78% of all data flow perceived by the brain is visual, while the remainder originates from sound (12%), touch (5%), smell (2.5%), and taste (2.5%)—and that is assuming we have deciphered all of the known senses. So by training general AI purely via visual interaction, will we be getting a 78% general artificial intelligence? Enter the theory of “embodied cognition.”
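A back-of-the-envelope check of the quoted breakdown, using the figures exactly as given in the text:

```python
# Sensory shares of the brain's perceived data flow, as quoted above (percent).
senses = {"sight": 78.0, "sound": 12.0, "touch": 5.0, "smell": 2.5, "taste": 2.5}

total = sum(senses.values())          # the quoted shares sum to 100
non_visual = total - senses["sight"]  # 22 percent arrives through other senses
visual_fraction = senses["sight"] / total
```

On these figures, a purely visual curriculum leaves roughly a fifth of the sensory channel untrained, which is the gap the somatosensory argument below is aimed at.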
Embodied Cognition
Embodied cognition is a research program built around the vast difference that having an active body, and being situated in a structured environment, makes to the kinds of tasks the brain must perform in order to support adaptive success. Here I use the term to refer to the existence of a memory system that encodes data about an agent’s motor and sensory competencies, stressing the importance of action for cognition, such that an agent is capable of tangibly interacting with the physical world. The aspects of the agent's body beyond its brain play a significant causal and physically integral role in its cognitive processing. The only way to understand the mind, how it works, and subsequently to train it is to consider the body, and what helps the body and mind function as one.
This approach is in line with a biological learning pattern based on “Darwinian selection,” which proposes that intelligence can only be measured in the context of the surrounding environment of the organism studied: “…we must always consider the embodiment of any intelligent system. The preferred embodiment reflects that the mind and its surrounding environment (including the physical body of the individual) are inseparable and that intelligence only exists in the context of its surrounding environment.”
Stacked Neural Networks Must Emulate Evolution’s Hierarchical Complexity (Commons, 2008)
Current notions of neural networks (NNs) are indeed based on known evolutionary processes of executing tasks, and they share some properties of biological NNs in attempting to tackle general problems, but only as architectural inspiration, without necessarily copying a real biological system any more closely. One of the first design steps toward AI NNs that can closely imitate general intelligence follows the model of hierarchical complexity (HC) in terms of data acquisition. Stacked NNs based on this model could imitate evolution's environmental/behavioral processes and reinforcement learning (RL). However, computer-implemented systems or robots generally do not exhibit generalized higher learning adaptivity—the capacity to go from learning one task to learning another without dedicated programming.
Established NNs are limited for two reasons. The first is that AI models are based on the notion of Turing machines, and almost all AI models are based on words or text. But Turing machines are not enough to really produce intelligence. At the lowest stages of development, they need effectors that produce a variety of responses—movement, grasping, emoting, and so on. They must have extensive sensors to take in more from the environment. The second is that although Carpenter and Grossberg's (1990, 1992) neural networks were meant to model simple behavioral processes, the processes they modeled were too complex. This resulted in NNs that were relatively unstable and not highly adaptable. When one looks at evolution, however, one sees that the first NNs that existed were, for example, in Aplysia, Cnidarians (Phylum Cnidaria), and worms. They were specialized to perform just a few tasks, even though some general learning was possible.
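The hierarchical-complexity idea, in which behavior at each higher stage is a non-arbitrary combination and ordering of behaviors from the stage below, can be sketched minimally. This is purely illustrative: the function names are hypothetical and this is not Commons's actual formalism.

```python
# Illustrative only: a higher-stage behavior is a fixed, ordered
# composition of lower-stage behaviors, never an arbitrary mixture.

def make_higher_stage(*lower_behaviors):
    """Build a higher-stage behavior from an ordered set of lower ones."""
    def behavior(x):
        for b in lower_behaviors:  # apply in a definite, non-arbitrary order
            x = b(x)
        return x
    return behavior

# Stage 1: primitive sensorimotor behaviors (toy numeric stand-ins)
sense = lambda stimulus: stimulus * 2  # amplify a stimulus
grasp = lambda signal: signal + 1      # act on the signal

# Stage 2: coordinates stage-1 behaviors; the ordering matters
sense_then_grasp = make_higher_stage(sense, grasp)
grasp_then_sense = make_higher_stage(grasp, sense)

assert sense_then_grasp(3) == 7  # (3*2)+1
assert grasp_then_sense(3) == 8  # (3+1)*2: different order, different behavior
```

The point of the sketch is the last two lines: swapping the order of the same lower-stage behaviors yields a genuinely different higher-stage behavior, which is why the combining and ordering must be non-arbitrary.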
Animals, including humans, pass through a series of ordered stages of development (see “Introduction to the Model of Hierarchical Complexity,” World Futures, 64: 444-451, 2008). Behaviors performed at each higher stage of development always successfully address task requirements that are more hierarchically complex than those required by the immediately preceding order of hierarchical complexity. Movement to a higher stage of development occurs by the brain combining, ordering, and transforming the behavior used at the preceding stage. This combining and ordering of behaviors thus must be non-arbitrary.
Somatosensory System Emulation
Neuroscience has classified specific regions, processes, and interactions down to the molecular level for memory and reasoning. Neurons and synapses are both actively involved in thought and memory, and with the help of brain imaging technology (e.g., Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance Imaging, or Magnetic Resonance Tomography (MRT)), brain activity can be analyzed at the molecular level. All perceived data in the brain is represented in the same way, through the electrical firing patterns of neurons. The learning mechanism is also the same: memories are constructed by strengthening the connections between neurons that fire together, using a biochemical process known as long-term potentiation. Recently, atomic magnetometers have enabled the development of inexpensive, portable MRI instruments without the large magnets used in traditional MRI machines to image parts of the human anatomy, including the brain. There are over 10 billion neurons in the brain, each with synapses involved in memory and learning, which can also be analyzed by brain imaging methods, soon in real time. Studies suggest that new brain cells are created whenever one learns something new by physically interacting with the environment: whenever a stimulus in the environment, or a thought, makes a significant enough impact on perception, new neurons are created. During this process, synapses carry on electro-chemical activity that directly reflects activity related to both memory and thought, from a tactile point of sensation. The sense of touch, weight, and all other tactile sensory stimuli need to be implemented as the concrete “it” value that is assigned to an agent by the nominal concept.
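"Neurons that fire together wire together" can be written down as a Hebbian weight update, a crude textbook stand-in for the biochemistry of long-term potentiation. The rule and the learning rate below are illustrative, not a biophysical model.

```python
# Hebbian update: strengthen a connection in proportion to the
# correlated activity of the units it joins (a crude stand-in for
# long-term potentiation, not actual synaptic chemistry).

def hebbian_step(weight, pre, post, lr=0.1):
    """Return the updated connection strength after one co-activation."""
    return weight + lr * pre * post

w = 0.0
# Repeated co-activation (pre=1, post=1) potentiates the synapse...
for _ in range(10):
    w = hebbian_step(w, pre=1.0, post=1.0)
# ...while uncorrelated activity (post=0) leaves it unchanged.
w_idle = hebbian_step(w, pre=1.0, post=0.0)
```

A somatosensory training database of the kind proposed below would, in effect, supply the pre- and post-activation pairs for tactile channels that a purely visual platform never exercises.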
By reconstructing 3D neuroanatomy from molecular-level data, sensory activity in the brain can be detected, measured, and stored, and a subset of the neural projections can be reconstructed by an automated segmentation algorithm to convey the neurocomputational sensation to an AI agent. The existence of such a somatosensory, Universe-like database, focused on training AI agents beyond visual interaction, may bring us closer to 100% general AI.
Net Neutrality Laws: A Policy Perspective
The free and equal exchange of packets of information is at the very heart of the internet. It is this free exchange which made the modern internet possible, and with it the many business, educational, and informational changes it has brought around the globe. For decades, no one questioned or challenged this core concept. The information was there for the taking, and millions of Internet users reaped the benefits of growing high-speed internet and the many new resources it made available. Ironically, the very thing that made the Internet successful and widespread also gave birth to the very thing that would threaten the Internet’s future: the growth of high-speed Internet during the first decade of this century.
In February, the Federal Communications Commission voted in favor of regulations reclassifying broadband Internet as a public utility under Title II of the Communications Act, in the form of the Bright Line Rules. The new rules prohibit Internet service providers, including cellular carriers, from blocking, slowing down, or speeding up online traffic, giving priority to Web services in exchange for payment, or deciding which applications count against your data cap—a practice known as zero-rating. In June, the U.S. Court of Appeals for the D.C. Circuit upheld the vote, stating that the Commission exercised its proper authority when it reclassified broadband Internet access as a regulated, common-carrier service, rejecting a challenge by the U.S. Telecom Association, an industry group representing providers including Verizon, Comcast, and AT&T.
Pundits argue that the end of net neutrality would be the greatest threat the Internet has ever faced. Is this true? And what exactly is net neutrality? To understand it better, you need to understand how and why the free exchange of information currently allowed on the Internet is so critically important.
Picture the Internet as a vast, modern, high-speed, multi-lane highway. Traffic flows freely on this highway, at any speed it desires. Every vehicle has equal access to this highway, and every vehicle goes wherever it pleases, with no tolls or charges. This is the Internet as it currently stands. Every single piece of data – e-mails, streaming videos, business presentations, and personal pictures – has the same right of access, and it all travels to its destination at the speed of light. No piece of data has higher priority than any other, and no user has to pay extra to get certain types of data, such as streaming video. Now picture the same vast, modern, high-speed, multi-lane highway, but running alongside it a series of much smaller, two-lane roads, each with a regular series of toll booths. Highway access is limited to a specific brand of car, or to people who pay a high tariff, and those on the highway can drive as fast as they please. Meanwhile, traffic on the two-lane roads is bumper-to-bumper, slowly lumbering along between toll booths, making for a very slow, expensive drive. This is the Internet of the future, if net neutrality is legislated out of existence.
High-speed internet caused an explosion in video-on-demand and streaming video services, such as Netflix, Hulu, and HBO Now. As the Internet became faster and available to an ever-increasing audience, the popularity of video services exploded—and planted the seeds of the Internet’s potential destruction. Streaming video consumes large amounts of data. Internet service providers, or ISPs, such as Time Warner, Comcast, and AT&T, saw this and realized that large amounts of the bandwidth they were providing to their customers was being disproportionately used by customers of streaming video services.
Under existing net neutrality rules, ISPs were not allowed to charge their customers or the streaming video provider extra to carry the video data. This left ISPs facing potential slowdowns due to bandwidth congestion caused by streaming video—and, later, additional services such as streaming gaming—without the right to demand additional compensation. This led ISPs to apply increasing pressure on the Federal Communications Commission (FCC) and the US Congress to end net neutrality and, in effect, allow them to develop a multi-tier Internet. In a post-net-neutrality world, ISPs would be allowed to decide what traffic they carry on their networks, and at what cost. ISPs could also in effect create bundles of Internet services—just as they have with cable television—charging extra for access to certain types of data, such as streaming video, and even dictating which streaming video service you would be allowed to use on their network. ISPs would also have free rein to charge specific websites, such as Facebook, a fee to let customers on their network access the site, as well as to bill services such as Netflix for the amount of data Netflix customers consumed on the ISP’s network.
As a result, Internet access costs for websites, service providers such as Netflix, and individual businesses and consumers would quickly spiral out of control. Many sites or service providers would simply cease to operate, limiting the range of options available to Internet users. The worst side-effect of the end of net neutrality, however, would be the death of diversity on the Internet, and the death of the free information exchange that sparked much of the progress—economic, social, and technological—of the last two decades.
As it currently stands, every voice on the Internet is equal to every other. Every site has equal access to the same audience, and an equal opportunity to state its case to the conscience of the world. The death of net neutrality would end this. The only voices on the Internet would be those who can pay to be heard. Individual voices of dissent would be silenced and replaced with corporate and approved establishment voices. Consider, for instance, the growing progress made in turning back the clock on global warming. Consider how little would have been done if the researchers and activists who led the fight had not been given access to the Internet—or had simply been drowned out, silenced, or forced off the Internet by deep-pocketed corporate sites or government agencies that saw more profit in maintaining the status quo. Similarly, the free exchange of knowledge that has led to much of the progress of the last decade would be forever silenced in a post-net-neutrality world. The free spread of knowledge and ideas, particularly ideas seen as threatening to established governments or business interests, would simply be forced off the internet and silenced. Attempts to end net neutrality have thus far been unsuccessful, but there is no guarantee that the ISPs have given up their fight.
Patents in an era of artificial intelligence
The fuzziness of software patents’ boundaries has already turned the ICT industry into one colossal turf war. The expanding reach of IP has introduced more and more possibilities for opportunistic litigation (suing to make a buck). In the US, two-thirds of all patent lawsuits are currently over software, with more patent lawsuits filed in 2015 than in any year before.
“If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.”
― George Bernard Shaw
Just in the last month, headlines about the future of artificial intelligence (AI) dominated technology news across the globe:
On 15 November, OpenAI, a research company in San Francisco, California, co-founded by entrepreneur Elon Musk, announced a partnership with Microsoft to start running most of its large-scale experiments on Microsoft’s cloud computing platform, Azure;
Two weeks later, Comma.ai open-sourced its AI driver assistance system and robotics research platform;
On 3 December, DeepMind, a unit of Google headquartered in London, opened up its own 3D virtual world, DeepMind Lab, for download and customization by outside developers;
Two days later, OpenAI released a ‘meta-platform’ that enables AI programs to easily interact with dozens of 3D games originally designed for humans, as well as with some web browsers and smartphone apps;
A day later, in a keynote at the annual Neural Information Processing Systems (NIPS) conference, Russ Salakhutdinov, director of AI research at Apple, announced that Apple’s machine learning team would both publish its research and engage with academia;
And on 10 December, Facebook announced it would open-source its AI hardware design, Big Sur.
What’s going on here? In the AI field, maybe more than in any other, research thrives on open collaboration—AI researchers routinely attend industry conferences, publish papers, and contribute to open-source projects with mission statements geared toward the safe and careful joint development of machine intelligence. There is no doubt that AI will radically transform our society, having the same level of impact as the Internet has had since the nineties. And it has got me thinking: with AI becoming cheaper, more powerful, and ever more pervasive, with the potential to recast our economy, education, communication, transportation, security, and healthcare from top to bottom, it is of the utmost importance that it (software and hardware) not be hindered by the very innovation establishment that was designed to promote it.
System glitch
Our ideas are meant to be shared—in the past, the works of Shakespeare, Rembrandt, and Gutenberg could be openly copied and built upon. But the growing dominance of the market economy, where the products of our intellectual labors can be acquired, transferred, and sold, produced a side-effect glitch in the system. Due to the development costs (of actually inventing a new technology), the price of unprotected original products is simply higher than the price of their copies. The introduction of patent law (to protect inventions) and copyright law (to protect media) was intended to address this imbalance. Both aimed to encourage the creation and proliferation of new ideas by providing a brief, limited period during which no one else could copy your work. This gave creators a window of opportunity to break even on their investments and potentially make a profit, after which their work entered the public domain, where it could be openly copied and built upon. This was the inception of the open innovation cycle—a vast, accessible, distributed network of ideas, products, arts, and entertainment, open to all as the common good. The influence of the market transformed this principle into the belief that ideas are a form of property, and this conviction subsequently yielded the new term “intellectual property” (IP).
Loss aversion
“People’s tendency to prefer avoiding losses to acquiring equivalent gains”: it’s better not to lose $10 than to find $10, and we hate losing what we’ve got. Applying this principle to intellectual property: we believe that ideas are property; the gains we get from copying the ideas of others don’t make a big impression on us, but when it’s our ideas being copied, we perceive it as a loss of property and we get (excessively) territorial. Most of us have no problem with copying (as long as we’re the ones doing it). When we copy, we justify it; when others copy, we vilify it. So with a blind eye toward our own mimicry, and propelled by faith in markets and ultimate ownership, IP swelled beyond its original intent with broader interpretations of existing laws, new legislation, new realms of coverage, and alluring rewards. Starting in the late nineties in the US, a series of new copyright laws and regulations began to take shape (the NET Act of 1997, the DMCA of 1998, Pro-IP of 2008, the Enforcement of Intellectual Property Rights Act of 2008), and many more are in the works (SOPA, the PROTECT IP Act, the Innovative Design Protection and Piracy Prevention Act, the CAS “Six Strikes” program). In Europe, there are currently 179 different sets of laws, implementing rules and regulations, geographical indications, treaty approvals, legal literature, IP jurisprudence documents, administered treaties, and treaty memberships.
In the patents domain, driven by this loss aversion, technological coverage made the leap from physical inventions to virtual ones—most notably, software.
Rundown of computing history
The first computers were machines of cogs and gears, and computing became practical only in the 1950s and 60s with the invention of semiconductors. Forty years ago, (mainframe-based) IBM emerged as the industry forerunner. Thirty years ago, (client-server-based) Microsoft leapfrogged it and gave ordinary people computing utility tools, such as word processing. As computing became more personal and the World Wide Web turned Internet URLs into web site names that people could access, (internet-based) Google offered the ultimate personal service, a free gateway to the infinite data web, and became the new computing leader. Ten years ago, (social-computing) Facebook morphed into a social medium as a personal identity tool. Today, (conversational-computing) Snap challenges Facebook as Facebook challenged Google, as Google challenged Microsoft, as Microsoft challenged IBM, as IBM challenged cogs and gears.
History of software patenting
Most people in the software patent debate are familiar with Apple v. Samsung, or Oracle v. Google with its open-source arguments, but many are not familiar with the name Martin Goetz. Goetz filed the first software patent in 1968, for a data organizing program his small company wished to sell for use on IBM machines. At the time, IBM offered all of its software as part of the computers it sold. This gave any competitor in the software space a difficult starting point: they either had to offer their own hardware (HP had produced its first computer just two years earlier) or convince people to buy software to replace the free software that came with IBM computers.
Martin Goetz was leading a small software company and did not want IBM to take his technological improvements and use the software for IBM's bundled programs without reimbursement, so he filed for a software patent. Thus, in 1968, the first software patent was issued to a small company, to help it compete against the largest computer company of the time. Although it had filed a patent to protect its IP, Goetz's company still had a difficult time competing in a market dominated by IBM, so it joined the US Justice Department's anti-trust suit against IBM, which forced IBM to unbundle its software suite from its hardware.
So the software industry began in 1969, with the unbundling of software by IBM and others. Consumers had previously regarded application and utility programs as cost-free because they were bundled with the hardware. With unbundling, competing software products could be put on the market because such programs were no longer included in the price of the hardware. Almost immediately, a software industry emerged. At the same time, it quickly became evident that some type of protection would be needed for this new form of intellectual property.
Unfortunately, neither copyright law nor patent law seemed ready to take on this curious hybrid of creative expression and functional utility. During the 1970s, there was total confusion as to how to protect software from piracy. A few copyrights were issued by the Copyright Office, but most were rejected. A few software patents were granted by the PTO, but most patent applications for software-related inventions were rejected. The worst effect for the new industry was the uncertainty as to how this asset could be protected. Finally, in 1980, after an extensive review by the National Commission on New Technological Uses of Copyrighted Works (CONTU), Congress amended the Copyright Act of 1976 to cover software. It took a number of important cases to resolve most of the remaining issues in copyright law, and some issues, such as the so-called “look and feel,” are still being litigated, but this area of the law appears to be quite well understood now. For patents, it took a 1981 Supreme Court decision, Diamond v. Diehr, to bring software into the mainstream of patent law. That decision ruled that the presence of software in an otherwise patentable technology did not make the invention unpatentable. Diamond v. Diehr opened the door for a flood of software-related patent applications. Unfortunately, the PTO was not prepared for this development, and in the intervening years it has issued thousands of patents that appear questionable to the software industry. It took a few years after 1981 for the flow of software-related applications to increase, and then there was some delay in processing those applications. Now the number of infringement cases is on the rise.
The transition from physical patents to virtual patents was not a natural one. At its core, a patent is a blueprint for how to recreate an invention; the majority of software patents, by contrast, are more like a loose description of what something would look like if it were actually invented. And software patents are written in the broadest possible language to get the broadest possible protection - the vagueness of these terms can sometimes reach absurd levels, for example “information manufacturing machine,” which covers anything computer-like, or “material object,” which covers… pretty much everything.
What now?
35 U.S.C. 101 reads as follows:
“Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the technology is directed to one of the four statutory categories of invention: process, machine, manufacture, or composition of matter. Ever since software became widespread and commercially valuable, it has been highly difficult to classify within a specific category of intellectual property protection.
Attempts are often made in the field of software technology to combine methods or means used in different fields, or to apply them to another field, in order to achieve an intended effect. Consequently, combining technologies used in different fields and applying them to a new one is usually considered within the exercise of the ordinary creative activity of a person skilled in the art; when there is no technical difficulty (technical blocking factor) to such a combination or application, an inventive step is not affirmatively inferred unless special circumstances exist, such as remarkably advantageous effects. Software is not a monolithic work: it possesses a number of elements that can fall within different categories of intellectual property protection.
In Israel, legal doctrines adapt to changes in innovative technological products and the commercial methods that extend this innovation to the marketplace. The decision issued by the Israeli Patent Registrar in the matter of Digital Layers Inc confirms the patentability of software-related inventions. The Registrar ruled that the claimed invention should be examined as a whole and not by its components, basing his ruling on the recent matter of HTC Europe Co Ltd v. Apple Inc, quoting:
"…It causes the device to operate in a new and improved way and it presents an improved interface to application software writers. Now it is fair to say that this solution is embodied in software but, as I have explained, an invention which is patentable in accordance with conventional patentable criteria does not become unpatentable because a computer program is used to implement it…"
After Alice Corp. v. CLS Bank International, if the technology does fall within one of the categories, it must then be determined whether it is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea), and if so, whether it is a patent-eligible application of that exception. If an abstract idea is present, some element or combination of elements must be sufficient to ensure that the technology amounts to significantly more than the abstract idea itself. Examples of abstract ideas include fundamental economic practices (comparing new and stored information and using rules to identify options in SmartGene); certain methods of organizing human activities (managing a game of Bingo in Planet Bingo v. VKGS and a user interface for meal planning in DietGoal Innovations v. Bravo Media); an idea itself (storing and transmitting information in Cyberfone); and mathematical relationships/formulas (updating alarm limits using a mathematical formula in Parker v. Flook and a generalized formulation of a computer program to solve a mathematical problem in Gottschalk v. Benson). The technology cannot merely amount to instructions to apply the abstract idea on a computer; that is considered nothing more than requiring a generic computer system to carry out the abstract idea itself. Automating conventional activities using generic technology does not amount to an inventive concept, as it simply describes “automation of a mathematical formula/relationship through use of generic computer function” (OIP Technologies v. Amazon). A procedure that uses an existing general-purpose computer does not purport to improve any other technology or technical field, or to improve the functioning of the computer itself, and does not move beyond a general link between the use of an abstract idea and a particular technological environment.
The Federal Circuit continues to refine patent eligibility for software
Following the Supreme Court’s decision in Alice v. CLS Bank, the court of appeals in Ultramercial v. Hulu reversed its prior decision and ruled that the claims were invalid under 35 U.S.C. § 101. Following the two-step framework outlined in Alice, Judge Lourie concluded that the claims were directed to an abstract idea.
The Federal Circuit’s decision in Digitech Image Techs. v. Electronics for Imaging illustrated the difficulty many modern software-implemented inventions face. If a chemist were to invent a mixture of two ingredients that gives better gas mileage, it is hard to imagine a claim to such a mixture receiving a § 101 rejection. Yet when two elements of data are admixed to produce improved computational results, the courts are quick to dismiss the result as a patent-ineligible abstraction. The real problem Digitech faced was that both data elements were seen as abstractions: one data type represented color information (an abstraction) and the other represented spatial information (another abstraction).
DDR Holdings v. Hotels.com, a 2014 Federal Circuit decision, provides a good discussion of a patent-eligible Internet-centric technology. In applying the Mayo/Alice two-part test, the court admitted it can be difficult sometimes to distinguish “between claims that recite a patent-eligible invention and claims that add too little to a patent-ineligible abstract concept”.
Content Extraction v. Wells Fargo Bank gives a roadmap to how the Court of Appeals for the Federal Circuit will likely handle business method patents in the future. First, if the manipulation of economic relations is deemed present, you can be sure that any innovative idea within the economic realm will be treated as part of the abstract idea. Essentially, no matter how clever an economic idea may be, it will be branded part of the abstract-idea problem, for which there is only one solution: having something else innovative that is not part of the economic idea. Practically speaking, this means the technology needs to incorporate an innovative technical improvement that makes the clever economic idea possible.
So, as noted at the outset, the fuzziness of software patents’ boundaries has turned the ICT industry into one colossal turf war, with two-thirds of all US patent lawsuits now over software. Of the high-tech cases, more than 88% involved non-practicing entities (NPEs). These include two charmlessly evolving species whose entire business model is lawsuits—patent trolls and sample trolls. These are corporations that don’t actually create anything; they simply acquire a library of intellectual property rights and then litigate to earn profits (and because legal expenses run to millions of dollars, their targets are usually highly motivated to settle out of court). Patent trolls are most common back in the troubled realm of software. The estimated wealth loss in the US alone is $500,000,000,000 (that’s a lot of zeros).
Technology convergence and open innovation
For technological companies, convergence and the advance of the open-source approach, driven largely by collaborative processes introduced by GitHub, Google's Android, Apple’s Swift and most recently by Microsoft joining the Linux Foundation, have created a systematic process for innovation which is improving software functionality and design. A hundred and fifty years ago, innovation required a dedicated team spending hours in a lab, extensively experimenting and discovering “10,000 ways not to make a light-bulb” before finding one that worked. Today, innovation has reached a critical mass: technology and user feedback combine to give a purposeful team the ability to find 10,000 ways not to do something in a matter of hours, with the right plan in place. A development team can now deliver a product in a matter of months and test it in such a way that customer responses reach the right team member directly, with feedback implemented and the system corrected (almost) in real time. Yet the life of a software patent is still 20 years from the date the application was filed. The patent system, which has existed since 1790, is not equipped to handle this new technology, and there is a need to establish an agile, sui generis, short-cycle—three to five years—form of protection dedicated solely to software. As patents play an essential role in market-centred systems of innovation, patent exclusivity criteria should be redesigned more systematically to reflect the ability of software patents to foster innovation and to encourage technology diffusion.
The belief in intellectual property has grown so dominant that it has pushed the original intent of patents out of public consciousness. But that original purpose is right there, in plain sight—the US Patent Act of 1790 reads “An Act to promote the progress of useful Arts”. The exclusive rights this act introduced were offered in service of a different goal: to better the lives of everyone by incentivizing creativity and producing a rich pool of knowledge open to all. But the exclusive rights themselves came to be considered the whole point, so they were expanded exponentially, and the result hasn’t been more progress or more learning, but more squabbling and more legal abuse. AI is entering the age of daunting problems—we need the best ideas possible, we need them now, and we need them to spread as fast as possible. That original meme was overwhelmed by the obsession with exclusivity, and it needs to spread again, especially today. If the meme prospers, our laws, our norms, and our society will all transform as well.
Ambient Intelligence as a Multidisciplinary Paradigm
The future of artificial intelligence is not so much about direct interaction between humans and machines, but rather indirect amalgamation with the technology that is all around us, as part of our everyday environment. Rather than having machines with all-purpose intelligence, humans will interact indirectly with machines having highly developed abilities in specific roles. Their sum will be a machine ecosystem that adapts to and aids in whatever humans are trying to do. In that future, the devices might feel more like parts of an overall environment we interact with, rather than separate units we use individually. This is what ambient intelligence is.
In recent years, advances in artificial intelligence (AI) have opened up new business models and new opportunities for progress in critical areas such as personal computing, health, education, energy, and the environment. Machines are already surpassing human performance at certain specific tasks, such as image recognition.
Artificial intelligence technologies received $974m of funding in the first half of 2016, on pace to surpass 2015’s total, with 200 AI-focused companies having raised nearly $1.5 billion in equity funding. These figures will continue to rise: more AI patent applications were filed in 2016 than ever before—more than three thousand, versus just under a hundred in 2015.
The IST Advisory Group (ISTAG) coined the term in 2001, with an ambitious vision of its widespread presence by 2010. The report describes technologies that exist today, such as wrist devices, smart appliances, driving guidance systems, and ride-sharing applications. On the whole it might still seem very futuristic, but nothing in it seems outrageous. At first glance, its systems seem to differ from what we have today in pervasiveness more than in kind.
The scenarios ISTAG presents surpass present technology in a major way, though. The devices they imagine anticipate and adapt to our needs far more than anything we have today. This requires a high level of machine learning, both about us and about the environment. It also implies a high level of interaction among the systems, so they can acquire information from one another.
Not Quite Turing's Vision
Alan Turing thought that advances in computing would lead to intelligent machines. He envisioned a computer that could engage in a conversation indistinguishable from a human's. Time has shown that machine intelligence is poor at imitating human beings, but extremely good at specialized tasks. Computers can beat the best chess players, drive cars more safely than people can, and predict the weather for a week or more in advance. Computers don't compete with us at being human; they complement us with a host of specialties. They're also really good at exchanging information rapidly.
This leads naturally to the scenario where AI-implemented devices attend to our needs, each one serving a specific purpose but interacting with devices that serve other purposes.
We witness this in the Internet of Things. Currently most of its devices perform simple tasks, such as accepting remote direction and reporting status. They could do a lot more, though. Imagine a thermostat that doesn't just set the temperature when we instruct it to, but turns itself down when we leave the house and turns itself back up when we start out for home. This isn't a difficult task, computationally; it just requires access to more data about what we're doing.
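The thermostat behavior described above can be sketched in a few lines of Python. This is a deliberately minimal illustration: the class and parameter names are hypothetical, and the real work of detecting "occupied" or "heading home" (presence sensors, phone location) is assumed to happen elsewhere.

```python
class SmartThermostat:
    """Toy model of a context-aware thermostat (illustrative only).

    A real device would derive `occupied` and `heading_home` from
    presence sensors or location data; here they are plain inputs.
    """

    def __init__(self, comfort_temp=21.0, away_temp=16.0):
        self.comfort_temp = comfort_temp  # target when someone is home (deg C)
        self.away_temp = away_temp        # energy-saving setback (deg C)

    def target_temperature(self, occupied, heading_home=False):
        # Pre-heat when the occupant is on the way back, not just on arrival.
        if occupied or heading_home:
            return self.comfort_temp
        return self.away_temp
```

The point of the sketch is that the control logic itself is trivial; the hard part is supplying the extra data about what we are doing.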
Computers perform best in highly structured domains. They “like” to have everything unambiguous and predictable. Ambient intelligence, on the other hand, has to work in what are called “uncertain domains.” (Much as in HBO’s Westworld: users (guests) are thrown into pre-determined storylines from which they are free to deviate, while ambient intelligence (hosts) is programmed with script objectives, so even minor deviations or improvisations caused by a user’s interference won’t totally disrupt its functioning—it adapts.) The information in these domains isn’t restricted to a known set of values, and it often has to be measured in probabilities. What constitutes leaving home and returning home? That’s where machine learning techniques, rather than fixed algorithms, come into play.
To work effectively with us, machines have to catch on to our habits. They need to figure out that when we go out to lunch in the middle of the day, we most likely aren't returning home. Some people do return home at noon, though, so this has to be a personal measurement, not a universal rule.
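One very simple way to make that measurement personal is to keep per-user counts and estimate a probability from them. The sketch below is a stand-in for the machine-learning step the text describes, not a production approach; the class name and the Laplace-smoothed frequency estimate are my own illustrative choices.

```python
from collections import defaultdict

class HabitModel:
    """Per-user frequency model: P(returns home soon | left at hour h).

    A deliberately simple illustration; real systems would use far
    richer features than the hour of departure.
    """

    def __init__(self):
        self.left = defaultdict(int)      # departures observed per hour
        self.returned = defaultdict(int)  # of those, how many came back soon

    def observe(self, hour, came_back):
        self.left[hour] += 1
        if came_back:
            self.returned[hour] += 1

    def prob_return(self, hour):
        # Laplace smoothing: defaults to 0.5 before any data is seen,
        # then converges to this person's observed habit.
        return (self.returned[hour] + 1) / (self.left[hour] + 2)
```

After a few noon departures without a quick return, the estimate for that person drifts toward "not coming back at lunch" — a personal measurement rather than a universal rule.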
Concerns About Privacy and Control
Giving machines so much information and leeway will inevitably raise concerns. When they gather so much information about us, how much privacy do we give up? Who else is collecting this information, and what are they using it for? Might advertisers be getting it to plan campaigns to influence us? Might governments be using it to learn our habits and track all our moves?
When the machines anticipate our needs, are they influencing us in subtle ways? This is already a concern in social media. Facebook builds feeds that supposedly reflect our interests, and in doing so it controls the information we see. Even without any intent to manipulate us, this leads to our seeing what we already agree with and missing anything that challenges our assumptions. There isn't much to prevent the manipulation of information to push us toward certain preferences or conclusions.
With ambient intelligence, this effect could be far more pervasive than it is today. The machines that we think are carrying out our wishes could herd us without being noticed.
The question of security is important. Many devices on the Internet of Things have almost nonexistent security. (An unknown attacker intermittently knocked many popular websites offline for hours last week, from Twitter to Amazon and Etsy to Netflix, by exploiting security flaws in ordinary household electronic devices such as DVRs, routers and digital closed-circuit cameras.) Devices ship with default passwords that are easily discovered. In recent months, this has let criminals build huge botnets of devices and use them for denial-of-service attacks on an unprecedented scale.
If a malicious party could take control of the devices in an ambient intelligent network, the results could be disastrous. Cars could crash, building maintenance systems shut down, daily commerce disintegrate. To be given so high a level of trust, devices will have to be far more secure than the ones of today.
The Convergence of Many Fields
Bringing about wide-scale ambient intelligence involves much more than technology. It will need psychological expertise to effectively anticipate people's needs without feeling intrusive or oppressive. It will involve engineering so that the devices can operate physical systems efficiently and give feedback from them. But mainly it will involve solving non-technological problems: the social, legal and ethical implications of fully integrating intelligent machines into our everyday life, where they access and control every aspect of it.
Unravelling Smart Cities: An Integrative Framework
“Its urbanization, progressing steadily, had finally reached the ultimate. All the land surface of Trantor, 75,000,000 square miles in extent, was a single city. The population, at its height, was well in excess of forty billions. This enormous population was devoted almost entirely to the administrative necessities of Empire, and found themselves all too few for the complications of the task.”
― Isaac Asimov, Foundation
By 2030 the world’s 750 largest cities will account for 61 percent of global GDP (some $80,000,000,000,000). Supporting and establishing those future cities as smart cities—with sophisticated sensors, buildings, and appliances everywhere to manage city infrastructures and deliver services to citizens—will be very different in many ways, and is thus already becoming a major priority. Their fundamental building block, the Internet of Things, will create a digital layer of infrastructure that will help citizens access and consume any information they need, no matter where they are.
Components of the Smart City
What the digital skeleton of the smart city might look like is open to dispute, but some elements are sure to emerge as standard:
Smart energy networks—The city might have numerous small power plants in and between buildings, with batteries or other energy storage at many locations. Smart sensors will monitor the power plants, wiring, and other components.
Smart shareable homes—The Internet of Things will make it easier for people to rent out spare rooms in their homes and apartments. At a moment's notice it will be possible to remotely activate lights and heat or air conditioning. In addition to lighting, heating, and cooling, a homeowner will be able to control appliances remotely. If you are in a hurry to start baking, you can remotely preheat the oven so it is ready for the pizza when you arrive.
Traffic flow will be more efficient and predictable—There will be no need to drive around and around looking for a parking place. Mobile apps will help drivers find parking spaces and route themselves around trouble spots. Smart traffic monitoring systems with networks of wired devices will make it easier to keep a traffic jam from developing at all. Intersections can be managed by smart devices that keep drivers safe and streamline traffic flow.
Shareable and reusable buildings—Smart devices will make it easier to manage commercial and industrial buildings, of course. The WEF suggests that smart devices will make it easy for a business owner to modify space to suit the communication and lighting needs of different clients. Another suite of smart devices will monitor power and water use, to make the most efficient use of each.
The "Killer App" Is Data Accessibility
All cities need telecommunications grids to handle the vast amounts of data racing around, into and out of, a city. Roads, power lines, water lines, sewage systems, traffic flow infrastructures, utilities in large commercial and government buildings—all need to be kept in top shape, monitored and managed efficiently. The data required to improve systems, perform timely maintenance, and do quick repairs has to come from somewhere. Currently, most cities rely on a complex blend of manual inspections, stand-alone electronic devices, and Web-enabled devices. A smart city moves that mix of management and maintenance tools toward a heavy reliance on Web-enabled devices.
The Internet of Things will help service providers use the full range of data that's potentially available to manage how they deliver services. Government officials will be better equipped to make good strategic decisions about infrastructure, while allowing more transparency to the citizens—service providers and government offices will have to share information on their performance. Are they meeting performance and quality goals? The data from Web-enabled devices will make it easy for decision makers and citizens to find out.
The Smart City's Data-Intensive Infrastructure
The vast number of Web-enabled devices around the smart city will have to communicate over an increasingly dense network of wires, wireless devices, and routers. Creating and managing that digital infrastructure will open new business opportunities for anyone who can build and install its elements. Data storage and processing on a vast scale are going to be critical. That's the challenge that underlies all of the innovative developments that will create a Smart City. Wireless networks and wired networks will need to be ubiquitous and much faster than the norm today.
Governance
Faced with ongoing budget concerns, city managers are working to create more effective and efficient operating models by moving away from top-down, centralized management systems and breaking down vertical service functions and departments. Today, cities require regular maintenance and repair of roads, sewer lines, phone lines, and power lines. Whether private companies or the government will do that work in the future is irrelevant. Whoever performs the monitoring, maintenance, and repair work can look forward to integrating the IoT and the software applications that control it in areas like road infrastructure (better monitoring of pavement and bridge conditions using intelligent sensors and new “big data” computing capabilities), highway traffic management, healthcare, education, and agriculture. Collaborative design of multi-stakeholder ownership and processes calls for new governance and business models, which are essential to aligning all city services. Cities should take that into consideration when designing a coherent deployment plan, to ensure synergies and cross-functionalities that optimize the number of sensors and services riding on the wires, wireless devices, and routers that connect all of these smart devices to each other and to their owners, constantly generating more data-mining opportunities.
Privacy Challenges
The smart city depends on collecting and processing a huge amount of data, some of it concerning residents' daily activities. The connected devices that make their lives easier also expose them to a potential loss of privacy. The proliferation of sensing equipment in society already raises important questions about data security and privacy. Addressing these challenges requires rules of the game that allow fast-moving technology and market trends to keep evolving. Addressing data privacy, security, and the other challenges that intelligent assets raise for progress in the circular economy will require a robust legal framework with adequate, innovative enforcement mechanisms. The key challenge for policy-makers lies in stimulating (open) innovation while ensuring data security and generating trust among those who are directly and indirectly linked through city intelligent assets. Companies and policymakers will need a multi-stakeholder approach to create such conditions; if successful, they could lay the groundwork for solving several of the core challenges of designing an economy that is truly restorative and regenerative.
Rethinking Additive Manufacturing and Intellectual Property Protection
Additive manufacturing is a process in which material is added (rather than subtracted) to create a product. The most commonly recognizable form of additive manufacturing is 3D printing, which is a process by which a scanned digital file can be recreated, or printed, as a three-dimensional object. As the technology to achieve this becomes both cheaper and more widely available, it is postulated that 3D printing may change the design and manufacturing industry forever.
For instance, consider the following scenario. Suppose the sprocket on your bicycle breaks. Traditionally, you would repair your bike by purchasing a replacement sprocket at a retail store. However, if you had the ability to simply replicate the sprocket by printing a new one with a 3D printer and simple materials right in your own home, how likely would you be to make the trip to the store to pick up the part? From a consumer standpoint, 3D printing is a great idea and a convenient and inexpensive way to get the things you need quickly. And from a manufacturing point of view, additive manufacturing has many helpful and exciting applications as well.
But, consider this same scenario from another point of view. Suppose you are the person or company who originally designed and manufactured the sprocket. How would you feel about the ability of others to simply replicate your product without compensation to you?
The scenario as described illustrates in a very basic way how the growing technology of additive manufacturing has generated a number of intellectual property (IP) concerns. Additive manufacturing and the issues of IP protection are uneasy bedfellows. Concerns include questions about the ownership of designs, concerns about the possibility of copying parts by scanning and printing them, and concerns about the counterfeiting of component parts of protected designs.
Current legal regimes do not offer resolution to these issues in a definitive way. The legality of copying parts, even for individual use, is a murky area. And the possibilities additive manufacturing offers for counterfeiting parts and products presents additional conundrums for manufacturers and the legal system.
While patent laws protect design concepts in the traditional manufacturing model, additive manufacturing is not so clear-cut. The legal question becomes, "Who really owns the design of a part that is printed?". And regarding counterfeiting of parts, the technology of additive manufacturing makes reverse engineering an unnecessary step, thereby easing the way for counterfeiters to do their work quickly and more efficiently. Add to that the very real concern about the structural integrity of objects produced by additive manufacturing methods, and you can see that counterfeit parts produced in this way may result in catastrophic failure, and, depending on the use of the object, even potentially loss of life.
IP issues are further complicated by the possibility of taking a basic design and slightly modifying it during the additive manufacturing process. Does this invalidate any design patent or trade dress protection afforded to the original product being replicated? The answer is not yet legally clear.
Utility patents may be a way of protecting many aspects of additive manufacturing, but this method also has severe shortcomings. Since obtaining a utility patent can take on average almost two years, advances in the technology of additive manufacturing will outstrip the patent process by a considerable amount of time, making it likely that utility patents as currently acquired will do little to protect IP rights.
What About Copyright Protections?
3D printing technology offers previously unimagined opportunities for rapid prototyping and the creation of new designs. On the downside, those opportunities pose new threats to copyright owners, who now fear that 3D printing and intellectual property infringement will soon go hand in hand.
Under copyright law, the creator of a three-dimensional work of art has the exclusive right to make and sell copies of his work, to prepare derivative works based on his original work, and to transfer ownership of that work to another party. An infringer with a 3D printer can make multiple copies of a protected work and distribute those copies in direct contravention of these exclusive rights. (To a lesser extent, 3D printers might facilitate infringement of design patents and certain utility patents, but copyright is the main intellectual property right impacted by 3D printing technology.)
A party claiming copyright infringement needs to show that the infringer had access to the copyrighted work of art and that the work was, in fact, copied. Access is often presumed if the original and the copy are identical or closely resemble each other. The big question in 3D-printing infringement cases will be whether the copy is too similar to the original.
Courts will use a "levels of abstraction" analysis to determine similarity between two products. Consider, for example, Michelangelo's David, the original of which is on display at the Accademia di Belle Arti in Florence, Italy. The copyright that Michelangelo had in his masterpiece has long since expired, but if that copyright were still in effect and owned by Michelangelo, he would have the exclusive right to make and distribute copies of his statue. A rival sculptor might use a 3D printer to craft his own statue of David. A court charged with determining whether the rival's statue infringes the original work would look first at the highest level of abstraction, namely, whether the two sculptures both depict the Biblical King David. Drilling down into the next level, the court would examine whether the two statues each depict a younger David preparing for his encounter with Goliath. Thereafter, the court would see if both statues show a slouching David with his weight disproportionately placed on his right leg, holding a rock to his shoulder with his left arm. Successively deeper levels of abstraction would look more closely at the respective details of the two statues. If the similarities were too great at a deep level of abstraction, the court would likely rule that the rival's 3D-printed copy infringes the original.
Copyright law provides certain safe harbors that can defeat infringement claims. A creator cannot use copyright law, for example, to protect the utilitarian aspects of his works. In the United States, the Digital Millennium Copyright Act offers a safe harbor to online intermediaries that do not actively encourage infringement. Nonetheless, 3D printers will present a new world of opportunities for infringers to make copies of protected works.
Copyright protection covers a creator's original and creative expression. However, data structures have been held to fall outside the scope of copyright protection. Hence, unless significant changes are made to copyright law, the process of additive manufacturing will not be affected by this type of protection.
Adding further complication is the fact that enforcement of IP rights varies from country to country. A multi-year international approach to securing patent rights may prove to be too slow to be of much use in matters concerning additive manufacturing. It is obvious that a combination of different protection mechanisms will be necessary to craft acceptable IP protections that are enforceable and effective. Ideally, these protections would be consistent from one country to another. However, this is not an ideal world.
In the US, it is imperative that current patent laws evolve to encompass the issues raised by additive manufacturing technology. Ensuring the protection of data associated with digitally manufactured three-dimensional objects should be a priority before the technology becomes widely available to the general public.