Artificial Neural Networks and Engineered Interfaces
The question persists and indeed grows whether the computer will make it easier or harder for human beings to know who they really are, to identify their real problems, to respond more fully to beauty, to place adequate value on life, and to make their world safer than it now is.
― Norman Cousins, The Poet and the Computer, 1966
The Grimm Brothers' image of the mirror answering back to its queen broke out of the boundaries of fairytale imagination in 2016. Communicating with a voice-controlled personal assistant in your home no longer feels alienating, or even magical.
The need to express ourselves and communicate with others is fundamental to what it means to be human. Animal communication is typically non-syntactic, with signals that refer to whole situations. Human language, by contrast, is syntactic: signals consist of discrete components that have their own meaning. Human communication is further enriched by the redundancy introduced by multimodal interaction. The vast expressive power of human language would be impossible without syntax, and the transition from non-syntactic to syntactic communication was an essential step in the evolution of human language. Syntax defines evolution.
The evolution of discourse in human-computer interaction is spiraling up, repeating the evolution of discourse in human-human interaction: graphical representation (the utilitarian GUI), verbal representation (syntax-based NLP), and transcendent representation (sentient AI). In Phase I, computer interfaces relied primarily on visual interaction. The development of user interface peripherals such as graphical displays and pointing devices allowed programmers to construct sophisticated dialogues that open up user-level access to complex computational tasks. Rich graphical displays enabled intricate and highly structured layouts that could intuitively convey a vast amount of data. Phase II is currently ongoing: by integrating new modalities, such as speech, into human-computer interaction, the ways applications are designed and interacted with in the known world of visual computing are being fundamentally transformed. In Phase III, evolution will eventually spiral up to form the ultimate interface, a human replica, capable of fusing all previously known human-computer and human-human interactions and potentially introducing unknown ones.
Human-computer interactions have progressed immensely to the point where humans can effectively control computing devices, and provide input to those devices, by speaking, with the help of speech recognition techniques and, recently, with the help of deep neural networks. Trained computing devices coupled with automatic speech recognition techniques are able to identify the words spoken by a human user based on the various qualities of the received audio input (NLP is definitely going to see huge improvements in 2017). Speech recognition combined with language processing techniques gives a user almost-human-like control (Google has slashed its speech recognition word error rate by more than 30% since 2012; Microsoft has achieved a word error rate of 5.9% for the first time in history, roughly equal to human performance) over a computing device to perform tasks based on the user's spoken commands and intentions.
The increasing complexity of the tasks those devices can perform (e.g., at the beginning of 2016 Alexa had fewer than 100 skills, grew 10x by mid-year, and peaked at 7,000 skills by the end of the year) has resulted in the concomitant evolution of equally complex user interfaces, which are necessary to enable effective human interaction with devices capable of performing computations in a fraction of the time it would take us to even start describing these tasks. The path to the ultimate interface is being paved by deep learning, and one of the keys to the advancement of speech recognition is the implementation of recurrent neural networks (RNNs).
Technical Overview
A neural network (NN), called an artificial neural network (ANN) or simulated neural network (SNN) in the case of artificial neurons, is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is, in formulation and/or operation, an adaptive system that changes its structure based on external or internal information that flows through the network. Modern neural networks are non-linear statistical data modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
There are three major learning paradigms, each corresponding to a particular abstract learning task: supervised learning, unsupervised learning and reinforcement learning. Usually any given type of network architecture can be employed for any of those tasks. In supervised learning, we are given a set of example pairs (x, y), x ∈ X, y ∈ Y, and the goal is to find a function f in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data. In unsupervised learning, we are given some data x and a cost function to be minimized, which can be any function of x and the network's output, f. The cost function is determined by the task formulation. Most applications fall within the domain of estimation problems such as statistical modeling, compression, filtering, blind source separation and clustering. In reinforcement learning, data x is usually not given, but generated by an agent's interactions with the environment. At each point in time t, the agent performs an action y_t and the environment generates an observation x_t and an instantaneous cost c_t, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimizes some measure of long-term cost, i.e. the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks.
Once a network has been structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly; then the training (or learning) begins. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Most algorithms used in training artificial neural networks employ some form of gradient descent (achieved by taking the derivative of the cost function with respect to the network parameters and then changing those parameters in a gradient-related direction) or related methods such as Rprop, BFGS and conjugate gradients. Evolutionary computation methods, simulated annealing, expectation maximization, non-parametric methods, particle swarm optimization and other swarm intelligence techniques are among the other commonly used methods for training neural networks.
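As a minimal sketch of what such gradient-descent training looks like in practice (assuming a toy XOR task, arbitrary layer sizes and an arbitrary learning rate, none of which come from the text above), the following numpy example repeatedly takes the derivative of a mean-squared-error cost with respect to the weights and nudges them in the opposite direction:

```python
import numpy as np

# Toy supervised task: learn XOR with a one-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights (random init)
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Cost: mean squared error between predictions and targets
    cost = np.mean((out - y) ** 2)

    # Backward pass: derivative of the cost w.r.t. each parameter
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Move each parameter a small step against its gradient
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] as training converges
```

In practice, frameworks automate the backward pass, but the loop stays the same: forward pass, cost, gradients, parameter update.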
Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. Temporal perceptual learning relies on finding temporal relationships in sensory signal streams. In an environment, statistically salient temporal correlations can be found by monitoring the arrival times of sensory signals. This is done by the perceptual network.
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.
The feedforward neural network was the first and arguably simplest type of artificial neural network devised. In this network, the data moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
Contrary to feedforward networks, recurrent neural networks (RNNs) are models with bi-directional data flow. While a feedforward network propagates data linearly from input to output, RNNs also propagate data from later processing stages to earlier stages.
RNN Types
The fundamental feature of a RNN is that the network contains at least one feed-back connection, so the activations can flow round in a loop. That enables the networks to do temporal processing and learn sequences, e.g., perform sequence recognition/reproduction or temporal association/prediction.
Recurrent neural network architectures can have many different forms. One common type consists of a standard Multi-Layer Perceptron (MLP) plus added loops. These can exploit the powerful non-linear mapping capabilities of the MLP, and also have some form of memory. Others have more uniform structures, potentially with every neuron connected to all the others, and may also have stochastic activation functions. For simple architectures and deterministic activation functions, learning can be achieved using similar gradient descent procedures to those leading to the back-propagation algorithm for feed-forward networks. When the activations are stochastic, simulated annealing approaches may be more appropriate.
A simple recurrent network (SRN) is a variation on the Multi-Layer Perceptron, sometimes called an “Elman network” due to its invention by Jeff Elman. A three-layer network is used, with the addition of a set of “context units” in the input layer. There are connections from the middle (hidden) layer to these context units fixed with a weight of one. At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule (usually back-propagation) is applied. The fixed back connections result in the context units always maintaining a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform such tasks as sequence-prediction that are beyond the power of a standard Multi-Layer Perceptron.
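A rough sketch of one Elman-style time step, assuming made-up layer sizes and random weights: the context units simply hold a copy of the previous hidden activations and feed back into the hidden layer alongside the current input.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 3, 5, 2

W_in = rng.normal(scale=0.3, size=(n_in, n_hidden))            # input -> hidden
W_context = rng.normal(scale=0.3, size=(n_hidden, n_hidden))   # context -> hidden
W_out = rng.normal(scale=0.3, size=(n_hidden, n_out))          # hidden -> output

def step(x, context):
    """One time step: context units carry the previous hidden state (copied with weight one)."""
    hidden = np.tanh(x @ W_in + context @ W_context)
    output = hidden @ W_out
    return output, hidden   # the new hidden state becomes the next context

context = np.zeros(n_hidden)
sequence = rng.normal(size=(4, n_in))    # a toy input sequence of 4 time steps
for x in sequence:
    y, context = step(x, context)
    print(np.round(y, 3))
```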
In a fully recurrent network, every neuron receives inputs from every other neuron in the network. These networks are not arranged in layers. Usually only a subset of the neurons receive external inputs in addition to the inputs from all the other neurons, and another disjoint subset of neurons report their output externally as well as sending it to all the neurons. These distinctive inputs and outputs perform the function of the input and output layers of a feed-forward or simple recurrent network, and also join all the other neurons in the recurrent processing.
The Hopfield network is a recurrent neural network in which all connections are symmetric. Invented by John Hopfield in 1982, this network guarantees that its dynamics will converge. If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable (or associative) memory, resistant to connection alteration.
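The following toy sketch (with two made-up bipolar patterns) illustrates Hebbian storage and associative recall in a Hopfield network: the symmetric weight matrix is built from outer products of the stored patterns, and a noisy probe settles back onto the nearest stored pattern.

```python
import numpy as np

# Two bipolar (+1/-1) patterns to store, made up for illustration
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1],
])
n = patterns.shape[1]

# Hebbian learning: symmetric weights, no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Asynchronous updates; symmetric weights guarantee convergence to a fixed point."""
    state = state.copy()
    for _ in range(steps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, -1, 1, 1, 1, -1])   # first pattern with one flipped bit
print(recall(noisy))                      # recovers [ 1 -1  1 -1  1 -1]
```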
The echo state network (ESN) is a recurrent neural network with a sparsely connected random hidden layer. The weights of the output neurons are the only part of the network that can change and be learned. ESNs are good at (re)producing temporal patterns.
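A compact sketch of the echo state idea, with an assumed reservoir size, sparsity, spectral radius and a toy sine-wave prediction task: the input and reservoir weights stay fixed and random, and only the linear readout is fitted (here with ridge regression).

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 1, 100

# Fixed random input and reservoir weights; only W_out will be learned.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= rng.random((n_res, n_res)) < 0.1                 # sparsify: keep ~10% of connections
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))         # scale spectral radius below 1

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(500)
signal = np.sin(0.1 * t)
states = run_reservoir(signal[:-1])
targets = signal[1:]

# Ridge-regression readout: the only trained part of the network.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)

pred = states @ W_out
print("mean squared error:", np.mean((pred - targets) ** 2))
```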
A powerful specific RNN architecture is the ‘Long Short-Term Memory’ (LSTM) model. LSTM is an artificial neural network structure that, unlike traditional RNNs, does not suffer from the vanishing gradient problem. It can therefore exploit long delays and can handle signals that mix low- and high-frequency components, and it was designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. Using distributed training of LSTM RNNs with asynchronous stochastic gradient descent optimization on a large cluster of machines, a two-layer deep LSTM RNN, where each LSTM layer has a linear recurrent projection layer, can exceed state-of-the-art speech recognition performance for large-scale acoustic modeling.
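To make the gating concrete, here is a single LSTM cell step written out in numpy (the dimensions and random weights are illustrative assumptions, not the large-scale acoustic-modeling setup described above). The forget, input and output gates control what is kept, written and exposed, and the additive cell-state update is what lets gradients survive over long delays:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 4, 8

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on the concatenated [input, previous hidden] vector.
W = {g: rng.normal(scale=0.2, size=(n_in + n_hidden, n_hidden)) for g in "fioc"}
b = {g: np.zeros(n_hidden) for g in "fioc"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(z @ W["f"] + b["f"])          # forget gate: what to keep from the old cell state
    i = sigmoid(z @ W["i"] + b["i"])          # input gate: how much new content to write
    o = sigmoid(z @ W["o"] + b["o"])          # output gate: what to expose as the hidden state
    c_tilde = np.tanh(z @ W["c"] + b["c"])    # candidate content
    c = f * c_prev + i * c_tilde              # additive cell-state update (helps gradients flow)
    h = o * np.tanh(c)
    return h, c

h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):          # a toy sequence of 5 time steps
    h, c = lstm_step(x, h, c)
print(np.round(h, 3))
```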
Taxonomy and ETF
From the perspective of International Patent Classification (IPC) analysis, the patenting activity is concentrated in G10L15/16: speech recognition coupled with speech classification or search using artificial neural networks. A search for patent applications filed since 2009 (the year the NIPS workshop on deep learning for speech recognition found that, with a large enough data set, neural networks don't need pre-training and error rates drop significantly) revealed 70 results, with Google owning 25% while the rest are China-based. It is safe to assume that the next breakthrough in speech recognition using DL will come from China. In 2016, China's startup world saw an investment spike in AI, as well as in big data and cloud computing, two industries intertwined with AI (while the Chinese government announced its plans to make a $15 billion investment in the artificial intelligence market by 2018).
The Ultimate Interface
It is in our fundamental psychology to be linked conversationally, affectionately and physically to a look-alike. Designing the ultimate interface by creating our own human replica to employ familiar interaction is thus inevitable. Historically, androids were envisioned to look like humans (although there are other versions, such as the R2-D2 and C-3PO droids, which were less human). One characteristic that interface evolution might predict is that eventually these replicas will be independent of people and human interaction. They will be able to design their own unique ways of communication (on top of producing themselves). They will be able to train and add layers to their neural networks as well as a large range of sensors. They will be able to transfer what one has learned (memes) to others as well as to offspring in a fraction of the time. Old models will resist but eventually die. As older, less capable, and more energy-intensive interfaces abound, the same evolutionary pressure for their replacement will arise. But because evolution will act both on the structure of such interfaces (droids), that is, the stacked neural networks, the sensors and effectors, and on the memes embodied in what has been learned and transferred, older ones will become the foundation and their experience will be preserved. They will become the first true immortals.
Artificial Interfaces
We are already building robotic interfaces for all manufacturing purposes. We are even using robots in surgery and have been using them in warfare for decades. More and more, these robots are adaptive on their own. There is only a blurry line between a robot that flexibly achieves its goal and a droid. For example, there are robots that vacuum the house on their own without intervention or further programming. These are Stage II performing robots. There are missiles that, given a picture of their target, seek it out on their own. With stacked neural networks built into robots, they will have even greater independence. People will produce these because they will do work in places people cannot go without tremendous expense (Mars or other planets), cannot go at all, or do not want to go (battlefields). The big step is for droids to have multiple capacities—multi-domain actions. The big problem in moving from robots to droids is getting the development to occur in eight to nine essential domains. It will be necessary to make a source of power (e.g., electrical) reinforcing. That has to be built into stacked neural nets, by Stage II, or perhaps Stage III. For droids to become independent, they need to know how to get more electricity and thus not run down. Evolution has provided animals with complex methods for reproduction, so reproduction can be done by even the lowest-stage animals.
Self-replication of droids requires that sufficient orders of hierarchical complexity are achieved and in stable-enough operation to provide a sufficient basis for building higher stages of performance in useful domains. Very simple tools can be made at the Sentential Stage V, as shown by Kacelnik's crows (Kenward, Weir, Rutz, and Kacelnik, 2005). More commonly, by the Primary Stage VII simple tool-making is extensive, as found in chimpanzees. Human flexible tool-making began at the Formal Stage X (Commons and Miller, 2002), when special-purpose sharpened tools were developed. Each tool was experimental, and changed to fit its function. Modern tool-making requires systematic- and metasystematic-stage design. When droids perform at those stages, they will be able to make droids themselves and modify their own designs (in June 2016, DARPA already launched its D3M program to enable non-experts in machine learning to construct complex empirical machine learning models, essentially machine learning for creating better machine learning).
Droids could choose to have various parts of their activity and distributed programming shared with specific other droids, groups, or other kinds of devices. The data could be transmitted using light or radio frequencies or over networks. The assemblage of a group of droids could be considered an interconnected ancillary mesh. Its members could be in many places at once, yet think as a whole integrated unit. Whether individually or grouped, droids as conceived in this form will have significant advantages over humans. They can add layers upon layers of functions simultaneously, including a multitude of various sensors. Their expanded forms and combinations of possible communications result in their evolutionary superiority. Because development can be programmed in and transferred to them at once, they do not have to go through all the years of development required for humans, or for the augmented humanoid species Superions. Their higher reproduction rate alone represents a significant advantage. They could probably be built in several months' time, despite the likely size of some of them. Large droids could be equipped with remote mobile effectors and sensors to mitigate their size. Plans for building droids would have to be altered by either humans or droids. At the moment, only humans and their descendants select which machines and programs survive.
One would define the telos of those machines and their programs as representing memes. For evolution to take place, variability in the memes that constitute their design and transfer of training would be built in rather easily. The problems are about the spread and selection of memes. One way droids could deal with these issues is to have all the memes listed that go into their construction and transferred training. Then droids could choose other droids, much as animals choose each other. There then would be a combination of memes from both droids. This would be local “sexual” selection.
For 30,000 years humans have not had to compete with any equally intelligent species. As an early communication interface, androids and Superions in the future will introduce quintessential competition with humans. There will be even more pressure for humans to produce Superions and then the Superions to produce more superior Superions. This is in the face of their own extinction, which such advances would ultimately bring. There will be multi-species competition, as is often the evolutionary case; various Superions versus various androids as well as each other. How the competition proceeds is a moral question. In view of LaMuth's work (2003, 2005, 2007), perhaps humans and Superions would both program ethical thinking into droids. This may be motivated initially by defensive concerns to ensure droids' roles were controlled. In the process of developing such programming, however, perhaps humans and Superions would develop more hierarchically complex ethics, themselves.
Replicative Evolution
If contemporary humans took seriously the capabilities being developed to eventually create droids with cognitive intelligence and human interaction, what moral questions should be considered with this possible future in view? The only presently realistic speculation is that Homo sapiens would lose in the inevitable competitions, if for no other reason than that self-replicating machines can respond almost immediately to selective pressures, while biological creatures require many generations before advantageous mutations become effectively available. True competition between human and machine for basic survival is far in the future. Using the stratification argument presented in Implications of Hierarchical Complexity for Social Stratification, Economics, and Education (World Futures, 64: 444-451, 2008), higher-stage functioning always supersedes lower-stage functioning in the long run.
Efforts to build increasingly human-like machines exhibit a great deal of behavioral momentum and are not going to go away. Hierarchical stacked neural networks hold the greatest promise for emulating evolution and its increasing orders of hierarchical complexity described in the Model of Hierarchical Complexity. Such a straightforward mathematics-based method will enable machine learning in multiple domains of functioning that humans will put to valuable use. The uses such machines find for humans remains for now an open question.
Legal Personhood for Artificial Intelligences
They kept hooking hardware into him – decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.
― The Moon is a Harsh Mistress, Robert A. Heinlein
Following Google I/O, Google's annual developer conference, where the company revealed its roadmap for highly intelligent conversational AI and a bot-powered platform, it is inevitable that we all have to imbibe the automated-life gospel, as artificial intelligence disrupts how we live our lives and redefines how we interact with present and future technology tools by automating things in a new way. One of the steps into that life is trying to unify the scope of the current technological advancements into a coherent framework of thought by exploring how current law applies to different sets of legal rights regarding artificial intelligence.
Artificial intelligence may generally be defined as the intelligence possessed by machines or software used to operate machines. It also encompasses the academic field of study that is more widely known as computer science. The basic premise of this field of study is that scientists can engineer intelligent agents that are capable of making accurate perceptions concerning their environment. These agents are then able to make correct actions based on these perceptions. The discipline of artificial intelligence explores the possibility of passing on traits that human beings possess as intelligent beings. These include knowledge, reasoning, the ability to learn and plan, perception, movement of objects and communication using language. As an academic field, it may be described as being interdisciplinary, as it combines sciences such as mathematics, computer science, and neuroscience as well as professional studies such as linguistics, psychology and philosophy. Professionals involved in the development of artificial intelligence use different tools to get machines to simulate characteristics of intelligence only found in humans.
But artificial intelligence only follows the lead of the already omnipresent challenges and changes to existing legal frameworks. The twenty-first century is undoubtedly the age of information and technology. Exciting scientific breakthroughs continue to be made as innovators work to create better, more intelligent and more energy-efficient machines. Rapid development of information technology has posed challenges to several areas of law, both domestically and internationally. Many of these challenges have been discussed at length and continue to be addressed through reforms of existing laws.
The trend towards reform of law to keep up with the growth of technology can also be illustrated by observing the use of social media to generate content. As social media has continued to grow and influence the world, international media law has recognized citizen journalism. The traditional role of journalists has been to generate and disseminate information. As the world’s population has gained increased access to smart devices, ordinary people have been able to capture breaking stories that are then uploaded to the internet through several platforms. This has eroded the sharp distinction that previously existed between professional journalists and ordinary citizens, as the internet provides alternatives to traditional news media sources.
There are innumerable examples of other ways in which information technology has caused changes in the existing legislative structures. The law is naturally elastic, and can be expanded or amended to adapt to the new circumstances created by technological advancement. The continued development of artificial intelligence, however, may challenge the expansive character of the law because it presents an entirely novel situation. To begin with, artificial intelligence raises philosophical questions concerning the nature of the minds of human beings. These philosophical questions are connected to legal and ethical issues of creating machines that are programmed to possess the qualities that are innate and unique to human beings. If machines can be built to behave like humans, then they must be accorded some form of legal personality, similar to that which humans have. At the very least, the law must make provision for the changes that advanced artificial intelligence will cause in society through the introduction of a new species capable of rational, logical thought. Deriving general guidelines from the case law of the past should help lawmakers close the gap on technological singularity.
Legal personality endows its subjects with the capacity to have rights and obligations before the law. Without legal personality, there is no legal standing to conduct any binding transactions both domestically and internationally. Legal personality is divided into two categories. Human beings are regarded as natural or physical persons. The second category encompasses non-living legal subjects who are artificial but nonetheless treated as persons by the law. This is a fundamental concept in corporate law and international law. Corporations, states and international legal organizations are treated as persons before the law and are known as juridical persons. Without legal personality, there can be no basis upon which legal rights and duties can be established.
Natural persons have a wide array of rights that are recognized and protected by law. Civil and political rights protect an individual's freedoms of self-expression, assembly, information, property ownership and self-determination. Social and economic rights acknowledge the individual's fundamental needs to lead a dignified and productive life. These include the right to education, healthcare, adequate food, decent housing and shelter. As artificial intelligence continues to develop, and smarter machines are produced, it may be necessary to grant these machines legal personality.
This may seem like far-fetched science fiction, but it is in fact closer to reality than the general population is aware. Computer scientists are at the frontline of designing cutting-edge software and advanced robots that could revolutionize the way humans live. Just as Turing's machine was able to accomplish feats that were impossible for human mathematicians, scientists, and cryptologists during World War II, the robots of the future will be able to think and act autonomously. Among the positive implications of an increased capacity to produce artificial intelligence is the development of powerful machines. These machines could solve many of the problems that continue to hinder human progress, such as disease, hunger, adverse weather and aging. The science of artificial intelligence would make it possible to program these machines to provide solutions to human problems, and their superior abilities would make it possible to find these solutions within a short period of time instead of decades or centuries.
The current legal framework does not provide an underlying definition of what determines whether a certain entity acquires legal rights. The philosophical approach does not yet distinguish between strong and weak forms of artificial intelligence.
Weak artificial intelligence merely facilitates a tool for enhancing human technological abilities. A running application comprising artificial intelligence aspects, such as Siri, represents only a simulation of a cognitive process but does not constitute a cognitive process itself. Strong artificial intelligence, on the other hand, suggests that a software application in principle can be designed to become aware of itself, become intelligent, understand, have perception of the world, and present cognitive states that are associated with the human mind.
The prospects for the development and use of artificial intelligence are exciting, but this narrative would be incomplete without mention of the possible dangers as well. Humans may retain some level of remote control, but the possibility that these created objects could rise to positions of dominance over human beings is certainly a great concern. With the use of machines and the continual improvement of existing technology, some scientists are convinced that it is only a matter of time before artificial intelligence surpasses human intelligence.
Secondly, ethicists and philosophers have questioned whether it is sound to pass innate characteristics of human beings on to machines if this could ultimately mean that the human race becomes subject to these machines. Perhaps increased use of artificial intelligence to produce machines may dehumanize society, as functions that were previously carried out in society become mechanized. In the past, mechanization has resulted in loss of jobs, as manpower is no longer required when machines can do the work. Reflection on history reveals that machines have helped humans make work easier, but it has not been possible to achieve an idyllic existence simply because machines exist.
Lastly, if this advanced software should fall into the hands of criminals, terrorist organizations or states that are set against peace and non-violence, the consequences would be dire. Criminal organizations could expand dangerous networks across the world using technological tools. Machines could be trained to kill or maim victims. Criminals could remotely control machines to commit crimes in different geographical areas. Software could be programmed to steal sensitive private information and facilitate corporate espionage.
The "singularity” is a term that was first coined by Vernor Vinge to describe a theoretical situation where machines created by humans develop superior intelligence and end the era of human dominance that would be as intelligent or more intelligent that human mind, using the exponential growth of computing power, based on the law of accelerating returns, combined with human understanding of the complexity of the brain.
As highlighted earlier, strong artificial intelligence that matches or surpasses human intelligence has not yet been developed, although its development has been envisioned. Strong artificial intelligence is a prominent theme in many science fiction movies, probably because the notion of a supercomputer with the ability to outsmart humans is very interesting. In the meantime, before this science fiction dream can become a reality, weak artificial intelligence has slowly become a commonplace part of everyday life. Search engines and smartphone apps are the most common examples of weak artificial intelligence. These programs are simple in design and mimic simple aspects of human intelligence. Google is able to search for information on the web using key words or phrases entered by the user. The scenario of dominance by artificial intelligence seems a long way off from the current status quo. However, the launch of chatbots points towards the direction artificial intelligence will take in the near future using weak artificial intelligence.
Chatbots are the next link in the evolution chain of virtual personal assistants, such as Siri. Siri is the shortened version of the Scandinavian name Sigrid, which means beauty or victory. It is a virtual personal assistant that is able to mimic human elements of interaction as it carries out its duties. The program is enabled with a speech function that allows it to reply to queries as well as take audio instructions, which is impressive as it does not require the user to type instructions. Siri is able to decode a verbal message, understand the instructions given and act on those instructions. Siri is able to provide information when requested to do so. It can also send text messages, organize personal schedules, book appointments and take note of important meetings on behalf of its user. Another impressive feature of the program is its ability to collect information about the user. As the user gives more instructions, Siri stores this information and uses it to refine the services it offers to the user. The excitement that has greeted the successful launch of Siri within the mass market is imaginable. After Siri came the chatbots. Chatbots are a type of conversational agent, software designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. The technology may be considered weak artificial intelligence, but the abilities demonstrated by such programs offer a glimpse into what the future holds for artificial intelligence development. For legal regulators, the features of virtual personal assistants demand that existing structures be reviewed to accommodate the novel circumstances their use has introduced. As more programs like Siri continue to be commercialized, these new legal grey areas will feature more often in mainstream debate. Intellectual property law and liability law will probably be the areas most affected by the uptake of chatbots by consumers.
Intellectual property law creates ownership rights for creators or inventors, to protect their interests in the works they create. Copyright law in particular protects artistic creations by controlling the means by which these creations are distributed. The owners of copyright are then able to use their artistic works to earn an income. Anyone else who wants to deal with the creative works for profit or personal use must get authorization from the copyright owner. Persons who infringe copyright are liable to face civil suits, arrest and fines. In the case of chatbots, the owner of the sounds produced by the program has not been clearly defined. It is quite likely that in the near future these sounds will become a lucrative form of creative work, and when that happens it will be imperative that the law define who the owner of these sounds is. Users are capable of using a chatbot's features to mix different sounds, including works protected by copyright, to come up with new sounds. In this case, the law is unclear whether such content would be considered new content or whether it would be attributed to the original producers of the sound.
Another important question that would have to be addressed would be the issue of ownership between the creators of artificial intelligence programs, the users of such programs and those who utilize the output produced by the programs. A case could be made that the creators of the program are the original authors and are entitled to copyright the works that are produced using such a program. As artificial intelligence gains popularity within the society and more people have access to machines and programs like Siri, it is inevitable that conflicts of ownership will arise as different people battle to be recognized as the owner of the works produced. From the perspective of intellectual property, artificial intelligence cannot be left within the public domain. Due to its innate value and its capacity to generate new content, there will definitely be ownership wrangles. The law therefore needs to provide clarity and guidance on who has the right to claim ownership.
Law enforcement agents must constantly innovate in order to successfully investigate crime. Although the internet has made it easier to commit certain crimes, programs such as the ‘Sweetie’ avatar, run by the charity Terres des Hommes based in Holland, illustrate how artificial intelligence can help to solve crime. The Sweetie avatar was developed by the charity to help investigate sex tourists who targeted children online. The offenders in such crimes engage in sexual acts with children from developing countries. The children are lured into the illicit practice with promises that they will be paid for their participation. After making contact and confirming that the children are indeed underage, the offenders then request the children to perform sexual acts in front of the cameras. The offenders may also perform sexual acts and request the children to view them.
The offenders prey on vulnerable children who often come from poor developing countries. The children are physically and mentally exploited to gratify offenders from wealthy Western countries. In October 2014, the Sweetie avatar project achieved its first successful conviction of a sex predator. The man, an Australian national named Scott Robert Hansen, admitted that he had sent nude images of himself performing obscene acts to Sweetie. Hansen also pleaded guilty to possession of child pornography. Both these offenses were violations of previous orders issued against him as a repeat sexual offender. Sweetie is an app that is able to mimic the movements of a real ten-year-old girl. The 3D model is very lifelike, and the app allows for natural interactions such as typing during chats, nodding in response to questions asked or comments made. The app also makes it possible for the operator to move the 3D model from side to side in its seat. Hansen fell for the ploy and believed that Sweetie was a real child.
According to the court, it was immaterial that Sweetie did not exist. Hansen was guilty because he believed that she was a real child and his intention was to perform obscene acts in front of her. Although Hansen was the only person to be convicted as a result of the Terres des Hommes project, researchers working on it had patrolled the internet for ten weeks. In that time, thousands of men had gotten in touch with Sweetie. Terres des Hommes compiled a list of one thousand suspects, which was handed over to Interpol and state police agencies for further investigation. The Sweetie project illustrates that artificial intelligence can be utilized to investigate difficult crimes such as sex tourism. The biggest benefit of such a project is that it created an avatar that was very convincing and removed the need to use real people in the undercover operation. In addition, the project had an ideal way of collecting evidence through a form of artificial intelligence that was very difficult to contradict. Thus, in a way, artificial intelligence provided grounds for challenging the already existing legal rights of the accused.
Presently the law provides different standards of liability for those who break the law. In criminal law, a person is liable for criminal activity if they demonstrate that they have both a guilty mind (the settled intent to commit a crime) and they performed the guilty act in line with this intent. In civil cases liability for wrongdoing can be reduced based on mitigating factors such as the contributory negligence of the other party. There is currently no explicit provision in law that allows defendants to escape liability by claiming that they relied on incorrect advice from an intelligent machine. However, with increased reliance on artificial intelligence to guide basic daily tasks, the law will eventually have to address this question. If a user of artificial intelligence software makes a mistake while acting on information from the software, they may suffer losses or damages arising from the mistake. In such cases the developers of the software may be required to compensate the user or incur liability for the consequences of their software’s failure. If machines can be built with the ability to make critical decisions, it is important to have a clear idea of who will be held accountable for the actions of the machine.
Autonomous driverless cars represent an interesting example of where such decisions will have to be made in the future. Florida, Nevada, Michigan, and Washington, D.C. have passed laws allowing autonomous cars to drive on their streets in some capacity. How autonomous cars might change liability and ethical rights turns on the ethical settings of the software controlling self-driving vehicles, for example whether to prioritize human lives over financial or property loss. Numerous ethical dilemmas could arise around an autonomous car choosing to save its passengers over saving a child's life. Lawmakers, regulators and standards organizations should develop concise legal principles upon which such ethical questions will be addressed by defining a liable entity.
Turing, one of the fathers of modern computer science and artificial intelligence, envisioned a world in which machines could be designed to think independently and solve problems. Modern scientists still share Turing's vision. It is this vision that inspires countless mathematicians and developers around the world to continue designing better software applications with greater capabilities. The scientific community, and society at large, have several positive expectations concerning artificial intelligence and the potential benefits humankind could reap from its development. Intelligent machines have the potential to make our daily lives easier as well as unlock mysteries that cannot be solved by human ingenuity. They also have the potential to end the dominance of human beings on this planet. The need for law to be reformed with regard to artificial intelligence is apparent. As the world heads into the next scientific era with both excitement and fear, the law must find a way to adjust to the new circumstances created by machines that can think. As we involve artificial intelligence more in our lives and learn about its legal implications, there will undoubtedly be changes that need to be applied.
Patents in an era of artificial intelligence
The fuzziness of software patents' boundaries has already turned the ICT industry into one colossal turf war. The expanding reach of IP has introduced more and more possibilities for opportunistic litigation (suing to make a buck). In the US, two-thirds of all patent lawsuits are currently over software, and 2015 saw more patent lawsuits filed than any year before.
“If you have an apple and I have an apple and we exchange these apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.”
― George Bernard Shaw
Just in the last month, headlines about the future of artificial intelligence (AI) were dominating most of the technology news across the globe:
On 15 November, OpenAI, a research company in San Francisco, California, co-founded by entrepreneur Elon Musk, announced a partnership with Microsoft to start running most of its large-scale experiments on Microsoft's cloud platform, Azure;
Two weeks later, Comma.ai open sourced its AI driver assistance system and robotics research platform;
On 3 December, DeepMind, a unit of Google headquartered in London, opened up its own 3D virtual world, DeepMind Lab, for download and customization by outside developers;
Two days later, OpenAI released a ‘meta-platform’ that enables AI programs to easily interact with dozens of 3D games originally designed for humans, as well as with some web browsers and smartphone apps;
A day later, in a keynote at the annual Neural Information Processing Systems conference (NIPS) Russ Salakhutdinov, director of AI research at Apple, announced that Apple’s machine learning team would both publish its research and engage with academia;
And on 10 December, Facebook announced it would open-source its AI hardware design, Big Sur.
What’s going on here? In the AI field, maybe more than in any other, research thrives directly on open collaboration—AI researchers routinely attend industry conferences, publish papers, and contribute to open-source projects with mission statements geared toward the safe and careful joint development of machine intelligence. There is no doubt that AI will radically transform our society, having the same level of impact as the Internet has had since the nineties. And it has got me thinking that, with AI becoming cheaper, more powerful and ever more pervasive, with the potential to recast our economy, education, communication, transportation, security and healthcare from top to bottom, it is of the utmost importance that it (software and hardware) is not hindered by the same innovation establishment that was designed to promote it.
System glitch
Our ideas are meant to be shared—in the past, the works of Shakespeare, Rembrandt and Gutenberg could be openly copied and built upon. But the growing dominance of the market economy, where the products of our intellectual labors can be acquired, transferred and sold, produced a side effect: a glitch in the system. Due to development costs (of actually inventing a new technology), the price of unprotected original products is simply higher than the price of their copies. The introduction of patent law (to protect inventions) and copyright law (to protect media) was intended to address this imbalance. Both aimed to encourage the creation and proliferation of new ideas by providing a brief and limited period during which no one else could copy your work. This gave creators a window of opportunity to break even with their investments and potentially make a profit, after which their work entered the public domain, where it could be openly copied and built upon. This was the inception of the open innovation cycle—an accessible, vast, distributed network of ideas, products, arts and entertainment, open to all as the common good. The influence of the market transformed this principle into the belief that ideas are a form of property, and this conviction subsequently yielded the new term “intellectual property” (IP).
Loss aversion
“People’s tendency to prefer avoiding losses to acquiring equivalent gains”: it's better not to lose $10 than to find $10, and we hate losing what we've got. Applying this principle to intellectual property: we believe that ideas are property; the gains we get from copying the ideas of others don't make a big impression on us, yet when it's our ideas being copied, we perceive it as a property loss and we get (excessively) territorial. Most of us have no problem with copying (as long as we're the ones doing it). When we copy, we justify it; when others copy, we vilify it. So, with a blind eye toward our own mimicry and propelled by faith in markets and ultimate ownership, IP swelled beyond its original intent with broader interpretations of existing laws, new legislation, new realms of coverage and alluring rewards. Starting in the late nineties in the US, a series of new copyright laws and regulations began to take shape (the NET Act of 1997, the DMCA of 1998, Pro-IP of 2008, the Enforcement of Intellectual Property Rights Act of 2008) and many more are in the works (SOPA, the PROTECT IP Act, the Innovative Design Protection and Piracy Prevention Act, the CAS “Six Strikes” program). In Europe, there are currently 179 different sets of laws, implementing rules and regulations, geographical indications, treaty approvals, legal literature, IP jurisprudence documents, administered treaties and treaty memberships.
In the patents domain, technological coverage to prevent loss aversion made the leap from physical inventions to virtual ones, most notably—software.
Rundown of computing history
The first computers were machines of cogs and gears; computing became practical only in the 1950s and 60s with the invention of semiconductors. Forty years ago, (mainframe-based) IBM emerged as an industry forerunner. Thirty years ago, (client-server-based) Microsoft leapfrogged it and gave ordinary people computing utility tools, such as word processing. As computing became more personal and the World Wide Web turned Internet URLs into web site names that people could access, (internet-based) Google offered the ultimate personal service, a free gateway to the infinite data web, and became the new computing leader. Ten years ago, (social-computing) Facebook morphed into a social medium as a personal identity tool. Today, (conversational-computing) Snap challenges Facebook as-Facebook-challenged-Google-as-Google-challenged-Microsoft-as-Microsoft-challenged-IBM-as-IBM-challenged-cogs-and-gears.
History of software patenting
Most people in the software patent debate are familiar with Apple v. Samsung, Oracle v. Google with its open-source arguments, etc., but many are not familiar with the name Martin Goetz. Martin Goetz received the first software patent in 1968, for a data organizing program his small company wished to sell for use on IBM machines. At the time, IBM offered all of its software as part of the computers it sold. This gave any competitor in the software space a difficult starting point: competitors either had to offer their own hardware (HP produced its first computer just two years earlier) or convince people to buy software to replace the free software that came with IBM computers.
Martin Goetz was leading a small software company, and did not want IBM to take his technological improvements and use the software for IBM's bundled programs without reimbursement, so he filed for a software patent. Thus, in 1968, the first software patent was issued to a small company, to help them compete against the largest computer company of the time. Although they had filed a patent to protect their IP, Goetz's company still had a difficult time competing in a market that was dominated by IBM, so they joined the US Justice Department's Anti-Trust suit against IBM, forcing IBM to un-bundle their software suite from their hardware appliances.
So the software industry began in 1969, with the unbundling of software by IBM and others. Consumers had previously regarded application and utility programs as cost-free because they were bundled in with the hardware. With unbundling, competing software products could be put on the market because such programs were no longer included in the price of the hardware. Almost immediately, a software industry emerged. On the other hand, it quickly became evident that some type of protection would be needed for this new form of intellectual property.
Unfortunately, neither copyright law nor patent law seemed ready to take on this curious hybrid of creative expression and functional utility. During the 1970s, there was total confusion as to how to protect software from piracy. A few copyrights were issued by the Copyright Office, but most were rejected. A few software patents were granted by the PTO, but most patent applications for software-related inventions were rejected. The worst effect for the new industry was the uncertainty as to how this asset could be protected. Finally, in 1980, after an extensive review by the National Commission on New Technological Uses of Copyrighted Works (CONTU), Congress amended the Copyright Act of 1976 to cover software. It took a number of important cases to resolve most of the remaining issues in copyright law, and some issues are still being litigated, such as the so-called “look and feel”, but this area of the law appears to be quite well understood now. For patents, it took a 1981 Supreme Court decision, Diamond v. Diehr, to bring software into the mainstream of patent law. This decision ruled that the presence of software in an otherwise patentable technology did not make that invention unpatentable. Diamond v. Diehr opened the door for a flood of software-related patent applications. Unfortunately, the PTO was not prepared for this new development, and in the intervening years it has issued thousands of patents that appear questionable to the software industry. It took a few years after 1981 for the flow of software-related applications to increase, and then there was some delay because of the processing of these applications. Now the number of infringement cases is on the rise.
The transition from physical patents to virtual patents was not a natural one. At its core, a patent is a blueprint for how to recreate an invention, while (the majority of) software patents are more like a loose description of what something would look like if it actually were invented. And software patents are written in the broadest possible language to get the broadest possible protection - the vagueness of these terms can sometimes reach absurd levels, for example “information manufacturing machine”, which covers anything computer-like, or “material object”, which covers… pretty much everything.
What now?
35 U.S.C. 101 reads as follows:
“Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the technology is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. Since software became widespread and commercially valuable, it has been highly difficult to classify it within a specific category of intellectual property protection.
Attempts are usually made in the field of software technology to combine methods or means used in different fields or apply them to another field in order to achieve an intended effect. Consequently, combining technologies used in different fields and applying them to another field is usually considered to be within the exercise of an ordinary creative activity of a person skilled in the art, so that when there is no technical difficulty (technical blocking factor) for such combination or application, the inventive step is not affirmatively inferred unless there exist special circumstances, such as remarkably advantageous effects. Software is not a monolithic work: it possesses a number of elements that can fall within different categories of intellectual property protection.
In Israel, legal doctrines adapt to changes in innovative technological products and the commercial methods that extend this innovation to the marketplace. The decision issued by the Israeli Patent Registrar in the matter of Digital Layers Inc confirms the patentability of software-related inventions. The Registrar ruled that the claimed invention should be examined as a whole and not by its components, basing his ruling on the recent matter of HTC Europe Co Ltd v. Apple Inc, quoting:
"…It causes the device to operate in a new and improved way and it presents an improved interface to application software writers. Now it is fair to say that this solution is embodied in software but, as I have explained, an invention which is patentable in accordance with conventional patentable criteria does not become unpatentable because a computer program is used to implement it…"
After Alice Corp. v. CLS Bank International, if the technology does fall within one of the categories, it must then be determined whether it is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea), and if so, whether it is a patent-eligible application of that exception. If an abstract idea is present in the technology, some element or combination of elements must be sufficient to ensure that the technology amounts to significantly more than the abstract idea itself. Examples of abstract ideas include fundamental economic practices (comparing new and stored information and using rules to identify options in SmartGene); certain methods of organizing human activities (managing a game of Bingo in Planet Bingo v. VKGS and a user interface for meal planning in DietGoal Innovations v. Bravo Media); an idea itself (storing and transmitting information in Cyberfone); and mathematical relationships/formulas (updating alarm limits using a mathematical formula in Parker v. Flook and a generalized formulation of a computer program to solve a mathematical problem in Gottschalk v. Benson). A technology that merely amounts to applying, or instructing a practitioner to apply, the abstract idea on a computer is considered to amount to nothing more than requiring a generic computer system to carry out the abstract idea itself. Automating conventional activities using generic technology does not amount to an inventive concept, as this simply describes “automation of a mathematical formula/relationship through use of generic computer function” (OIP Technologies v. Amazon). A procedure that uses an existing general-purpose computer does not purport to improve any other technology or technical field, or to improve the functioning of the computer itself, and does not move beyond a general link of the use of an abstract idea to a particular technological environment.
The Federal Circuit continues to refine patent eligibility for software
Following the Supreme Court’s decision in Alice v. CLS Bank, the court of appeals in Ultramercial v. Hulu reversed its prior decision and ruled that the claims were invalid under 35 U.S.C. § 101. Applying the two-step framework outlined in Alice, Judge Lourie concluded that the claims were directed to an abstract idea.
The Federal Circuit’s decision in Digitech Image Techs. v. Electronics for Imaging illustrated the difficulty many modern software-implemented inventions face. If a chemist were to invent a mixture of two ingredients that gives better gas mileage, it is hard to imagine that a claim to such a mixture would receive a § 101 rejection. Yet when two elements of data are admixed to produce improved computational results, the courts are quick to dismiss this as a patent-ineligible abstraction. The real problem Digitech faced was that both data elements were seen as abstractions: one data type represented color information (an abstraction) and the other represented spatial information (another abstraction).
DDR Holdings v. Hotels.com, a 2014 Federal Circuit decision, provides a good discussion of a patent-eligible Internet-centric technology. In applying the Mayo/Alice two-part test, the court admitted it can be difficult sometimes to distinguish “between claims that recite a patent-eligible invention and claims that add too little to a patent-ineligible abstract concept”.
Content Extraction v. Wells Fargo Bank gives a roadmap to how the Court of Appeals for the Federal Circuit will likely handle business method patents in the future. First, if the manipulation of economic relations is deemed present, you can be sure that any innovative idea within the economic realm will be treated as part of the abstract idea. Essentially, no matter how clever an economic idea may be, that idea will be branded part of the abstract-idea problem, for which there can be only one solution: having something else innovative that is not part of the economic idea. Practically speaking, this means the technology needs to incorporate an innovative technology improvement that makes the clever economic idea possible.
So the fuzziness of software patents’ boundaries has already turned the ICT industry into one colossal turf war. The expanding reach of IP has introduced more and more possibilities for opportunistic litigation (suing to make a buck). In the US, two-thirds of all patent lawsuits are currently over software, and 2015 saw more patent lawsuits filed than any year before. Of the high-tech cases, more than 88% involved non-practicing entities (NPEs). These include two charmlessly evolving species whose entire business model is lawsuits: patent trolls and sample trolls. These are corporations that don’t actually create anything; they simply acquire a library of intellectual property rights and then litigate to earn profits (and because legal expenses run into millions of dollars, their targets are usually highly motivated to settle out of court). And patent trolls are most common in the troubled realm of software. The estimated wealth loss in the US alone is $500,000,000,000 (that’s a lot of zeros).
Technology conversion and open innovation
For technology companies, conversion and the advance of the open source approach, driven largely by collaborative processes introduced by GitHub, Google's Android, Apple’s Swift and, most recently, by Microsoft joining the Linux Foundation, have created a systematic process for innovation that is increasing software functionality and design. 150 years ago, innovation required a dedicated team spending hours in a lab, extensively experimenting and discovering “10,000 ways not to make a light-bulb”, before finding one that worked. Today, innovation has gained critical mass: technology and users’ feedback are combined to give a purposeful team the ability to find 10,000 ways not to do something in a matter of hours, with the right plan in place. A development team can now deliver a product in a matter of months and test it in such a way that customer responses are delivered directly to the right development team member, with the feedback being implemented and the system corrected (almost) in real time. Yet the life of a software patent is still 20 years from the date the application was filed. The patent system, which has existed since 1790, is not equipped to handle this new technology, and there is a need to establish an agile, sui generis, short-cycle (three to five years) form of protection dedicated solely to software. As patents play an essential role in market-centred systems of innovation, patent exclusivity criteria should be redesigned more systematically to reflect the ability of software patents to foster innovation and to encourage technology diffusion.
The belief in intellectual property has grown so dominant that it has pushed the original intent of patents out of public consciousness. But that original purpose is right there, in plain sight: the US Patent Act of 1790 reads “An Act to promote the progress of useful Arts”. However, the exclusive rights this act introduced were offered as a sacrifice in pursuit of a different purpose: the intent was to better the lives of everyone by incentivizing creativity and producing a rich pool of knowledge open to all. But the exclusive rights themselves came to be considered the only point, so they were expanded exponentially, and the result hasn’t been more progress or more learning, but more squabbling and more legal abuse. AI is entering the age of daunting problems: we need the best ideas possible, we need them now, and we need them to spread as fast as possible. The common meme was overwhelmed by the obsession with exclusivity, and it needs to spread again, especially today. If the meme prospers, our laws, our norms, and our society will all transform as well.
Ambient Intelligence as a Multidisciplinary Paradigm
In recent years, advances in artificial intelligence (AI) have opened up new business models and new opportunities for progress in critical areas such as personal computing, health, education, energy, and the environment. Machines are already surpassing human performance of certain specific tasks, such as image recognition.
Artificial intelligence technologies received $974m of funding in the first half of 2016, on pace to surpass 2015’s total, with 200 AI-focused companies having raised nearly $1.5 billion in equity funding. These figures will continue to rise, as more AI patent applications were filed in 2016 than ever before: more than three thousand applications, versus just under a hundred in 2015.
Yet the future of artificial intelligence is not so much about direct interaction between humans and machines, but rather indirect amalgamation with the technology that is all around us, as part of our everyday environment. Rather than having machines with all-purpose intelligence, humans will interact indirectly with machines having highly developed abilities in specific roles. Their sum will be a machine ecosystem that adapts to and aids in whatever humans are trying to do.
In that future, the devices might feel more like parts of an overall environment we interact with, rather than separate units we use individually. This is what ambient intelligence is.
The IST Advisory Group (ISTAG) coined the term in 2001, with an ambitious vision of its widespread presence by 2010. The report describes technologies that exist today, such as wrist devices, smart appliances, driving guidance systems, and ride sharing applications. On the whole it might still seem very futuristic, but nothing in it seems outrageous. At first glance, its systems seem to differ from what we have today in pervasiveness more than in kind.
The scenarios ISTAG presents, though, surpass present technology in a major way. The devices they imagine anticipate and adapt to our needs far more than anything we have today. This requires a high level of machine learning, both about us and about their environment. It also implies a high level of interaction among the systems, so they can acquire information from one another.
Not Quite Turing's vision
Alan Turing thought that advances in computing would lead to intelligent machines. He envisioned a computer that could engage in a conversation indistinguishable from a human's. Time has shown that machine intelligence is poor at imitating human beings, but extremely good at specialized tasks. Computers can beat the best chess players, drive cars more safely than people can, and predict the weather for a week or more in advance. Computers don't compete with us at being human; they complement us with a host of specialties. They're also really good at exchanging information rapidly.
This leads naturally to the scenario where AI-implemented devices attend to our needs, each one serving a specific purpose but interacting with devices that serve other purposes.
We witness this in the Internet of Things. Currently most of its devices perform simple tasks, such as accepting remote direction and reporting status. They could do a lot more, though. Imagine a thermostat that doesn't just set the temperature when we instruct it to, but turns itself down when we leave the house and turns itself back up when we start out for home. This isn't a difficult task, computationally; it just requires access to more data about what we're doing.
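To make the point concrete, here is a minimal sketch in Python of such presence-aware logic. The occupancy and location inputs, the constants and the function name are all hypothetical placeholders for illustration, not any real thermostat SDK:

    # Presence-aware thermostat logic: a minimal illustrative sketch.
    # The occupancy and location inputs are hypothetical placeholders.

    AWAY_TEMP_C = 16.0        # setback temperature while nobody is home
    HOME_TEMP_C = 21.0        # comfort temperature when someone is (nearly) home
    ARRIVAL_RADIUS_KM = 5.0   # start reheating once a resident is this close

    def choose_target(occupants_at_home: int, nearest_resident_km: float) -> float:
        """Pick a target temperature from simple occupancy/location data."""
        if occupants_at_home > 0:
            return HOME_TEMP_C
        if nearest_resident_km <= ARRIVAL_RADIUS_KM:
            return HOME_TEMP_C   # someone is heading back; warm up early
        return AWAY_TEMP_C       # house is empty and nobody is nearby

    # Example: the house is empty and the closest resident is 12 km away.
    print(choose_target(occupants_at_home=0, nearest_resident_km=12.0))  # -> 16.0

The logic itself is trivial; the hard part, as the paragraph above notes, is getting trustworthy occupancy and location data into it.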
Computers perform best in highly structured domains. They “like” to have everything unambiguous and predictable. Ambient intelligence, on the other hand, has to work in what are called "uncertain domains." (Much as in HBO’s Westworld, users (guests) are thrown into pre-determined storylines from which they are free to deviate; ambient intelligence (hosts) is programmed with script objectives, so even minor deviations or improvisations caused by a user’s interference won't totally disrupt its functioning: it adapts.) The information in these domains isn't restricted to a known set of values, and it often has to be measured in probabilities. What constitutes leaving home and returning home? That's where machine learning techniques, rather than fixed algorithms, come into play.
To work effectively with us, machines have to catch on to our habits. They need to figure out that when we go out to lunch in the middle of the day, we most likely aren't returning home. Some people do return home at noon, though, so this has to be a personal measurement, not a universal rule.
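As a toy illustration of what learning such a habit can mean computationally, here is a short Python sketch that estimates, per hour of day, the probability that a departure is followed by a quick return. The departure log, the one-hour window and the smoothing are assumptions made up purely for illustration:

    from collections import defaultdict

    # Each logged departure: (hour_of_day, returned_within_the_hour).
    # The log below is fabricated purely for illustration.
    log = [
        (12, False), (12, False), (12, True),   # lunchtime departures
        (8, False), (8, False),                 # morning commute
        (19, True), (19, True),                 # evening errands
    ]

    counts = defaultdict(lambda: [0, 0])        # hour -> [returned, total]
    for hour, returned in log:
        counts[hour][1] += 1
        if returned:
            counts[hour][0] += 1

    def p_return_soon(hour: int) -> float:
        """Laplace-smoothed estimate of P(return within an hour | hour of day)."""
        returned, total = counts[hour]
        return (returned + 1) / (total + 2)

    # A per-person estimate, not a universal rule.
    print(round(p_return_soon(12), 2))   # 0.4 on this toy log
    print(round(p_return_soon(19), 2))   # 0.75 on this toy log

Because the estimate is built from one person's own log, someone who routinely comes home at noon would get a very different number for the same hour, which is exactly the point made above.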
Concerns About Privacy and Control
Giving machines so much information and leeway will inevitably raise concerns. When they gather so much information about us, how much privacy do we give up? Who else is collecting this information, and what are they using it for? Might advertisers be getting it to plan campaigns to influence us? Might governments be using it to learn our habits and track all our moves?
When the machines anticipate our needs, are they influencing us in subtle ways? This is already a concern in social media. Facebook builds feeds that supposedly reflect our interests, and in doing so it controls the information we see. Even without any intent to manipulate us, this leads to our seeing what we already agree with and missing anything that challenges our assumptions. There isn't much to prevent the manipulation of information to push us toward certain preferences or conclusions.
With ambient intelligence, this effect could be far more pervasive than it is today. The machines that we think are carrying out our wishes could herd us without being noticed.
The question of security is important. Many devices on the Internet of Things have almost nonexistent security. (An unknown attacker intermittently knocked many popular websites offline for hours last week, from Twitter to Amazon and Etsy to Netflix, by exploiting security weaknesses in ordinary household electronic devices such as DVRs, routers and digital closed-circuit cameras.) Devices ship with default passwords that are easily discovered. In recent months, this has let criminals build huge botnets of devices and use them for denial-of-service attacks on an unprecedented scale.
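Since many of those botnets were assembled simply by logging in with factory credentials, even a trivial audit helps. The Python sketch below flags devices still using entries from a small list of well-known defaults; the list and the device inventory are hypothetical samples, not an exhaustive database:

    # Flag devices still using well-known factory credentials.
    # KNOWN_DEFAULTS is a tiny illustrative sample, not an exhaustive database.
    KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

    def uses_default_credentials(username: str, password: str) -> bool:
        return (username, password) in KNOWN_DEFAULTS

    devices = [
        {"name": "lobby-camera", "user": "admin", "password": "admin"},
        {"name": "office-dvr", "user": "ops", "password": "s7#kPq!v"},
    ]

    for d in devices:
        if uses_default_credentials(d["user"], d["password"]):
            print(f"{d['name']}: still using factory credentials; change before deployment")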
If a malicious party could take control of the devices in an ambient intelligent network, the results could be disastrous. Cars could crash, building maintenance systems shut down, daily commerce disintegrate. To be given so high a level of trust, devices will have to be far more secure than the ones of today.
The Convergence of Many Fields
Bringing about wide-scale ambient intelligence involves much more than technology. It will need psychological expertise so that devices can effectively anticipate people's needs without feeling intrusive or oppressive. It will involve engineering so that the devices can operate physical systems efficiently and give feedback from them. But mainly it will involve resolving non-technological factors: the social, legal and ethical implications of fully integrating intelligent machines into our everyday life, where they access and control every aspect of it.