I specialize in identifying disruptive core technologies and strategic technology trends across early-stage startups, research universities, government-sponsored laboratories and commercial companies.

In my current role, I lead the sourcing of strategic technology investment opportunities and manage Dyson’s diligence and outreach processes, specifically in the U.S., Israel and China.

I write here (sporadically) on the convergence of science and engineering, with broader interests in novel disruptive technologies, cognitive psychology, human-computer interaction (HCI), philosophy, linguistics and artificial intelligence (AI).

Swarm intelligence: from natural to artificial systems

"What is not good for the swarm is not good for the bee."

― Marcus Aurelius

Complexity is inherently intertwined with data: if an operation can be decomposed into rudimentary steps whose number varies with data complexity, then exploiting a data sequence as a whole (the collective effort of colony members on a specific task), rather than a single data input, can lead to a much faster result. By forming a closed-loop system among large populations of independent agents, the ‘Swarm’, high-level intelligence can emerge that substantially exceeds the capacity of the individual participants. The intelligence of the universe is social.

Yet because of this complexity, when designing artificially intelligent systems, researchers have historically turned to building a single machine that performs a single task extremely well, and eventually better than a human (ANI), much as a honey bee can transfer pollen between flowers and plants better than a human ever could at that task. But a single honey bee has no capacity to extend the natural means of reproduction of honey bee colonies by locating the ideal site for a hive and building such an incredibly complex structure, just as DeepMind’s AlphaGo has no capacity to truly understand most ordinary English sentences (yet). AlphaGo learned to play an exceedingly intricate game, and plays it better than any human, by analyzing about 30 million moves made by professional Go players; once it could mimic human play, it moved to an even higher level by playing game after game against itself, closely tracking the results of each move. Is there a true limit to the high-level intelligence that could arise from linking independent agents like AlphaGo into a swarm of individuals working in collaboration, autonomously extracting vast amounts of training data from one another? Will humans be able to recognize the critical mass point beyond which our minds can’t foresee the end result?

Swarm intelligence (SI) is a branch of artificial intelligence that deals with artificial and natural systems composed of many individual agents coordinating through decentralized self-organization and control. This architecture models the collective behavior of social swarms in nature, such as honey bees, ant colonies, schools of fish and bird flocks. Although these agents (swarm individuals) are uncomplicated and non-intelligent per se, by working collectively they are able to achieve tasks necessary for their survival that their limited individual capabilities could never accomplish alone. The interaction between these agents can be direct (visual or audio contact), but more interestingly it can also be indirect. Indirect interaction is referred to as stigmergy: communication by modifying the physical environment, which thereby acts as the medium of communication (in nature, ants leave trails of pheromone on their way in search of food or building materials, and these pheromones signal, guide and recruit the ants that follow). Swarm intelligence algorithms have already been successfully applied in various problem domains, including finding optimal routes, function optimization, structural optimization, scheduling, and image and data analysis. Computational modeling of swarms has steadily increased as real-life applications have emerged. Existing models today include Artificial Bee Colony, Cat Swarm Optimization, Bacterial Foraging and Glowworm Swarm Optimization, but the two most commonly used models are Ant Colony Optimization and Particle Swarm Optimization.
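To make stigmergy concrete, here is a minimal sketch (in Python, with illustrative parameter values of my own choosing, not taken from any particular study) of the classic two-bridge setup: ants choose between two equal branches with probability proportional to the square of the pheromone already deposited on each, then deposit their own pheromone on the branch they crossed.

```python
import random

# A minimal stigmergy sketch: the "two bridge" setup. Each ant picks one of
# two equal branches with probability proportional to the square of the
# pheromone already there, then deposits its own pheromone. The nonlinear
# choice rule amplifies early random fluctuations, so the colony converges
# on a single branch without any direct ant-to-ant communication.
# All parameter values are illustrative assumptions.
def two_bridge(n_ants=1000, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0, 1.0]       # pheromone on branch A and branch B
    counts = [0, 0]              # crossings recorded per branch
    for _ in range(n_ants):
        wa, wb = pheromone[0] ** 2, pheromone[1] ** 2
        branch = 0 if rng.random() < wa / (wa + wb) else 1
        counts[branch] += 1
        pheromone[branch] += 1.0  # the environment itself stores the signal
    return counts
```

Running this, the split between the two branches ends up heavily skewed toward whichever branch happened to gain an early lead: the amplification-of-fluctuations behavior observed in real ant colonies, achieved purely through marks left in the shared environment.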

The Ant Colony Optimization (ACO) model draws inspiration from the social behavior of ant colonies. It is a familiar observation that a group of ants can jointly figure out the shortest path between their nest and a food source. Real ants lay down pheromones to direct each other, while simulated ants similarly record their positions and the quality of their solutions, so that later iterations converge on even better solutions. In the same way, artificial ant agents can locate optimal solutions by navigating a parameter space in which all possible options are represented.
ACO has been used in many optimization problems, including scheduling, assembly line balancing, the probabilistic Traveling Salesman Problem (TSP), DNA sequencing, protein-ligand docking and 2D-HP protein folding. More recently, Ant Colony Optimization algorithms have been extended for use in machine learning (deep learning) and data mining to enhance the telecommunication and bioinformatics domains (ordering problems in bioinformatics, such as sequence alignment and gene mapping, resemble routing problems, which makes them extremely efficient to solve using ACO).
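The mechanics described above can be sketched in a few dozen lines. The following is a toy ACO run on a tiny symmetric TSP; the city coordinates, the parameter values (evaporation rate, pheromone and heuristic exponents) and the `aco_tsp` function are all illustrative assumptions, not a production implementation.

```python
import math
import random

# Toy Ant Colony Optimization for a tiny symmetric TSP.
# Cities and all parameters below are illustrative assumptions.
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
N = len(CITIES)

def dist(i, j):
    (x1, y1), (x2, y2) = CITIES[i], CITIES[j]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % N]) for k in range(N))

def aco_tsp(n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    rng = random.Random(seed)
    pheromone = [[1.0] * N for _ in range(N)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(N)]
            while len(tour) < N:
                i = tour[-1]
                choices = [j for j in range(N) if j not in tour]
                # Edge attractiveness: pheromone^alpha * (1/distance)^beta
                weights = [pheromone[i][j] ** alpha * (1.0 / dist(i, j)) ** beta
                           for j in choices]
                tour.append(rng.choices(choices, weights=weights)[0])
            tours.append((tour, tour_length(tour)))
        # Evaporation keeps old, poor trails from dominating forever.
        for i in range(N):
            for j in range(N):
                pheromone[i][j] *= (1.0 - rho)
        # Deposit pheromone inversely proportional to tour length.
        for tour, length in tours:
            for k in range(N):
                i, j = tour[k], tour[(k + 1) % N]
                pheromone[i][j] += q / length
                pheromone[j][i] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len
```

On five cities the search space is trivially small, but the same loop (probabilistic construction, evaporation, quality-weighted deposit) is the skeleton behind the scheduling and sequencing applications mentioned above.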

Particle Swarm Optimization (PSO) is based on the sociological behavior associated with the flocking structure of birds. Birds can fly in large groups over large distances without colliding, maintaining an optimal separation between themselves and their neighbors. The PSO algorithm is a population-based search strategy that finds optimal solutions using a set of flying particles whose velocities are dynamically adjusted according to their neighbors in the search space and their historical performance. PSO operates on problems whose candidate solutions can be mapped to points in an n-dimensional solution space. The term particle refers to a population member, fundamentally described by its position in that n-dimensional space. Each particle is set into motion through the solution space with a velocity vector representing its speed in each dimension, and each particle has a memory storing its historically best solution (i.e., its best position ever attained in the search space, also called its experience). Due to its simplicity, efficiency and fast convergence, PSO has been applied to various real-life problems, ranging from combinatorial optimization to computational intelligence, signal processing to electromagnetic applications, and robotics to medical applications. PSO is also widely used to train the weights of feed-forward multilayer perceptron neural networks. Consequent applications include image classification, image retrieval, pixel classification, texture synthesis detection, character recognition, shape matching, image noise cancellation and motion estimation, all capabilities that could lead us toward a fully autonomous transportation system.
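The velocity update at the heart of PSO combines three pulls: the particle's own momentum, the memory of its personal best, and the swarm-wide best. A minimal sketch, minimizing a 2-D sphere function with parameter values that are illustrative assumptions rather than tuned settings:

```python
import random

# Minimal Particle Swarm Optimization sketch. The objective (a 2-D sphere
# function) and all parameter values are illustrative assumptions.
def pso(f, dim=2, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bound=5.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (own memory) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)   # minimum 0 at the origin
best, val = pso(sphere)
```

The "memory" the paragraph describes is `pbest`, and the neighborhood influence is `gbest`; replacing `gbest` with a local neighborhood's best gives the ring-topology variants often used in practice.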

Swarm intelligence, deploying nature-inspired models that converge on a preferred solution, has proved to provide simple yet robust methods for solving complex real-life problems across various fields of research, with incredible results. Yet enabling swarm intelligence by merging different autonomous narrow AI agents could irreversibly break the “human-in-the-loop” and accelerate its expansion beyond our knowledge or control. Are we going to be around to witness how smart an artificial swarm intelligence can get?

Improvisational intelligence as a domain-specific adaptation

Legal personhood for artificial intelligences