I specialize in identifying disruptive core technologies and strategic technology trends in early-stage startups, research universities, government-sponsored laboratories and commercial companies.

In my current role, I lead the sourcing of strategic technology investment opportunities and manage Dyson’s diligence and outreach processes, specifically in the U.S., Israel and China.

I write here (sporadically) on the convergence of science and engineering, with broader interests in novel disruptive technologies, cognitive psychology, human-computer interaction (HCI), philosophy, linguistics and artificial intelligence (AI).

Human preference for machine- versus human-based judgement: a balance theory perspective

As algorithms become more ubiquitous and permeate everyday life, so does human reliance on them to replace the cognitive processes of social behavior, specifically decision-making. At the most general level, according to Merriam-Webster’s dictionary, an algorithm is “a step-by-step procedure for solving a problem or accomplishing some end”. Algorithms are referred to herein as sets of complex rules (mathematical logic) that manifest in a sequence of operations for performing a computation or solving a given problem in the context of human decision-making processes. More pertinently, these algorithms are machine-learned (not explicitly constructed by humans), designed to “learn” (to approximate and improve) their function as more data are inputted, and to “train” a computer program to automatically recognize statistical patterns in a large set of data.

Relatedly, human decision-making is subject to cognitive distortions—biases, false assumptions, overconfidence, disregard of contradictions, misrepresentation, emotional state—which fundamentally degrade the quality of effective and logical conclusions, especially as the amount of information to process increases (total cognitive load limitations). According to Meehl (1954), actuarial (mechanical), data-driven algorithms can predict human behavior better than trained clinical psychologists. Further, in the context of decision-making performance, studies have shown that algorithmic models achieve accuracy comparable to or exceeding human ability in general clinical judgement, interpretation of radiographs, judicial decisions, financial returns on investment and pathological morphology diagnosis.

Algorithms shape modern society in profound ways, facilitating data-driven decisions that often have significant direct impacts on humans. Algorithmic reasoning systems are deployed to cognitively aid or replace decision-making processes that would otherwise be performed by humans, from college admissions and psychiatric and medical diagnoses to predictive policing and criminal justice risk assessment. In the United States, correctional agencies routinely rely on algorithmic criminal risk assessments to “predict” whether someone convicted of a felony will recidivate. Corporations in the private sector across the globe currently use machine learning systems in job interviews to evaluate candidates’ facial expressions and body language, voice and emotional state, and overall personality compatibility. Automated resume screening is a well-established human resources recruiting practice for eliminating applicants who do not meet the qualifications of a job listing. Several multinational organizations—tech companies, research and development centers, manufacturers—have also substantially expanded their capital investment in associated software technologies to help improve the customer experience, identify emerging market opportunities and drive growth.

Despite this proliferation of algorithms, human hesitation towards algorithmic decision-making systems persists. A substantial body of literature has shown a tendency for people to be averse to using (empirically superior) algorithmic recommendations to improve their decisions. From hesitance to step into an elevator without an operator, to fear of riding in a fully self-driving vehicle, human resistance to relinquishing decision-making to automated systems extends across research domains and practical applications. Despite evidence that algorithmic predictions yield higher success rates than intuitive human estimations, people often choose to rely on their own or someone else’s judgment, even at the expense of their performance.

Contrary to the notion of algorithm aversion, recent empirical work has examined the opposite phenomenon of “algorithm appreciation”: people preferring algorithm-driven reasoning and decision-making to that of a human (Logg, Minson & Moore, 2018). Similarly, when solving a logic task, participants have been found to place greater confidence in an algorithm than in other people. Positive human perceptions of algorithmic decision-making tools can also depend on the context of those decisions and the objective characteristics of the tasks the tools are assigned to solve. Further, subjects given even a modest amount of control over the decision through an adjustment mechanism tend to rely on the algorithm’s outcome more than those who must simply accept its recommendation. The level of human acceptance of algorithmic decisions is also likely to increase with repeated interactions, as people become more familiar with the automation systems in question. As with any emerging disruptive technology, a profound understanding of why people do or do not trust algorithms is needed in order to modify and revise them to improve their chances of acceptance.

Researchers from a broad range of disciplines have proposed explanations: people’s belief in their own superior expertise, performance rewards that make subjects more likely to rely on their own judgement, and overconfidence. People are also less likely to trust algorithmic judgements than human ones once they have seen the algorithm err, even when its overall results are superior, because they benchmark the algorithm against an idealized target degree of accuracy rather than comparing its performance to their own. Although studies comparing human preference for machine- versus human-based judgement exist, they have not suggested (cognitive) balance theory as a possible explanation.

In the psychology of motivation, balance theory, proposed by the social psychologist Fritz Heider (1946, 1958), describes the triadic relational structure among two individuals and an object (or an impersonal entity) based on positive (appreciation) or negative (aversion) sentiment. The central tenets of balance theory concern an individual’s change in relational attitude based on cognitive dissonance principles—that is, when relational structures become unbalanced due to dissimilarities, the cognitive stresses associated with psychological pressure (e.g., an uncomfortable feeling of negative affect) push an individual’s sentiments (like, dislike) into a congruent pattern. Balance theory holds that social attitudinal relations strive to be balanced according to an individual’s interpersonal relations—equally positive or negative. In other words, balance theory posits that humans like their internal experience (e.g., attitudes) to be congruent with their external experience (e.g., behavior).

Generally, a triadic relation between two individuals and an object comprises: (a) the relation between a first (focal) individual P and a second individual O; (b) the relation between the second individual O and a third object (entity) X; and (c) the relation between the first individual P and the object X (also described as the P-O-X model). According to Heider’s balance theory, the relation between two persons can be positive (i.e., the two individuals like each other) or negative (i.e., the two individuals dislike each other). A triad is balanced when it includes either zero or an even number of negative relations (+ × + × + = +, or – × – × + = +). In contrast, a triad is imbalanced when it includes an odd number of negative relations (– × + × + = –). The preference for balance typically drives changes in directed relations and is characterized by transitivity, influencing the formation of new attitudes. To restore balance in the triad, P must change attitudes toward O or X, or alter his or her beliefs about the O-X relation.
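The sign-product rule above can be made concrete in a few lines of code. Below is a minimal Python sketch (my own illustration, not drawn from Heider’s work) that encodes each relation as +1 or −1 and tests whether the triad is balanced:

```python
def is_balanced(p_o: int, o_x: int, p_x: int) -> bool:
    """Heider's P-O-X balance rule as a sign product.

    Each relation is +1 (liking / positive unit relation) or -1 (disliking).
    A triad is balanced when the product of the three signs is positive,
    i.e., when it contains zero or an even number of negative relations.
    """
    for sign in (p_o, o_x, p_x):
        if sign not in (+1, -1):
            raise ValueError("each relation must be +1 or -1")
    return p_o * o_x * p_x > 0

# The configurations from the text:
assert is_balanced(+1, +1, +1)      # + x + x + = +  (balanced)
assert is_balanced(-1, -1, +1)      # - x - x + = +  (balanced)
assert not is_balanced(-1, +1, +1)  # - x + x + = -  (imbalanced)
```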

While the original formulation of balance theory was designed to recognize patterns of interpersonal relations, the distinction between balanced and imbalanced triads has also been applied to study perceived attitudes towards objects (or impersonal entities). For example, balance theory has been employed to examine how consumers’ perceptions of celebrity endorsers affect their attitudes toward products.

Similarly, Heider’s balance theory could be applied to offer a possible explanation of human reliance on algorithms for prospective events, by examining the relation between a first individual P (a human) and a second individual O (an algorithm developer or a company), the relation between the second individual O and an algorithm X (assumed to be positive), and the relation between the first individual P and the algorithm X. In particular, a positive or negative relation can result from the perception that the algorithm developer and its algorithms do or do not belong together (unit relations). Positive unit relations (appreciation) can result from any kind of closeness, similarity, or proximity a subject could experience in relation to the developer and its algorithm (e.g., an electric vehicle developer and a self-driving vehicle algorithm). In contrast, negative unit relations (aversion), based on the subject’s perception of the algorithm developer, can result from distance, dissimilarity, or distinctness (e.g., a social media company and an investment robo-advisor algorithm).
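Using the same hypothetical is_balanced helper from the sketch above, this human-developer-algorithm triad can be worked through directly. The sign assignments below are illustrative assumptions, with only the O-X relation fixed as positive per the text:

```python
# P = human subject, O = algorithm developer/company, X = the algorithm.
# The O-X relation is assumed positive: the developer endorses its algorithm.
o_x = +1

# Subject trusts the developer (positive unit relation, e.g. an EV maker
# and its self-driving algorithm). Balance pressures P toward liking X.
print(is_balanced(p_o=+1, o_x=o_x, p_x=+1))  # True  -> stable appreciation
print(is_balanced(p_o=+1, o_x=o_x, p_x=-1))  # False -> pressure to revise

# Subject distrusts the developer (negative unit relation, e.g. a social
# media company offering a robo-advisor). Rejecting X restores balance.
print(is_balanced(p_o=-1, o_x=o_x, p_x=-1))  # True  -> stable aversion
```

In other words, so long as the O-X link is held positive, the balanced P-X attitude simply inherits the sign of the P-O relation: trust in the developer propagates to appreciation of the algorithm, and distrust to aversion.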

Speculating through design fiction