I specialize in identifying disruptive core technologies and strategic technology trends in early-stage startups, research universities, government-sponsored laboratories and commercial companies.

In my current role, I lead the sourcing of strategic technology investment opportunities and manage Dyson’s diligence and outreach processes, specifically in the U.S., Israel and China.

I write here (sporadically) on the convergence of science and engineering, with broader interests in novel disruptive technologies, cognitive psychology, human-computer interaction (HCI), philosophy, linguistics and artificial intelligence (AI).

AI roadmap for enterprise adoption

In today’s fast-paced technological landscape, artificial intelligence stands as a powerful catalyst reshaping industries and presenting new solutions to longstanding challenges. Projections indicate the AI market could soar to nearly $740 billion by 2030 and contribute a staggering $15.7 trillion to the global economy. While the advent of ChatGPT sparked tremendous excitement for AI’s transformative potential, practical implementation reveals that sophisticated enterprise adoption demands more than just large language models (LLMs). Leading organizations now recognize the importance of model diversity – integrating proprietary, third-party and task-specific models. This evolving multi-model approach creates massive potential for startups to develop foundational tools and drive the advancement of enterprise AI into the next era. For nascent enterprises navigating the complexities of AI and emerging technologies, achieving success hinges on precise execution and continuous adaptation.

The Starting Point: Data 

Every organization’s AI journey begins with its data. Most established enterprises have spent decades accumulating valuable datasets that exist entirely apart from the public internet. Customer support logs, point-of-sale transactions, IoT sensor data and electronic medical records (EMRs) – these business-specific datasets are the lifeblood for training enterprise AI models. However, in many cases this data resides in legacy on-premises systems built for transactional workloads, rather than analytics or machine learning. Much of it also contains sensitive personally identifiable information (PII) that requires careful data governance. Preparing these vast troves of enterprise data for AI presents a significant yet underexploited opportunity.

The challenge of data preparation gives rise to what some term “Data Ops 2.0” – a next-generation data engineering paradigm dedicated to priming data for advanced analytics and AI. This involves considerable efforts in labeling, cleaning, normalization and beyond. Startups are now emerging with innovative solutions to expedite, automate and scale this pivotal data preparation step across massive datasets. As next-generation AI models demand more data, the tools and infrastructure for rapidly preparing enterprise data will grow in strategic importance. Startups adept at transforming raw enterprise data into high-quality training data will emerge as pillars of the AI ecosystem.
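The cleaning and normalization work described above can be sketched in miniature. The record schema, field names and regex patterns below are illustrative assumptions, not any particular vendor's pipeline; a production "Data Ops 2.0" stack would add entity recognition, deduplication and schema validation at scale:

```python
import re

# Illustrative PII-masking and normalization pass over a raw support-log
# record. Field names and patterns are hypothetical examples.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def prepare_record(raw: dict) -> dict:
    # Normalize whitespace and case, then mask PII with placeholder tokens.
    text = " ".join(raw.get("message", "").split()).lower()
    text = EMAIL_RE.sub("[EMAIL]", text)   # mask email addresses
    text = PHONE_RE.sub("[PHONE]", text)   # mask phone numbers
    return {"id": raw["id"], "message": text}

record = {"id": 42, "message": "Contact  me at jane@example.com or 555-867-5309 ASAP"}
print(prepare_record(record))
```

The same function can then be mapped over millions of records with any distributed data-processing framework.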

Building Custom Foundation Models

Armed with clean and structured data, many organizations are keen to train proprietary foundation models in line with their specific industries or use cases. For example, a credit card company may harness decades of transaction data to train custom models that detect fraud, minimize risk or craft personalized customer incentives. Similarly, an insurance firm might train proprietary underwriting models using claims and policyholder data. While general-purpose LLMs like GPT-3 offer a strong starting point, they fall short in matching the specificity of models optimized for a company’s unique data assets.

Successfully training large proprietary models requires considerable technical expertise along with specialized infrastructure. Startups have emerged to democratize access to scalable infrastructure for in-house model development, employing techniques such as distributed training across GPU server clusters. Meanwhile, other startups provide turnkey solutions and managed services to assist enterprises in training custom foundation models on petabyte-scale internal datasets. As model sizes continue to swell from billions to trillions of parameters, the ability to efficiently train proprietary models will increasingly become a competitive advantage. Startups that provide superior tools and infrastructure to unlock the value in enterprise data will flourish in the market. 
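The distributed-training technique mentioned above rests on a simple core idea: each worker computes gradients on its own data shard, the gradients are averaged across workers (an all-reduce), and every replica applies the identical update. Real frameworks such as PyTorch's DistributedDataParallel do this across GPU clusters; this stdlib sketch simulates the workers sequentially in one process purely for illustration:

```python
# Miniature data-parallel training loop for the model y_hat = w * x.
# Each worker computes a gradient on its shard; an "all-reduce" averages
# the gradients; all replicas apply the same update in lockstep.

def grad_on_shard(w, shard):
    # d/dw of mean squared error on this worker's shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    # stands in for the cross-worker averaging a real cluster performs
    return sum(grads) / len(grads)

data = [(x, 3.0 * x) for x in range(1, 9)]         # ground truth: w = 3
shards = [data[0:4], data[4:8]]                    # one shard per worker
w, lr = 0.0, 0.01
for step in range(200):
    grads = [grad_on_shard(w, s) for s in shards]  # parallel on a real cluster
    w -= lr * all_reduce_mean(grads)
print(round(w, 3))  # → 3.0
```

Because every replica sees the averaged gradient, the result matches training on the full dataset on one machine, while the gradient computation itself scales out with the number of workers.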

The AI Infrastructure Boom

Effectively training large ML models requires specialized hardware, notably high-performance GPUs. As the appetite for AI accelerates, demand for hardware like GPUs has skyrocketed, leading to supply shortages, long lead times and exorbitant costs. For instance, Nvidia's flagship H100 AI GPUs sell for over $50,000 on secondary markets, while AMD’s competing MI300X starts at $10,000-15,000. This imbalance between supply and demand has fueled tremendous growth for startups innovating across the AI infrastructure stack.

In hardware, certain startups provide specialized enclosures packed with dense GPU servers and advanced liquid cooling systems, optimal for AI workloads. Other startups offer AI-tailored Kubernetes solutions, streamlining the setup and oversight of distributed training infrastructure. At the forefront of innovation, emerging chip startups are pioneering novel architectures like GPNPUs, TPUs, IPUs and Neurosynaptic chips. These present alternatives to GPUs for ML training and inference. As AI continues to permeate various industries, the demand for advanced infrastructure is poised for growth.

Moreover, AI adoption is shifting towards edge-based solutions, where models are deployed directly on devices rather than centralized cloud data centers to facilitate real-time decision making. This trend is driving innovation in model compression and the development of efficient inference chips tailored for edge devices. Startups that can facilitate on-device ML while preserving model accuracy stand to gain significantly. In this realm, Myriad forged a partnership with Quadric, a developer specializing in GPNPU architecture optimized for on-device artificial intelligence computing.
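One widely used model-compression technique for edge deployment is post-training quantization: storing weights as 8-bit integers plus a float scale, and dequantizing approximately on the device. This sketch shows the symmetric-scaling idea only; real toolchains also quantize activations and calibrate scales per channel:

```python
# Symmetric int8 post-training quantization of a weight vector: weights are
# stored as integers in [-127, 127] plus one float scale factor, shrinking
# storage roughly 4x versus float32 at a small accuracy cost.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero case
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.37, 0.05, 0.91]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The reconstruction error is bounded by half the scale factor per weight, which is why accuracy degrades gracefully rather than collapsing.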

Enhancing Public Foundation Models

While some organizations invest in proprietary foundation models, many others leverage publicly available models like GPT-3 as a starting point. However, applying these general-purpose models often falls short of delivering satisfactory results straight out of the box. Consequently, a new category of “model tuning” has emerged. Existing public LLMs are now often fine-tuned on an enterprise’s internal data to create customized solutions. For example, an e-commerce company could refine an open-source product description model based on its catalog data.

Startups have surfaced to provide managed services and tools that simplify the process of tuning and adapting public foundation models for various business use cases. Rather than investing in training models from scratch, this fine-tuning approach allows companies to augment the “pre-trained smarts” of public LLMs at a fraction of the cost and time. As published models rapidly improve due to open-source competition, enhancing them through transfer learning presents a compelling option for many enterprises embarking on their AI journey.

MLOps: From Experimentation to Production

Foundation models – whether fully custom or fine-tuned – wield significant power. However, transitioning them into large-scale production requires considerable software infrastructure. MLOps platforms have emerged to streamline the end-to-end lifecycle of deploying and managing machine learning in production. Nevertheless, these conventional MLOps tools must undergo adaptation to address the unique complexities of LLMs. Given their colossal size, insatiable data requirements and sensitivity to latency, purpose-built solutions become imperative.

Startups are racing to build LLM-specific MLOps or LLMOps stacks, facilitating the seamless deployment of models to meet enterprise demands for scalability, reliability and compliance. Ensuring robust model monitoring, explainability and governance is crucial as organizations build trust in AI systems and mitigate risks. LLMOps solutions tailored to oversee and optimize the effective utilization of foundation models stand to seize a massive opportunity as LLMs integrate into business workflows.
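The monitoring capability described above reduces to a simple core: record production metrics per request and raise alerts when they cross a budget. This sketch (the metric names, window size and thresholds are illustrative assumptions) tracks a rolling latency percentile and error rate; real LLMOps stacks also watch token cost, output drift and content-safety signals:

```python
from collections import deque

# Minimal production monitor for an LLM endpoint: rolling p95 latency and
# error-rate alerts over a sliding window of recent requests.
class ModelMonitor:
    def __init__(self, window=100, p95_budget_ms=800, max_error_rate=0.05):
        self.latencies = deque(maxlen=window)
        self.errors = deque(maxlen=window)
        self.p95_budget_ms = p95_budget_ms
        self.max_error_rate = max_error_rate

    def record(self, latency_ms, ok=True):
        self.latencies.append(latency_ms)
        self.errors.append(0 if ok else 1)

    def alerts(self):
        out = []
        ordered = sorted(self.latencies)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank p95
        if p95 > self.p95_budget_ms:
            out.append(f"p95 latency {p95}ms over budget")
        rate = sum(self.errors) / len(self.errors)
        if rate > self.max_error_rate:
            out.append(f"error rate {rate:.0%} over limit")
        return out

monitor = ModelMonitor()
for ms in [120, 140, 95, 1500, 130, 110, 1600, 125]:
    monitor.record(ms, ok=ms < 1000)               # treat slow calls as failures
print(monitor.alerts())
```

In practice these alerts would feed incident tooling and model-rollback automation rather than a print statement.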

Augmenting Foundation Models

LLMs boast immense power, but are also bound by inherent limitations. Their long-term memory is constrained by finite context windows, they lack native commonsense reasoning, and they stumble when confronted with tasks requiring symbolic or mathematical reasoning. This has spurred innovation in techniques that complement and enhance LLMs:

  • Neuro-symbolic AI combines the pattern recognition strengths of neural networks with formal logic and knowledge representation. Startups are pioneering new advancements to improve reasoning and explainability.

  • Reinforcement learning-based (RL) models learn through trial-and-error interactions, rather than static training data. Startups are leveraging RL in fields including robotics, scheduling and others.

  • Retrieval-augmented models incorporate external knowledge bases to supplement LLMs’ limited memory and knowledge. Startups are driving innovation in semantic search, knowledge graphs and LLM enhancement.
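The retrieval-augmented pattern in the last bullet is simple at its core: index a knowledge base, fetch the passages most relevant to a query, and prepend them to the LLM prompt. This sketch uses naive word-overlap scoring in place of a real vector index, and the documents and prompt template are invented for illustration:

```python
# Retrieval-augmented generation in miniature: score knowledge-base passages
# against the query, then build an augmented prompt for the LLM. Production
# systems use vector embeddings and an ANN index instead of word overlap.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available 24/7 for enterprise customers.",
    "All transaction data is retained for seven years for compliance.",
]

def retrieve(query, docs, k=1):
    q = set(query.lower().split())
    # rank documents by how many query words they share
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("How long are refunds processed for?"))
```

Because the knowledge lives outside the model, it can be updated without retraining – the property that makes retrieval augmentation attractive for fast-changing enterprise data.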

Embracing a portfolio approach – combining task-specific models, integrating hybrid techniques and strategically applying complementary AI methods alongside foundation models – represents the future of enterprise AI. Startups facilitating this integrated, multi-modal AI approach will power the next generation of intelligent business applications.

The Way Forward: AI Diversity & Infrastructure

It’s clear that building impactful enterprise AI goes beyond simply adopting individual public foundation models. Leading organizations recognize the importance of model diversity – from proprietary and third-party to fine-tuned and specialized, integrated and augmented. Succeeding in this emerging world of heterogeneous, multi-model AI demands a robust underlying infrastructure stack.

From handling petabyte-scale datasets to managing distributed training clusters, and ensuring model deployment observability to monitoring compliance, enterprises require capabilities across the full AI lifecycle. This greenfield opportunity extending beyond foundational models has given rise to revolutionary startups, from cutting-edge chips to LLM-Ops software to Industry 4.0 solution providers. By providing the picks and shovels to support model diversity and simplify infrastructure complexity, these startups will power the next phase of enterprise AI adoption.

The bottom line: while public LLMs have dominated the headlines, practical business adoption requires a much broader and deeper range of AI capabilities. Employing an ensemble approach with a variety of integrated models calls for extensive tooling and infrastructure. Across the tech stack – from data preparation to training systems to operations software – this multi-model reality presents a massive market opportunity. Startups that deliver the capabilities to handle model diversity while simplifying complex infrastructure will thrive in the coming Cambrian explosion of enterprise AI adoption.
