Beyond AGI: Engineering Programmable AI Systems

a16z Podcast · Jan 16, 2026 · English · 6 min read

The future of AI isn't just bigger models, but structured systems. Explore the shift from AGI to API, new programming paradigms, and the critical role of declarative intent.

Key Insights

  • Insight

    The pursuit of Artificial General Intelligence (AGI) through merely scaling model parameters and pre-training data is an outdated strategy, with leading labs already shifting focus.

    Impact

    This necessitates a re-evaluation of AI investment and research priorities, moving away from pure scale towards more complex system integration and post-training methodologies. Businesses should adapt their AI strategies to leverage these evolving development paradigms for greater efficiency.

  • Insight

    The core challenge in advanced AI development is not model 'capabilities' but the effective specification of human intent to build reliable, functional systems.

    Impact

    This insight highlights a critical bottleneck for entrepreneurship and product development in AI; companies must prioritize user-friendly and precise methods for users to articulate desires to AI, opening markets for new interface solutions.

  • Insight

    Artificial Programmable Intelligence (API) – building structured software systems *with* LLMs – is a more practical and impactful goal than unconstrained AGI for real-world applications.

    Impact

    This redefines the business case for AI, focusing on specialized, reliable, and interpretable systems that solve specific problems, rather than a single, ambiguous 'general' intelligence. This creates clear pathways for new tech ventures.

  • Insight

    Natural language is too ambiguous and traditional code too rigid for precisely defining complex human intent in AI systems, demanding a new abstraction layer.

    Impact

    This identifies a significant gap in current AI development tools, driving demand for innovative programming languages and frameworks that bridge the 'fuzzy' nature of human thought with the 'formal' requirements of computation, impacting software engineering practices.

  • Insight

    DSPy's 'signatures' provide a declarative abstraction for AI functions, separating the 'what' (intent) from the 'how' (model-specific implementation and optimization).

    Impact

    This allows for more modular, maintainable, and portable AI applications, reducing the development burden and increasing the longevity of AI software projects. It enables businesses to build more resilient AI products less dependent on specific model iterations.

  • Insight

    AI engineering requires a comprehensive software stack, similar to traditional computing's C-language abstraction over assembly, to achieve portability, maintainability, and allow for underlying optimizations.

    Impact

    This promotes the development of a robust AI software industry, fostering competition and innovation in tools, compilers, and frameworks. It implies a standardization of AI development practices, benefiting scalability and talent development.

Key Quotes

"Nobody wants intelligence, period. Yeah, we want something else, right? And that something else is always specific, or at least more specific."
"The question is, is that actually sufficient for making the best use and the most used of of these language models? It's not a problem of capabilities. It's a problem of actually we don't necessarily just need models if we want systems."
"I think AGI is fairly irrelevant. Like it's not the thing I'm interested in. I'm interested, I joke sometimes, I'm interested in API or artificial programmable intelligence."

Summary

For years, the discourse around Artificial Intelligence has been dominated by the pursuit of Artificial General Intelligence (AGI)—the dream of a single, all-encompassing intelligence. However, a compelling counter-narrative is emerging from leading AI researchers and practitioners: the true frontier lies not in endlessly scaling raw model capabilities, but in building robust, programmable AI systems. This paradigm shift emphasizes "Artificial Programmable Intelligence" (API) over a nebulous AGI, focusing on how humans can effectively specify and control intelligent behavior within software.

The AGI Illusion and the Shift to API

The notion that ever-larger language models (LLMs) and vast pre-training data alone will solve all problems is rapidly becoming an outdated "playbook." While models continue to improve, the frontier labs are already moving beyond simple scaling laws, investing heavily in post-training pipelines, retrieval, web search, tool use, and agent training. The challenge is no longer merely one of model capabilities, but rather how to harness these powerful models into reliable, interpretable, and maintainable systems.

> "The question is, is that actually sufficient for making the best use and the most used of of these language models? It's not a problem of capabilities. It's a problem of actually we don't necessarily just need models if we want systems."

The Bottleneck of Intent Specification

One of the most significant hurdles in AI development is the imprecise nature of intent specification. Natural language, while human-friendly, is inherently ambiguous, leading to misinterpretations and unpredictable model behavior. Conversely, traditional code is too rigid, demanding an exhaustive level of detail that often extends beyond what a developer knows or needs to specify for an intelligent component. This gap necessitates a new abstraction layer—a way to declare intent without drowning in implementation specifics or relying on vague prompts.

DSPy: A New Abstraction Layer for AI Engineering

This is where frameworks like DSPy come into play. Inspired by the leap from assembly to C in traditional computing, DSPy offers a structured, programmatic approach to building AI applications. Its core innovation lies in "signatures"—formal declarations of AI functions that define inputs, outputs, and their fuzzy, natural-language-based transformations. These signatures isolate ambiguity, allowing developers to express what they want (declarative intent) while decoupling it from how the underlying LLM will achieve it.
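
To make this concrete, here is a minimal sketch of what a signature and module might look like in DSPy. The task, field names, and model identifier are illustrative assumptions rather than the podcast's own example, and exact API details vary across DSPy versions.

```python
import dspy

# Point DSPy at a language model (the model name here is illustrative).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A signature declares the *what*: typed inputs and outputs plus a short
# natural-language statement of intent in the docstring.
class SummarizeTranscript(dspy.Signature):
    """Summarize a podcast transcript into three key takeaways."""
    transcript: str = dspy.InputField()
    takeaways: list[str] = dspy.OutputField(desc="three concise takeaways")

# A module chooses the *how*: here, chain-of-thought prompting over the signature.
summarize = dspy.ChainOfThought(SummarizeTranscript)

result = summarize(transcript="...podcast transcript text...")
print(result.takeaways)
```

Note that the signature never mentions prompts or model-specific formatting; swapping ChainOfThought for another module, or one model for another, leaves the declared intent untouched.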

DSPy programs leverage standard programming constructs like control flow, modularity, and variable passing, which are difficult or impossible to manage effectively with raw prompts. Critically, DSPy includes optimizers that dynamically improve the performance of these structured AI programs, adapting to specific models and tasks, ensuring that developers don't sacrifice quality for better abstractions.
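
Continuing the sketch above, the snippet below gives a rough sense of how such an optimizer is invoked: it compiles the program against a tiny training set and a hand-written metric. The metric, examples, and choice of BootstrapFewShot are illustrative assumptions; DSPy ships several optimizers and their APIs differ across versions.

```python
import dspy

# A metric scores a prediction against a labeled example
# (DSPy's convention: (example, prediction, trace) -> bool or float).
def takeaway_metric(example, prediction, trace=None):
    return len(prediction.takeaways) == 3

# A toy training set; real programs would use more (and more realistic) examples.
trainset = [
    dspy.Example(
        transcript="...transcript of an episode about AI systems...",
        takeaways=[
            "scaling alone is not enough",
            "intent must be specified precisely",
            "systems matter more than raw models",
        ],
    ).with_inputs("transcript")
]

# `summarize` is the ChainOfThought program from the sketch above.
optimizer = dspy.BootstrapFewShot(metric=takeaway_metric)
compiled_summarize = optimizer.compile(summarize, trainset=trainset)
```

The compiled program keeps the same interface as the original; the optimizer only changes how the underlying prompts and demonstrations are constructed.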

> "I think AGI is fairly irrelevant. Like it's not the thing I'm interested in. I'm interested, I joke sometimes, I'm interested in API or artificial programmable intelligence."

The Future of AI Software Development

The vision is clear: AI software engineering needs a comprehensive stack where the "language" of intent is well-defined. This stack will include modular components (signatures), inference-time strategies (modules like Chain-of-Thought), and data-driven optimization algorithms. This approach ensures that as models evolve, the core abstractions and systems built upon them remain stable and maintainable, much like C programs benefit from compiler optimizations across different hardware.
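
As a small illustration of that portability claim, a declarative program can, at least in principle, be re-run against a different model without rewriting the intent. The model identifiers below are illustrative only.

```python
import dspy

# The same one-line signature ("question -> answer") runs unchanged against
# different backends; only the configured LM changes.
for model_name in ["openai/gpt-4o-mini", "anthropic/claude-3-5-sonnet-20241022"]:
    dspy.configure(lm=dspy.LM(model_name))
    qa = dspy.Predict("question -> answer")  # string-form signature: pure intent
    print(model_name, "->", qa(question="Why separate intent from implementation?").answer)
```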

> "Nobody wants intelligence, period. Yeah, we want something else, right? And that something else is always specific, or at least more specific."

This shift moves us beyond the "prompt engineering" treadmill towards a principled, engineering-driven approach. It allows for the creation of reliable, scalable, and human-aligned AI systems by formalizing the interaction between human intent and machine intelligence. The emphasis is on building a robust ecosystem where researchers, academics, and developers collaborate on models, optimizers, and modules, ultimately benefiting all AI applications.

Action Items

AI developers and engineering teams should explore and adopt structured programming paradigms (e.g., DSPy) for LLMs to build modular and maintainable AI applications.

Impact: This action can significantly reduce technical debt, improve system reliability, and accelerate the development cycle for AI-powered products across industries. It fosters a more mature approach to AI software creation.

Businesses and technology leaders should invest in developing and integrating abstraction layers that decouple AI application logic from rapidly changing LLM capabilities.

Impact: This strategic investment ensures long-term system resilience, reduces dependency on specific model providers, and protects against obsolescence, translating to more stable and adaptable AI-driven business operations.

AI researchers should focus on creating advanced optimization algorithms and inference techniques that bridge high-level, declarative intent with current LLM limitations.

Impact: This research will drive practical improvements in AI system performance, efficiency, and scalability, making sophisticated AI accessible for a wider range of scientific and business applications.

AI labs developing foundational models should prioritize designing 'programmable models' that natively understand and integrate with structured software systems.

Impact: This shift would enable seamless integration of LLMs into complex enterprise architectures, accelerating the adoption of AI across various business functions and improving overall system coherency and control.

Keywords

artificial general intelligence, artificial programmable intelligence, LLM programming, DSPy framework, AI software development, prompt optimization, declarative AI, AI system design, model capabilities, AI engineering