AI agents collaborate to autonomously achieve specific goals by reasoning, planning, interacting, executing, and using tools. They rely on self-learning and continuous improvement and can act individually or collaboratively. These software-based systems are expected to become fundamental to enterprise software, using large language models (LLMs) and existing tools to perform actions.
As agents become more advanced, we will need a global network of thousands of interoperable agents (an Internet of Agents) capable of communicating and collaborating across diverse environments using standard protocols.
Currently, agents require significant human oversight and manual setup. In a future state, they will have the ability to dynamically discover and form execution graphs, find the right tools, and adapt to changing environments with minimal or zero human involvement. Autonomy, agency, and dynamism together impact the design, creation, and operation of AI systems, from hand-crafted to self-organizing agents.
We can draw parallels between the evolution of AI agents and the five levels of autonomy in self-driving vehicles. It is expected that AI agents will operate in a similar spectrum of intelligence. Understanding this progression and its business implications is key to envisioning the future of intelligent systems in our automated world.
Enterprises are already building agents to automate internal processes and workflows. As this trend continues, they will not only develop their own agents but also integrate external ones, creating a dynamic ecosystem of intelligent, autonomous agents. As a result, we expect thousands of agents for use in a global system of interconnected resources.
These agents will be interoperable: deployed in different locations and of different origins, they will communicate and collaborate over well-known communication channels using standard protocols. Enterprises will create new agents for their own needs but will also consume agents developed externally.
| Level | Description |
| --- | --- |
| Level 1: Rule-based automation | All systems and processes are rule-based and script-based. They are reactive, with predetermined actions. |
| Level 2: ML-assisted automation | ML-based agents assist with specific tasks, which remain deterministic. Systems exhibit constrained decision-making. |
| Level 3: Partial automation | LLM-based agents provide autonomous planning for well-defined use cases. Agents create and execute plans. |
| Level 4: High automation | AI agents operate mostly autonomously and have access to sensors to make observations. They learn from experience and generalize based on context. |
| Level 5: Fully autonomous | Agents operate fully autonomously without human intervention, exhibit original thinking, and embody personality and emotion. They solve problems not envisioned during initial training. |
At Level 1, agentic AI systems predominantly rely on rules or scripts to execute predetermined actions based on specific inputs. These systems are purely reactive and require significant human oversight. For instance, interactive voice response (IVR) systems route customer calls or retrieve information based on customer identifiers. Although they handle simple tasks effectively, these systems lack adaptability and learning capabilities.
Level 1 agents form the foundation by automating repetitive and straightforward tasks. Their reliance on predetermined scripts limits their ability to handle unexpected situations or complex decision-making, necessitating human intervention when variables change or tasks exceed predefined parameters.
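A minimal sketch of what Level 1 looks like in code: a hypothetical IVR-style router that maps fixed inputs to predetermined actions. The rule table and messages are illustrative, not from any specific product.

```python
# Level 1 sketch: rule-based, reactive, no learning or adaptation.
# ROUTING_RULES and its messages are hypothetical examples.

ROUTING_RULES = {
    "billing": "Transferring you to the billing department.",
    "support": "Transferring you to technical support.",
    "hours": "We are open 9am-5pm, Monday through Friday.",
}

def route_call(menu_choice: str) -> str:
    """React with a predetermined action; escalate anything unexpected."""
    action = ROUTING_RULES.get(menu_choice.strip().lower())
    if action is None:
        # No adaptability: any input outside the script needs a human.
        return "Please hold while we connect you to an operator."
    return action

print(route_call("billing"))
print(route_call("refund dispute"))  # outside the script, falls through to a human
```

The hard-coded fallback line is the point: the moment a variable changes or a task exceeds the predefined parameters, the system can only hand off.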
Level 2 introduces machine learning: some task steps are ML-augmented, but the tasks themselves remain deterministic. The software development lifecycle sees its first instances of constrained decision-making. Examples include categorizing customer incidents into topics or analyzing employee survey results for patterns. Human intervention is still required at some point.
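The incident-categorization example can be sketched as follows. A real Level 2 system would use a trained classifier; the keyword weights below are hypothetical stand-ins for learned features, and the confidence threshold illustrates why human intervention is still required.

```python
# Level 2 sketch: an ML-style scorer assists one step (incident triage)
# inside an otherwise deterministic workflow. Topic names, weights, and
# the threshold are illustrative stand-ins for a trained model.

TOPIC_WEIGHTS = {
    "network": {"vpn": 2.0, "latency": 1.5, "outage": 1.0},
    "access":  {"password": 2.0, "login": 1.5, "locked": 1.0},
}
CONFIDENCE_THRESHOLD = 1.5  # below this, hand off to a human

def triage(ticket_text: str) -> str:
    words = ticket_text.lower().split()
    scores = {
        topic: sum(w for term, w in weights.items() if term in words)
        for topic, weights in TOPIC_WEIGHTS.items()
    }
    topic, score = max(scores.items(), key=lambda kv: kv[1])
    # Constrained decision-making: the model only suggests; low
    # confidence still requires human intervention.
    return topic if score >= CONFIDENCE_THRESHOLD else "human-review"

print(triage("vpn outage causing latency"))  # network
print(triage("please reset my thing"))       # human-review
```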
At Level 3, agents based on LLMs come into use. Workflows invoke tools to complete tasks, and well-defined use cases gain autonomous planning capability. These agents understand the goal, create plans, and execute them. Most agents today operate at this level. Examples include a conversational assistant backed by retrieval-augmented generation (RAG) that answers HR-related questions or provides documentation for responding to requests for proposals.
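The goal-plan-execute loop can be sketched as below. The planner is stubbed (`fake_llm_plan`) and the tool names are hypothetical; a real Level 3 agent would call an actual model and enterprise APIs.

```python
# Level 3 sketch: an LLM (stubbed here) turns a goal into a plan of
# (tool, argument) steps, which the agent then executes with a
# registered tool. All names are illustrative assumptions.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda q: f"[top HR policy passages for: {q}]",
    "draft_answer": lambda ctx: f"[answer drafted from {ctx}]",
}

def fake_llm_plan(goal: str) -> list[tuple[str, str]]:
    # Stand-in for an LLM planning call.
    return [("search_docs", goal), ("draft_answer", "retrieved passages")]

def run_agent(goal: str) -> list[str]:
    plan = fake_llm_plan(goal)      # understand the goal, create a plan
    results = []
    for tool_name, arg in plan:     # execute the plan step by step
        results.append(TOOLS[tool_name](arg))
    return results

for step_output in run_agent("How many vacation days do I get?"):
    print(step_output)
```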
As a sidebar consideration starting at Level 3 and beyond: most enterprise systems rely on determinism when updating databases, analyzing data, preparing reports, and sending emails. LLMs are intrinsically stochastic, meaning their behavior is non-deterministic and involves probability. For interactions requiring exact output, such as API calls, a quick probability analysis reveals how rapidly error rates can climb. For this reason, enterprises need comprehensive error-mitigation strategies, because in cascading AI systems a small per-step failure rate can compound across multiple interactions.
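The quick probability analysis looks like this: if each agent interaction succeeds independently with probability p, a chain of n interactions succeeds end to end with probability p^n.

```python
# Compounding error in a cascading agent system: independent per-step
# success probability p over n chained steps gives p**n end to end.

def chain_success(p: float, n: int) -> float:
    return p ** n

for n in (1, 10, 50, 100):
    print(f"{n:>3} steps at 99% per-step reliability: "
          f"{chain_success(0.99, n):.1%} end-to-end")
```

Even at 99% per-step reliability, a 50-step chain succeeds only about 60% of the time, and a 100-step chain under 40%.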
At this level, agents work together, make decisions, and reflect on them. They have access to sensors and can make observations using them. They track emergent tasks and demonstrate a degree of dynamism in decision-making. They learn from experience and generalize based on context.
Tasks may include writing financial reports, reconciling complicated invoices, solving math equations, and rewriting sophisticated corporate communications. Currently, this level of agency is only achievable with a human in the loop, but it paves the way for increasing levels of autonomous decision-making.
Agentic intelligence reaches the highest degree of autonomy at Level 5, referred to as Artificial General Intelligence (AGI). These agents, when developed, will possess original thinking and collaborate seamlessly. They will command lesser agents and embody personality and emotion. They can solve problems that weren’t envisioned during their initial training. Think of a digital agent as a knowledge worker.
A typical AI agent software lifecycle starts at creation and completes at observation and evaluation.
The journey from Level 1 to Level 5 involves a gradual progression in these processes at each level of autonomy. Prior to Level 1, at Level 0, an organization has basic data protection and privacy controls established but no automation.
Processes in the agentic enterprise software lifecycle

| Process | Description |
| --- | --- |
| Create | Enable a developer to build a new agent and make it available for use, including the production of synthetic or real data. |
| Plan | Understand the intent of the task or assignment and produce a broad, flexible outline of the work item. |
| Compose | Intelligently discover and compile the most fitting list of tasks required to fulfill the intent. |
| Orchestrate | Drive and manage the execution of the tasks designed by the compose process. |
| Safeguard | Verify the identity of each agent and ensure it is trustworthy and delivers safe, secure, and compliant results. |
| Observe (and evaluate) | Monitor the execution of the agents and gather telemetry data from every execution step for error propagation analysis, analytics, and improvements. |
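The six lifecycle processes above can be sketched as a simple pipeline in which each stage hands its state to the next. Stage logic is stubbed; the process names mirror the lifecycle, everything else is illustrative.

```python
# Lifecycle sketch: create -> plan -> compose -> orchestrate ->
# safeguard -> observe, each stage enriching a shared state dict.
# All field names and stub behavior are hypothetical.

from typing import Any, Callable

def create(state: dict[str, Any]) -> dict[str, Any]:
    return {**state, "agent": f"agent-for-{state['goal']}"}

def plan(state): return {**state, "outline": ["understand intent", "draft steps"]}
def compose(state): return {**state, "tasks": ["discover tools", "fill outline"]}
def orchestrate(state): return {**state, "executed": state["tasks"]}
def safeguard(state): return {**state, "verified": True}
def observe(state): return {**state, "telemetry": len(state["executed"])}

PIPELINE: list[Callable] = [create, plan, compose, orchestrate, safeguard, observe]

state: dict[str, Any] = {"goal": "expense-report"}
for stage in PIPELINE:
    state = stage(state)

print(state["agent"], state["verified"], state["telemetry"])
```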
Levels 1 and 2 involve rule-based and ML-based processes familiar to most organizations, handling tasks that are routine and well-defined. Level 1 is rule-based and uses tools for perception and predetermined actions; it focuses on generating sub-tasks and using existing modules to address pre-defined tasks. Level 2 introduces reasoning and decision-making capabilities. Both levels maintain rule-based access control and track all processes, creating alerts when thresholds are breached.
Advanced enterprises are swiftly adopting Level 3 capabilities, featuring LLM-enhanced agents that can reason and plan within defined tasks. The AI becomes contextually aware and proactive: it dynamically creates, discovers, and learns from agent interactions and crafts solutions. The orchestration process manages sequential, concurrent, or mixed execution orders. Customizable guardrails prioritize security and trustworthiness. Level 3 observability extends prediction and tracking to sensory analysis, alerting when deviations occur and continuously learning to improve future outcomes.
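The sequential, concurrent, and mixed execution orders an orchestrator manages can be sketched with `asyncio`. The task names are hypothetical; real steps would be agent or tool invocations.

```python
# Orchestration sketch: sequential steps where one depends on the
# previous result, then concurrent fan-out for independent steps.
# Task names are illustrative.

import asyncio

async def run_task(name: str) -> str:
    await asyncio.sleep(0)  # stand-in for real agent work
    return f"{name}: done"

async def orchestrate() -> list[str]:
    # Sequential: validation depends on extraction finishing first.
    a = await run_task("extract-invoice-data")
    b = await run_task("validate-totals")
    # Concurrent: independent steps run together.
    c, d = await asyncio.gather(
        run_task("notify-approver"),
        run_task("archive-document"),
    )
    return [a, b, c, d]

results = asyncio.run(orchestrate())
print(results)
```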
At Level 4, agentic AI creates software akin to seasoned developers, learning from experiences and adapting to new challenges. It becomes contextually aware, retaining memory and user preferences to offer personalized services. It catalogs and discovers agents to stay up to date. It builds dynamic graphs and creates new agents when needed.
During orchestration, it handles static agent graphs and assembles agents to build applications. Safeguarding mechanisms include traceability, auditability, and adaptability. Observability becomes multi-sensory, enhancing understanding through vision, haptic, motion, and audio inputs to refine predictions and alerts.
Level 5 autonomy introduces sentient AI that collaborates with agents and systems, showing personality and emotion. It partners creatively in software development, coding, brainstorming, and empathizing with users without needing human guidance. AI plans and manages resources, aligning projects with strategic goals while ensuring safety and trust in interactions. It dynamically creates, modifies, and discovers agents, using static and dynamic graphs for task execution.
Safeguarding peaks with adaptive guardrails that learn from behavior data to enhance security. Comprehensive sensory input allows precise observation and prediction, ensuring meticulous action and continuous learning from any variances.
With each level of autonomy, the stakes for enterprises grow.
As AI reaches higher levels of autonomy in Levels 3-5, enterprises need to implement policies, establish oversight mechanisms to prevent unintended consequences, and maintain accountability and compliance with legal and ethical norms. Enterprises need to adopt an ethical, proactive governance approach to manage the risks associated with autonomous AI agents. By the time fully autonomous Level 5 AI is reached, the consequences can be catastrophic for enterprises without the appropriate guardrails.
By deploying technical solutions and regulatory measures for dynamic risk management, ethical compliance, and continuous monitoring, organizations can ensure AI systems operate within defined parameters. Developing governance frameworks also becomes important to foster innovation while upholding accountability and the societal good, and to mitigate risks such as bias, privacy violations, and data security breaches.
AI governance for Levels 3-5: Accountability, ethics, and agentic evolution

| Level 3 | Level 4 | Level 5 |
| --- | --- | --- |
This evolution in agentic AI in enterprise software development represents a frontier for enterprises, offering opportunities to redefine processes, improve decision-making, and grow in an automated world. By navigating agentic AI complexities with care, businesses can lead the way to a more intelligent future.
At Outshift by Cisco, we know the future of software development is promising unprecedented levels of innovation. We are leading the way in building an open, interoperable, agent-first, quantum-safe infrastructure for the future of AI. We can’t do it alone. Shape the future of AI with us.