At a Glance
- Because most AI agent solutions are still shoehorned into standard SaaS agreements — with little or no modification — the lack of meaningful performance protections could stall adoption before it even starts.
- In most generative AI agreements, vendors disclaim all liability regarding the accuracy and use of the output, but for AI agents, the “output” or outcome is the successful and accurate completion of the process.
- One way to address this gap on liability is to change the service model. Instead of the customer directly licensing the technology for the AI agent, the vendor could instead license, implement, and maintain an AI agent to perform services more efficiently, as part of a service offering.
- When SaaS models and standard agreements are combined with well-designed metrics tailored to specific business processes, customer trust and adoption of these solutions can grow.
The way businesses operate is on the cusp of significant transformation. Over the last year, attention has been shifting from generative AI to the implementation of AI agents — semi- or fully autonomous systems that can sense, reason, and act on their own. Chatbots, which are essentially generative AI solutions, can answer questions by referencing specific and relevant data in real time. AI agents represent a step forward, as they integrate with other software systems to perform particular tasks with minimal human involvement.
AI agent solutions are being offered through “off-the-shelf” agents, which have been trained and developed to specialize in specific tasks. Vendors also offer agent-builders, which can be used to develop an AI agent quickly for your specific business operations. AI agents can also leverage machine-learning techniques to continuously improve their skillsets and efficiencies.
According to IEEE, “a strong majority of technologists globally (96%) agree that agentic AI innovation, exploration and adoption will continue at lightning speed in 2026, as both established enterprises and start-ups deepen investments and commitments to the technology.” This means that you will probably see a shift this year in the way you order food or groceries to your home, interact with retailers, and conduct day-to-day repetitive tasks.
Despite the efficiencies offered by AI agents, “human-in-the-loop” verification remains important. Vendors of AI agent solutions continue to take on minimal risk regarding the “output” or “performance” of their solutions. This construct made sense, at least to a certain extent, for generative AI applications, where customers understood that they needed to check the “outputs” for accuracy. But this concept does not work the same way for AI agents. The financial benefit of developing AI agents is centered on autonomy, but how can customers trust or rely on AI agents when their developers refuse to stand behind their performance?
In many ways, the AI agent model is similar to outsourcing, in that one or more processes are being performed by a contractor — albeit a nonhuman one. In outsourcing models, the outsourcing provider takes on responsibility for the performance of its personnel through service warranties, indemnities, and service-level agreements. These protections are not common in off-the-shelf AI agent contracts, which currently still appear to be Software as a Service (SaaS) agreements. Such agreements typically provide only “availability” commitments and remedies. Because most AI agent solutions are still shoehorned into standard SaaS agreements — with little or no modification — the lack of meaningful performance protections could stall adoption before it even starts. The stakes are even higher when AI agents are entrusted with higher-risk functions like making payments to a third party.
This article explores two paths forward: (a) a hybrid service model that leverages AI agents for efficiency, or (b) a hybrid contracting model that mitigates performance risks. Vendors tout the transformative efficiencies of AI agents, and the contracting and operating model should tell the same story.
Hybrid Service Offering
SaaS is a cloud-based model where applications are hosted and access is provided over the internet via web browsers or apps. Users subscribe to a software service, typically paying a recurring fee. As part of the service, the software provider typically bundles maintenance, updates, and infrastructure costs, allowing end users to access and use the software without managing installation or hardware. SaaS is generally scalable and appealing for businesses seeking lower upfront costs. Generative AI applications, such as chatbots, are typically delivered to end users as a service utilizing the SaaS model. Therefore, most traditional SaaS concepts and contracting principles remain applicable when contracting for a generative AI application.
Likewise, the technology that hosts and allows development of AI agents is accessed through a similar SaaS model; however, that is only half of the story. Business or technical operations are also being outsourced to the AI agent. Traditional SaaS agreements do not account for this change and typically draw a line with respect to how you use the software application. In most generative AI agreements, vendors also disclaim all liability regarding the accuracy and use of the output, but for AI agents, the “output” or outcome is the successful and accurate completion of the process.
One way to address this gap on liability is to change the service model. Instead of the customer directly licensing the technology for the AI agent, the vendor could instead license, implement, and maintain an AI agent to perform services more efficiently, as part of a service offering. Therefore, the vendor remains responsible for the quality of the services, as in a traditional service agreement, but can incorporate the cost-savings in the ultimate fee structure for the overall service offering.
For this model, the service provider should provide transparency into its use of AI agents and obtain approval for the specific functions they perform. The service agreement should also address intellectual property rights, privacy, security, and data usage (terms not always included in standard service agreements), including when and what customer data may be used to train models and how those trained models may be used. Concepts tied to SaaS offerings, such as availability, maintenance, and support, become less important, because the vendor is acting as a service provider and remains indirectly responsible for these components.
The parties should also define quantifiable service-level agreement (SLA) metrics — such as timeliness, quality, throughput, and response/resolution time — to create accountability and provide a basis for remedies, much like in standard outsourcing arrangements. The agreement may also include reduced SLAs to account for any drop in efficiency if the tools become temporarily unavailable for reasons outside either party’s control.
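To illustrate how such SLA metrics create accountability, attainment can be computed mechanically from process records. The sketch below is a hypothetical example — the metric names, targets, and credit trigger are illustrative placeholders, not terms from any actual agreement:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    completed_on_time: bool   # met the agreed turnaround time
    outcome_correct: bool     # passed human or automated verification

def sla_report(records, timeliness_target=0.95, accuracy_target=0.98):
    """Compute attainment per metric and whether a service credit is due.

    The targets are hypothetical; a real agreement would negotiate specific
    thresholds, measurement windows, and remedies for each metric.
    """
    n = len(records)
    timeliness = sum(r.completed_on_time for r in records) / n
    accuracy = sum(r.outcome_correct for r in records) / n
    return {
        "timeliness": timeliness,
        "accuracy": accuracy,
        "credit_due": timeliness < timeliness_target or accuracy < accuracy_target,
    }

# Example period: 100 tasks, 97 on time, 96 correct.
# Timeliness (0.97) meets its target; accuracy (0.96) misses its 0.98 target,
# so a service credit would be triggered.
records = [TaskRecord(i < 97, i < 96) for i in range(100)]
print(sla_report(records))
```

The point of the sketch is that, once the parties agree on quantifiable definitions, compliance and remedies follow objectively from the data rather than from after-the-fact negotiation.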
If the vendor is not willing to take on the role of a service provider, then the standard SaaS contract should be updated to include quantifiable performance metrics.
Hybrid Contract
In particular, a standard SaaS contract will not account for metrics relevant to agent performance — such as error rates, accuracy rates, turnaround time, and customer satisfaction. Instead, such contracts usually provide service availability, maintenance, and support SLAs. While these SLAs remain important, as the AI agent solution is still a SaaS application, specific and quantifiable metrics tied to the performance of the AI agent will mitigate risk and increase customer adoption of these solutions.
If such quantifiable performance metrics are difficult to calculate, the parties should consider gainsharing or cost-avoidance metrics as an alternative route. These metrics can demonstrate the efficiencies touted by AI companies: actual expenditures are measured against historical figures to show the financial benefits and cost savings of the AI agent. This approach will help prove out the return on investment (ROI) of the AI agent, but it will not resolve the concerns surrounding accuracy.
Therefore, the time the customer must spend on human review of the processes should be included in the calculations to ensure that the anticipated ROI is achieved. In addition, software vendors should develop functionality that allows for detailed auditing as well as real-time monitoring of the AI agents’ activities. These requirements should also be included in the SaaS agreements. As they stand today, SaaS agreements typically permit SaaS vendors to audit the customer for license compliance, but customers are not given the right to audit the SaaS systems. This lack of transparency and documentation is another weakness of relying on a standard SaaS contract for agentic solutions.
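The cost-avoidance arithmetic described above can be made concrete. The sketch below is a simplified, hypothetical model: it charges the customer's human-review effort against the gross savings so that the reported ROI reflects true net benefit. All figures are invented for illustration:

```python
def net_savings(baseline_cost, agent_fees, review_hours, review_rate):
    """Net savings per measurement period versus the historical baseline.

    The cost of human-in-the-loop review is deducted along with the agent
    fees, so the result reflects the customer's actual financial benefit
    rather than the vendor's headline efficiency claim.
    """
    review_cost = review_hours * review_rate
    return baseline_cost - (agent_fees + review_cost)

# Hypothetical figures: a process that historically cost $50,000 per month,
# replaced by $20,000 in agent fees plus 100 hours of human verification
# billed internally at $80/hour.
savings = net_savings(50_000, 20_000, 100, 80)
print(savings)  # 22000
```

Note how sensitive the result is to the review burden: if verification effort doubles, net savings drop by another $8,000, which is exactly why the review time belongs in the contractual ROI calculation.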
The parties should reassess whether conventional SaaS warranties, liability caps, and indemnification obligations sufficiently address the risks of using AI agents. Standard SaaS agreements typically address IP infringement and data breaches, but even these protections are often subject to significant limitations. While those risks remain relevant, AI agents introduce a new category of concern: performance liability. What happens when an AI agent exceeds its authorization, discloses sensitive data or proprietary information, or transfers funds to the wrong recipient?
Traditional software applications are rule-based and predictable — the inputs produce the outputs based on the application of set rules. AI agents, by contrast, operate on probabilistic and adaptive models, making their behavior context-dependent and harder to anticipate. This unpredictability is precisely what makes AI agents valuable, but it is also why standard SaaS agreements fail to account for their risks.
Conclusion
While traditional SaaS models and contracts provide a framework for AI agent use cases, they are not sufficient on their own to address the unique characteristics of these solutions. When these models and standard agreements are combined with well-designed metrics tailored to specific business processes, customer trust and adoption of these solutions can grow. As the AI agent market matures, the vendors and customers who proactively address these contracting challenges will be best positioned to realize the full potential of this technology.