Ventana Research Analyst Perspectives

Productivity by 1,000 Cuts: Building AI Feasibility From the Bottom Up

Written by Robert Kugel | Mar 5, 2024

After a year of near-constant AI chatter, the broad strokes of how the technology will roll out in business over the next three to five years are coming into focus. It’s almost trite but worth repeating that artificial intelligence will drive a substantial boost in productivity as it’s adopted. Rather than making large swathes of jobs obsolete, it will take the robotic work out of those job descriptions, enabling people to focus on tasks with a greater economic return.

In the rapid evolution that’s about to take place, we can expect to see:  

  • Top-down versus bottom-up AI initiatives: The former are strategic, enterprise-wide undertakings with a broad vision, while the latter are tactical, opportunistic projects designed to solve a specific problem or set of related issues, managed at the business unit or departmental level. 
  • Enterprise versus small-scale data management: A prerequisite for any AI deployment is having enough quality data. Bottom-up AI initiatives will typically be built around focused and relatively easily sourced data, requiring less effort than enterprise-wide data management initiatives. The latter will face the challenges inherent in any large-scale undertaking, making high-profile, spectacular failures inevitable. 
  • Large versus narrow language models: Making generative AI practical and affordable will entail using the right language model for the right task, ranging from the familiar large language models to narrow ones and everything in between. Large language models will quickly become commoditized, with a few highly specialized ones as the exception. 

Enterprises are making significant investments in AI. ISG Research has found that, on average, organizations spent 2% of IT budgets on AI in 2023. The average expected spend is 3.7% for 2024 and 5.9% for 2025. The research also found that enterprises were willing to spend more per seat for AI-enabled capabilities, with sales performance management, supply chain management and treasury and risk management showing the greatest propensity to pay more. 

ISG-Ventana Research asserts that by 2027, almost all providers of software designed for finance organizations will incorporate AI capabilities to reduce workloads. The productivity gains from AI are more likely to be achieved initially through a steady stream of small hacks and initiatives that eliminate less productive and unproductive work rather than through big-bang projects. One reason is that the process of infusing AI into cloud business applications will be incremental, with the level of sophistication constrained by the gradual leveling up of data availability and proven trustworthiness.

Like all good technology, AI works best when it’s invisible to the user, especially in the case of relatively trivial tasks. Higher-order tasks that require interaction with the system will, depending on their complexity, involve a learning curve for workers. Change-management issues are almost inevitable. Some may be a relatively light lift, such as generative AI creating first drafts of simple documents or emails. 

Introducing more sophisticated AI use cases is likely to be more challenging, such as creating first drafts of a budget or financial forecast by applying predictive analytic techniques. Instilling a mindset that first drafts are not a finished product will be an ongoing management challenge. Individuals will need to reflexively accept responsibility for content reviews, understanding their role in the finished result, whether it’s a simple email response or a legal contract. But humans are fallible, so supervisory capabilities will be an essential part of business applications, and third-party supervisory tools will be a niche market. 
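To make the review-gate idea concrete, here is a minimal Python sketch of a machine-generated forecast that carries a draft status until a human signs off. Every name in it (draft_forecast, approve, the naive trend projection) is a hypothetical illustration rather than any vendor’s API; a real application would substitute a genuine predictive model for the simple arithmetic.

```python
# Sketch: an AI-generated forecast is a first draft, not a finished product.
from statistics import mean

def draft_forecast(history: list[float], periods: int = 3) -> dict:
    # Deliberately naive stand-in for a predictive model: project the
    # average period-over-period change forward.
    changes = [b - a for a, b in zip(history, history[1:])]
    step = mean(changes)
    projected = [round(history[-1] + step * (i + 1), 2) for i in range(periods)]
    # The draft carries its status with it, so downstream screens can
    # refuse to treat it as final until a named reviewer signs off.
    return {"values": projected, "status": "draft, pending review", "reviewer": None}

def approve(forecast: dict, reviewer: str) -> dict:
    # Human sign-off is the only path from draft to finished product.
    forecast["status"] = "approved"
    forecast["reviewer"] = reviewer
    return forecast

if __name__ == "__main__":
    f = draft_forecast([100.0, 104.0, 109.0, 115.0])
    print(f)                         # still a draft
    print(approve(f, "controller"))  # finished only after review
```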

Productivity by a thousand cuts means that these initial, relatively simple applications are likely to produce an underappreciated boost to productivity. They will be successful because of their focus, meaning that they will be built to work well in existing IT environments. Their usefulness will prove AI’s feasibility and value, a key part of building trust in utilizing this technology. 

Moving beyond simple AI use cases will require increasing levels of effort in data management, especially for business software. Until now, serious data management has been a diet-and-exercise affair: everyone knows they need to pay closer attention to it, but few make the effort. Predictive and generative AI will provide a strong business case for investing in data management capabilities. Because of their focus, tactical AI use cases will require less investment, but inevitably, a mass of scattered efforts will need to coalesce into a more comprehensive approach. Moreover, data management technology has advanced considerably over the past decade, notably in the increasing sophistication and ease of use of application programming interfaces and in the operationalization of data fabric concepts. 

History suggests that large-scale data initiatives are more likely to fail for organizational reasons: they tend to be unfocused, with competing interests and uncertain aims, all key ingredients of failed project management. Ultimately, though, enterprises will need to sort out the data architecture underpinning all third-party and internally developed software to optimize cost and performance, a task that AI itself will make much easier. 

An essential element in the advancement of AI that’s only just gaining attention is the importance of language model orchestration. While much of the public’s attention has been on a few large language models such as ChatGPT or Gemini, there has been considerable investment in the development of narrow, small and medium language models. Some of these have a specific industry or functional purpose, such as material management in aerospace, legal and accounting. For example, software that automates the processing of incoming invoices will require a narrow model to perform the optical character recognition that extracts the text, small language models to extract and classify the data so that its meaning is properly characterized, and a medium-size model to determine the appropriate routing and notifications and to enrich the data set with other information needed or useful for additional analysis. Medium-size models will summarize inbound contracts and highlight wording that requires review, while an LLM will create alternative contract wording for review. Language models of all sorts will become a commodity, and an initial abundance of these will quickly winnow down to those that are superior in terms of performance, cost and specific utility. Business applications will require a combination of models of all sizes. 
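The invoice example amounts to a pipeline in which each stage is handled by the smallest model that can do the job. Below is a minimal Python sketch of that orchestration pattern; the stage functions and their placeholder outputs are hypothetical stand-ins for real narrow, small and medium models, not any actual product’s API.

```python
# Sketch: tiered language-model orchestration for invoice processing.
# Each stage stands in for the model tier described above:
# narrow (OCR) -> small (extraction/classification) -> medium (routing/enrichment).
from dataclasses import dataclass, field

@dataclass
class Invoice:
    raw_image: bytes
    text: str = ""
    fields: dict = field(default_factory=dict)
    route: str = ""

def ocr_extract(inv: Invoice) -> Invoice:
    # Narrow model: tuned solely to pull text off a scanned document.
    inv.text = "ACME Corp  Invoice #123  Total: $4,200.00"  # placeholder output
    return inv

def classify_fields(inv: Invoice) -> Invoice:
    # Small model: extract and characterize the data so its meaning is classified.
    inv.fields = {"vendor": "ACME Corp", "number": "123", "total": 4200.00}
    return inv

def route_and_enrich(inv: Invoice) -> Invoice:
    # Medium model: determine routing and notifications, enrich for later analysis.
    inv.route = "ap-manager" if inv.fields.get("total", 0) > 1000 else "auto-approve"
    return inv

# The orchestrator's whole job is matching the right model to the right task.
PIPELINE = [ocr_extract, classify_fields, route_and_enrich]

def process(inv: Invoice) -> Invoice:
    for stage in PIPELINE:
        inv = stage(inv)
    return inv

if __name__ == "__main__":
    result = process(Invoice(raw_image=b"..."))
    print(result.fields, "->", result.route)
```

The design point is that no single model does everything: as individual models commoditize, they become interchangeable stages, and the durable value sits in the orchestration layer that sequences them.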

The frenzied pace of innovation in AI is matched only by the seeming willingness of enterprises to adopt software that promises to increase productivity. While there are technical challenges and a need to deal effectively with privacy and security, I strongly recommend that business executives focus on organizational issues by providing an overarching, realistic vision of what AI will do for workers and customers in the short and medium term. Change management is a likely impediment to adoption and to achieving the full benefit of investments. Executives must also demand a clear roadmap from the IT organization and a coordinated approach to data management to optimize the impact of those investments. 

Regards,

Robert Kugel