I’ve never been a fan of talking about semantic models because most of the workforce probably doesn’t understand what they are, or doesn’t recognize them by name. But the findings in our recent Analytics and Data Benchmark Research have changed my mind. The research shows how important a semantic model can be to the success of data and analytics processes. Organizations that have successfully implemented a semantic model are more than twice as likely to be satisfied with their analytics (77%) as organizations overall (33%). Therefore, I owe it to all of you to write about them.
There is a fundamental flaw in information technology, or at least in the way it is most commonly delivered. Most technology systems are developed under the assumption that all people will use the system in largely the same way. Sure, there are some options built in — perhaps the same action can be initiated by clicking a button, selecting a menu item or invoking a keyboard shortcut. The problem is that because every variation must be coded into the system, providing personalized software to every individual is impractical.
The data governance landscape is growing rapidly. Organizations handling vast amounts of data face multiple challenges as more regulations are added to govern sensitive information. Adoption of multi-cloud strategies heightens governance concerns as new data sources are accessed in real time. Our Data Governance Benchmark Research shows that organizations face multiple challenges when deploying data governance. Nearly three-quarters (73%) of organizations report disparate data sources as the biggest challenge, and half report creating, modifying, managing and enforcing governance policies as the second biggest.
The use of artificial intelligence (AI) based on machine learning (ML) will be the single most important trend in business software this decade. It can multiply the investment value of such applications and provide vendors an important source of differentiation for competitive advantage in what are today very mature software categories. I assert that by 2025, almost all Office of Finance software vendors will have incorporated some AI capabilities to reduce workloads and improve performance. However, software vendors will be challenged to apply innovations in this area quickly while ensuring that the AI capabilities function well enough in the real world to foster rapid adoption and avoid user frustration. The failures of the Apple Newton and Microsoft’s Clippy office assistant stand out as examples of too-ambitious-too-soon attempts at infusing intelligent automation.
In this analyst perspective, Dave Menninger takes a look at data lakes. He explains the term “data lake,” describes common use cases and shares his views on some of the latest market trends. He explores the relationship between data warehouses and data lakes and shares some of Ventana Research’s findings on the subject. He also provides an assessment of the risks organizations face in working with data lakes and offers recommendations for maximizing the potential of data.