Four years ago this month I wrote an article on the top ten obstacles to success in enterprise AI, which received quite a bit of attention. That was more than a year before the unleashing of ChatGPT and the LLM/GenAI hype-storm that followed. We’ve gained a great deal of experience since then, so I thought it would be useful to revisit the list.
From my perspective in the trenches at the time:
Top ten obstacles in 2021:
1) Strong executive champions are rare
2) Culture still eats strategy
3) Confusing ML/DL projects with AI systems (extended to LLMs/GenAI since)
4) Assuming big is better
5) Failure to operationalize
6) Redundancy is prevalent
7) Top-tier talent is scarce
8) Not-invented-here syndrome
9) Poorly designed data management systems pre-optimized for AI
10) Three little bear budgets
The two notable changes since 2021 were numbers 1 and 10, primarily due to the LLM and GenAI hype-storm, but the outcomes as a percentage of successes and failures haven't changed much for the supermajority. Exceptions exist. The five percent success rate cited in a recent MIT study probably represents thousands of successful use cases. With tens of thousands of GenAI experiments conducted, a few will likely succeed even when all else fails.
I would change the rankings today. For example, number 9 should be in the top 3: if the enterprise doesn’t have a well-designed data management system in place, success in any type of AI is very unlikely. Since so many executives have become champions of GenAI, I would delete number 1 and condense the list to the top five obstacles.
Top five obstacles in 2025:
5) Culture still eats strategy
4) Top-tier talent is scarce
3) Not-invented-here syndrome
2) Assuming big is better
1) And the winner is… system design (including data systems)
One could easily argue that talent should always be first, as it determines all else, including decisions on what to invest in or adopt. Similarly, if we agree that culture still eats strategy (for breakfast, if not lunch and dinner), then I suspect culture is the top obstacle for many. But from my perspective, which is admittedly influenced by decades of designing EAI systems, we clearly have a product problem. I'm quite certain the obsession with LLMs and GenAI is dramatically slowing adoption in the enterprise market.
Despite all the challenges within organizations, if they were presented with the right product in the right manner at the right time, the failure rate wouldn’t be 80% (McKinsey) to 95% (MIT). That's a failure of vendors and VCs, not customers.
LLMs were not intended for organizations, and certainly weren’t purpose-built for them. Indeed, the very nature of LLMs makes them inherently unsafe and problematic for the enterprise environment. Business and government need precision accuracy, strong governance, and security. Taking a casual, move-fast-and-break-things approach with GenAI or Agentic AI could literally be fatal. In addition, and importantly, GenAI projects (aka experiments) have focused on individual use cases rather than on robust system architecture capable of achieving the intended goals of those use cases in a cost-effective manner.
A quote from my recent paper addresses this issue:
"With few exceptions, such as high-value projects to accelerate drug development, targeting individual use cases with one-off projects demonstrates a fundamental misunderstanding of how to optimize AI systems for organizations. Due to the inefficiency and high costs, realizing a return on investment (ROI) is implausible for most projects. To remain competitive moving forward, most businesses will need deep domain expertise, enterprise-wide precision data, vertical data integration, and purpose-built infrastructure powered by an efficient EAI OS with the specific capabilities of the KOS."
Among many other important functions, the KOS can run unlimited use cases in a super-efficient, unified manner compared to one-off projects. It should not be surprising that success in enterprise AI requires purpose-built AI systems, which in our case represent nearly three decades of research, development, and testing (see: “From Theorem to Executable System: A Continuously Adaptive Enterprise OS Powered by Neurosymbolic AI”). In the few highly successful enterprise AI investments I'm aware of, the systems were purpose-built. They integrated GenAI much as we do with the KOS, but did so within a larger enterprise architecture that provided essential data structure, governance, and security (large, very expensive custom systems, as at the largest investment banks). Delivering ROI to the majority requires highly refined system design and engineering.
Bottom line: Given that the products in this case are systems, it's a system design problem, combined with a structural problem in venture capital that is obsessed with LLMs (see PitchBook: “41% of all VC dollars deployed this year have gone to just 10 startups”).