Haven’s Failure: A More Helpful Perspective

The recent news that Haven, the joint healthcare venture between Amazon, JPMorgan, and Berkshire Hathaway, was quietly being shut down came as no great surprise to those of us who work in the artificial intelligence field. The culprit behind the failure of this ambitious effort to reform American healthcare is very likely the same one that made the Healthcare.gov website collapse so spectacularly when the Affordable Care Act was first rolled out. It is a problem plaguing corporations in America and around the world right now: the failure of data interoperability (DI).

The published reports on Haven’s failure don’t mention politics or anything catastrophic. However, we do know that, after spending what we can assume were very large sums, its attempts to improve healthcare for hundreds of thousands of employees failed. We also know that the teams involved spent the bulk of their time constructing new models to get something new and better out of their data. These three very advanced organizations are run by three very respected leaders, yet when they confronted a legacy aspect of their business, healthcare, they ran into the same problems as everyone else in countless other businesses: mismatched data formats, heterogeneous physical data stores, and very large employee headcounts. In other words, their data is a mess.

We see this devil’s stew of DI problems across countless industries, from a combined book-publishing behemoth to a large hospital system where employees report that different parts of the hospital have difficulty reconciling records within their own organization.

Historically, we have solved these problems in one of three ways. The first solution will be familiar to anyone who is a customer of SAP: you and your company can spend a decade, and roughly $100 million, bringing your data into their newly created silo. This can work for a time, if you are willing to spend the time and the money, although new issues can be introduced in the process. It is tantamount to proclaiming that the Earth is the center of the universe and insisting that all calculations be done relative to that coordinate system: costly and ineffective, but good for the Church.

The second solution is to undertake a massive manual effort, accepting that errors are inevitable and that you will constantly be trying to fix them. This is the route most companies are forced to take. It is also where many large consultancies, not just the ones you’ve heard of, such as Deloitte, McKinsey, and Accenture, but also ones you may not have, such as Tibco, Wipro, and Tata Consultancy Services, make billions of dollars helping companies clean up their data swamps. By some estimates, data integration accounts for 40% of IT spending. This approach is tantamount to converting between geographic coordinate systems by hand, or to connecting electronic devices in a world without standardized electrical sockets (or standard plumbing fixtures in England, many of which were constructed before the Industrial Revolution).

The third solution isn’t really a solution at all: companies decide not to bother. They look at their mass of data, realize it cannot be organized in a way they can properly use, and simply skip it. They continue to collect and store data, but they cannot use it for analysis, because the problem of data interoperability has overwhelmed them.

As with Haven, an incredibly high-profile venture, the bulk of these projects fail. These organizations were looking for radical synergy, making use of smart people within smart organizations, but it became easier, and smarter, to simply pull the plug. Haven failed quickly, albeit not without laying a lot of never-used data pipelines; most such ventures fail somewhere around year eight, when the company realizes it has spent $60 million and several years with still no solution in sight. I recently spoke with executives at a large European bank who had spent well in excess of $100 million over several years and walked away before a single day of use. In other words, millions of dollars and years of effort had been spent on this problem, and all of it yielded zero benefit. Like plugging a TV into a wall socket, many data integration projects have all-or-nothing properties by the very definition of what they are trying to do.

But a fourth solution now exists, based on recent discoveries in mathematics, in an area called Enterprise Category Theory. This solution is quick, effective, and won’t cost hundreds of millions of dollars or a decade of effort, and companies around the world are already taking advantage of it. It opens up a new approach: data integration problems are formalized in a way reminiscent of high school algebra, then solved using advanced AI techniques that implement the mathematical discovery in software.
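
To make the “high school algebra” comparison concrete, here is a minimal, hypothetical sketch of the core categorical idea: treat each schema as an object and each migration between schemas as a structure-preserving map, so that migrations compose like functions and mismatches are caught at composition time. Everything below, including the Migration class and the schema and field names, is invented for illustration and is not any vendor’s actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch only: all names below are invented to illustrate
# the categorical idea, not taken from any real product.

@dataclass(frozen=True)
class Migration:
    """A structure-preserving map from records of one schema to another."""
    source: str                         # name of the source schema
    target: str                         # name of the target schema
    transform: Callable[[Dict], Dict]   # the record-level mapping

    def then(self, other: "Migration") -> "Migration":
        # Composition is the "algebra": it is only defined when the
        # schemas line up, which is checked here rather than discovered
        # months later in a failed reconciliation project.
        assert self.target == other.source, "schemas do not line up"
        return Migration(
            self.source,
            other.target,
            lambda rec: other.transform(self.transform(rec)),
        )

# Two toy migrations: a billing system's records into a canonical patient
# schema, then into the shape an analytics team expects.
billing_to_patient = Migration(
    "billing", "patient",
    lambda r: {"patient_id": r["acct_no"], "name": r["acct_name"]},
)
patient_to_analytics = Migration(
    "patient", "analytics",
    lambda r: {"id": r["patient_id"], "display_name": r["name"].title()},
)

pipeline = billing_to_patient.then(patient_to_analytics)
print(pipeline.transform({"acct_no": "A-17", "acct_name": "JANE DOE"}))
# -> {'id': 'A-17', 'display_name': 'Jane Doe'}
```

The payoff of this discipline is that a pipeline across many systems becomes a single composed object that can be checked mechanically, rather than a pile of hand-written reconciliation scripts.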

Most companies are already doing data integration, but poorly. Data integration built on Enterprise Category Theory is a reachable goal. You can have the best algorithms in the world and pour millions of dollars into AI solutions, but without solving the data interoperability problem, your company will never harness the true power of AI for itself or its customers. Mathematical breakthroughs in Enterprise Category Theory open a new world in DI, saving companies money and delivering on the promise of “Big Data”.
