It’s not just that healthcare spending now accounts for almost one fifth of U.S. GDP that is drawing top tech companies to the healthcare industry; it’s also the scope of that reach. Whether it’s the dollars available in the industry, the effect on employees, public health, or the technology itself, healthcare is luring top companies from all industries.
There are many differing opinions and predictions about what this means for the future of healthcare and who the industry’s leaders will be, but one thing is clear: as more entities enter the healthcare market, the data will continue to grow larger and more dispersed.
No two industries’ data look the same, which is why generic data tools seldom work, but healthcare is especially unique. It involves a level of human subjectivity and a number of moving parts unmatched by most other industries. Healthcare has more data than most industries, and more complex data, but that data is also a byproduct of the main goal: providing the best care to as many people as possible. That means the data is optimized not for analytics and modeling, but for a specific operational need.
The data used for analytics and modeling in healthcare was often captured for a very specific purpose, such as paying a claim or measuring the results of a particular study or test, meaning it is optimized for one-off use, not integrated learning. Healthcare data is also captured and “owned” by thousands of different stakeholders, leaving an enormous amount of data that is very difficult to make actionable.
We now have the technology to handle this mass of data and the processing power needed to make sense of it, so why are organizations still struggling to reconcile it all? The answer lies in the gaps: not gaps in volume, but gaps in context and in the ability to piece it all together.
Data comes from different sources, with different definitions and metrics, and at varying time intervals, and the problem only worsens as additional companies enter the market.
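To make the problem concrete, here is a minimal sketch of what reconciling just two feeds involves. The vendors, column names, and numbers are hypothetical: one feed reports a daily count, the other a weekly total under a different name, and neither is comparable until both are mapped to a shared definition and a common time grain.

```python
import pandas as pd

# Hypothetical Vendor A: daily visit counts.
vendor_a = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=14, freq="D"),
    "visits": 1,
})

# Hypothetical Vendor B: weekly totals under a different column name.
vendor_b = pd.DataFrame({
    "week_start": pd.to_datetime(["2024-01-01", "2024-01-08"]),
    "encounter_count": [7, 7],
})

# Normalize both feeds to one definition: weekly "encounters",
# keyed on the Monday that starts each week.
vendor_a["week_start"] = vendor_a["date"] - pd.to_timedelta(
    vendor_a["date"].dt.dayofweek, unit="D"
)
weekly_a = (
    vendor_a.groupby("week_start")["visits"]
    .sum()
    .rename("encounters_a")
    .reset_index()
)
weekly_b = vendor_b.rename(columns={"encounter_count": "encounters_b"})

# Only after this alignment can the two sources be compared row by row.
combined = weekly_a.merge(weekly_b, on="week_start")
print(combined)
```

Two feeds take a dozen lines; a real organization faces dozens of vendors, each with its own definitions, which is why this work rarely scales when done by hand.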
At any one time, a healthcare organization has one set of vendors using the company’s data and another set adding data to its ecosystem, all while the organization continually generates data of its own. And now enter Apple, Google, Amazon, and Microsoft.
But enough about the challenges; this is exciting news! As more and more companies utilize and generate their own health data, standard definitions and benchmarks become disproportionately important, and the organizations that can harness that integrated data have a massive opportunity. It was once the case that a healthcare stakeholder had 10% of the data they needed yet used 90% of it; now the situation is reversed, and only a small portion of the available data is actually used. Not only do organizations have exponentially more data in house, they also have a huge pool of unbiased, publicly available data against which to measure and calibrate their own.
The Bytemap platform automates the back-end steps needed to do this by providing a single, standard form of data ingestion, enhancing data with common terms and definitions, and offering a UI that gives easy access to all of a company’s data alongside pre-cleaned, formatted public data. Rather than trying to piece together data from different vendors, internal Excel sheets, and warehouses, the platform lets technical and non-technical users alike access the data for repeatable, reliable insight generation.
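The ingestion step described above can be illustrated with a small sketch: each vendor’s field names are mapped onto one shared schema so that downstream users query a single, consistent set of columns. The schema, vendor names, and field mappings below are hypothetical examples, not Bytemap’s actual implementation.

```python
# Hypothetical common schema and per-vendor field mappings.
COMMON_SCHEMA = {"member_id", "service_date", "paid_amount"}

VENDOR_FIELD_MAPS = {
    "claims_vendor": {
        "MbrID": "member_id",
        "DOS": "service_date",
        "Paid": "paid_amount",
    },
    "pharmacy_vendor": {
        "patient_id": "member_id",
        "fill_date": "service_date",
        "plan_paid": "paid_amount",
    },
}

def normalize(record: dict, vendor: str) -> dict:
    """Rename a raw vendor record's fields to the common schema,
    dropping any fields the schema does not define."""
    field_map = VENDOR_FIELD_MAPS[vendor]
    out = {field_map[k]: v for k, v in record.items() if k in field_map}
    missing = COMMON_SCHEMA - out.keys()
    if missing:
        raise ValueError(f"{vendor} record missing fields: {sorted(missing)}")
    return out

# A raw claims record with a vendor-specific extra field that gets dropped.
raw = {"MbrID": "A123", "DOS": "2024-03-05", "Paid": 125.40, "InternalFlag": "X"}
clean = normalize(raw, "claims_vendor")
print(clean)
# {'member_id': 'A123', 'service_date': '2024-03-05', 'paid_amount': 125.4}
```

Once every source passes through a mapping like this, internal data, vendor feeds, and public benchmark data can all be queried against the same column names, which is the precondition for the repeatable insight generation described above.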