It is no secret that the healthcare industry is undergoing a revolution, one driven in large part by an irreversible surge in the quantity and availability of data. It wasn’t too long ago that researchers, clinicians and administrators were forced to make important clinical, operational and financial decisions based on their experience and their gut – without sufficient data. Thanks to modern data architectures like Hadoop, healthcare entities now have access to high-quality real-time and historical data, radically changing how healthcare services are delivered and consumed.
We are just starting this change. Under the current model of healthcare delivery, patients are cared for reactively, when they visit a care setting. A new data-rich model has emerged that allows hospitals and health networks to predict illness, and in some cases prevent people from getting sick in the first place. Within this new data paradigm, hospitals and health networks need to adopt a proactive approach to stay relevant and competitive.
This revolution is starting now and will continue to gain steam as organizations not only realize the value of their data, but are able to unlock the insights to improve operations and overall patient care. New healthcare functions like remote patient monitoring, telehealth and proactive patient behavioral insights are all enhanced by integrating with today’s modern data architectures. These new capabilities hold the promise of improving chronic disease management and meeting the Triple Aim of bettering the patient experience (including quality and satisfaction), improving the health of populations and reducing the per capita cost of healthcare.
Big Data in healthcare is poised to make a big impact. Yet healthcare entities have historically been hampered by restrictive data silos. These silos make sharing across data sets difficult, are notoriously expensive to maintain, offer limited storage options and cannot handle differently structured data. These technical and financial barriers have made it difficult to determine what data is relevant to store today and what might be meaningful in the future.
Today, inexpensive storage options and the ability to store data in its native format – structured, semi-structured and unstructured – are making the mantra of “store everything” both financially and technically possible, on-premises and in the cloud. There is true value in storing everything, particularly for healthcare. From cutting-edge research, to organ donation and matching, to preventative care, healthcare needs data to be accessible and complete. We don’t know today what data might be valuable tomorrow. Small details matter, so “small” data that wouldn’t traditionally be stored matters too.
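The “store everything in its native format” idea can be sketched in a few lines: a landing zone accepts structured, semi-structured and unstructured records as-is, deferring schema decisions until read time (schema-on-read). This is an illustrative sketch only – the file names and payloads are hypothetical, not drawn from any real EMR or Hadoop deployment.

```python
# Minimal schema-on-read sketch: write heterogeneous data to a landing
# zone without enforcing any schema at write time. Paths and payloads
# are illustrative placeholders.
import json
import os
import tempfile

landing_zone = tempfile.mkdtemp(prefix="data_lake_")

payloads = {
    "labs.csv": "patient_id,test,value\np1,glucose,5.4\n",                 # structured
    "visit.json": json.dumps({"patient_id": "p1", "notes": ["stable"]}),   # semi-structured
    "discharge_note.txt": "Patient resting comfortably.",                  # unstructured
}

for name, data in payloads.items():
    with open(os.path.join(landing_zone, name), "w") as f:
        f.write(data)  # stored as-is; interpretation is deferred to read time

print(sorted(os.listdir(landing_zone)))
```

The point of the sketch is the absence of an upfront schema: each format lands untouched, and it is the downstream query or analytics layer that decides how to interpret it.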
Since healthcare is a highly regulated industry, institutions continue to be heavily focused on reporting to meet regulatory standards. But they are also realizing there is value in the data they are generating internally, and they need that data to be as close to real time as possible. Prior to Hadoop, organizations would batch process their EMR data overnight when the hospital was at its quietest. A 24-hour gap between data being entered in the EMR and analytics being applied to that data is not acceptable in an industry where seconds and minutes can have a big impact on patient outcomes. With the introduction of data-in-motion technologies as part of modern data architectures, data can be streamed in real time from the bedside and acted on immediately. It can also be stored to facilitate long-term evaluation over larger data sets, bringing visibility to unseen trends and actionable historical insights.
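The difference between overnight batch and data-in-motion comes down to when each record is acted on. The sketch below simulates the streaming case: each bedside reading is evaluated as it arrives rather than waiting for a nightly job. All names here (`VitalsReading`, the alert threshold) are hypothetical and stand in for whatever a real streaming platform and clinical rules engine would provide.

```python
# Illustrative data-in-motion sketch: act on each vitals reading as it
# arrives instead of batching overnight. Names and thresholds are
# made up for the example, not real clinical values or APIs.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class VitalsReading:
    patient_id: str
    heart_rate: int  # beats per minute

HEART_RATE_ALERT = 120  # hypothetical alert threshold

def stream_alerts(readings: Iterable[VitalsReading]) -> List[str]:
    """Process readings one at a time, flagging out-of-range vitals immediately."""
    alerts = []
    for r in readings:  # in a real pipeline this would consume a live stream
        if r.heart_rate > HEART_RATE_ALERT:
            alerts.append(f"ALERT {r.patient_id}: HR {r.heart_rate}")
    return alerts

feed = [VitalsReading("p1", 88), VitalsReading("p2", 135), VitalsReading("p1", 92)]
print(stream_alerts(feed))
```

In the batch model, the same check would run hours after the readings were taken; here the alert fires on the very record that triggered it, which is the property the paragraph above is describing.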
Real world examples are not difficult to find. Leading healthcare institutions are pioneering data-based care programs already. For example:
- Arizona State University (ASU) is furthering cancer research by giving its researchers the tools they need to store, process and query more data at a faster pace, enabling them to ask better questions and uncover new links between genetics and cancer.
- The United Network for Organ Sharing is helping transplant professionals make more informed decisions when life-saving organs become available.
Beyond its technical benefits, Hadoop is open source and designed around open standards – qualities that should play a central role in any data-driven healthcare system. Hadoop is built by a community of thousands of contributors who can develop code and react to the marketplace more quickly than any single company. Open source improves the quality of code because of the number of people who can contribute, review and test it. It is innovation-driven rather than profit-driven, so it pushes the boundaries of what’s possible. And it enables simpler sharing of information and data because of its open ecosystem. These benefits are incredibly powerful in the rapidly changing healthcare space.
Hadoop in healthcare is starting to reach a tipping point. Those that hesitate and delay in understanding how Big Data can impact their organizations will be left at a competitive disadvantage. As our healthcare system moves towards more preventative and predictive care, care providers cannot continue to be reactive in their patient care delivery. Instead, they need to deliver care proactively and understand their individual patients, patient groups and population at a much deeper level. Every service or offering an institution provides should tie back to their goal of being a data-driven organization.
Those that embrace data as the new currency and can unlock insights from their data will thrive, while those that don’t will continue to struggle with the same challenges that have plagued our healthcare delivery system for decades. Healthcare is ready for a change and data-driven decision making will only improve the cost-efficiency of our system and the outcomes for patients.