Dec 17, 2019

Why Cleaning Up Your Dirty Data is Critical to Clinical Interoperability and Financial Survival

This article was written by 4medica President, Gregg Church, and featured in the G2 Intelligence Blog.

Hospital-based and commercial labs are flooded with all types of mismatched patient data. Disparate systems, coupled with a lack of universally mandated data standards and protocols, mean that big data just gets bigger and clinical interoperability challenges just get larger.

The quality of healthcare data is critical at every step along the patient care continuum. As the reliance on data in the digital age of healthcare intensifies for collaboration among providers and payers, the demand for accurate and reliable clinical, administrative and financial data increases exponentially.

Cleaning up dirty patient data in the laboratory is key to solving clinical interoperability and profitability challenges. Labs must become patient-centric in how they process and communicate information.

Every day, orders that originate from the EMR or an outside practice get entered into the LIS. If the data quality is poor, it is certain to impact safety, care outcomes and costs downstream. What's more, labs are increasingly participating in risk-sharing population health initiatives that are forcing a hard look at data cleanliness as a core competency for achieving high-level goals.

Who's accountable for the cleanliness of the patient data? Who gets this data dirty in the first place? Is the culprit the patient, the EMR vendor or the provider? The fact is, every entity that touches patient data plays a part in its flow through various systems, moving in and out of networks with every patient encounter.

Clean patient data is a must-have for value-based care

Value lies in providers' ability to generate quality data, not quantity, to drive industry initiatives that accurately measure results and identify and close gaps in the care continuum. Labs can no longer afford to blindly accept the incomplete or inaccurate patient information, such as incorrect demographic data, that comes into their facilities. Because these businesses – and the laboratory is a for-profit business – manage a tremendous amount of data, labs are integral to solving clinical interoperability challenges. They have an opportunity and a responsibility to clean up patient data at the front end of the ordering process, before the lab order moves into the revenue cycle management process.

Healthcare organizations understand the importance of lab results as a valuable source of data for treatment decisions. Many innovative leaders are even using lab data for predictive modeling and patient risk stratification. The Mayo Clinic estimates 60 to 70 percent of all decisions regarding a patient’s diagnosis, treatment, hospital admission and discharge are based on the results of the tests performed by medical laboratory scientists.

Just as two decades of well-documented studies and articles culminated in the guidance CMS released in 2015 on preserving the data integrity and documentation of the electronic medical record, labs must identify and address the root cause of inaccurate, incomplete or unreliable information. In essence, data integrity at the patient identity level must become a top priority for the laboratory.

A longitudinal view of the patient record, offering access to an organized, structured view of a patient's medical data aggregated from multiple clinical and financial sources, is essential to achieving collaboration. It is imperative that laboratories put the people, processes and technology in place to improve workflows, ensure patient information is correct and valid, and get to the root cause of dirty data.

Moreover, determining pre-authorization and ensuring clean patient data upfront are critical to denials prevention. Whether the mistake is a misspelled name or a more complex coding issue, labs are experiencing a higher rate of claims denials. Black Book™ survey respondents estimated that 33 percent of all denied claims result from inaccurate patient identification or information, costing the average hospital $1.5 million in 2017 and the U.S. healthcare system over $6 billion annually. Operational efficiency in both orders management and revenue cycle management is critical. Raising the first-pass payment rate as high as possible is a game changer.

The days of fixing this on the back end are over.

If you have bad data coming in, you’re going to have bad data going out. This means medical decisions are at serious risk if the wrong result is attached to the wrong patient. Patient safety issues arise when information is incorrect, and revenues are lost if the billing is inaccurate or orders management is inefficient.

Streamlining lab services to reduce errors, protect against payer audits, easily connect with physician offices and payers, and digitally manage patient data from a single point of access has never been more essential. A clean enterprise master patient index (EMPI) ensures every patient is represented only once and can aggregate patient data contained in separate systems within one facility. In fact, the same Black Book crowdsourced poll of EMPI users revealed that prior to implementing an EMPI tool, an average of 18 percent of an organization's patient records were found to be duplicates.
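To make the EMPI idea concrete, here is a minimal sketch of demographic-based duplicate detection; the record fields and the exact-match rule are illustrative assumptions for this example, not any vendor's actual matching algorithm, which would typically use probabilistic or referential matching.

```python
# Toy illustration of the duplicate detection an EMPI performs at scale.
# Field names and the matching rule are assumptions for this sketch.

def normalize(record):
    """Build a match key from demographics, tolerant of case and stray spaces."""
    return (
        record["last_name"].strip().lower(),
        record["first_name"].strip().lower(),
        record["dob"],  # assumed ISO format, e.g. "1980-04-02"
    )

def find_duplicates(records):
    """Group records whose demographic keys collide; return only the groups
    with more than one record, i.e. likely duplicates."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec["record_id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

records = [
    {"record_id": "A1", "last_name": "Smith", "first_name": "Jane", "dob": "1980-04-02"},
    {"record_id": "B7", "last_name": "smith ", "first_name": "JANE", "dob": "1980-04-02"},
    {"record_id": "C3", "last_name": "Jones", "first_name": "Mark", "dob": "1975-11-19"},
]

duplicates = find_duplicates(records)  # A1 and B7 collide on the same key
```

A production EMPI adds fuzzy matching on misspellings, address history and identifiers, but even this simple key-collision check shows how the same patient can hide behind formatting differences in separate systems.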

For clinical interoperability and profitability, labs must work to ensure perfect orders that are automatically aligned for successful billing the first time. They must also focus on near-complete removal of duplicate patient records and on automated order systems that eliminate errors and reduce risk. When every order for laboratory services is electronic and fully scrubbed upfront for medical necessity, patient identity, insurance eligibility and pre-authorization as required, labs and imaging centers reap tremendous cost savings in accessioning and support services.