The Self-Pay Compliance Problem: Payment Security
The massive fines and multimillion-dollar settlements associated with data breaches have made hospitals and health systems fully aware of their obligations to protect patients’ protected health information (PHI). But those same organizations often overlook similar obligations tied to their legal status as merchants—entities authorized to process credit card payments. With higher deductibles and higher copayments forcing patients to use credit to pay for their health care, hospitals and health systems must take steps to protect that payment data as well. In particular, it is critical that decision makers understand how the payment solution they select can change their organizations’ internal security and compliance obligations.
To avoid the costly penalties, reputational damage, and loss of patient trust that follow a credit card data breach, hospitals must become fully informed about the risks and regulations of the new self-pay environment.
External Threats—What Hospitals Need to Know
Health care is a major target for hackers because hospitals hold or transmit credit card data alongside stable personal information such as addresses, birth dates, and Social Security numbers. The fact that this information is so readily accessible explains why, from 2012 through 2014, cybercriminals waged more attacks on healthcare organizations than on the business, military, or government sectors. An analysis of data breaches published in December 2015 by the Identity Theft Resource Center found that 780 breaches occurred that year across all industries in the United States, and 35 percent occurred in health care—a share that has since risen to 43 percent. Indeed, the medical/healthcare sector was the most breached industry in 2016.
For hospitals and health systems, as for organizations in any industry, the primary data threat is not the risk that a single criminal might hack into a central repository of data; it is malware, a catchall term for malicious code that typically gains access to a computer through phishing emails—messages crafted by hackers to look legitimate. The Identity Theft Resource Center report notes that 90 percent of all data breaches in 2015 were accomplished through malware installed on people’s computers without their knowledge.
One of the most common forms of malware is a “keylogger,” code that records keystrokes, detects when credit card information is entered, and sends that information back to the code’s source. Keyloggers pose a significant danger to hospitals because they capture credit card data at the point of entry, regardless of whatever safeguards sit downstream. Because the computer terminal itself is infected, even if the customer service representative is using a secure web payment application, all data keyed in at that station are at risk.
One of the most common misconceptions in revenue cycle management is the belief that if the hospital’s payment vendor is compliant with security regulations, the hospital is, too. As the keylogger scenario shows, however, a hack can occur before card data ever reach the payment vendor’s application. The target points of entry for hackers—the terminal where the number is first keyed in and the route the data take over a vulnerable network—often lie outside the vendor’s purview. In every case, the hospital, not the payment vendor, is ultimately responsible for protecting patients’ data. Decision makers in charge of payment processing must understand two things: the hospital itself owns some of these vulnerabilities, and it owns all of the compliance obligation.
In grappling with the self-pay issue, it is crucial that hospitals and health systems not only develop the means to take credit card payments at all points of service but also take deliberate steps to ensure the credit card information is kept safe.
Meeting Payment Data Security Standards
One way to meet this challenge is to follow the data security standards set by the Payment Card Industry (PCI) Security Standards Council, the body dedicated to protecting credit card data internationally. The council assigns merchants to different classifications, each of which requires completion of a specific audit. These audits vary in length, ranging from about 20 questions to well over 300. Most hospitals today are simply not complying or protecting the data. Those that are complying typically fall within the highest level of PCI audit scope (the 300-plus-question audit) because card data are transmitted across their networks.
The Security Standards Council is, in a sense, using the onerousness of the audits to point merchants toward the most effective solutions: the better the protection, the simpler the audit. Hospitals with a PCI-validated point-to-point encryption (P2PE) solution, which prevents card data from being transmitted “in the clear,” undergo the simplest audit, while hospitals with non-validated solutions face the most rigorous—and costly—validation process.
CFOs are responsible for the merchant agreements that dictate their organizations’ PCI designation and compliance requirements, while revenue cycle leaders choose the services used for taking credit card payments. Responsibility for protecting patients’ credit card data therefore should be shared across this group, even though penalties arising from noncompliance would likely be laid at the feet of the CFO.
Increasingly, healthcare CFOs are recognizing the urgency of protecting their patients’ credit card data, and they are taking steps to address the issue. To ensure that any payment security solution is effective and appropriate, CFOs should work closely with the CIO or chief security officer, together considering the full range of alternatives for credit card data protection.
David King is the co-founder and chief technology officer, OnPlan Health, Bannockburn, Ill.