Many organizations play fast and loose with the phrase “HIPAA-compliant.” But this isn’t a mere marketing label that everyone can apply however they see fit. HIPAA’s standards for achieving and maintaining compliance are admittedly confusing in many areas. But there ARE HIPAA best practices and standards—well-documented ones at that.
In this blog, we’ll offer insights on two specific areas that frequently cause confusion around HIPAA compliance: cloud services and voice over IP (VOIP).
Whenever you see “HIPAA-compliant” and “cloud service provider” in the same sentence, expect things to get confusing. Don’t automatically accept a service provider’s claim that their cloud service is HIPAA-compliant. In their default state, many implementations using cloud services actually aren’t. If a cloud service provider states they are HIPAA-compliant, they generally mean that their platform can be used in a HIPAA-compliant manner if configured and used appropriately.
For instance, many cloud services do not enforce encryption at rest by default when data is stored on their platforms. (Data at rest is simply data being stored rather than transmitted or processed. It could be, for example, data on a hard drive, in a .txt file, or in an Amazon S3 bucket.) In many cases, encryption at rest is a configurable option, but you'll have to turn it on manually. To make matters even more confusing, encryption at rest can be implemented in many different ways: full disk encryption; volume or virtual disk encryption; file/folder encryption; and even database table/field encryption. And, of course, no two cloud providers or solutions do things exactly the same way.
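To make that "turn it on manually" point concrete, here is a small sketch of the configuration payload Amazon S3 expects when you enable default server-side encryption on a bucket. The bucket name and the choice of `aws:kms` are illustrative assumptions, and every provider (and every deployment) will have its own mechanism; treat this as a sketch, not a compliance recipe.

```python
# Sketch: the configuration Amazon S3 expects for default encryption at rest.
# Applying it requires boto3 and AWS credentials with the right IAM permissions, e.g.:
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="example-phi-bucket",  # hypothetical bucket name
#       ServerSideEncryptionConfiguration=default_encryption_config())

def default_encryption_config(algorithm: str = "aws:kms") -> dict:
    """Build the server-side encryption rule S3 applies to new objects."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": algorithm}}
        ]
    }

print(default_encryption_config())
```

The point of the sketch is that the safeguard exists but sits behind an explicit configuration step: nothing in a default bucket forces it on.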
If you don’t encrypt data at rest in cloud environments used to store Protected Health Information (PHI), you will often be non-compliant with HIPAA. That naturally leads to the question, “If HIPAA requires encryption at rest, why doesn’t it just say so?” Here are the details.
The HIPAA Security Rule and the U.S. Department of Health and Human Services (HHS) do not explicitly require encryption at rest in every situation. (Encryption is one of the safeguards HIPAA calls "addressable," which, despite a common misreading, does not mean "optional.") But HIPAA does require that every organization conduct a risk assessment to determine whether encryption is a "reasonable and appropriate safeguard" for PHI data at rest in its environment. (This section of HIPAA has the details. Note that HIPAA handles all of its "addressable" safeguards this way.)
Furthermore, HHS has acknowledged that "encryption protects ePHI by significantly reducing the risk of the information being viewed by unauthorized persons."
Along with HHS’ recommendation, every covered entity and healthcare organization we’ve worked with has expected that PHI be encrypted at rest when it is outside of the organization’s direct control. Your decision should be even easier when you learn that it doesn’t cost much in time or money to implement at rest encryption.
All of that is why we recommend encrypting PHI at rest in two situations:

- When it is stored in a cloud service provider's environment
- When it is stored in any location outside your organization's direct physical control (such as remote work locations)
Encrypting data in these situations mitigates the risk of data loss. You will be protected in the event of breakdowns of physical or logical security controls in cloud service provider environments or at other potentially uncontrolled locations (such as remote work locations).
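For teams that prefer not to rely solely on a provider's server-side controls, encrypting records in the application before they ever leave your environment is one option. The sketch below assumes the third-party `cryptography` package and invents two helper names for illustration; a real deployment would fetch the key from a managed key vault or KMS rather than generating it inline.

```python
# Sketch of application-level encryption at rest using the third-party
# "cryptography" package. Key management (rotation, vault storage) is omitted.
from cryptography.fernet import Fernet

def encrypt_for_storage(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a record so it is protected at rest wherever it lands."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_storage(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the original record for authorized processing."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()  # in production, fetch from a key management service
    record = b"example (fake) PHI record"
    blob = encrypt_for_storage(record, key)
    assert blob != record
    assert decrypt_from_storage(blob, key) == record
```

With this approach, a breach of the storage provider exposes only ciphertext, which is exactly the failure mode the paragraph above describes.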
Keep in mind that if you want to pass a HIPAA audit, you’ll need to take additional measures for any location/service where you don’t encrypt PHI data at rest. HHS states, "If the entity decides that the addressable implementation specification is not reasonable and appropriate, it must document that determination and implement an equivalent alternative measure, presuming that the alternative is reasonable and appropriate."
Basically, if you decide not to encrypt PHI in a cloud service, you need to document that ahead of time, along with details of the equivalent alternative measures/safeguard you use. All of that must be available to any auditors who come calling from the Office for Civil Rights (OCR, which enforces HIPAA).
For more information on how HIPAA applies to cloud service providers, click here.
Voice services and voicemail present their own HIPAA complexity. Part of the challenge stems from the fact that HIPAA predates any meaningful use of VOIP. HIPAA was enacted in the same year (1996) that the SIP protocol underpinning modern VOIP was first drafted. But VOIP didn't reach widespread usage among healthcare providers until around 2011. (Here's a quick history of VOIP.)
HHS has said that telephone and fax services are exempt under the HIPAA Security Rule because they constitute oral and written communication rather than electronic transmission. But that's not the end of the story.
This overview of HIPAA’s treatment of telephone and fax services illustrates the challenge. It doesn't address VOIP or video over IP services specifically.
We suspect HIPAA exempted telephone service in the first place because encryption of voice communications wasn’t considered a reasonable and appropriate safeguard in 1996. Transit encryption was ridiculously expensive/difficult to implement for voice transmissions, and voicemail as a service didn’t exist. And no one had even thought about the concept of cloud providers.
The best way to avoid non-compliance findings in an audit, or penalties from OCR, is to use transit encryption with VOIP or video services. Both history and recently released guidance from HHS point in this direction.
If you need help understanding how HIPAA rules affect your organization, contact us today.
If you’ve ever asked multiple vendors for bids on a penetration test, you know side-by-side comparisons quickly break down. How can three bids for the same service span a $10,000 price range?
Clearly, qualitative differences lurk in the fine print. But how do you sort it out? Start by trusting your gut when it tells you that the low price probably has a catch. And then think about why you’re investing in a pen test in the first place. A mediocre penetration test may check a compliance box for you. But it also can leave you with a false sense of security.
“You might do phishing and a vulnerability scan and check the list for the year. But there’s so much more that goes into it,” says Pratum Senior Penetration Tester Jason Moulder. “If you’re not doing a comprehensive approach and incorporating all the elements of how an attack plays out, you don’t see the big picture.”
To make sure you’re getting a pen test that’s worth your investment, ask the following questions.
Make sure you understand the difference between a vulnerability scan (vuln scan) and a pen test. Pratum includes a vuln scan as part of its pen testing to identify misconfigurations, missing patches, etc. But a surprisingly low pen test price quote might indicate that a company plans to run only a software scan of your system rather than sending a human pen tester to test your defenses as a hacker would. Ask vendors exactly how many human hours they’re budgeting for manual testing activities such as confirming vulnerabilities, exploiting them and attempting to pivot into a breach of the larger system.
Automated scans can find only the weaknesses they’re told to look for. Plus, vuln scans can produce false positives by flagging incorrect headers or by flagging subcomponents that don’t actually compromise the overall system.
A human pen tester can vet the scan’s results for actual threats, plus explore vulnerabilities that the scan doesn’t know to look for. Real hackers use unpredictable methods that a vuln scan can’t simulate. “Just last year,” Jason says, “a few kids figured out how to bypass certain logins just by mashing keys on the keyboard. Those are the kinds of things you just won’t get outside of the human aspect.”
If a penetration testing vendor’s proposal talks mostly about the proprietary technology they use, ask for more details. That could mean that they plan to rely heavily on automated scans rather than deploying human experts to truly test your system.
Industry certifications indicate testers have a solid grounding in fundamentals such as attack life cycles. Look for titles such as Certified Ethical Hacker (CEH), Offensive Security Certified Professional (OSCP), GIAC Penetration Tester (GPEN), GIAC Certified Intrusion Analyst (GCIA) and GIAC Web Application Penetration Tester (GWAPT).
But acronyms after a person’s name don’t guarantee real-world experience. Ask for resumes of the testers who will work on your project. (Many companies keep their pen testers anonymous. But you can still ask for the resume of “Tester A.”) Some companies win your business by talking about their overall experience and then assign your project to an entry-level employee who wasn’t involved in any of the impressive projects you based your decision upon.
An experienced pen tester approaches every engagement with a high level of curiosity and creativity because that’s what a hacker will do. For example, a high-level pen tester working on a university job might research which of the school’s departments have recently received grants and target those departments for attacks. If they’re getting grants, they probably have valuable intellectual property to steal.
Make every decision with the idea that you’re building a system to stop hackers who do whatever it takes to break in. You need a savvy pen tester with the same mindset.
Remember how heist movies always feature teams of specialists who each step in to disable the security system, crack the safe, drive the escape car, etc.? In the same way, the best pen testing vendors assign multiple experts to your job. For a comprehensive test, you want a team that takes its best shot at your system with pros versed in software development, Internet of Things (IoT) devices, hardware and more.
Pen testing is both art and science. While the tester's creativity plays a key role, they should anchor their approach in industry-recognized methodologies. Ask vendors about what drives their approach. For example, Pratum derives its penetration methodologies from NIST SP 800-115, the Open Web Application Security Project (OWASP), the Open Source Security Testing Methodology Manual (OSSTMM), the Penetration Testing Framework, and other industry best practices.
A good vendor asks about your objectives. “We need to understand your perceived value of the test,” Pratum’s Jason Moulder says. “That helps us adjust the scope to either a more granular type of test or a broader test that incorporates all the elements required to address the scenario they have in mind.”
For example, telling a vendor you want to “do a pen test on our web app” could mean a lot of different things. Should the test be limited to the app itself? Should testers go after the infrastructure behind the app? If the tester can compromise the network via the app, should they keep going to see how much data they can access? Clear answers during scoping will produce the specific results you’re looking for.
Early on, you’ll make a key decision: how much information to give the tester in advance. In a black box test, you tell them almost nothing about your environment. In gray box and white box tests, you give them increasing levels of information so that their work zeroes in on specific components.
Without a social engineering element, your pen test provides a very limited assessment of your security posture. Well-orchestrated, well-funded zero-day attacks grab a lot of headlines. But in the vast majority of cases, hackers rely on compromising end users in order to gain access to a system. So phishing tests, for example, should be part of a comprehensive pen test—and you should drill down on the vendor’s proposal there, too. “Generic phishing tests only provide about 60% of the potential value,” Jason says. “If you don’t test what can actually happen after someone clicks a fraudulent link, you don’t actually know the impact it can have.”
Ultimately, the pen test is only as valuable as the report it produces. This document tells you what the testers did, what they found and what they recommend for closing the gaps. Don’t pay for a glorified template full of boilerplate graphics that tell you little about your specific security posture. Ask for a sample report and review it carefully to determine whether it provides the kind of solid information you could act on.
A mediocre report, for example, may describe a weakness in your system. But a detailed report may show you that hackers would have to get through 10 other layers to exploit a weakness. With that information, you can decide whether the vulnerability is an acceptable risk for your organization.
Also, ask how the vendor plans to walk you through the report. You’re paying enough that you should expect analysis of the results, not just a PDF sent by e-mail. The consultant’s personal review often helps connect the data points into an overall picture. “Sometimes a thing by itself is no risk,” Jason says. “But if it’s chained with other things, it becomes a big risk.”
The quote should include a retest of vulnerabilities at a set time (typically 90 days after the initial test). This gives your team time to address gaps and to get third-party validation that they were successfully remediated.
Pen testing is invasive, and there’s a chance that the tester’s actions could cause performance interruptions in your system. Confirm that the vendor you’re considering has insurance to cover business interruptions, restoration costs, etc. (The fact that pen testers can seriously disrupt your operations should be another strong incentive to confirm that you’re hiring a true pro for this work.)
Clearly, many factors go into an effective pen test. That makes sense for a service that represents a significant investment in protecting your organization’s future. For help determining what your next pen test should entail, contact Pratum today.
If it seems like your team spends more time every week answering client questions about your information security policies, you’re not alone. Vendor management has become an increasing point of emphasis for companies of all sizes. That means you’re probably allocating more and more resources to filling out forms explaining how you handle data. This trend will only grow, so it’s time to review a few best practices that can streamline your responses so that you can efficiently address your clients’ vendor management concerns and get back to your day job.
Driven by both legal concerns and worries about data breaches putting them out of business, companies are holding their vendors accountable with SIG questionnaires, SOC 2® certificates, proprietary security questionnaires and more. Companies recognize that their vendors’ risks are their risks, so they’re pushing stringent vendor management requirements all the way down their supply chain. When that initiative comes from a Fortune 500 company or government entity, the ripple effect means that even small companies now face the kind of security reviews that were once common only in larger firms.
Managing all the responses has become a major workflow issue. With every client putting their own slant on a set of core questions, you could easily tie up hours of employee time chasing down answers to the latest question about your security posture.
Vendor management was already a growing point of emphasis before two recent major breaches convinced even late-adopters that their supply chain needed a closer look. The headline-grabbing breaches of SolarWinds in December 2020 and Microsoft Exchange Server in March 2021 proved that even if your vendor is a global tech titan that dwarfs your company, you’re putting your operations into potentially uncertain hands. The Exchange breach alone resulted in compromises of an estimated 60,000 networks in early 2021.
The CMMC standard currently rolling out in every Department of Defense contract will require an estimated 300,000 companies to earn a third-party certification. Some major healthcare companies are now working only with vendors who earn a HITRUST CSF certification.
Many companies establish these requirements to avoid issuing data breach notifications, no matter what happens. These notifications can carry high costs both in raw dollars for the notification and potential fines and in damage to the company’s reputation. As a result, we’re seeing some companies require HIPAA compliance from their vendors, even if those vendors don’t typically handle PHI (Protected Health Information) for the larger company. The companies higher in the supply chain want to ensure that if they inadvertently share data with a partner, the partner has controls in place to prevent the need for a costly breach notification.
Many contracts now mandate security controls related to vendor management. “Right to audit” clauses are also gaining momentum, which means that a company can audit a vendor’s process if they suspect data is not protected. A failed information security audit could put the vendor in breach of contract.
In Pratum’s experience, only about 10% of these “right to audit” clauses are ever exercised. But large companies sometimes use the right to audit as a negotiating tactic. When a contract is up for renewal, the client company may call for an audit, reveal security gaps and seek pricing concessions if the vendor wants to retain the contract.
And keep in mind that if 10% of your, say, 80 clients exercised a right to audit in a given year, you would face eight audits. Some companies are successfully pushing back by getting a third-party certification such as those mentioned below and renegotiating contracts to include the right to audit only if a data breach actually occurs.
Pratum offers several recommendations to help you streamline this process:
Companies that can efficiently report on their security position often separate themselves from competitors. We’ve seen many clients get their big break when a major new customer calls with a rush job. The vendor that can submit their security reports at the same time as their bid typically wins the job, opening a new relationship with a potentially key client.
If you can produce a validated third-party certification (such as SOC 2®, HITRUST CSF or ISO 27001), you’ll instantly stand out from competitors who can present no more than their own statements about how they’re doing things.
Keep in mind that most companies aren’t looking to drop the contractual hammer on their vendors and cancel contracts. Most companies would prefer to keep working with proven vendors. So simply getting your information security house in order can probably secure your relationship and keep clients from considering other vendors.
For more insights on the current landscape in vendor management, watch Pratum’s recent Cybersecurity in 60 webinar.
If you could use help reducing the workload of responding to clients’ security requests, contact us today.