Security Risk Analysis and Management: An Overview (Updated)
Editor’s note: This update replaces the October 2003 practice brief “Security Risk Analysis and Management: An Overview.”
Managing risk is an essential part of operating any business. Because eliminating all threats is impossible, businesses periodically conduct a risk analysis to determine their exposure and how best to reduce risks to an acceptable level.
The concept of risk management is not new to healthcare, but conducting a risk analysis for information technology can be challenging. Reporting on the compliance audits it conducted in 2008, the Centers for Medicare and Medicaid Services (CMS) wrote, “CEs [covered entities] did not understand the key elements of an effective risk assessment. CEs did not conduct a documented analysis targeted at risks to the confidentiality, integrity, and availability of ePHI [electronic protected health information]. In some cases, although management had identified certain risks within the organization, no formally documented risk assessment covering ePHI risks throughout the organization existed.”1
This practice brief reviews the regulatory requirements of an effective security risk analysis and provides an overview of one approach to conducting a risk analysis.
The HIPAA security rule requires covered entities and business associates, their agents, and subcontractors to conduct a risk analysis and implement measures “to sufficiently reduce those risks and vulnerabilities to a reasonable and appropriate level.” Specifically, it has two required implementation specifications on risk analysis and risk management:
Because the security rule applies to a variety of organizations ranging from large healthcare systems to small physician practices, as well as various business associates, the standards are flexible in regard to the approach an organization takes based on several factors:
The word “reasonable” appears 51 times and the word “reasonably” appears 21 times in the final security rule (including the preamble). What is reasonable for one organization may differ from what is reasonable for another, depending on each organization’s risk analysis and its management’s comfort level with accepting risk.
A risk analysis helps determine how best to meet the security rule’s implementation specifications and whether an alternative security measure appropriately meets the intent of an implementation specification. However, regarding the flexibility of applying the HIPAA security rule’s implementation specifications, the preamble states, “Cost is not meant to free covered entities from this [adequate security measures] responsibility.” If the cost is reasonable and a security measure or control would reduce risk significantly, then an organization of any size should consider implementing the control, especially if the risks are high or moderate.
In addition, healthcare organizations that wish to meet the meaningful use criteria must conduct a risk analysis.2 The stage 1 meaningful use criteria include the following measure: “Conduct or review a security risk analysis per 45 CFR 164.308 (a)(1) and implement security updates as necessary and correct identified security deficiencies as part of its risk management process.”
Risk Analysis: Framework
The HIPAA security rule does not specify a method or process for conducting a risk analysis. Therefore, this practice brief will follow the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-30, “Risk Management Guide for Information Technology Systems,” because it is a comprehensive framework and is referenced by the Department of Health and Human Services (HHS) and/or CMS in the following publications:
Figure 1, “Risk Analysis Process as Outlined in NIST SP 800-30,” illustrates the risk analysis process as detailed in NIST SP 800-30. There are nine process steps, identified in the rectangular boxes in the center of the illustration. The rounded boxes to the left of a process step are the inputs, and the rounded boxes to the right represent the possible outputs of a process step. The next sections of this practice brief will provide additional information about each of the nine steps.
Note: To make it easier to follow the text in the next sections, refer back to this figure as needed. Because this practice brief is intended to be a high-level overview, AHIMA recommends that the reader download NIST SP 800-30 “Risk Management Guide for Information Technology Systems” for a more detailed explanation of risk analysis.7
Step 1. System Characterization
System characterization is used to expedite the risk analysis. It is the process of identifying which information assets need protecting either because of their criticality to the business and/or because ePHI is processed and stored on the system. This process includes conducting an inventory of major applications and general support systems—any systems that process or store PHI. A major application is an application that is critical to an organization or stores PHI. Generally, the “owner” for a major application is the director of the department that is the primary user of the application. Listed below are some examples of major applications, with the probable owner in square brackets:
General support systems are the systems used throughout the organization to support one or more applications. They are usually “owned” by the IT department. Listed below are some examples of general support systems:
The initial focus of the organization’s risk analysis should be on systems that have the greatest effect on healthcare operations and systems that pose the greatest risk for the organization. A business impact analysis, often conducted before creating a disaster recovery plan, is one method used to determine information system criticality.8
Another method for identifying which systems to focus on is to rank applications systems based on risk factors, such as the number of users, the type of information, the use of the information (patient care, etc.), the availability of the information (Internet, etc.), the mobility of the information, the effects on the organization and patients if the system is not available, and other factors that might indicate that a system has a higher relative risk for the organization.
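The risk-factor ranking described above can be sketched as a simple weighted score per application. This is a minimal illustration only; the factor names, weights, and scores below are assumptions for the example, not values prescribed by the HIPAA security rule or NIST:

```python
# Illustrative risk-factor ranking of application systems.
# Each factor is scored 1 (low) to 3 (high) for every application;
# weights reflect how strongly a factor drives relative risk.
FACTOR_WEIGHTS = {
    "number_of_users": 1,
    "information_sensitivity": 3,
    "internet_accessible": 2,
    "information_mobility": 2,
    "downtime_impact": 3,
}

def relative_risk(scores: dict) -> int:
    """Weighted sum of factor scores; higher means analyze sooner."""
    return sum(FACTOR_WEIGHTS[f] * s for f, s in scores.items())

applications = {
    "EHR system": {"number_of_users": 3, "information_sensitivity": 3,
                   "internet_accessible": 2, "information_mobility": 1,
                   "downtime_impact": 3},
    "Cafeteria menu app": {"number_of_users": 1, "information_sensitivity": 1,
                           "internet_accessible": 1, "information_mobility": 1,
                           "downtime_impact": 1},
}

ranked = sorted(applications, key=lambda a: relative_risk(applications[a]),
                reverse=True)
print(ranked)  # ['EHR system', 'Cafeteria menu app']
```

The output ordering tells the organization which systems to analyze first; the weights themselves should be set by the organization based on its own environment.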
Step 2. Threat Identification
Once major applications and general support systems have been categorized, the next step is to identify threats. From an information security perspective, a threat is anything that could affect the confidentiality, integrity, or availability of information or an information system.
For simplicity, three groups of threats can be identified:
Conducting a thorough risk analysis does not imply that organizations need to identify every possible threat. The term “reasonably anticipated” is used three times within the HIPAA security rule (twice in the preamble and once in the actual rule) as it pertains to threats or hazards. Factors for determining what could be reasonably anticipated includes statistics, geographical location, past experiences, or industry trends. Once identified, the reasonably anticipated threats are matched to a particular application or general support system. For example, the probability of theft is more likely for a laptop or a smartphone that is transported daily in and out of an organization than for a large rack-mounted server in a data center.
System characterization is useful for dividing information assets into manageable pieces, like a puzzle, identifying the unique threats that may exist at each layer that constitutes an information system. However, the overall risk to a system will be the combination of the risks at each layer: the application, operating system, software, server, network, and desktop and laptop layers.
Steps 3 and 4. Vulnerability Identification and Control Analysis
Because of the close relationship between vulnerabilities and controls, it is often easier to combine these two steps. If the risk analysis is being conducted on a major application or general support system that is already being used, then conducting a control analysis first usually makes more sense. If an application or system is brand-new, then the vulnerability identification should occur first because some of the security controls may not yet have been implemented fully.
A vulnerability can be described as an inherent weakness or absence of a safeguard that could be exploited by a threat. Vulnerabilities may be attributed to people, processes, or technologies. The absence of a functioning control often represents a vulnerability in an application or system. For example, antivirus software is used to prevent or detect malicious code. If this control is missing, it represents a vulnerability. Sometimes a control may be present but inadequate. Using the same example, if the antivirus software is present (control) but does not get updated regularly, then that is also a vulnerability.
Typically, threats are paired with vulnerabilities, although it is not necessarily a one-to-one relationship. Many threats may exploit a single vulnerability. One threat source may exploit more than one vulnerability. Conversely, a single control may be used to address multiple threats. Figure 2, “Sample of Threats, Controls, and Vulnerabilities” below offers samples of controls and vulnerabilities based on a specific threat for laptops.
Figure 2 - Sample of Threats, Controls, and Vulnerabilities
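The many-to-many relationship between threats, controls, and vulnerabilities can be sketched as simple records, in the spirit of the laptop examples in figure 2. The specific threats, controls, and vulnerabilities listed here are illustrative assumptions, not an exhaustive catalog:

```python
# Illustrative threat/control/vulnerability pairings for laptops.
laptop_pairings = [
    {"threat": "Theft of laptop",
     "control": "Whole-disk encryption",
     "vulnerability": "Encryption not installed or not enabled"},
    {"threat": "Improper disposal of laptop",
     "control": "Whole-disk encryption",
     "vulnerability": "Data recoverable from a discarded drive"},
    {"threat": "Malicious code (virus, worm)",
     "control": "Antivirus software with current signatures",
     "vulnerability": "Antivirus missing or signatures out of date"},
    {"threat": "Unauthorized access",
     "control": "Unique user ID and strong password",
     "vulnerability": "Shared or default credentials"},
]

# A single control may address multiple threats: here, whole-disk
# encryption mitigates both theft and improper disposal.
threats_by_control = {}
for p in laptop_pairings:
    threats_by_control.setdefault(p["control"], []).append(p["threat"])
```

Grouping by control, as at the end, makes visible which safeguards carry the most weight and therefore deserve the closest scrutiny during the control analysis.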
In general, controls may be categorized as:
NIST SP 800-53 Rev. 3, “Recommended Security Controls for Federal Information Systems and Organizations,” may be used as a means for assessing information security safeguards and controls.9 This document was created for agencies of the federal government and may specify controls that are not used commonly in many healthcare organizations.
Besides the control analysis, other sources for determining vulnerabilities include reports or results from:
Step 5. Likelihood Determination
The next step in the risk analysis process is to determine the probability or likelihood of a potential threat being successful in exploiting vulnerabilities. The likelihood determination must be made with consideration of the existing security safeguards and controls. The definitions of likelihood ratings are described in “NIST SP 800-30 Likelihood Definition,” below.
Figure 3 - NIST SP 800-30 Likelihood Definition
Step 6. Impact Analysis
The next step in the process is to determine the potential impact resulting from threats successfully exploiting vulnerabilities. Some examples of possible impacts are listed in figure 5, “Possible Impacts.” The definitions of impact ratings are described in figure 4, “NIST SP 800-30 Impact Definitions.”
Figure 4 - NIST SP 800-30 Impact Definitions
Healthcare organizations are encouraged to edit the NIST definitions or create their own definitions for likelihood and impact. An accurate description of what constitutes a rating of high, medium, or low is important for maintaining consistency when evaluating risk scores. A consistent standard for scoring risks ensures a better prioritization of risk.
Step 7. Risk Determination
The purpose of this step is to assign a risk score based on likelihood and impact. Scoring risks allows appropriate prioritization of resources and focuses attention on the areas of greatest risk. Risk can be determined by using one of the two common approaches described in figure 6.
Regardless of the method used, prioritization of risks is the primary goal for conducting a risk analysis. This prioritization ensures that limited resources (money, people, and time) may be applied where the greatest risk reduction may be realized.
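One of the common approaches is the quantitative matrix given in NIST SP 800-30, which assigns numeric values to the likelihood ratings (1.0, 0.5, 0.1) and impact ratings (100, 50, 10) and multiplies them. The sketch below follows the thresholds in that guide; the function and variable names are the author's own:

```python
# Risk determination per the NIST SP 800-30 quantitative matrix:
# risk score = likelihood value x impact value.
LIKELIHOOD = {"high": 1.0, "medium": 0.5, "low": 0.1}
IMPACT = {"high": 100, "medium": 50, "low": 10}

def risk_level(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to a risk level using the
    NIST SP 800-30 scale: >50 High, >10-50 Medium, 1-10 Low."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score > 50:
        return "High"
    if score > 10:
        return "Medium"
    return "Low"

# A medium likelihood of exploiting a high-impact vulnerability:
print(risk_level("medium", "high"))  # Medium (0.5 * 100 = 50)
```

Organizations that define their own rating scales, as suggested earlier, would substitute their own values and thresholds here; consistency across all rated risks is what matters for prioritization.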
Step 8. Control Recommendations
Wherever a vulnerability exists, the control recommendation describes what should be done to counteract the missing or inadequate control. For example, figure 7 lists some samples of how a stated vulnerability can be translated into a control recommendation.
Figure 7 - Creating Control Recommendations
However, there may not always be a specific control recommendation for a given vulnerability.
Step 9. Results Documentation
The final step in the risk analysis process is the results documentation. The HIPAA security rule does not specify the form of documentation a risk analysis should take. Many organizations will use some type of spreadsheet or a summary report.
Figure 8 is a sample of a risk profile for a risk analysis conducted on laptops. A risk profile is one way to generalize and document risks efficiently. A risk profile can be done in a Word document, an Excel spreadsheet, or a database. In the sample in figure 8, this risk profile covers most laptops routinely carried in and out of the organization by its workforce. Although there may be some variations in individual configurations, management by exception is a far simpler approach than trying to conduct and document a risk analysis for every laptop used within the organization.
Figure 8 - Sample Risk Profile
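A risk-profile entry, like the laptop rows in figure 8, can also be captured as a structured record rather than free text, which makes the profile easy to sort and report on. This is a minimal sketch; the field names and sample values are illustrative assumptions, not a prescribed format:

```python
# Illustrative structured record for one row of a risk profile.
from dataclasses import dataclass

@dataclass
class RiskProfileEntry:
    threat: str
    vulnerability: str
    existing_controls: list
    likelihood: str      # high / medium / low
    impact: str          # high / medium / low
    risk_level: str      # result of the risk determination step
    recommendation: str  # control recommendation from step 8

laptop_profile = [
    RiskProfileEntry(
        threat="Theft of laptop",
        vulnerability="Whole-disk encryption not enabled",
        existing_controls=["Cable lock", "Login password"],
        likelihood="medium",
        impact="high",
        risk_level="Medium",
        recommendation="Deploy and enforce whole-disk encryption",
    ),
]
```

Whether kept in a Word document, a spreadsheet, or a database, holding each risk as the same set of fields supports the management-by-exception approach described above.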
Appendix B of NIST SP 800-30, “Risk Management Guide for Information Technology Systems,” provides a sample report outline. A risk analysis report contains the key findings or vulnerabilities and the control recommendations for reducing risks. The application or system owners should sign off on this report, acknowledging the residual risks—the risks that remain even with the current safeguards and controls applied—and documenting their decision on how to address them. Owners typically choose one of three ways to address risks:
Risks should be handled in a cost-effective manner relative to the value of the asset and the criticality and sensitivity of the data.
Often, this final step of the risk analysis process is left incomplete because some technical staff find the necessary paperwork and reports difficult to complete. Obtaining a decision from the owner on how residual risks will be managed can also be challenging.
HIPAA requires that documentation of the risk analysis be retained for six years. Documentation is critical in proving that the analysis was performed.
Risk management is the act of implementing security safeguards and controls. It also entails monitoring for changes and responding with enhanced strategies. The HIPAA security rule addresses the ongoing management of risks in several areas:
The success of the risk management process depends heavily on the commitment of those involved with safeguarding an application or system to implement the approved control recommendations. Therefore, it is strongly suggested that some type of follow-up be scheduled around two to three months after the final risk analysis report is delivered and signed. The purpose of the follow-up is to verify progress on risk reduction and maintain open communications when obstacles are encountered.
Risk analysis and risk management are ongoing processes. Federal government agencies are required by law to reassess risk to information systems every three years. This reassessment interval is a good benchmark for determining an appropriate time frame. NIST SP 800-37, “Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach,” includes a diagram that illustrates this ongoing risk management process, as shown in figure 9.
Herzig, Terrell. Information Security in Healthcare: Managing Risk. Chicago: HIMSS, 2010.
Tom Walsh, CISSP
Angela K. Dinh, MHA, RHIA,
Prepared by (original)
Margret Amatayakul, RHIA, CHPS, FHIMSS