Milestones in Quality Improvement Measurement

    Betty Hintch, Mar 24, 2016

    Quality teams need to go beyond the common measures used in the past and identify initiatives that address the current emphasis on effective and efficient care.

    Hospital staff and clinicians are often enthusiastic about participating in quality improvement initiatives that can lead to better patient outcomes, more efficient work processes, and increased patient and staff satisfaction. But such enthusiasm is no guarantee of success. “A lot of measurement efforts fail because people say they are really into improving quality, but they don’t have a clear roadmap to guide their quality measurement journey,” says Robert Lloyd, PhD, a vice president at the Institute for Healthcare Improvement, Cambridge, Mass.

    The following key milestones should be recognized during the quality measurement journey and used to determine whether a quality team is on track, is headed in the wrong direction, or has come to a roadblock.

    Moving Beyond Concepts to Measures

    Teams typically start with a desire to improve an aspect of care or service that is encapsulated by a concept such as reduced harm, improved customer service, or more effective use of resources. But because there are no universally accepted ideal measures for these concepts, the team needs to engage in dialogue to identify a list of potential measures for each concept and reach consensus on which of the many options they will actually use. “Measures are quantifiable extensions of a concept,” Lloyd says.

    Taking the example of inpatient falls, a team might be charged with reducing falls. However, reducing falls is not a measure, Lloyd says, but rather a desired end state and a vision. “People have to move away from concepts to specific measures,” Lloyd says.

    In this case, the team members will have to decide whether they are going to measure the absolute number of falls detected each day or the percentage of inpatients who fall at least once during their stays. The team could also create an inpatient falls rate (e.g., total number of falls per 1,000 patient days). Finally, the team could decide to measure the number of days that have gone by without a fall. “Each concept earmarked for quality improvement could have many different measures, so a dialogue must occur about which measures are most appropriate for the team’s work,” Lloyd says.
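
    As a rough illustration of how these candidate measures differ, the following sketch computes three of them from a few days of hypothetical fall counts and census figures; the numbers and record layout are invented for the example.

        from datetime import date

        # Hypothetical daily records: (date, falls detected, inpatient census)
        daily = [
            (date(2016, 3, 1), 2, 410),
            (date(2016, 3, 2), 0, 395),
            (date(2016, 3, 3), 1, 402),
            (date(2016, 3, 4), 0, 398),
        ]

        # Measure 1: absolute number of falls detected in the period
        total_falls = sum(falls for _, falls, _ in daily)

        # Measure 3: falls rate per 1,000 patient days
        patient_days = sum(census for _, _, census in daily)
        falls_rate = total_falls / patient_days * 1000

        # Measure 4: days elapsed since the most recent fall
        last_fall = max(day for day, falls, _ in daily if falls > 0)
        days_without_fall = (daily[-1][0] - last_fall).days

        print(f"Total falls: {total_falls}")
        print(f"Falls per 1,000 patient days: {falls_rate:.2f}")
        print(f"Days without a fall: {days_without_fall}")

    The remaining option, the percentage of inpatients who fall at least once, would require patient-level stay records rather than daily counts—exactly the kind of trade-off the team’s dialogue needs to surface.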

    Building Operational Definitions

    Once the measures have been identified, it is time to build clear and consistent operational definitions. An operational definition is a description, in quantifiable terms, of what to measure and the steps to follow to measure it consistently. An operational definition:

    • Gives communicable meaning to a concept
    • Is clear and unambiguous
    • Specifies measurement methods and equipment
    • Identifies detailed criteria for inclusion and exclusion
    • Provides guidance on sampling

    For example, a team that wants to reduce medication errors may have decided to use the percentage of inpatient medication errors as an outcome measure. The operational definition would need to specify the numerator (i.e., the number of inpatient medication orders with one or more errors where an error is defined as wrong medication, wrong dose, wrong route, wrong time, or wrong patient) and the denominator (i.e., the number of inpatient medication orders received by the pharmacy). It’s important to remember that operational definitions are not universal truths. “People will often take issue with operational definitions, but as long as they work for the team they have utility,” Lloyd says.
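
    A minimal sketch of the calculation this operational definition produces appears below; the order records and error flags are hypothetical stand-ins for data a real team would pull from its pharmacy system.

        # The five error types named in the operational definition
        ERROR_TYPES = {"wrong medication", "wrong dose", "wrong route",
                       "wrong time", "wrong patient"}

        # Hypothetical inpatient medication orders; error_type is None
        # when the order was filled correctly.
        orders = [
            {"order_id": 1, "error_type": None},
            {"order_id": 2, "error_type": "wrong dose"},
            {"order_id": 3, "error_type": None},
            {"order_id": 4, "error_type": "wrong time"},
        ]

        # Numerator: orders with one or more of the defined errors
        numerator = sum(1 for o in orders if o["error_type"] in ERROR_TYPES)

        # Denominator: all inpatient medication orders received by the pharmacy
        denominator = len(orders)

        print(f"Medication error rate: {100 * numerator / denominator:.1f}%")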

    The final consideration when developing operational definitions is understanding the context for the measure. For example, if a team’s results will be combined with other quality teams’ results either at the same hospital or systemwide, extra attention should be paid to ensuring the operational definitions are consistent across all participating teams or facilities. “If you have four hospitals measuring things differently, you're looking at apples and oranges,” Lloyd says. “You're not comparing the same thing.”

    With clear operational definitions in hand, the team can proceed to develop a data collection plan.

    Gathering Data

    Taking the time to develop an appropriate data collection plan is critical. Many teams are tempted to quickly start collecting data, but that can lead to collection of too much or too little data—or of the wrong kind of data. The result may be frustrated team members and a failed quality initiative.

    Data for improvement efforts should substantiate the need for change. However, all too often, quality teams use readily available, convenient data that may not adequately address their targeted outcome and processes, Lloyd says.

    For example, average length of stay and average cost per discharge are common administrative measures used as proxy quality measures. However, today’s quality teams need to go beyond administrative data and address care structures or processes that are directly related to the outcomes they seek to improve, Lloyd says. In the example of reducing the incidence of inpatient falls, the quality team should focus on more than just the number of inpatient falls as the outcome measure. The team should also track the process measures related to fall reduction, including:

    • The percentage of patients screened upon admission for the potential of falling
    • The percentage of patients determined to be at risk for a fall
    • The percentage of at-risk patients re-screened within 24 hours
    • The percentage of staff properly using the falls protocol

    In terms of collecting data, the team may find that the number of falls is available from the hospital’s risk management department but that data on the process measures are not being collected. As a result, the team will have to develop a measurement plan to gather the process measure data.
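
    As an illustration, the sketch below derives three of the process measures listed above from hypothetical patient-level screening records. The field names are invented; the real source would be whatever screening documentation the measurement plan identifies.

        # Hypothetical admission records drawn from screening documentation
        patients = [
            {"screened": True,  "at_risk": True,  "rescreened_24h": True},
            {"screened": True,  "at_risk": False, "rescreened_24h": False},
            {"screened": False, "at_risk": False, "rescreened_24h": False},
            {"screened": True,  "at_risk": True,  "rescreened_24h": False},
        ]

        def pct(part, whole):
            """Percentage, guarding against an empty denominator."""
            return 100 * part / whole if whole else 0.0

        screened = [p for p in patients if p["screened"]]
        at_risk = [p for p in screened if p["at_risk"]]
        rescreened = [p for p in at_risk if p["rescreened_24h"]]

        print(f"Screened on admission: {pct(len(screened), len(patients)):.0f}%")
        print(f"Determined at risk:    {pct(len(at_risk), len(screened)):.0f}%")
        print(f"Re-screened in 24h:    {pct(len(rescreened), len(at_risk)):.0f}%")

    Note that even here the denominators involve choices (is "at risk" a share of all patients, or only of those screened?) that belong in the operational definitions.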

    Using Stratification

    Stratification involves putting data into distinct buckets. For example, if a team has a theory that elderly patients interact with the healthcare system differently than do middle‑aged patients, Lloyd says, they should conduct separate analyses of the two groups. Other examples of stratification categories are gender, day of week, time of day, shift, type of procedure, and prior admissions.

    Not accounting for these varying factors can produce misleading statistics and therefore lead to incorrect conclusions about the data. The inaccuracies stem from mixing patient or operational characteristics that should be analyzed separately. When this happens, staff may have to spend time trying to tease the data apart by category after the fact, wasting time and resources. Effective stratification requires deciding on the key stratification criteria before collecting the data.
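
    A minimal sketch of this idea, assuming hypothetical stay records tagged with an age group chosen in advance as the stratification variable:

        from collections import defaultdict

        # Hypothetical stays: (age group, whether the patient fell)
        stays = [
            ("elderly", True), ("elderly", False), ("elderly", True),
            ("middle-aged", False), ("middle-aged", False), ("middle-aged", True),
        ]

        # Bucket the stays by the pre-chosen stratification variable
        strata = defaultdict(list)
        for age_group, fell in stays:
            strata[age_group].append(fell)

        # Analyze each stratum separately instead of as one mixed pool
        for group, outcomes in strata.items():
            rate = 100 * sum(outcomes) / len(outcomes)
            print(f"{group}: {rate:.0f}% of stays included a fall")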

    Applying Sampling Techniques

    Not every quality improvement project will require sampling. Without a high volume of data, for example, it makes sense to record all the observations (e.g., if a department inserts only 10 central lines per month, then quality team leaders would include all 10 observations in the monthly analysis). But in cases when quality teams have too much data (e.g., lab test turnaround times, pharmacy orders, surgical procedures), sampling techniques save both time and resources.

    Most healthcare professionals are not trained to use sampling techniques, but they can begin by understanding that sampling techniques are classified into two groups: probability and non-probability sampling.

    Probability sampling requires using some form of random selection to obtain the data. This can be done with a random-number table, found in most statistics books, or with a software program that generates a random sample from a larger population. A central tenet of probability sampling is that all members of a defined population have an equal probability of being selected for the sample.
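
    For teams working in software rather than from a table, here is a minimal sketch of a simple random sample, using Python’s standard library and a hypothetical population of pharmacy order IDs:

        import random

        # Hypothetical defined population: one ID per pharmacy order this month
        population = list(range(1, 1201))  # 1,200 orders

        # Every member of the population has an equal chance of selection;
        # fixing the seed makes the draw reproducible for auditing.
        rng = random.Random(42)
        sample = rng.sample(population, k=50)

        print(sorted(sample)[:10])  # a peek at the selected order IDs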

    Non-probability sampling does not incorporate random selection methods. One danger in non-probability sampling is the possibility of introducing bias into the results, Lloyd says. For example, a quality team member assigned to survey emergency department (ED) patient experiences might choose five people from the ED who appear cheerful and calm and are unlikely to share negative information. As a result, those findings would not be representative of the total ED patient profile. 

    The time investment in learning about sampling techniques is worth the effort, and resources are plentiful. “Gaining knowledge of sampling techniques can be achieved by reading any good basic statistics book or spending time researching the topic on the Internet,” Lloyd says.

    Using Technology Wisely

    A potential roadblock in the quality measurement journey is an excessive dependence on technology to deliver meaningful results. Computer systems are valuable tools but should not be relied on as a single solution to deliver useful analytics. “People get enamored with machines,” Lloyd says. “They should rely more on the machine on top of their shoulders, rather than the one in front of them. Analysis is about inquiry. It's not about producing multi-colored three-dimensional graphs.

    “Data should form the foundation for learning, not judgment,” Lloyd adds. “It starts with asking questions, such as, ‘What is your theory as to why we have an upward trend in the number of falls?’ and ‘What are your predictions about implementing the new falls protocol?’”

    Analyzing Data

    Lack of planning for data analysis can create major roadblocks for even the most experienced teams. For example, if teams don’t establish who will input the data, what software will be used, who will analyze the data, and how the data will be stored, the initiative might stall before team members can effectively begin to use the data they have collected, Lloyd says.

    The analytical stage should provide data for a certain time frame so the quality team and senior management can put what may appear to be negative variation in perspective. With robust analysis, quality teams can demonstrate to management that what appears to be a decline may actually be random variation in the process, Lloyd says.

    “Analytically, the only way you can understand whether quality is getting better or worse is by looking at data over time rather than as monthly or annual averages,” Lloyd says. “This means that leaders need to understand the variation that lives within the system. If an organization is serious about quality improvement, the only way it can be successful is by looking at data in a dynamic fashion, plotted over time by hour, day, week, or month.”
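
    A run chart, in which the measure is plotted in time order around its median, is one common way to look at data in this dynamic fashion. The sketch below plots hypothetical weekly fall counts and assumes the matplotlib library is available.

        from statistics import median
        import matplotlib.pyplot as plt

        # Hypothetical weekly fall counts over 12 weeks
        weeks = list(range(1, 13))
        falls = [5, 7, 4, 6, 8, 5, 6, 9, 4, 7, 5, 6]

        center = median(falls)
        plt.plot(weeks, falls, marker="o", label="Falls per week")
        plt.axhline(center, linestyle="--", label=f"Median = {center}")
        plt.xlabel("Week")
        plt.ylabel("Number of falls")
        plt.title("Run chart: inpatient falls over time")
        plt.legend()
        plt.show()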

    The medical model itself demonstrates the value of real-time data in making decisions, Lloyd says. “We would never think of creating the average blood pressure or respiration rate for a patient in the ICU. We connect the patient to full telemetry and track vitals moment by moment and minute by minute because we care seriously about improving the patient’s condition. This could never be done if we computed the patient’s average systolic and diastolic blood pressure or the average and standard deviation for the patient’s heart rate and made decisions only on this summary statistic. Quality is not assessed or improved by looking at aggregated numbers and summary statistics.”

    Another way to think of this distinction is that averages communicate characteristics as opposed to improvement, Lloyd says. “For example, a patient in an emergency department waiting room isn’t interested in the average time it takes for physicians to see patients,” he says. “That patient wants to know how long it will be until she is seen.”

    Acting on Findings

    The final aspect of the quality measurement journey is deciding what actions will be taken as a result of the data analysis. Data without a context for action can only be used to pass judgment or sit in a report on someone’s shelf, Lloyd says. “Teams need to go through the process of saying, ‘Here's where we made change number one; it didn't make a difference. However, here is where we made change number two, and it did make a difference.’

    “Quality improvement is about taking action to make things better for those we serve. Measurement plays a critical role in this journey, but measurement alone will not change things. You can weigh yourself several times a day, but if you do not take action to change your eating habits and lifestyle, you will never lose weight and get in better shape.”

    Energizing Around Quality

    A defined quality measurement process can result in positive outcomes and energized teams that are ready to make constructive changes based on their findings. By moving beyond concepts to measures, building operational definitions, gathering data, using stratification, applying sampling techniques, using technology wisely, analyzing data, and acting on findings, hospital and health system leaders can take a huge step toward making a quality improvement culture a central part of daily work.


    Betty Hintch is senior editor, HFMA.

    Interviewed for this article: Robert Lloyd, PhD, vice president, Institute for Healthcare Improvement, Cambridge, Mass.
