Analytics

Toward a More Effective Way of Validating Cost Data

February 26, 2019

Prior to publishing reports containing costing-system data, it is common practice to reconcile the total costs in the cost database with general ledger expenses and use that reconciliation as evidence that the cost accounting system is accurate. However, an important distinction must be made: Complete does not necessarily mean accurate.

Reconciling costing system data to the general ledger only establishes that the costs are “complete” (i.e., that no cost was inadvertently lost during the cost-assignment process). The ledger reconciliation does not account, however, for costs that might have been recorded in the wrong place. Although there are no industry standards for measuring costing accuracy, financial analysts can take steps to ensure the cost number accurately reflects the true cost of an item.
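To illustrate, the reconciliation itself can be as simple as comparing totals by account. The following Python sketch assumes the cost database and general ledger have each been summarized as account-to-amount mappings; all account names and amounts are hypothetical. Note that a clean result here establishes only completeness, not accuracy.

```python
# A minimal sketch of a ledger reconciliation, assuming the cost database and
# general ledger have each been summarized as account-to-amount mappings.
# All account names and amounts are hypothetical.
cost_model_totals = {"labor": 1_250_000.00, "supplies": 430_000.00, "overhead": 610_000.00}
general_ledger_totals = {"labor": 1_250_000.00, "supplies": 430_000.00, "overhead": 612_500.00}

for account, ledger_amount in general_ledger_totals.items():
    modeled = cost_model_totals.get(account, 0.0)
    difference = modeled - ledger_amount
    if abs(difference) > 0.01:  # small tolerance for rounding
        print(f"{account}: model {modeled:,.2f} vs. ledger {ledger_amount:,.2f} (diff {difference:,.2f})")
```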

Compare System Costs with Known Amounts

First, system cost amounts can be compared with known amounts. Known amounts for procedures can come from manually computing costs for an item from a bill of materials or a labor-task summary. If the costs produced by the costing system for a few of the largest-volume procedures across several departments are compared with known costs for those items, conclusions can be drawn about accuracy.

When manually assigning costs to the individual charge codes that will be used as controls to validate the cost model output, financial analysts should use the best source of data available. This may mean going into the department and timing a sample of procedures with a stopwatch. If this step is taken, an additional item must be considered: whether the procedure occurs during the day shift, night shift, or weekend shift. In many facilities, the support services and actual tasks performed will vary greatly by shift. For example, on the day shift, a nurse may bring patients to the X-ray department, whereas night and weekend staffing requires that X-ray department staff members transport the patients.

In many departments, a single charge code is used to capture all services for a CPT code, when in fact there may be large differences by location or service line, and sometimes by employee or job class, as in the following X-ray department example:

  • X-rays may be performed in the surgical suite in conjunction with a surgical procedure.
  • X-rays may be performed on inpatients by taking portable machines to the patients’ rooms.
  • Outpatients may ambulate on their own into the procedure area.

Differences also will exist in scheduling, registration and check-in, and billing. When procedure times are sampled, at least 15 to 20 procedures should be included, and viewed in a histogram like that shown in the exhibit below, where the procedure time is on the x-axis and frequency is on the y-axis.
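For illustration, such a histogram can be produced with a few lines of Python; the elapsed times below are hypothetical stand-ins for 20 stopwatch observations.

```python
# A sketch of the histogram described above; the elapsed times are
# hypothetical stand-ins for 20 stopwatch observations.
import matplotlib.pyplot as plt

elapsed_minutes = [12, 13, 12, 14, 15, 13, 12, 16, 14, 13,
                   12, 15, 13, 14, 12, 13, 15, 14, 13, 12]

plt.hist(elapsed_minutes, bins=range(10, 20))  # procedure time on the x-axis
plt.xlabel("Procedure time (minutes)")
plt.ylabel("Frequency")  # frequency on the y-axis
plt.title("Sampled procedure times for one charge code")
plt.show()
```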

A procedure whose times appear tightly clustered in the histogram, like the blue line in the exhibit, represents an ideal situation in which the costing-system-generated numbers should not vary much from period to period. If, however, a procedure appears in the histogram more like the yellow line, then there is large variability in the time it takes to perform the procedure, and accordingly, the cost may vary from period to period depending on what drove it in each period.

In a worst-case scenario, a histogram like the one in the exhibit below will emerge. In this case, there are three distinct cost drivers, and while it is still possible to compute and use an average time for the procedure standards, the predictive value will be low.

To improve the accuracy of the model, the underlying reasons for the three clusters of elapsed times should be determined and then separate charge codes should be established to capture the unique costs.
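One way to make the clusters explicit before splitting the charge code is a simple k-means grouping of the sampled times, as in the sketch below. The times and the choice of three clusters are hypothetical, and the underlying causes still need to be confirmed with department staff.

```python
# A sketch of one way to surface the clusters: k-means with three groups.
# The times and the choice of three clusters are hypothetical; the underlying
# causes still need to be confirmed in the department.
import numpy as np
from sklearn.cluster import KMeans

times = np.array([11, 12, 12, 13, 24, 25, 26, 25, 41, 43, 42, 40], dtype=float)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(times.reshape(-1, 1))

for cluster in range(3):
    members = times[labels == cluster]
    print(f"Cluster {cluster}: n={len(members)}, mean={members.mean():.1f} min")
```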

Calculate the Error Percentage

Once the costs for a meaningful sample of procedure codes have been manually calculated, the results can be placed in a table, and a variance (or error) percentage can be computed, as shown in the exhibit below. The formula for this calculation is as follows:

Error Percentage = Difference/Known

For this calculation, the difference is the system-generated cost minus the known cost, and the known cost is the manually computed value against which the system-generated value is verified.
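In code, the calculation is a one-liner per charge code. The following minimal sketch uses hypothetical system-generated and known costs:

```python
# A minimal sketch of the error-percentage formula, using hypothetical
# system-generated and manually computed ("known") costs.
procedures = {
    # charge code: (system_cost, known_cost)
    "A": (10.50, 10.00),
    "B": (22.00, 25.00),
}

for code, (system, known) in procedures.items():
    error_pct = (system - known) / known  # Difference / Known
    print(f"{code}: error {error_pct:+.1%}")
```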

To calculate accuracy, the average error is simply subtracted from 100 percent. However, not every procedure will carry equal weight in every costing-system report, because some procedures may have low volume. Thus, a high error percentage may have no practical impact because the inaccurately costed procedure does not affect many patient records. To analyze the effective, or practical, consequence of these errors, the weighted error should be calculated as follows:

Weighted Error = Error × Volume

As shown in the exhibit below, which expands on the previous exhibit, adding rows A through D in the Weighted-Error Percentage column (column K) and dividing by the total of rows A through D in the Volume column (column J) yields 19 percent.

Thus, we can see that although procedure C had a very large error between the benchmark cost of $65.25 and the system-generated value of $17.34, the procedure does not occur often in the costing database, and its impact therefore will be minimal. So although the accuracy of the costing system is 69 percent, the effective accuracy of the data will be closer to 81 percent.
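The contrast between simple and effective accuracy can be sketched in Python as follows. All charge codes, costs, and volumes below are hypothetical rather than the exhibit's values, and absolute error percentages are assumed:

```python
# A sketch of the weighted-error calculation. All charge codes, costs, and
# volumes are hypothetical (not the exhibit's values), and absolute error
# percentages are assumed.
rows = [
    # (code, system_cost, known_cost, volume)
    ("A", 10.50, 10.00, 900),
    ("B", 22.00, 25.00, 400),
    ("C", 18.00, 60.00, 30),   # large error, but low volume
    ("D", 48.00, 50.00, 700),
]

total_volume = sum(volume for *_, volume in rows)
avg_error = sum(abs(s - k) / k for _, s, k, _ in rows) / len(rows)
weighted_error = sum(abs(s - k) / k * v for _, s, k, v in rows) / total_volume

print(f"Accuracy (unweighted): {1 - avg_error:.0%}")
print(f"Effective accuracy (volume-weighted): {1 - weighted_error:.0%}")
```

As in the article's exhibit, the low-volume, high-error code barely moves the volume-weighted result, so the effective accuracy comes out well above the unweighted figure.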

Find Reliability

One additional measure can be applied to these data to provide a clearer understanding of the costing model’s validity: reliability—that is, a system’s ability to produce consistent results. In this case, a correlation coefficient can easily be computed with Excel functions to help show how closely the costing system numbers track with the known values.

Correlation measures the extent to which the system-generated costs move in the same direction and proportion as the known costs. No system will produce costs at 100 percent accuracy, but the costs should be consistent—ensuring, for example, that the procedures to which the costing system assigns the highest cost are in reality the highest-cost procedures. Returning to the last exhibit above, for example, placing the formula “=CORREL(F2:F5, G2:G5)” in an Excel cell yields 0.91, or 91 percent. (This formula assumes row A is the second row of the Excel worksheet; the result is not shown in the exhibit.) The calculation indicates that the costing-system output tracks very closely with the known cost data and provides a high degree of assurance that the error rate is consistent.
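For analysts working outside Excel, the same statistic is available in Python's standard library (version 3.10 and later); the cost pairs below are hypothetical placeholders for columns F and G:

```python
# The same statistic outside Excel, using the standard library (Python 3.10+).
# The cost pairs are hypothetical placeholders for columns F and G.
from statistics import correlation

known_costs = [10.00, 25.00, 40.00, 50.00]
system_costs = [9.40, 26.50, 37.00, 52.00]

r = correlation(known_costs, system_costs)  # Pearson's r
print(f"Correlation: {r:.2f}")
```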

Although each institution may develop its own correlation standards over time, a threshold of at least 75 percent can be used as a starting minimum. Higher thresholds are always better but may not be attainable in the current costing configuration. Financial analysts can start by computing the correlation for each of the next few costing cycles to see what value it produces, and then work toward making the changes to the costing model needed to raise the value in future years.

Changes Across Periods

Another useful comparison that can yield insights into accuracy involves changes in bucket costs across multiple periods. An accurate costing system should also be highly reliable, so any change in costs at the bucket level should be either easily explainable or indicative of a potential system failure. Relative value unit (RVU) costing systems, depending on the number of RVUs and bucketing details, may be best compared at the charge-code level instead of the bucket level. In general, micro-costed procedures should be fairly stable between periods, with variability of less than 10 percent at the bucket level. If, over time, a database of period costs by charge code were created, specific rules could be built for accepting or rejecting the costing model data—based on historical variations specific to each charge code or department.
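Such screening rules are straightforward to automate. The sketch below flags any charge code whose bucket-level cost moves more than 10 percent between periods; the codes and costs are hypothetical:

```python
# A sketch of a simple period-over-period screen: flag any charge code whose
# cost moves more than 10 percent between periods. Codes and costs are
# hypothetical.
prior_period = {"71046": 42.10, "71250": 118.00, "73030": 36.75}
current_period = {"71046": 43.00, "71250": 141.50, "73030": 36.10}

THRESHOLD = 0.10  # 10 percent

for code, prior_cost in prior_period.items():
    change = (current_period[code] - prior_cost) / prior_cost
    if abs(change) > THRESHOLD:
        print(f"Investigate charge code {code}: {change:+.1%} change")
```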

When a variance is larger than normal, it should be investigated. With regard to variable cost, one would normally expect wage-related buckets to increase no more than the institution’s approved wage-rate hikes.

Reconciling can help prove that no dollars were lost during the costing process, but additional procedures must be completed before one can make any assertions about the accuracy of the costing data. Computing an error (or accuracy) percentage and a correlation coefficient can provide useful information about that accuracy. If the computed accuracy percentage falls short of management expectations, that finding can be an excellent starting point for a discussion about strategies an institution can take to improve its costing model. The better the executive team understands the current state of the costing model, the easier it should be to gain support for changes that will improve its accuracy, and thus its usefulness for management decision-making.

Paul Selivanoff, CPA, is a principal with Simply Better Outcomes, Lincoln, Neb.
