By Gerald Young
Senior Research Associate
ICMA Center for Performance Measurement
A natural focus in risk management is averting potential hazards before they result in costly claims. You plan for the catastrophes, scrutinize the indemnity clauses, conduct employee training, and offer advice on everything from special event permits to convention center construction. If the worst happens anyway, you’re hit with multimillion-dollar claims. If you’re successful, however, there are no headlines, no stories buried on page 27, and few risk managers’ names inscribed on new civic facilities’ dedication plaques. It can be hard to tell you’ve done anything at all.
Given that quiet anonymity equals success, how do risk managers communicate to their managers and elected officials what they’ve accomplished? And how do they even determine for themselves whether their track record is a result of hard work, luck, or some other combination of factors?
One approach to answering these questions is comparative performance measurement.
ICMA Approached to Start a Consortium
Based on an increasing interest in performance measurement throughout local government, a group of city and county administrators approached the International City/County Management Association (ICMA) in 1994 about beginning a performance measurement consortium.
ICMA has a long history of involvement in local government professional development activities, management texts, and general survey resources. While ICMA is primarily an association of local government chief executives, assistants, and other management staff, its publications have also included texts and special reports on risk management, human resources, benefits administration, and other key services.
The goals of this new consortium were to put that background to work while also bringing to the table all those with a stake in the data to be collected – city and county managers; department heads; finance, information technology, and payroll specialists; public works supervisors; police and fire employees; and risk managers. Through a series of workshops over the next five years, the original group grew from 44 large cities and counties to more than 100 jurisdictions of all population sizes.
Originally, the risk management component was one part of a generalized effort to measure support services. In 1999, the survey was retooled to reflect increased participation by risk managers, including representatives from Santa Monica, California; Richmond, Virginia; Austin, Texas; Sedgwick County, Kansas; Montgomery County, Ohio; and other communities around the nation. Staff from the Public Entity Risk Institute (PERI) and Public Risk Database Project (PRDP) also provided their input.
The greatest challenge faced in comparing performance among jurisdictions was deciding on a common set of definitions. Among the issues faced:
- When counting full-time equivalent staffing (FTEs), should jurisdictions report overtime hours (whether or not that overtime is compensated)?
- Should risk management FTEs include supervisory, clerical, and overhead staff?
- How should staff outside a central risk management office be reported (e.g., departmental safety officers, claims attorneys)?
- If a claim falls below a deductible amount or self-insured retention, should it still be reported?
- Should claims expenditures be reported as reserve funds are appropriated? In the year(s) expenses are paid? When the claim is closed?
- As new expenses are incurred on older claims, should the prior years’ results be adjusted or should these be reported in the year they are paid?
- Should total expenditures be adjusted for third party reimbursements or subrogations?
The result of the debate on these points was a set of written standards and instructions for all data points. Absent agreement on such fundamental issues, it would be impossible for jurisdictions to make any meaningful comparisons among themselves. Beyond that, however, the exercise of defining reporting procedures has aided individual cities and counties in performing comparisons internally from year to year. Particularly when there is staff turnover, these detailed instructions provide guidance for reporting data in a consistent manner, and they help to ensure that the same expenditure does not end up being reported in two different fiscal years.
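The standards themselves are written instructions, but purely as an illustration, the answers to questions like those above could also be captured in machine-readable form so that each year’s reporting applies the same assumptions. Everything in the following sketch, from the field names to the particular choices shown, is hypothetical rather than a reproduction of the actual ICMA definitions:

```python
# Hypothetical sketch: encoding a consortium's agreed reporting definitions
# so every jurisdiction (and every new staff member) applies the same rules.
# Field names and the choices shown are illustrative, not ICMA's standards.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReportingStandards:
    include_overtime_in_ftes: bool       # count overtime hours toward FTEs?
    include_overhead_staff: bool         # supervisory/clerical staff in risk FTEs?
    include_decentralized_staff: bool    # departmental safety officers, claims attorneys?
    report_below_retention_claims: bool  # report claims under the deductible/SIR?
    expenditure_basis: str               # "year_paid", "year_appropriated", or "claim_closed"
    restate_prior_years: bool            # adjust prior years as new expenses accrue?
    net_of_subrogation: bool             # subtract third-party reimbursements?

# One possible set of answers; the point is that the choices are explicit
# and documented, so they survive staff turnover intact.
STANDARDS = ReportingStandards(
    include_overtime_in_ftes=False,
    include_overhead_staff=True,
    include_decentralized_staff=True,
    report_below_retention_claims=True,
    expenditure_basis="year_paid",
    restate_prior_years=False,
    net_of_subrogation=True,
)
```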
As interesting as it might be to see which jurisdictions spent more or had higher numbers of claims, such statistics mean nothing without context. Thus, a series of demographic statistics and descriptive questions was also included in the survey. These included:
- Square mileage
- Total general fund and all-funds operating expenditures
- Total jurisdiction employees (FTEs)
- Median household income
- Dollar value of real property insured
- Number and type of vehicles in fleet
- What is included in the risk management staff’s responsibilities?
- Are there any special risk exposures in the community?
- What, if any, liability limits apply for local governments in the state?
- Is wage continuation specified by a labor agreement or state statute?
- What presumptions apply with regard to certain heart/lung conditions or cancers being work-related?
- Do supervisors’ evaluations include consideration of their divisions’ safety records?
With such information available, it is possible to perform analyses of the raw data collected based upon per capita or per FTE calculations, state or regional groupings, or a customized set of comparables that best matches the characteristics and risk exposures of one’s own community.
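As a minimal sketch of what such normalization involves (with invented figures and field names, not actual survey data), the per capita and per FTE calculations amount to a simple rescaling of the raw counts:

```python
# Illustrative sketch with invented figures: normalizing raw survey data so
# jurisdictions of different sizes can be compared on a common footing.
jurisdictions = [
    {"name": "City A", "population": 95_000, "ftes": 1_100, "liability_claims": 240},
    {"name": "County B", "population": 410_000, "ftes": 3_800, "liability_claims": 610},
]

for j in jurisdictions:
    per_capita = j["liability_claims"] / j["population"] * 1_000  # claims per 1,000 residents
    per_fte = j["liability_claims"] / j["ftes"] * 100             # claims per 100 FTEs
    print(f'{j["name"]}: {per_capita:.2f} claims per 1,000 residents, '
          f'{per_fte:.1f} claims per 100 FTEs')
```

The same raw counts can then be regrouped by state, region, or a custom set of comparables without re-collecting any data.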
As the program has grown from those original 44 cities and counties, each new jurisdiction has participated in on-site training with ICMA staff. This has included general discussions on performance measurement and individual meetings with each department. For risk managers, this has afforded an opportunity to review the instructions in detail, address questions, and offer suggestions for survey improvements.
Those survey improvements continue to be debated each year, initially at in-person meetings and now primarily via online message boards. This electronic forum has allowed for greater participation by all jurisdictions, particularly those that might not have the available travel funds to attend an annual meeting.
One danger in a dynamic, long-running program like this is that the survey “improvements” will change the questions so much over time that no time-series comparisons are possible. Thus, to limit such tinkering, each year’s survey reviews are guided by principles of:
- Accuracy: Would the clarification of a current question or definition elicit more accurate responses?
- Relevance: How would the addition of a question contribute to the identification and measurement of outcomes? If the question would be of limited interest or benefit, would the participants be better served by avoiding further survey expansions?
- Stability: If a particular question were changed, would significant changes in jurisdictions' data collection systems be required? Would the benefits of the revisions justify the change? What would be the effect on year-to-year analysis of results?
Data Collection and Cleaning
Once each year’s surveys are finalized, they are posted online for participants to enter their data. As structured, the tool allows for automatic calculation of sums and ratios, pop-up warnings when a response does not match expected formats or parameters (e.g., if the number of claims proceeding to litigation exceeded the total number of claims), and an administrative sign-off before the final data is submitted.
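The survey tool itself is not described in detail here, but the kind of parameter check it performs can be sketched as follows, using the litigation example above; the function name and messages are hypothetical:

```python
# Hypothetical sketch of the kind of entry-time validation described above.
# Returns warning messages rather than rejecting the entry, mirroring a
# pop-up window that asks the participant to verify the response.
def validate_claims_entry(total_claims: int, litigated_claims: int) -> list[str]:
    warnings = []
    if total_claims < 0 or litigated_claims < 0:
        warnings.append("Claim counts cannot be negative.")
    if litigated_claims > total_claims:
        warnings.append(
            f"Claims proceeding to litigation ({litigated_claims}) exceed "
            f"total claims ({total_claims}); please verify both figures."
        )
    return warnings

print(validate_claims_entry(total_claims=120, litigated_claims=150))
```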
The full dataset for all jurisdictions is analyzed for outliers or other logic failures. This data cleaning might flag, for example, a cost per claim significantly higher or lower than the mean, or a response from a particular jurisdiction that is drastically different from its response the year before. Each of these items is then subject to verification by the jurisdiction’s staff before it can be included in the final data release.
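The cleaning pass is described here only in general terms; one simple way to implement the two flags mentioned, assuming invented thresholds of two standard deviations from the group mean and a 50 percent year-over-year swing, is sketched below:

```python
# Hypothetical sketch of the data-cleaning flags described above. Thresholds
# (two standard deviations; a 50% year-over-year change) are invented.
from statistics import mean, stdev

def flag_outliers(costs_per_claim: dict[str, float], threshold_sd: float = 2.0) -> list[str]:
    """Flag jurisdictions whose cost per claim is far from the group mean."""
    values = list(costs_per_claim.values())
    mu, sd = mean(values), stdev(values)
    return [name for name, v in costs_per_claim.items() if abs(v - mu) > threshold_sd * sd]

def flag_big_swings(this_year: dict[str, float], last_year: dict[str, float],
                    max_change: float = 0.5) -> list[str]:
    """Flag jurisdictions whose value changed drastically from the prior year."""
    return [name for name, v in this_year.items()
            if name in last_year and last_year[name] > 0
            and abs(v - last_year[name]) / last_year[name] > max_change]

costs = {"City A": 4_200.0, "City B": 3_900.0, "City C": 12_500.0,
         "City D": 4_600.0, "City E": 4_100.0, "City F": 3_700.0}
print(flag_outliers(costs))  # ['City C'] with these invented figures
```

Anything flagged this way goes back to the jurisdiction for confirmation rather than being corrected unilaterally.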
The program’s results have taken three primary forms:
- Comparative charts of key measures (including electronic files of raw data)
- Case studies on high performers
- Efforts by individual jurisdictions to build on their lessons learned
The descriptive reporting starts with information like that displayed in Figure 1, Risk Exposure, but also notes which jurisdictions use third-party administrators, which manage their own litigation, which use automated risk management information systems, and at what confidence level they fund anticipated liabilities.
Taking these factors into account, along with differences in state law, it is easier to place in context the resulting data, such as liability claims per capita or liability expenditures.
Figure 2 illustrates traffic accidents per 100,000 miles driven for light vehicles. In this case, additional context can be gained from comparisons to: the total number of light vehicles in each jurisdiction’s fleet; average expenditures per accident; the number of accidents involving other vehicle types (e.g., police vehicles); the conduct of defensive driving, drug testing, or accident review programs; and policies regarding off-duty driving, shared pool cars, or the coverage of personal vehicles.
There are many ways of comparing workers' compensation claims; two are illustrated here. In Figure 3, jurisdictions are compared to each other as well as to their prior year performance. While this particular chart shows the number of claims per 100 FTEs, it does not give an indication of the severity of those claims. Thus, as a companion to this information, a jurisdiction might also look at the expenditures per claim or the working days lost per claim.
In the case shown here, Longmont, Colorado, happens to be at around the median of the dataset for claims per 100 FTEs (Figure 3), but the time lost on each of those claims is far below the median (Figure 4). With this more detailed understanding of its relative performance, the jurisdiction can then focus its further analysis and performance improvement efforts in the appropriate directions.
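As a minimal sketch of this frequency-versus-severity comparison (with invented figures, not the data behind Figures 3 and 4), a jurisdiction can locate itself against the group median on both measures:

```python
# Illustrative sketch with invented figures: comparing claim frequency
# (claims per 100 FTEs) and severity (working days lost per claim) to the
# group medians, as in the Figure 3/Figure 4 discussion above.
from statistics import median

data = {
    # name: (claims per 100 FTEs, working days lost per claim)
    "City A": (8.2, 11.0),
    "City B": (6.5, 14.5),
    "City C": (7.4, 3.1),   # median frequency, but far lower severity
    "City D": (9.1, 9.8),
    "City E": (5.9, 12.2),
}

freq_median = median(v[0] for v in data.values())
sev_median = median(v[1] for v in data.values())

for name, (freq, sev) in data.items():
    print(f"{name}: {freq:.1f} claims/100 FTEs (median {freq_median:.1f}), "
          f"{sev:.1f} days lost/claim (median {sev_median:.1f})")
```

A jurisdiction that sits at the median on frequency but well below it on severity, as in the City C row here, would direct its improvement efforts differently than one with the opposite profile.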
When high performers are identified, they can be explored in depth as case studies – something that ICMA does through its publication, What Works. There have been 13 risk management case studies to date, ranging from a discussion of safety programs in Redmond, Washington (pop. 43,610) to risk management-public works partnerships on stormwater flooding claims in Fairfax County, Virginia (pop. 968,225). Each case study runs just 1-2 pages, but provides the program details and contact information necessary to follow up on effective practices.
Jurisdictions with particularly active performance measurement programs continue their analysis on their own and report their findings not only to their managers and elected officials, but to the public as well. Among those that report either year-to-year or inter-jurisdictional performance comparisons are: Bellevue, Lynnwood, and Vancouver, Washington; Phoenix, Arizona; Miami-Dade County, Florida; San Jose, California; and Prince William County and Fairfax County, Virginia. Much of the relevant data is included in budget presentations or posted on related websites. In Sterling Heights, Michigan, all residents receive a community calendar/annual report, which includes performance data for all departments with national and/or countywide comparisons. In Redwood City, California, the risk management budget includes four-year general liability expenditure comparisons between the city and its local joint powers risk pool.
Often, benchmarking is left to the end of a budget process – the subject of a quick phone survey of targeted communities. This may work for some straightforward statistics like population, tax levies, or crime rates. However, risk management is an area that deserves detailed discussion and coordination among the jurisdictions involved to ensure that the resulting numbers reflect the same definitions and assumptions, and that relevant differences in risk profiles or legal/policy environments are appropriately explained.
Does collecting this data mean that you have an easy answer when someone asks what the risk management office has achieved? Maybe not in 25 words or less, but any time you can quantify what you’re doing and provide benchmarks for comparison, you can raise the level of understanding among managers and elected officials, and in turn, garner their support for the decisions and funding necessary to continue effective risk management.
Figure 1: Risk exposures (excerpt)
Figure 2: Light vehicle accidents per 100,000 miles driven, FY 2002
Figure 3: Workers’ compensation claims per 100 jurisdiction FTEs, FY 2000-2002
Figure 4: Number of worker days lost per claim, single jurisdiction summary, FY
About the Author
Gerald Young is a Senior Research Associate with the ICMA Center for Performance Measurement. Prior to joining ICMA in 1998, he worked for eight years for the cities of Chula Vista and Loma Linda, California.
For more information on the Center, please visit http://icma.org/performance.
About the Symposium
Benchmarking for Continuous Improvement in Risk Management is presented as a public service of the Public Entity Risk Institute (PERI), 11350 Random Hills Rd., Suite 210, Fairfax, VA 22030. Web: www.riskinstitute.org.
The Public Entity Risk Institute provides these materials "as is," for educational and informational purposes only, and without representation, guarantee or warranty of any kind, express or implied, including any warranty relating to the accuracy, reliability, completeness, currency or usefulness of the content of this material. Publication and distribution of this material is not an endorsement by PERI, its officers, directors or employees of any opinions, conclusions or recommendations contained herein. PERI will not be liable for any claims for damages of any kind based upon errors, omissions or other inaccuracies in the information or material contained here.