
Hennen's American Public Library Ratings
The ratings are based on federal data, but the author alone is responsible for the ratings on this site.
haplr-index.com  6014 Spring Street, Racine, WI  53406   USA

Share knowledge, seek wisdom. 



Frequently Asked Questions on the HAPLR Index

 

What are the differences between the editions?

2010

The 11th edition was published in April 2010.  It is based on data published by IMLS in July of 2009.  The data included were reported by libraries in 2008.  Note that 2008 is the year of reporting, not the year of activity; a library reports in 2008 on 2007 activities.  To further complicate things, the various libraries and states have differing fiscal or reporting years.  Because of a change in the IMLS reporting cycle, there will be a second edition of HAPLR for 2010. 

2009
The 10th edition was published in June 2009.  It is based on data published by IMLS in December of 2008.  The data included were reported by libraries in 2007.  Note that 2007 is the year of reporting, not the year of activity; a library reports in 2007 on 2006 activities.  To further complicate things, the various libraries and states have differing fiscal or reporting years. 

2008
The ninth edition, published October 2, 2008, was delayed a full year by the late release of the 2005 federal data. The cause of the delay was the transfer of responsibility for publishing the data from FSCS to IMLS.  The edition was published on this web site and in American Libraries in October of 2008, but I made a major error: I published the same data as the seventh edition by linking to the wrong dataset on my hard drive.  The error was corrected online by October 10th and in American Libraries in the November edition.  

2007
No edition because of delay in publication of data (see explanation above).

2006
The eighth edition of the HAPLR Ratings was based on 2004 data from FSCS as published on the World Wide Web in July of 2006.  FSCS, the federal agency, compiles the annual reports submitted by state library agencies for nearly 9,000 libraries into a single dataset. 

2002-2005
The fourth through seventh editions were published in the fall of each year.  The data used had been submitted two years prior to the publication year. 


2001
A Fall 2001 edition of HAPLR had to be postponed and then abandoned because of delays in FSCS publication of the data.  The results for 1999 data should have been available in Spring 2001, allowing publication of HAPLR scores in Fall 2001.  But those results were delayed for almost a year and not published until May of 2002.  The 2000 data were published just 8 weeks later in July 2002.  FSCS indicated that it was their intent to publish the data in a more timely fashion from then on. 

2000
The third edition was in the November 2000 issue of American Libraries. 
The third edition did not include imputed data for the 1998 data year because, as of October 2000, FSCS had not yet supplied it.  Consequently, 1,648 libraries that did not report the needed data, usually reference queries answered or annual visits, were not included in the third edition. 

1999
The FSCS data used for the first edition lacked the needed data for 2,000 libraries, and the edition was divided into population groupings that did not match the FSCS groupings.  The second edition rectified both shortcomings of the first.  

The first edition was in the January 1999 issue of American Libraries, the second edition was published in September. Both the first and second editions were based on data from the Federal-State Cooperative Service (FSCS). The first edition was based on what FSCS calls Preliminary data, the second was based on what they call their Early Release data.

The two distinctions between the first and second editions were:
1) the number of libraries included: 7,000 in the first edition, nearly 9,000 in the second;
2) the population categories used, 4 in the first edition, 10 in the second. 

The second edition included 2,000 additional libraries because, after the first edition went to press, the FSCS in the Early Release edition began imputing data for libraries that had not reported data for key data elements.  Imputing is a bit like estimating, with a good deal more statistical validity.  The imputation added 2,000 libraries to the field for consideration.
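To make the idea concrete, here is a minimal sketch of one simple imputation rule: fill a library's missing value with the median of its population group.  The column names are hypothetical and the real FSCS/IMLS imputation model is more sophisticated than this; the Python fragment only illustrates the general idea.

    import pandas as pd

    # Toy dataset: 'visits' is missing for library B.
    df = pd.DataFrame({
        "library":   ["A", "B", "C", "D"],
        "pop_group": ["f", "f", "f", "g"],
        "visits":    [52000, None, 48000, 9000],
    })

    # Illustrative rule: impute a missing value as the median of
    # libraries in the same population group (B gets 50,000 here).
    df["visits"] = df.groupby("pop_group")["visits"].transform(
        lambda s: s.fillna(s.median())
    )
    print(df)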

The second edition also used the same 10 population categories used by the FSCS rather than the four arbitrary categories originally devised by the author.  The first edition broke population categories at 2,000, 10,000 and 100,000.  The second edition has breaks at 1,000; 2,500; 5,000; 10,000; 25,000; 50,000; 100,000; 250,000; and 500,000.   

What led you to do the HAPLR ratings?

Practically every time you pick up a magazine or newspaper there is another rating system for universities, places to work, hospitals, mutual funds, you name it.  But there was none for libraries.  Worse than that, the Money magazine listing of best places to live covered libraries by measuring only books per capita.  I was certain that a more comprehensive tool was needed. 

  Why don’t you consider electronic measures?

For a long time I have wanted to do so, and for the 2005 edition I finally made a limited attempt.  The federal data on which I base the ratings did not include such measures until recently.  For details, see the electronic measures page.  Even so, I have not incorporated these measures into the HAPLR scores because I believe them to be too unreliable. 

The 2006 federal data published by IMLS included a new data element for electronic use called Public Internet Users.  The LJ Index has chosen to use this new data even though this is the first time it has been included in its current form and even though the data appear quite skewed when measured on a per capita basis.  Most measures that HAPLR and the LJ Index use have a high-to-low range of about 8 or 10 to 1.  The range for electronic use is much, much higher. 

  Why don’t you consider square feet for the building?

Until recently, the data on which I base the ratings did not include such measures.  For details, see the building size page.  Even though we now have the data for square footage, I have not developed a good method for incorporating building size into the ratings. 

Isn’t it really quality of service that counts; why rate quantity only?      

Of course quality counts.  As I said in the January 1999 issue of American Libraries, “data measurement cannot capture a friendly smile and a warm greeting at the circulation desk.  Nor can data measurement alone measure the excitement of a child at story time or a senior surfing the Internet for the first time.”  But we have no accepted and nationally consistent measures of quality in library services that would allow for comparisons.  I agree that numbers alone do not identify truly great libraries, quality counts too.  On the other hand, I do not believe that a library can be truly great with poor numbers.  As my logic professor taught me, the numbers are a necessary but not sufficient condition.

Who is the HAPLR author?

Thomas J. Hennen Jr., the author of the HAPLR Index, has over 30 years' worth of experience in public libraries. He is Director of the Waukesha County Federated Library System and has a master's degree in library science from the University of Wisconsin-Milwaukee.  From 1983 to 1999 he was Administrator of Lakeshores Library System in southeastern Wisconsin, and from 1975 to 1983 he was Director of Watonwan County Library in southeastern Minnesota.  He has published in Library Journal, American Libraries, and other American Library Association publications, and he wrote a column on rural library materials for the American Library Association's Booklist magazine. He has been a speaker for library associations throughout the U.S. and Canada.

How does the author's library rate on the HAPLR Ratings?

I coordinate the activities of 16 libraries in Waukesha County.  In a federated library system the activities of individual libraries are locally determined.  The federated library system deals with interlibrary relationships and provides leadership and overall direction.  Nevertheless, the scores of libraries in the county are mostly very good. 

What has the response been to the HAPLR Ratings?

Overwhelming would be a good description of the response to the first edition.  The web site (HAPLR-Index.com) averages about 1,000 unique visitors per month.  The visitors come from all over the globe, but primarily from the U.S., of course.  Press coverage has also been excellent: over the years, hundreds of newspapers have covered the index with feature or front-page stories about their local library's rankings. 

   
What do you say to those who note that the information on which you base your ratings seems out of date?

Anyone involved with data gathering and statistics wishes the data could be timelier, but we do what we can.  The information is collected by the states and submitted to IMLS, checked for internal consistency, and then published first on the Internet.  IMLS imputes data for the roughly 2,000 libraries that do not report the data necessary to calculate their rankings. As states increasingly automate their data collection and allow for filing over the Internet, the data will come closer and closer to "real time," rather than the belated information we are now working with. 

 

Are there similar rating methods for libraries?

The independently produced HAPLR Index was the first of its kind for libraries in the United States.  Although published frequently in American Libraries, it has never been editorially sponsored by that publication.   

Bibliostat, a vendor of library statistical packages, and Library Journal launched the LJ Index - Star Library project in 2009.  

The fundamental difference between the two is that HAPLR includes input measures while the LJ index does not.  The LJ Index looks at only one side of the library service equation.  HAPLR looks at both sides. 

The closest thing to the HAPLR Index was developed in Germany.  The project, sponsored by the Bertelsmann Foundation, is called "BIX - The Library Index."  Bertelsmann partnered with the German library association to produce BIX, an index quite similar to HAPLR.  The main difference between BIX and HAPLR, aside from the publishing house backing, is that BIX was designed to provide comparisons of one library to another as well as over time; HAPLR compares all libraries to one another only within a given year.   An English language description of the BIX index is available at:
http://www.bertelsmann-stiftung.de/documents/Projekt_Info_Englisch_010112.pdf

There are no similar programs in Canada, Australia, or New Zealand.  I know that there is some interest in developing a similar index in Australia and New Zealand, because I published an article on the topic in APLIS, the Australasian Public Libraries and Information Services journal.  

Great Britain adopted national standards, and in 2000 the Audit Commission began publishing both a summary annual report of library conditions and individualized ratings of libraries.  Audit Commission personnel base the reports on statistical data, long-range plans, local government commitment to the library, and a site visit.  The Audit Commission is an independent body, and every library is assigned a score. The scoring chart displays performance in two dimensions. A horizontal axis shows how good the service is at present, on a four-point scale ranging from no stars (poor) to three stars (excellent).   A vertical axis shows the service's prospects for improvement over time, also on a four-point scale.  The narrative reports, which run about 40 pages, are very specific and quite blunt in their assessments and recommendations for improvement.  This is not quite the same thing as the HAPLR Index, but it is close.  See their site:
http://www.bestvalueinspections.gov.uk/

There is a project funded by DG13 of the European Commission within the Telematics Applications Programme. It uses Internet communications to develop a continuously updated database of statistics about library activities and associated costs in the context of their national economies. The project does not, however, develop an index similar to the HAPLR Index.  That information may be found at http://www.cordis.lu/libraries/en/publib.html

  How many libraries are there in each population category?

There are over 9,000 library entities included in the IMLS database. Library systems with multiple branches are counted as a single entity. The population categories used are those used by the IMLS for other comparisons, with one exception: the IMLS data include an additional category for libraries serving over 1 million population, but it would have contained too few libraries for the purposes of the HAPLR Ratings. 

Population Category      Total

a) 500 K                    83
b) 250 K                   100
c) 100 K                   337
d) 50 K                    545
e) 25 K                    943
f) 10 K                  1,773
g) 5 K                   1,481
h) 2.5 K                 1,329
i) 1 K                   1,479
j) 0 K                   1,010
Grand Total              9,080
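Reading each letter category as a lower bound on the population served (so "a" means 500,000 and over, and "j" means under 1,000), a small lookup reproduces the grouping.  This is a sketch under that assumption; the function name is mine, not part of the IMLS data.

    # Hypothetical helper: map a service population to its HAPLR
    # category letter, treating each break as a lower bound.
    BREAKS = [
        (500_000, "a"), (250_000, "b"), (100_000, "c"), (50_000, "d"),
        (25_000, "e"), (10_000, "f"), (5_000, "g"), (2_500, "h"),
        (1_000, "i"), (0, "j"),
    ]

    def population_category(population: int) -> str:
        for lower_bound, letter in BREAKS:
            if population >= lower_bound:
                return letter
        return "j"

    assert population_category(600_000) == "a"
    assert population_category(3_000) == "h"
    assert population_category(500) == "j"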

Can one compare rating numbers between two or more population categories?

With care, yes, one can do so.  The highs, lows, medians, and so forth vary by population size, so a score of, say, 600 may be more easily attainable in some categories than in others.  But the variations are not so extreme that no comparisons across population categories are possible.

   How can you mix both input and output measures?

Some have criticized the HAPLR Index for including both input and output measures in the same product.  They note that inputs like how much money is spent on materials or how many periodicals the library owns are different from outputs such as circulation per capita or turnover rate.  

Combining the two makes it possible for a library with good inputs and poor outputs to score moderately well.  Conversely, a library shortchanged on funding by its community, yet providing excellent service outcomes through good management, may rank lower than a library in a rich community with only moderately good management and output measures.  

I would like to get closer to answering the "are you getting what you paid for" type of question.  At this point, it appears to me that 70 to 80% of the output is traceable to good input levels.  The rest is probably traceable to good management or other factors that may not be measurable.   I hope to investigate the correlation of inputs and outputs further some day soon. (Research firms with grant money to spare, please take note.)

  What does a given rating number mean, and how should I interpret it?

The HAPLR Index is similar to an ACT or SAT score, with a theoretical minimum of 1 and a maximum of 1,000.  (Note that some of my critics insist that there are differences between HAPLR and ACT scores, so I should not make this comparison.  That is a bit like saying all metaphors or comparisons are useless; I reject the argument.)  Most libraries scored between 260 and 730, so scores above and below those numbers are remarkable.  

Consider the chart below, for libraries in the over-500,000 population category, as a brief illustration of the rating methodology.  A library above the 75th percentile for expenditure per capita ($38.50) will get a higher score on this measure than one below the 25th percentile. [Chart is for the 2008 edition.]

Expenditure per capita is weighted more heavily than percent of budget devoted to materials.  In the HAPLR Index each library is compared to all others in its population category on all 15 measures.  The combined score is then transformed into an index score so that all can be easily compared with a single number.  For more information see the next question and follow the relevant links to rating methods.  
 

Measurement Category                  HAPLR Weight   75th %ile   50th %ile   25th %ile

Expenditure per capita                           3      $38.50      $25.87      $19.33
Percent budget to materials                      2       18.0%       15.4%       12.9%
Materials expend. per capita                     2       $5.63       $3.96       $2.79
FTE staff per 1,000 population                   2         0.6         0.4         0.3
Periodicals per 1,000 residents                  1         7.9         4.6         3.3
Volumes per capita                               1         3.0         2.4         1.7
Cost per circulation (low to high)               3       $3.38       $4.29       $5.89
Visits per capita                                3         5.1         3.8         3.0
Collection turnover                              2         3.9         2.4         1.7
Circulation per FTE staff hour                   2         8.9         6.6         4.7
Circulation per capita                           2         8.9         5.0         3.9
Reference per capita                             2         2.0         1.4         0.8
Circulation per hour                             2       105.4        77.8        59.6
Visits per hour                                  1        68.5        56.2        40.7
Circulation per visit                            1         1.9         1.4         1.1
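As a rough sketch of the approach just described (each library's percentile rank on a measure within its population category, multiplied by the measure's weight, summed, and rescaled to the 1 to 1,000 index range), consider the Python fragment below.  The column names, the subset of measures, and the exact rescaling are simplifications of mine; the authoritative calculation is described on the rating methods page.

    import pandas as pd

    # Weights for a few of the 15 measures (see the table above).
    WEIGHTS = {"expenditure_per_capita": 3, "visits_per_capita": 3,
               "circulation_per_capita": 2, "volumes_per_capita": 1}

    def haplr_style_score(category: pd.DataFrame) -> pd.Series:
        """Weighted mean of within-category percentile ranks,
        rescaled to roughly 1-1,000. Not the exact HAPLR math;
        a reverse-scored measure such as cost per circulation
        (low to high) would rank in descending order instead."""
        total_weight = sum(WEIGHTS.values())
        weighted = sum(category[m].rank(pct=True) * w
                       for m, w in WEIGHTS.items())
        return (weighted / total_weight * 999 + 1).round()

    # Usage: score one population category at a time.
    libs = pd.DataFrame({
        "expenditure_per_capita": [38.50, 25.87, 19.33],
        "visits_per_capita":      [5.1, 3.8, 3.0],
        "circulation_per_capita": [8.9, 5.0, 3.9],
        "volumes_per_capita":     [3.0, 2.4, 1.7],
    })
    print(haplr_style_score(libs))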

 

Is weighting appropriate in a rating system?

Some object that HAPLR gives more weight to some measures than to others.  Yet not weighting the factors makes a value judgment as well: it treats them as equal in importance. That raises the question: is a library visit truly equal in value to a circulation, an electronic resource session, or attendance at a program?  BIX, the German rating system, uses weighting, as does HAPLR.  The table below shows the relative weights assigned to possible measures by HAPLR, BIX, and the LJ Index.  HAPLR assigns 38% of the weight to input measures, BIX assigns 56%, and the LJ Index assigns 0%.  Note that, because of the LJ Index's scoring algorithm, a library's final score can be almost totally driven by a single factor. 

Type    Measure                                         HAPLR weight   BIX weight   LJ Index weight
Input   Expenditure per capita                               10%
        Employees per 1,000 capita                            7%           8%
        Percent budget to materials                           7%
        Materials expend. per capita                          7%
        Collection units per capita                           3%           8%
        Periodicals per 1,000 residents                       3%
        Total opening hours per year per 1,000 capita                      8%
        User area in sqm per 1,000 capita                                  4%
        Employee hours per opening hour                                    4%
        Investment per capita                                              2%
        Stock renewal rate                                                12%
        Computer services in hours per capita                              4%
        Internet services                                                  4%
        Advanced training per employee                                     2%
Output  Number of visits per capita                          10%          12%            25%
        Cost per circulation (low to high)                   10%
        Circulation per capita                                7%           8%            25%
        Collection turnover                                   7%
        Circulation per FTE staff hour                        7%
        Reference per capita                                  7%
        Circulation per hour                                  7%
        Visits per hour                                       3%
        Circulation per visit                                 3%
        Stock turnover rate                                               12%
        Events per 1,000 capita                                            4%            25%
        Total expenditure per visit                                        4%
        Acquisitions budget per loan                                       4%
        Electronic resource use per capita                                                25%

Input totals                                                 38%          56%             0%
Output totals                                                62%          44%           100%
Combined totals                                             100%         100%           100%
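The HAPLR percentages above follow directly from the raw 1 to 3 weights in the measures table: each weight divided by the total weight of 29, so a weight of 3 contributes 3/29 (about 10%) and a weight of 1 contributes 1/29 (about 3%).  A few lines of Python confirm the 38% input share:

    # Raw HAPLR weights from the measures table, inputs vs. outputs.
    input_weights  = [3, 2, 2, 2, 1, 1]           # expenditure ... volumes
    output_weights = [3, 3, 2, 2, 2, 2, 2, 1, 1]  # cost/circ ... circ/visit

    total = sum(input_weights) + sum(output_weights)   # 29
    print(sum(input_weights) / total)    # 0.379... -> the 38% input share
    print(sum(output_weights) / total)   # 0.620... -> the 62% output share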

Where can I see the specific calculations behind the index ratings?

An index number is always an attempt to encapsulate a lot of data into a single number.  No such index number is perfect, of course.  An explanation of how the ratings are calculated is available on the explanation of ratings page. 

 

Is a score of 500 considered median for all categories?        

Just about, but not perfectly.  Blame it on Microsoft Excel, or the vagaries of prime numbers, if blame you must. Each of the 15 measures is a ratio, such as volumes per capita. When two non-prime numbers are involved in the ratio, it is possible for two or more libraries to tie on that measure.  When ties happen, the total number of points assigned is skewed toward a higher number than would otherwise have occurred.  In the grand scheme of things this matters little, because the median scores are affected only very marginally.
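As a toy illustration of the tie effect, assume tied libraries each receive the higher shared rank (one common tie-handling rule; the exact rule the HAPLR spreadsheet applies is not spelled out here).  The rank total with a tie exceeds the no-tie total:

    from scipy.stats import rankdata

    # Volumes per capita for four libraries; two of them tie at 2.4.
    values = [2.4, 2.4, 1.7, 3.0]

    print(rankdata(values, method="max"))      # [3. 3. 1. 4.] -> total 11
    print(rankdata(values, method="ordinal"))  # [2 3 1 4]     -> total 10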

 Can you provide specific information and ratings on my library?

Not for free; this is not my day job, and there are more than 9,000 libraries in the database. The HAPLR Index provides a comparative rating system that librarians, trustees, and the public can use to improve and extend library services in the third millennium. Order a rating sheet for your library today: see a sample report, then get a customized report for your library. You are granted permission to reprint any reasonable number of copies as long as the HAPLR copyright notice is included on each copy.  Standard reports are available for $15 each; specialized reports are also available.   All reports include:

•  your library's HAPLR score, its rank among all comparably sized libraries, and percentile scores;

•  a graphic comparison of your library's percentile ranking on the 15 input and output measures;

•  a detailed report on the library's score on each of the 15 factors and its rank among like-sized libraries;

•  comparisons to the 5 closest-sized libraries in the state and nation.  

 

Updated March 2010

 


© haplr-index.com
Webmaster: thennen@haplr-index.com