
Results-based accountability – beyond counting widgets

June 25, 2012


There is increasing demand to justify investments made in biomedical research (as discussed in these papers: Macilwain, Van Noorden, Ioannidis, Lauer).  Some metrics that have been applied to individuals for performance evaluation and promotion, such as numbers of publications and citations, are now being applied to research projects.
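
To make these metrics concrete, here is a minimal sketch (in Python, using hypothetical citation counts rather than real data) of the publication-and-citation indicators commonly applied to individuals, including the widely used h-index:

```python
# Minimal sketch: simple bibliometric indicators computed from
# per-publication citation counts. The input list is hypothetical.

def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for one project's publications
project_citations = [52, 18, 11, 7, 3, 0]

print("publications:", len(project_citations))
print("total citations:", sum(project_citations))
print("h-index:", h_index(project_citations))
```

Whether such individual-level indicators translate meaningfully to whole research projects is exactly the question posed below.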

Investigator-initiated and NIH-initiated studies are both facing increased pressure to show that they are worth the investment, but do we know how to define and create metrics to measure the value of these research investments?  What are the appropriate metrics for productivity and impact of general biomedical research, and in particular epidemiology studies?  What are the potential benefits and pitfalls?  Since results-based accountability is increasingly being applied to biomedical research, what is the best approach?

Posted by Epidemiology Branch, NHLBI

6 Comments
  1. June 29, 2012 5:49 pm

    I suppose that something could be learned from applying some evaluation to the contribution of different studies, but I think that overall it is a deeply flawed concept. The time frame is most important: many research studies pay off many years later. Human genome research has yet to pay off anywhere near the promise of the 1990s, but it is hard to believe that this research will not have great impact eventually.

    Also, many studies are incremental, not paradigm-shifting, but are still critically important. The incremental process of hypothesis, testing, replication, and new hypothesis is fundamental to research. Already the emphasis on innovation in the NIH review process and the headline orientation of many journals are discouraging formal replication studies, and we are probably worse off for it.

    Note that there is considerable controversy about using citation data for appointment and promotion reviews in academia. Someone doing incredibly innovative work in a small field is not going to generate the citation impact factor of someone who published a widely-cited methods paper or review. The same problem will confront the attempt to measure research project impact.

    I do believe that there are too many observational studies published that are redundant documentation of known associations (e.g., television-viewing and obesity). Sometimes this serves the training of a new investigator, but there is a time when such associations need to be studied in a trial. Of course, the never-changing $500,000 limit and the current financial situation are just about killing all investigator-initiated trial work.

    These are words of caution. Hopefully more clever people out there will have some ingenious suggestions for how to actually help defend the importance of individual studies.

  2. Lewis H. Kuller, MD, DrPH
    July 6, 2012 8:04 am

    The basic problem with much of the cross-talk is that it has little to do with epidemiology but rather with picking the “toys” to collect and analyze data in noninformative populations. Epidemiology is the study of epidemics. Ancel Keys, Jerry Stamler, George Comstock, et al. focused on the huge variation in disease by time, place, and person. These studies defined the parameters of the epidemic (diet, smoking, host susceptibility, BP, etc.) and then tested interventions. What you need to ask is: 1) what is the epidemic of interest?; 2) what component am I studying?; 3) do I have the tools to measure variables and outcomes?; 4) what population should I study?; 5) what are the key defined outcomes of the study?; 6) (most important) what will be the implications for public health and preventive medicine?; and 7) are the results important enough to require either a clinical trial or further testing in other unique populations?

    Epidemiology is also rooted in the biological sciences, including the brain. Epidemiology scientists can take the biological sciences in new directions. Perhaps the best book to read and discuss is “Uses of Epidemiology,” written by J. N. Morris over 50 years ago. Epidemiology studies are expensive because of administrative costs and too many “chiefs” and no “workers.”

    Thus, in separating widgets from value, you need to ask: 1) what aspects of the epidemic did the researcher investigate, and did the study provide “answers,” even null ones (no further studies needed)?; 2) given the results of the study, should a clinical trial be conducted, or further testing in other populations?; 3) is new technology needed?; 4) should the effort be abandoned as hopeless, and how do the results relate to pathophysiology? Unfortunately, many studies collect large amounts of “widgets” that don’t even fit. A few good studies are far better than massive amounts of irrelevant information that generates complex, irrelevant statistical analyses.

  3. July 6, 2012 9:55 am

    It is difficult to come up with an easy answer to how we might measure the impact of our work, but our discipline contains all the tools we need to conceptualize and operationalize such metrics.

    If we think of a causal pathway leading from an epidemiologic project, study, or investigation, we might expect to see a series of downstream events: publications (quality, number, timeliness), readership, communication in non-scientific journals, policies, actions. This may be cumbersome to apply to individual projects, but it can first be modeled using specific research questions.

    An example might be the work on folic acid and neural tube defects, an area of epidemiologic research well recognized for its impact. We can document the number of studies and, perhaps, their cost, the discussions in non-scientific journals, and the policy recommendations (supplementation of foods), and estimate the number of cases of neural tube defects that have been prevented (and their cost); a toy tally of such a pathway is sketched below. Applying such a methodology to individual grants might be more difficult, but it is an exercise worth thinking about.
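
    To make the tallying step concrete, here is a minimal sketch (in Python; the stage names and counts are hypothetical placeholders, not actual folic acid figures) of recording documented events along such a causal pathway:

    ```python
    # Minimal sketch: tally documented downstream events of a line of
    # research along a causal impact pathway. All stage names and
    # counts are hypothetical placeholders, not real folic acid data.

    pathway = {
        "publications": 0,
        "media_mentions": 0,        # coverage in non-scientific outlets
        "policy_citations": 0,      # e.g., food-supplementation recommendations
        "cases_prevented_est": 0,   # modeled estimate, not a direct count
    }

    def record(stage, count=1):
        """Add documented events to one stage of the pathway."""
        pathway[stage] += count

    # Hypothetical documentation gathered for one line of research
    record("publications", 14)
    record("media_mentions", 30)
    record("policy_citations", 2)
    record("cases_prevented_est", 500)

    for stage, n in pathway.items():
        print(f"{stage:>22}: {n}")
    ```

    The point of the sketch is only that each stage of the pathway can be given an explicit, countable definition; weighting the stages against one another is where the hard judgment lies.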

    I agree with the previous writers that time frame is very important – the “causal path” between research and impact can be long and meandering.

    Another issue is that epidemiology can produce useful information, but in many cases societal will, advocacy, and other forces are required to develop policies and implement changes. We often think that epidemiologic insights will jump off the page into the public realm without an intermediate process. We learned from the tobacco story that having an impact goes beyond having strong epidemiologic evidence. Metrics to assess the impact of epidemiology need to consider the process of turning science into policy.

  4. July 6, 2012 10:03 am

    I found the article by Lauer (2011) to be the most useful of the articles mentioned in the introduction to this challenge. It might be helpful if the blog editor could summarize a couple of points from each article to push the discussion forward. Are we talking about publishing metrics only? While scientific publications tell us something, they are only a small part of the story.

  5. David Herrington
    August 9, 2012 9:28 am

    I’ll add to the chorus concerning the long timeline necessary to determine the value of a specific epi project, or for that matter any line of scientific inquiry. Short-term metrics such as numbers of publications offer reassurance to administrators and politicians making difficult budget decisions in hard times, but they frequently underestimate the ultimate value of good science. Many times the most valuable science is best understood only in light of subsequent unanticipated developments in the field that bring new meaning to the original work. I would encourage the Institute to continue to focus on programmatic decisions that emphasize the importance of the research questions, the methodologic rigor employed, and innovation. High-quality work that fits these criteria will stand the test of time and be a good investment of our public funds in support of the cause of human health. If you really need metrics, I suggest you look for ways to quantify these characteristics as additional descriptors of the portfolio of funded research.

  6. Jay Chawla
    August 23, 2012 10:40 am

    The metrics are designed to raise up select journals and select researchers who fit in by habit, behavior, and values. The metrics are designed to give the money to those select researchers who have, over the early to middle part of their careers, impressed the right people in the right ways by doing what it is they are supposed to do. And the metrics trap those who give the funding, because they must only give the money to those who have been selected after many years of doing what it is they are supposed to do. So there doesn’t seem to be a way out — no there doesn’t seem to be any escape from the curse. What a pity. But at least they only publish what is supposed to be published…
