Inside NIA: A Blog for Researchers

The Approach criterion: why does it matter so much in peer review?

Posted on July 31, 2013 by Robin Barr, Director of the Division of Extramural Activities.

While preparing for a recent talk, I took a close look at our data on the scoring of grant applications. Every applicant wants great scores, and we want to help you understand how you’ll be scored, and why. For example, you may have heard that the Approach criterion score is highly correlated with the final impact score assigned to a grant application. Let’s get into the details of that.

Reviewers use five criteria to assess research grant applications, then discuss a final impact score.

As most applicants for NIH grants know, reviewers assess research grant applications using five criteria:

  • Significance
  • Innovation
  • Approach
  • Environment
  • Investigators

Reviewers give each criterion a separate score. These criterion scores are then considered together (along with additional review considerations, such as human research protections) when the overall impact score is assigned.

How is a criterion score different from an impact score?

Criterion scores are given independently by each reviewer before the review meeting. An impact score is given after discussion when all the reviewers have had a chance to hear each other’s point of view. (About half of applications are not discussed at the review meeting. These not-discussed applications receive only criterion scores. They don’t get final impact scores.)

The Approach criterion score is highly correlated with impact score.

With that separation between criterion score and impact score, why does the average of the reviewers' ratings on Approach correlate so highly with the impact score? In the several analyses that have been conducted (by NIGMS and on the Rock Talk blog), the correlation usually hovers around 0.8. Significance and Innovation also figure in, but with lower correlations than Approach, and Investigators and Environment lag a long way behind the other three criteria. Why does Approach matter so much?
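As a refresher on what a correlation around 0.8 means here, the sketch below computes a Pearson correlation between per-application average Approach scores and final impact scores. The numbers are entirely made up for illustration (they are not NIH data), and the helper `pearson_r` is just a plain implementation of the standard formula:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical numbers, NOT NIH data: each application's average Approach
# criterion score (1 = exceptional, 9 = poor) and its final impact score
# (10-90, lower is better). Both scales run in the same direction, so a
# strong relationship shows up as a large positive correlation.
approach = [2.0, 3.3, 2.7, 5.0, 4.3, 6.0, 3.0, 7.0]
impact = [20, 35, 31, 48, 45, 60, 25, 72]

print(round(pearson_r(approach, impact), 2))  # prints 0.99 for this toy data
```

A correlation near 0.8 across real applications, as in the analyses cited above, means Approach ratings track the final impact score closely, though not perfectly.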

What are reviewers really evaluating in the Approach criterion score?

Usually, when this tight relationship between the Approach criterion score and the impact score comes up, someone deplores the result and, in doing so, seems to demean review. These critics say that reviewers focus on the lowest common denominator in an application: by picking on methods and details, they ensure that methodologically strong but questionably significant applications rise to the top.

In preparing for my talk (to Pepper Center junior investigators), I looked over the kinds of criticisms reviewers made when giving poor scores on Approach. Methodological criticisms do occur, but far more often the criticisms were of the conceptual approach to the science. So I saw comments like these:

  • This is an old conceptual approach. So-and-so has conceived the problem differently, and that sheds new light.
  • The conceptual model is too simple. Other elements need to be added before the model will be effective in moving the field forward.

It was easy to see that these kinds of criticisms were driving down the Significance rating for the affected applications, too.

Now, I will say that, when I looked at better-scoring applications, methodological points were more dominant in the Approach score. Taken together, it begins to look as if the high overall correlation for Approach is driven partly by a substantive consideration (problems in the conceptual approach) and partly by methodological concerns. Thinking that through, I do think these data confirm my opinion of reviewers as constructive and thoughtful as well as assiduous and careful! The high correlation of Approach with the impact score perhaps reflects how these attributes play out across different applications.

Do you agree? Or, do you have other questions about criterion scores? Let me know by submitting a comment below.

Read Next:

How NIH Review Criteria Affect Your Score, from NIAID

6 Comments

Posted by CSR Reviewer on Jul 31, 2013 - 12:41 pm
I am not surprised to see the high correlation between the approach criterion score and the final impact score. I have been on NIH study sections for the past 10 years or so, and I can say that the majority of applications come from qualified investigators in good environments. So there's not going to be a lot of variance in those scores to explain impact. Most of the applications also address a problem of at least reasonable significance, so there goes that variance. What separates a good application from a great one (and we all know that a grant has to be at least great to get funded these days) is usually the approach: how the investigators have operationalized their theoretical or practical question, whether they have attended to the important methodological issues (not the nit-picky ones, the BIG ones), and so on. I have also seen innovation influencing impact scores more and more, so I won't be surprised if that correlation gets larger over time.

Posted by Anonymous on Jul 31, 2013 - 12:54 pm
The comments you cite in your examples are precisely what is wrong with the approach of many reviewers. Both reflect subjective rather than scientific judgments, i.e., not whether the approach represents good science and is likely to have an impact regardless of the experimental results, but whether it matches the reviewers' particular scientific approach. In an environment in which no one else on the study section has read the proposal, an unfavorable comment from a single reviewer, particularly a primary or secondary reviewer, is the kiss of death. It is also interesting that the correlation between innovation and impact is lower than that between approach and impact, since evaluating methodology is much easier than assessing the quality of an innovation. This reflects the conservatism that has always been rampant in the peer review system but that, because of current fiscal issues, is now much more likely to lead to innovative proposals falling below the payline or being triaged by reviewers who may not be particularly perceptive regarding the significance of the innovation.

Posted by anonymous II on Jul 31, 2013 - 6:28 pm
I am now on the verge of retirement after 45 years in academia, during which I have been quite nicely funded by the NIH and remained so up to this year. I was also a chartered member of a study section and served on an ad hoc basis on several others. During this time I have developed a perspective based on my impressions, which may, in a certain sense, be relevant to this discourse. I believe that the composition and personalities of the study section members are a significant if not a decisive factor in the outcome. Cronyism and factionalism are very prevalent. In my view, the weak link in the NIH review system is the mechanism for selecting members. Too much discretion is left to the SRAs, who quite often were never main players in the scientific arena and thus lack gravitas. Very good scientists often have no hope of being funded while their "professional enemies" sit on a particular study section, especially if members are allowed to vote out of the range and the funding level is low. I am struggling to say it nicely, but I have encountered some particularly nasty people on study sections who never liked anything unless it came from a crony. Enough!

Posted by sillyquestion on Aug 06, 2013 - 8:41 am
As an early career person, there's a lot I don't know about the NIH review process but the comment by anonymous II especially caught my eye. It may be a silly question, but are the reviews themselves ever reviewed? What I mean is, does someone higher up the food chain check the reviews of study section members for outliers that might indicate bias/cronyism/factionalism?

Posted by DCS on Aug 06, 2013 - 12:25 pm
Because each application is usually assigned to at least three reviewers, the purported bias/cronyism/factionalism would have to be rampant/pervasive, and the reviewers in study sections in collusion, for the charge to stand. I have been reviewing grant applications for the NIH continuously for 16 years, twice as a chartered member of a study section and countless times as an ad hoc, and in all of those years not a single time have I ever approached or been approached by another reviewer to discuss how an application should be scored. Almost invariably, the only time I would meet the other reviewers would be at the study section meeting, meaning that my ONLY interaction with the reviewers was at the meeting itself. When do we collude (these days a reviewer does not even know who the other reviewers assigned the same application(s) are)? The consequences of what I just stated should be obvious, but I will spell them out: (a) Do reviewers themselves get reviewed? Yes, by each other, since no application is ever assigned to a single reviewer, so each receives multiple independent evaluations, whose validity is discussed in the open by an even larger panel to arrive at the final decision that determines the impact score; and (b) is there bias/cronyism/factionalism? An emphatic "no," based on my 16 years of experience as described above. The absence of cronyism and factionalism also supports the validity of the response to question (a), because it means that the reviewers can indeed review one another.

Posted by A frequent reviewer on Aug 01, 2013 - 12:19 pm
In my experience as a reviewer, most reviewers are familiar with the subject of a given set of applications in a study section, but they may not be experts in it. Therefore, it is critical that an application is assigned to an expert who is in a position to evaluate the proposed approaches. When a reviewer is familiar with the proposed approaches, grant applications tend to do better. I agree with the comments above that if reviewers have a bias toward a particular approach, the grant is not likely to get a fundable score.