1. Selection of individuals to submit ideas and to score questions:
Individuals representing a wide range of technical expertise
in newborn health and birth outcomes were selected by
including:
• Top 100 most productive researchers in the previous 5
years (2008–2012), according to the Web of Science®, in
any research that involved neonates anywhere in the
world, including (but not limited to) fundamental research,
obstetrics and gynaecology, social science, and other
fields;
• Top 50 most productive researchers in the previous 5 years
(see above) in research specifically involving neonates in
low- and middle-income countries (LMICs);
• Top 50 most productive researchers in the previous 5 years
(see above) in any research involving stillbirths;
• 400 programme experts in newborn health, contacted through
the Healthy Newborn Network Database and representing mainly
national-level health programme managers in LMICs.
2. Identification of questions to be scored:
All the identified individuals were approached and asked to
submit their three most promising ideas for improving newborn
health outcomes by 2025. An expert group meeting was
convened to review the 396 questions received from 132 experts.
After duplicate or overlapping ideas were removed or merged,
the submissions were consolidated into a set of 205 research
questions, and the wording of the questions was refined for clarity.
3. Scoring of research questions:
A set of five criteria to assess the proposed 205 research
questions was agreed on. The scoring criteria were based on the
CHNRI methodology [8–12]:
i. Likelihood of answering the question in an ethical way
ii. Likelihood of efficacy
iii. Likelihood of deliverability and acceptability
iv. Likelihood of an important disease burden reduction
v. Predicted effect on equity
During the preliminary meeting, 14 experts invited from the
larger pool of respondents completed their scoring to test the
methodology. The remaining experts were then asked to score,
independently and via an online survey, all listed research
options against all of the chosen criteria. Scores were received
from a total of 91 experts.
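As an illustrative sketch (in Python, not from the paper), each expert's survey response can be thought of as one answer per research question and criterion. The 1 / 0.5 / 0 / blank answer convention assumed below is the usual CHNRI one, which the text above does not spell out, and the question identifiers and criterion labels are hypothetical:

    # One expert's answers for two hypothetical research questions.
    # Assumed CHNRI convention: 1 = yes, 0 = no, 0.5 = informed but
    # undecided, None = not sufficiently informed to answer.
    CRITERIA = [
        "answerable_ethically",
        "efficacy",
        "deliverable_acceptable",
        "burden_reduction",
        "equity",
    ]

    expert_answers = {
        "Q001": {"answerable_ethically": 1, "efficacy": 0.5,
                 "deliverable_acceptable": 1, "burden_reduction": 1,
                 "equity": 0},
        "Q002": {"answerable_ethically": 1, "efficacy": None,
                 "deliverable_acceptable": 0, "burden_reduction": 0.5,
                 "equity": 1},
    }

The sketch after step 4 below shows how answers of this form, collected across experts, could be aggregated into the reported scores.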
4. Computation of scores for competing research options
and ranking:
Intermediate scores were computed for each of the five criteria;
each could range from 0% to 100%. These scores indicate the
“collective optimism” of the group of scorers that a given
research question would fulfil the given criterion. The overall
research priority score for each research question was then
computed as the mean of the five intermediate scores. Average
expert agreement scores were also calculated (Online Supplementary
Document).
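A minimal Python sketch of how these aggregates could be computed for a single research question is given below. It assumes the CHNRI answer convention noted above, a per-criterion intermediate score equal to the mean of the non-missing answers (expressed as a percentage), an overall research priority score equal to the mean of the five intermediate scores, and an average expert agreement equal to the mean proportion of scorers giving the modal answer per criterion; the exact agreement formula used in the paper is reported in the Online Supplementary Document, so the version below is only an assumption:

    from collections import Counter
    from statistics import mean

    CRITERIA = ["answerable_ethically", "efficacy",
                "deliverable_acceptable", "burden_reduction", "equity"]

    def intermediate_scores(answers, criteria=CRITERIA):
        """Collective optimism per criterion: mean of non-missing answers, in %."""
        scores = {}
        for c in criteria:
            values = [a[c] for a in answers if a.get(c) is not None]
            scores[c] = 100.0 * mean(values) if values else 0.0
        return scores

    def research_priority_score(per_criterion):
        """Overall research priority score: mean of the five intermediate scores."""
        return mean(per_criterion.values())

    def average_expert_agreement(answers, criteria=CRITERIA):
        """Assumed agreement measure: average, over criteria, of the fraction
        of scorers who gave the most common (modal) answer for that criterion."""
        fractions = []
        for c in criteria:
            values = [a[c] for a in answers if a.get(c) is not None]
            if values:
                modal_count = Counter(values).most_common(1)[0][1]
                fractions.append(modal_count / len(values))
        return mean(fractions) if fractions else 0.0

    # Hypothetical answers of three experts for one research question.
    answers = [
        {"answerable_ethically": 1, "efficacy": 0.5, "deliverable_acceptable": 1,
         "burden_reduction": 1, "equity": 0},
        {"answerable_ethically": 1, "efficacy": 1, "deliverable_acceptable": 1,
         "burden_reduction": 0.5, "equity": 1},
        {"answerable_ethically": 1, "efficacy": 0, "deliverable_acceptable": None,
         "burden_reduction": 1, "equity": 1},
    ]

    per_criterion = intermediate_scores(answers)
    print(per_criterion)                           # intermediate scores, 0-100%
    print(research_priority_score(per_criterion))  # overall priority score
    print(average_expert_agreement(answers))       # assumed agreement measure

Ranking the 205 research questions then amounts to sorting them by their overall research priority scores.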