Data evaluation
To date, there is no single, standardised critical appraisal tool suitable for all study designs. Moreover, integrative reviews allow the incorporation of a range of research designs, making global critical assessment difficult. In line with previously published integrative reviews [35], Hawker and colleagues' checklist was used to evaluate the quality of both quantitative and qualitative research designs and to ensure validity and methodological rigour [36]. Study quality was appraised across nine domains: 1) abstract and title; 2) introduction and aims; 3) method and data; 4) sampling; 5) data analysis; 6) ethics and bias; 7) results; 8) transferability and generalisability; and 9) implications and usefulness [36]. Each domain was rated on a descending scale of quality (good: 40 points; fair: 30 points; poor: 20 points; very poor: 10 points). The nine domain scores for each study were then summed and divided by nine to give an overall quality score [36] (see Table 3). Each domain has an overall question and specific statements describing the evidence required to meet each rating level. For example, domain one asks "Did they provide a clear description of the study?"; a "good" rating was warranted where there was a structured abstract with full information and a clear title, through to a "very poor" rating where there was no abstract at all. The first author (LD) scored each paper.
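To make the calculation explicit, the overall quality score for a given study can be expressed as the mean of its nine domain ratings (the notation is introduced here purely for illustration and does not appear in the original checklist):

\[
Q = \frac{1}{9}\sum_{d=1}^{9} s_d, \qquad s_d \in \{10, 20, 30, 40\},
\]

where \(s_d\) is the rating assigned to domain \(d\). Under the point values stated above, a study rated "good" on every domain therefore receives an overall score of 40, and a study rated "very poor" throughout receives 10.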