**Table 1.** *Inter-rater agreement.*

four studies. Once training was complete, studies were randomly assigned to one of the four undergraduate judges. The graduate student served as second judge and independently coded all studies. Regular meetings were held to review discrepancies and reach consensus. In total, 29 of the 31 studies (93.5%) included in the meta-analysis were independently rated by two judges. Inter-rater agreement ranged from 70% to 100% for the majority of coded variables (see details in **Table 1**). Agreement was lower for variables related to NPA, which can be attributed to imprecise reporting of NPA prevalence in the primary studies.
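The agreement figures reported above can be illustrated with a minimal sketch of simple percent agreement (the share of items two judges coded identically). This is an assumption for illustration only; the authors do not specify their exact computation, and the judge codes below are hypothetical.

```python
def percent_agreement(judge_a, judge_b):
    """Simple percent agreement: share of items coded identically by two judges."""
    if len(judge_a) != len(judge_b):
        raise ValueError("Both judges must code the same set of items")
    matches = sum(a == b for a, b in zip(judge_a, judge_b))
    return 100.0 * matches / len(judge_a)

# Hypothetical codings of ten studies by two independent judges
codes_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
codes_b = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]
print(percent_agreement(codes_a, codes_b))  # 80.0
```

Note that percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it for this reason.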
