{ "v1_Abstract": "Background: YouTube is an increasingly important medium for consumer health information \u2013 with content provided by healthcare professionals, government and non-government organizations, industry, and consumers themselves. It is a rapidly developing area of study for healthcare researchers. We examine the methods used in reviews of YouTube consumer health videos to identify trends and best practices. Methods and Materials: Published reviews of consumer-oriented health-related YouTube videos were identified through PubMed. Data extracted from these studies included type of journal, topic, characteristics of the search, methods of review including number of reviewers and method to achieve consensus between reviewers, inclusion and exclusion criteria, characteristics of the videos reported, ethical oversight, and follow-up. Results: Thirty-three studies were identified. Most were recent and published in specialty journals. Typically, these included more than 100 videos, and were examined by multiple reviewers. Most studies described characterizes of the videos, number of views, and sometime characteristics of the viewers. Accuracy of portrayal of the health issue under consideration was a common focus. Conclusion: Optimal transparency and reproducibility of studies of YouTube health-related videos can be achieved by following guidance designed for systematic review reporting, with attention to several elements specific to the video medium. Particularly when seeking to replicate consumer viewing behavior, investigators should consider the method used to select search terms, and use a snowballing rather than a sequential screening approach. Discontinuation protocols for online screening of relevance ranked search results is an area identified for further development.", "v1_col_introduction": "introduction : Social media provides effective forums for consumer-to-consumer knowledge exchange and sharing of health information. As well, it is an avenue for health care providers to potentially influence care. An American survey of cancer patients showed that 92% believe that internet information empowers them to make health decisions and helps them to talk to their physicians. (McMullan 2006) Social media is increasingly used by consumers, particularly young adults(Fox and Jones 2009) and parents.(Moore and Pew Internet Project 2011)\nYouTube is a video-sharing web site that has found a place in health information dissemination. It has been used in medical education,(Wang et al. 2013) patient education about specific conditions(Mukewar et al. 2012) and health promotion.(O\u2019Mara 2012) Misinformation has also been shared (Syed-Abdul et al. 2013) and the possibility of covert industry influence has been suggested,(Freeman 2012) leading to guidelines for assessing the quality of such videos. (Gabarron et al. 2013)\nAt present, little is known about the impact of social media and video sharing on pain management practices. The casual searching and viewing of vaccination videos on YouTube revealed a number of \u201chome videos\u201d of infants receiving vaccinations and demonstrated that poor pain management during immunizations is common. We wished to conduct a systematic review of YouTube videos depicting infants receiving immunizations to ascertain what pain management practices parents and health professionals use to reduce immunization pain and distress.\nSystematic reviews synthesize research evidence using formal methods designed to safeguard against epidemiological bias. 
They are reported in a transparent manner that allows the reader to assess the robustness of the study and replicate it. There are various approaches to systematic reviews. These include meta-analyses, in which results are synthesized statistically,(Moher et al. 2009) as well as qualitative and mixed methods systematic reviews.(Wong et al. 2013) Kastner argues that \"by matching the appropriate design to fit the question, synthesis outputs are more likely to be relevant and be useful for end users.\"(Kastner et al. 2012)\nIt was our intent to adapt this versatile methodology to systematically review YouTube videos of infant vaccination. However, YouTube was expected to pose some particular challenges to systematic inquiry.\nSystematic reviews typically synthesize research articles and reports. This evidence base is relatively stable and easily captured and manipulated, with metadata that can be retrieved from bibliographic services such as PubMed or Ovid MEDLINE. In the traditional model of systematic reviews, the body of knowledge is assumed to change, but there is a tacit assumption that the change is through the addition of new evidence. Indeed, little is removed from or modified in the corpus of published scientific literature. As of late April 2013, MEDLINE contained 949,881 records with a publication year of 2012, of which only 525 represented retraction notices and 45 published errata.\nIn contrast, the web and video sharing services such as YouTube are dynamic. Videos can be added or removed at any time by their publishers (or by the host, for violations of copyright or community guidelines), and the order of material in search results may change from day to day. The phenomenon of web resources disappearing is known as \"decay\" or \"modification\".(Bar-Ilan and Peritz 2008; Saberi and Abedi 2012)\nRecognizing that we would not be able to capture and study all YouTube videos ever posted on our topic, we instead sought to craft an approach that would let us capture the cohort of the videos in the YouTube domain on a given day, and extract the relevant information quickly to avoid the loss of any relevant videos.\nThis paper represents the findings of a preliminary step in designing our systematic review. We surveyed published studies of health-related YouTube videos to address the following question: In reviews and systematic reviews of health-related YouTube videos, what are common methodological challenges, exemplary methods and optimum reporting practices?", "v2_col_introduction": "introduction : Social media provides effective forums for consumer-to-consumer knowledge exchange and sharing of health information. As well, it is an avenue for health care providers to potentially influence care. An American survey of cancer patients showed that 92% believe that internet information empowers them to make health decisions and helps them to talk to their physicians. 
(McMullan 2006) Social media is increasingly used by many consumers, but particularly young adults.(Fox and Jones 2009) American data on the use of video sharing sites show that parents are more likely to use these services than nonparents, and nonwhite parents more likely than white parents; use by rural residents was as high as urban usage.(Moore and Pew Internet Project 2011)\nVery little information exists in the literature regarding social media and its impact on pain management practices. Casual searching and viewing on YouTube revealed numerous \u201chome videos\u201d of infants receiving vaccinations, and demonstrated that poor pain management during immunizations was common.\nWe wished to conduct a systematic review of YouTube videos depicting infants receiving immunizations to ascertain what pain management practices parents and health professionals are using to reduce immunization pain and distress. This paper represents a preliminary step to inform that review. We surveyed published studies of health-related YouTube videos to address the following question: In reviews and systematic reviews of health-related YouTube videos, what are common reporting practices, exemplary methods and methodological challenges?", "v1_text": "results and discussion : Twelve eligible studies were identified from the initial search (Figure 1).(Backinger et al. 2011; Pant et al. 2012; Ache and Wallace 2008; Tian 2010; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011; Keelan et al. 2012; Fat et al. 2012) Topics of the 12 initial reviews were: smoking cessation,(Backinger et al. 2011) acute myocardial infarction,(Pant et al. 2012) HPV vaccination,(Ache and Wallace 2008) organ donation,(Tian 2010) epilepsy,(Lo, Esser, and Gordon 2010) prostate cancer,(Steinberg et al. 2010) dentistry,(Kn\u00f6sel, Jung, and Bleckmann 2011) rheumatoid arthritis,(Singh, Singh, and Singh 2012) H1N1,(Pandey et al. 2010) orthodontics,(Kn\u00f6sel and Jung 2011) vaccination,(Keelan et al. 2012) and Tourette syndrome.(Fat et al. 2012) Most (8) were published in specialty journals.(Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Kn\u00f6sel and Jung 2011; Fat et al. 2012; Pant et al. 2012; Backinger et al. 2011) Three were in general and internal medicine journals,(Ache and Wallace 2008; Keelan et al. 2012; Pandey et al. 2010) and one in a health communications journal.(Tian 2010) Most were published from 2010 to 2012. The earliest was 2008 \u2013 three years after YouTube's inception.(Ache and Wallace 2008) Results are summarized in Table 2. Thirteen additional reviews were identified from the update search (Figure 2). 
Three were found ineligible when the full text of the article was examined; ten were found eligible,(Kerson 2012; Richardson and Vallone 2012; Stephen and Cumming 2012; Jurgens, Anderson, and Moore 2012; Thomas, Mackay, and Salsbury 2012; Ehrlich, Richard, and Woodward 2012; Tourinho et al. 2012; Mukewar et al. 2012; Kerber et al. 2012; Clerici et al. 2012) and an eleventh(Bromberg, Augustson, and Backinger 2012) was added because it was cited by one of the ten as informing their methods.(Richardson and Vallone 2012) methods and materials : PubMed was searched April 20, 2012, using the term \"YouTube\"; the search was limited to the Systematic Review subset. This yielded only 4 records, only 2 of which appeared to be reviews, so the limit was withdrawn. This expanded the search result to 153 records, with the earliest publication occurring in the spring of 2007. A second approach, a PubMed search of \"YouTube and (search or methods)\", yielded 86 records. The sample was augmented with two additional reviews nominated by the review team: an early review focusing on the portrayal of vaccinations that we were already aware of, and a review conducted at our institution, which was in press at the time of the search. Just prior to submission of this manuscript, an update search was conducted in PubMed for the term \u201cYouTube\u201d and publications added since the first search were identified and examined for novel features seen infrequently or not at all in the original sample (Table 1). The search results were screened by a single reviewer using the following criteria: the videos reviewed focused on consumer health rather than being targeted toward health care providers or trainees, and the study did not focus on adoption or use of social media. No limits were imposed regarding publication date or language. Data extracted from these studies included type of journal (general medical, specialty medical journal or internet/social media journal), topic of the review, characteristics of the search, methods of review including number of reviewers and method to achieve consensus between reviewers, inclusion and exclusion criteria, characteristics of the videos reported, ethical oversight, and follow-up. Data were extracted from the published report \u2013 we did not contact authors to seek clarification of methods. As we focused on methodological aspects as reported, we did not perform additional risk of bias assessments on the individual studies, did not plan to perform meta-analysis, and did not publish a protocol. 
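The two PubMed strategies above could also be run programmatically. The following is a minimal illustrative sketch, not the method used in this study: it assumes NCBI's public E-utilities interface, where the esearch endpoint and the systematic[sb] subset filter are standard PubMed features.

```python
# Illustrative sketch (not this study's tooling): reproducing the two
# PubMed searches described above via NCBI E-utilities. retmax=0 asks
# only for the hit count, not the record IDs.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    query = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{query}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

print(pubmed_count("YouTube AND systematic[sb]"))    # Systematic Review subset
print(pubmed_count("YouTube AND (search OR methods)"))
```

Because the database grows daily, the counts returned today will differ from those reported for April 2012; recording the search date, as most of the reviewed studies did, is what makes such a search interpretable later.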
discussion : These reviews are recent, and for the most part, clearly reported. There are examples of excellent reporting in most facets of the review \u2013 study question, inclusion and exclusion criteria, search strategy, screening and data extraction methods. Few reviews, however, reported all elements well. Improved reporting would increase transparency and allow the reader to better assess the risk of bias in the study design.(Tricco, Tetzlaff, and Moher 2010) The study design should be strong and reproducible, with methods in line with those of a well-conducted systematic review.(Moher et al. 2009) Through this manuscript, we aim to describe the array of methods and data available to those planning to undertake this type of work, recognizing that, depending on the data examined and the objectives of the review, methods will vary. While it is premature to put forth reporting guidelines \u2013 defined as a minimum set of elements that should be reported to enable the reader to understand the conduct of the study and assess the risk of bias and generalizability of results \u2013 the PRISMA checklist (Moher et al. 2009) and accompanying elaboration and explanation (Liberati et al. 2009) generalize to many aspects of video reviews. The elements that have no real parallel in systematic reviews of research studies warrant the most consideration, and some of these are elaborated in Table 3. A number of factors make video searches less stable, and thus less replicable, than the sorts of database searches used in systematic reviews, where results either match the search criteria or do not and results are sorted by date. In any relevance-ranked search result, the order will change as new entries are added, existing ones removed, or the proprietary ranking algorithm is changed. Thus, a systematic approach that will accomplish the goals of the review (whether they are exhaustive identification of eligible videos or a sample of fixed size that represents the videos that the target audience is most likely to find) is needed, and should be fully described. There was one aspect of our upcoming review of immunization videos that was not informed by this survey of published studies \u2013 a discontinuation rule to stop screening when few additional studies were being found. Our initial search yielded 6,000 videos. Unlike the searches of bibliographic databases used for study identification in systematic reviews, this search result was ranked according to relevance. Spot checks showed that most of the lower ranked videos were irrelevant. Given the size of the list, screening the entire list in one sitting was not feasible. We did not know of a way to \u201cdownload\u201d the list, and had no assurance that the list would remain unchanged on subsequent screening days. Thus, a protocol was needed to discontinue screening when further screening was unlikely to yield additional eligible videos. Discontinuation rules allow one to manage relevance-ranked search results, and are essential when screening web search results that are often large, cannot be easily captured as a whole, and are not static from day to day. Given the absence of empirical guidance, we devised a pragmatic rule: screen until twenty consecutive ineligible videos are reviewed, then assess a margin of 50 more. Depending on the number of eligible videos found in the margin, a decision could be taken to stop screening or continue. With this discontinuation rule, one needs to accept that some additional eligible videos might have been found had the entire retrieval been screened; however, the likelihood of missing a large number is low given the relevance ranking. Examples of discontinuation rules can be found in several health care fields. Computerized adaptive testing often uses validated stopping rules to discontinue the test when it becomes statistically unlikely that administering additional items will improve the accuracy of the assessment. 
(Babcock and Weiss 2009) Clinical trials that have planned interim analyses may have pre-specified stopping rules designed \u201cto ensure trials yield interpretable results while preserving the safety of study participants.\u201d(Cannistra 2004) As search engine ranking algorithms improve, there is increasing opportunity for systematic reviewers to use sources that offer relevance ranking, such as Google Scholar or PubMed Related Citations. Experimental research has demonstrated that ranking algorithms can successfully place eligible records high in a search result.(Cohen et al. 2006; Sampson et al. 2006) Yet, we were unable to identify any practical guidance on stopping rules for screening in the systematic review context, nor any explicit reports of when or if screening was stopped for Internet-based searches in systematic reviews. This is an area requiring more complete reporting on the part of systematic reviewers, as well as a useful area for further research. It should be noted that factors such as screening order, the use of snowballing or other techniques to mimic consumer searching behaviour, and discontinuation rules are relevant only when there is a tacit acceptance that not all potentially relevant videos will be identified. While we have focused our efforts on informing the conduct of systematic reviews, video producers hoping to reach consumers may wish to consider several factors. As the number of views varies 1000-fold across videos, a clear marketing plan is needed for any production effort to be worthwhile. Our review suggests that videos styled as home videos appeal to a broader audience than dyadic videos. Just as reviewers use empirically derived search terms, producers will want to select titles and keywords that are likely to match what consumers type into the search bar. Producers will need to consider the factors that rank a video high in the related list as well as those that will make it appear in the search results. As the ranking algorithms for both search engine ranking and related sidebar ranking are proprietary and subject to change, video producers will want to seek up-to-date guidance on \"Search Engine Optimization\" for YouTube, or any other video channel they intend to use. Limitations of this systematic review include the possibility that there are additional informative reviews that we did not identify and include. We only searched one traditional bibliographic database, and did not include social media such as blog postings. Also, reviews of consumer health videos are relatively new. YouTube was created in 2005, and given the time needed to conduct and publish reviews, there may be a large number in preparation. Certainly, as many appeared in the course of this project (April to November 2012) as were published prior to its start, suggesting that there may be innovations that we have not yet captured. conclusions : There are many gaps in reporting in these early studies of YouTube videos, and no known reporting standards. Some strong trends are apparent \u2013 reviewers use simple searches at the YouTube site, restricted to English, with some process to remove off-topic retrievals and duplicates and to select only one part of multi-part videos. 
Two reviewers are generally used and kappa is commonly recorded. Although reviewers often state they are attempting to mimic user behaviour, this is generally limited to including the first few pages of search results \u2013 only a few of the most recent reviews have used a more sophisticated snowballing approach. Selection of search terms is typically done by health care professionals, whose searches may be quite different from the searches that consumers would typically do; however, there are examples of empirically determined search terms. As well, health consumers are infrequently included as assessors. Finally, efficiencies can be gained by determining stopping criteria for screening large relevance-ranked search results such as those provided by YouTube. In the absence of formal reporting guidelines (which might be premature), we recommend that those wishing to review consumer health videos use accepted systematic review methods as a starting point, with some of the elements specific to the video medium that we describe here. the searches : All reviews searched directly on the YouTube site, rather than through a third-party interface such as Google advanced search. Ten of the 12 reviews reported the date of the search.(Ache and Wallace 2008; Pandey et al. 2010; Fat et al. 2012; Backinger et al. 2011; Keelan et al. 2012; Pant et al. 2012; Steinberg et al. 2010; Kn\u00f6sel and Jung 2011) Most included several terms in the search, and these were presumably linked with \"OR\". Most did not address the sort order, so presumably used the default values. Currently, YouTube search results are sorted by relevance as the default. One review sampled the top-ranked items from searches sorted by relevance and number of views, using the default of searching all of YouTube, and then again searching only those classified by the person who posted the video as \u201ceducational\u201d (4 samples in all).(Kn\u00f6sel, Jung, and Bleckmann 2011) Only one review explained how search terms were selected \u2013 by using Google Trends to determine which topical terms were most searched.(Backinger et al. 2011) In the updated set, two additional studies used empirically derived search terms \u2013 most common brands and common search terms from Google Insights.(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) Several reviews attempted to make the search realistic, that is, searching as consumers might search,(Fat et al. 2012; Pant et al. 2012; Kn\u00f6sel and Jung 2011; Backinger et al. 
2011) but all seemed to have worked from the search list rather than using a snowball technique.(Grant 2004) Snowballing is a technique used in sampling for qualitative studies, where cases with connections to other cases are identified and selected,(Giacomini and Cook 2000) and in information retrieval, where references of references are considered for relevance.(Greenhalgh and Peacock 2005) It is a useful adjunct when identifying all relevant candidates through a search engine is difficult for whatever reason.(Horsley, Dingwall, and Sampson 2011) However, three of the reviews from the update search did describe snowballing, as follows: \"As clips were viewed, additional suggestions were offered by the site and these in turn led to further suggestions.\"(Stephen and Cumming 2012) \"For each of the top 10 videos, the top three related videos (ranked by YouTube) were also coded.\"(Thomas, Mackay, and Salsbury 2012) Finally, \"The search was supplemented by also reviewing the list of featured videos that accompany search results.\"(Kerber et al. 2012) Only one review imposed filters on the search (in that case, that the video had been uploaded in the past three months).(Pandey et al. 2010) the inclusion criteria : Eight of the 12 reviews stated that only English language videos were included.(Backinger et al. 2011; Keelan et al. 2012; Singh, Singh, and Singh 2012; Lo, Esser, and Gordon 2010; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010) None of the other 4 reported that they were language inclusive. Nine reported that they excluded \"off topic\" videos,(Keelan et al. 2012; Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Ache and Wallace 2008) but few gave clear criteria defining what was \u201con topic\u201d. Eight reported that they removed duplicates.(Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Ache and Wallace 2008; Backinger et al. 2011) Details of the treatment of duplicates were sparse; for instance, none stated how they selected which version to keep, or whether they aggregated view counts across all versions \u2013 although two stated that if videos had multiple parts, only one was kept, and both of these stated that they averaged views across all parts.(Singh, Singh, and Singh 2012; Pant et al. 2012) Two reviews stated that the video must have sound to be eligible.(Pant et al. 2012; Steinberg et al. 2010) One included only videos under 10 minutes in length(Steinberg et al. 2010) and one excluded videos blocked by their institution's internet filters.(Backinger et al. 2011) sample size : The number of videos assessed ranged from 10 to 622, with a mean of 145 and a median of 112. Some screened the entire search result (maximum 1634 videos). More common was an approach of screening a fixed number of top-ranked results, retaining those still eligible after duplicates, off-topic and other ineligible material were removed. Two reviews included a fixed sample size.(Kn\u00f6sel, Jung, and Bleckmann 2011; Lo, Esser, and Gordon 2010) Several set a fixed sample size to screen.(Fat et al. 2012; Kn\u00f6sel and Jung 2011; Backinger et al. 
2011) No reviews reported a formal sample size calculation. outcomes of interest : The characteristics of the videos that were evaluated varied depending on the study objectives, but videos were commonly assessed on whether they provided true or reliable information, or were positive or negative toward the health issue addressed. One review from the original set(Singh, Singh, and Singh 2012) used a validated scale as part of the assessment \u2013 DISCERN, an instrument for judging the quality of written consumer health information on treatment choices.(Charnock et al. 1999) One review identified from the update search(Mukewar et al. 2012) used two scales adapted from Inflammatory Bowel Disease patient education web sites: a detailed scale specific to IBD, and a 5-point global quality score. Two looked at knowledge translation \u2013 one of these examined whether videos posted after a change in guidelines for cardiopulmonary resuscitation employed the new or the old standard.(Tourinho et al. 2012) Another investigated the integrity with which parents and carers implement the Picture Exchange Communication System (PECS) in a naturalistic setting.(Jurgens, Anderson, and Moore 2012) One review assessed the findability of the videos \u2013 having identified 33 videos that portrayed complete Epley maneuvers for benign paroxysmal positional vertigo, they searched again using very general terms \u2013 dizzy, dizziness, vertigo, positional dizziness, positional vertigo, dizziness treatment, and vertigo treatment. The investigators then determined whether, and where, one of the videos depicting the Epley maneuver appeared in the list of relevance-ranked results for the less specific terms.(Kerber et al. 2012)
descriptive characteristics collected 4 :
Characteristic | n (of 12) | %
Number of views | 12 | 100
Length | 8 | 67
Date posted | 5 | 42
Number of \u201cLikes\u201d | 3 | 25
Average rating score | 3 | 25
Number rated by viewers | 2 | 17
Intended audience | 2 | 17
Production quality (Amateur/Pro) | 2 | 17
reported characteristics of the review : Descriptive characteristics: All reviews reported on some characteristics of the videos. Elements most commonly reported were: number of views,(Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Keelan et al. 2012; Ache and Wallace 2008; Backinger et al. 2011; Lo, Esser, and Gordon 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 2012; Singh, Singh, and Singh 2012; Tian 2010; Kn\u00f6sel and Jung 2011) length in minutes,(Pant et al. 2012; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011; Keelan et al. 2012) and date posted.(Pant et al. 2012; Lo, Esser, and Gordon 2010; Tian 2010; Singh, Singh, and Singh 2012; Pandey et al. 2010) While most reported median or mean number of views, often with some measure of dispersion, one reported the concentration of views \u2013 five videos accounted for 85% of total views.(Kerber et al. 2012) Other characteristics reported included number of \"likes\",(Pant et al. 2012; Singh, Singh, and Singh 2012; Fat et al. 2012) rating score,(Steinberg et al. 2010; Tian 2010; Keelan et al. 2012) times rated by viewers,(Lo, Esser, and Gordon 2010; Tian 2010) intended audience,(Pant et al. 
2012; Steinberg et al. 2010) amateur/professional production quality,(Fat et al. 2012; Lo, Esser, and Gordon 2010) type if non-standard (e.g. song, animation, advertisement),(Pant et al. 2012) and country of origin or address of author.(Tian 2010) Importantly, one review from the original set(Fat et al. 2012) and three from the update set harvested self-reported demographics of viewers.(Mukewar et al. 2012; Stephen and Cumming 2012; Richardson and Vallone 2012) Several classified the videos according to the creating source;(Pant et al. 2012; Ache and Wallace 2008; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011) each used its own typology, but common elements were personal experience/patient, news reports, professional associations, NGOs such as WHO or the Red Cross, pharmaceutical companies, and medical institutions. Three reviews from the update set addressed the issue of covert advertising \u2013 two for a tobacco product,(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) while the third discussed the notion of paid testimonials appearing as consumer-posted videos.(Mukewar et al. 2012) review methods : Two reviews reported saving all eligible videos offline.(Ache and Wallace 2008; Pandey et al. 2010) Some reported viewing, screening or assessing online at the time of discovery. Two (both by Kn\u00f6sel) described the reviewing conditions in some detail; videos were viewed at the same time and under the same conditions by two assessors.(Kn\u00f6sel, Jung, and Bleckmann 2011; Kn\u00f6sel and Jung 2011) Kn\u00f6sel also described opportunities for the reviewers to communicate \u2013 required in one review(Kn\u00f6sel, Jung, and Bleckmann 2011) and prevented in the other.(Kn\u00f6sel and Jung 2011) Eight of the 12 reviews described the reviewers.(Backinger et al. 2011; Lo, Esser, and Gordon 2010; Keelan et al. 2012; Steinberg et al. 2010; Kn\u00f6sel and Jung 2011; Singh, Singh, and Singh 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 2012) Most were health care professionals; however, one used lay raters \u2013 one potential patient (a youth) and one parent \u2013 to gain their perspective.(Kn\u00f6sel and Jung 2011) Ten of 12 reported on the number of reviewers \u2013 8 reported using 2 reviewers for each video,(Backinger et al. 2011; Tian 2010; Singh, Singh, and Singh 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 2012; Pandey et al. 2010; Keelan et al. 2012; Steinberg et al. 2010) one reported 3 (professional, parent, youth),(Kn\u00f6sel and Jung 2011) and one implied multiple reviewers without specifying the number.(Ache and Wallace 2008) No review reported having only one reviewer assess content. Four of the 10 with multiple reviewers reported using a third reviewer as arbitrator.(Keelan et al. 2012; Singh, Singh, and Singh 2012; Steinberg et al. 2010; Backinger et al. 2011) Seven of the 10 computed kappa on reviewer agreement. It was not always made clear which rating was used if conflicts occurred \u2013 i.e. 
neither arbitration nor consensus was described.(Fat et al. 2012; Keelan et al. 2012; Pandey et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Steinberg et al. 2010; Tian 2010; Backinger et al. 2011) Only two reviews described a training or calibration exercise prior to undertaking assessments. Backinger described 4 hours of training in which definitions were discussed and 5 practice videos were coded.(Backinger et al. 2011) Tian described pre-testing their code book using 20 videos and 40 text comments.(Tian 2010) Blinding was used in two reviews. Pandey reported that reviewers were blind to the purpose of the study.(Pandey et al. 2010) Lim Fat reported, \"The individual who rated the comments was blinded to the classification of the video as being a positive, negative or neutral portrayal of Tourette syndrome. Likewise, the raters for classification of the videos were blinded to the classification of the comments.\"(Fat et al. 2012) ethical oversight : None of the original cohort of twelve reviews stated that IRB approval was obtained. One review explicitly stated that IRB approval was deemed unnecessary due to the nature of the study.(Fat et al. 2012) In the 11 additional studies reviewed, four made explicit statements about IRB approval: one sought approval,(Ehrlich, Richard, and Woodward 2012) and three stated they were exempt.(Richardson and Vallone 2012; Mukewar et al. 2012; Kerber et al. 2012) follow-up : Three reviews examined a cohort of videos at two or more points in time, looking for changes in the number of hits. One review from the original set reported a follow-up at one and six months.(Lo, Esser, and Gordon 2010) Two more from the update set included follow-up, one after 7 months(Mukewar et al. 2012) and one after 1 month and 7 months.(Stephen and Cumming 2012)
Table 2.
Characteristic | n (Total = 12) | %
Type of journal (G = general/internal medicine, S = specialty, I = internet/social media) | S = 8, G = 3, I = 1 |
Year of publication: median (range) | 2011 (2008-2012) |
Search 1
Search date given | 10 | 83
Number of terms searched: median (range) | 3 (1-5) |
Direct search of YouTube | 12 | 100
Source of terms explained | 1 | 8
Used multiple searches or samples | 3 | 25
Videos 2
Number of videos included | mean 145, median 112 |
Inclusion criteria 3
English only | 8 | 67
\"Off topic\" excluded | 9 | 75
Review method 5
Qualifications of reviewers described | 6 | 50
2 or more reviewers | 10 | 83
Resolution method described | 6 | 50
Kappa reported | 7 | 58
Training of reviewers described | 2 | 17
Blinding of reviewers | 2 | 17
The reader wishing guidance on these aspects of reporting may wish to consult Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement (Moher et al. 2009) and the accompanying elaboration and explanation.(Liberati et al. 2009) 1 PRISMA elements 7 and 8; 2 PRISMA element 17; 3 PRISMA element 6; 4 PRISMA element 11; 5 PRISMA elements 9 and 10. 
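As the \"Kappa reported\" row above reflects, two-reviewer agreement in these studies is conventionally summarized with Cohen's kappa. The following minimal sketch shows such a computation; the scikit-learn dependency and the example ratings are illustrative assumptions, not data from any reviewed study.

```python
# Illustrative sketch: chance-corrected agreement between two video
# reviewers, summarized with Cohen's kappa. The ratings below are
# invented; none of the reviewed studies published raw per-video ratings.
from sklearn.metrics import cohen_kappa_score

# One rating per video from each reviewer, e.g. portrayal of the topic.
reviewer_a = ["positive", "negative", "neutral", "positive", "positive"]
reviewer_b = ["positive", "negative", "positive", "positive", "neutral"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1 = perfect agreement, 0 = chance level
```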
Table 3. Some systematic review methodological considerations specific to review of consumer health videos, with examples.
Characteristic | Examples
Whether the search was intended to identify all consumer-oriented videos or a sample | We reviewed videos posted: on YouTube; on the web
What video sources were selected | YouTube; Vimeo; Yahoo Video
How search terms were derived | Search terms were chosen: by the investigator; by soliciting suggestions from consumers; based on search log data such as Google Trends
Any system preferences that would have influenced the search results | What sort order was used; the search was limited to videos classified as \"educational\"; the search was limited to recently added videos
How the review of the search results was conducted | Sequential screening of search results; snowballing
Discontinuation rules | Results were screened: until a predetermined sample size was obtained (state how the sample size was determined); until the entire search result was considered; until predetermined discontinuation criteria were met (state how those criteria were determined)
How the instability of rankings was addressed | All screening done in a single day; search results were captured for later assessment
Any other measures designed to neutralize bias in the identification of videos 1 | We used a computer outside the institutional firewall and not previously used to search YouTube; we searched through DuckDuckGo.com to avoid having our location influence the ranking of videos
1 Many search sites customize search results based on factors such as your geographic location and search history.(Pariser 2013)
Figure 1. PRISMA flow diagram for included studies. Adapted from: Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(6): e1000097. doi:10.1371/journal.pmed.1000097
Figure 2. PRISMA flow diagram for studies from the update search (supplemental set). Adapted from: Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(6): e1000097. doi:10.1371/journal.pmed.1000097", "v2_text": "results and discussion : Twelve eligible studies were identified from the initial search (Figure 1).(Backinger et al. 2011; Pant et al. 2012; Ache and Wallace 2008; Tian 2010; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011; Keelan et al. 2012; Fat et al. 2012) Topics of the 12 initial reviews were: smoking cessation,(Backinger et al. 2011) acute myocardial infarction,(Pant et al. 2012) HPV vaccination,(Ache and Wallace 2008) organ donation,(Tian 2010) epilepsy,(Lo, Esser, and Gordon 2010) prostate cancer,(Steinberg et al. 2010) dentistry,(Kn\u00f6sel, Jung, and Bleckmann 2011) rheumatoid arthritis,(Singh, Singh, and Singh 2012) H1N1,(Pandey et al. 2010) orthodontics,(Kn\u00f6sel and Jung 2011) vaccination,(Keelan et al. 
2012) and Tourette syndrome.(Fat et al. 2012) Most (8) were published in specialty journals.(Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Kn\u00f6sel and Jung 2011; Fat et al. 2012; Pant et al. 2012; Backinger et al. 2011) Three were in general and internal medicine journals,(Ache and Wallace 2008; Keelan et al. 2012; Pandey et al. 2010) and one in a health communications journal.(Tian 2010) Most were published from 2010 to 2012. The earliest was 2008 \u2013 three years after YouTube's inception.(Ache and Wallace 2008) Results are summarized in Table 2. Thirteen additional reviews were identified from the update search (Figure 2). Three were found ineligible when the full text of the article was examined; ten were found eligible,(Kerson 2012; Richardson and Vallone 2012; Stephen and Cumming 2012; Jurgens, Anderson, and Moore 2012; Thomas, Mackay, and Salsbury 2012; Ehrlich, Richard, and Woodward 2012; Tourinho et al. 2012; Mukewar et al. 2012; Kerber et al. 2012; Clerici et al. 2012) and an eleventh(Bromberg, Augustson, and Backinger 2012) was added because it was cited by one of the ten as informing their methods.(Richardson and Vallone 2012)
The Searches
All reviews searched directly on the YouTube site, rather than through a third-party interface such as Google advanced search. Ten of the 12 reviews reported the date of the search.(Ache and Wallace 2008; Pandey et al. 2010; Fat et al. 2012; Backinger et al. 2011; Keelan et al. 2012; Pant et al. 2012; Steinberg et al. 2010; Kn\u00f6sel and Jung 2011) Most included several terms in the search, and these were presumably linked with \"OR\". Most did not address the sort order, so presumably used the default values. Currently, YouTube search results are sorted by relevance as the default. One review sampled the top-ranked items from searches sorted by relevance and number of views, using the default of searching all of YouTube, and then again searching only those classified by the person who posted the video as \u201ceducational\u201d (4 samples in all).(Kn\u00f6sel, Jung, and Bleckmann 2011) Only one review explained how search terms were selected \u2013 by using Google Trends to determine which topical terms were most searched.(Backinger et al. 2011) In the updated set, two additional studies used empirically derived search terms \u2013 most common brands and common search terms from Google Insights.(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) Several reviews attempted to make the search realistic, that is, searching as consumers might search,(Fat et al. 2012; Pant et al. 2012; Kn\u00f6sel and Jung 2011; Backinger et al. 
2011) but all seemed to have worked from the search list rather than using a snowball technique.(Grant 2004) Snowballing is a technique used in sampling for qualitative studies, where cases with connections to other cases are identified and selected,(Giacomini and Cook 2000) and in information retrieval, where references of references are considered for relevance.(Greenhalgh and Peacock 2005) It is a useful adjunct when identifying all relevant candidates through a search engine is difficult for whatever reason.(Horsley, Dingwall, and Sampson 2011) However, three of the reviews from the update search did describe snowballing, as follows: \"As clips were viewed, additional suggestions were offered by the site and these in turn led to further suggestions.\"(Stephen and Cumming 2012) \"For each of the top 10 videos, the top three related videos (ranked by YouTube) were also coded.\"(Thomas, Mackay, and Salsbury 2012) Finally, \"The search was supplemented by also reviewing the list of featured videos that accompany search results.\"(Kerber et al. 2012) Only one review imposed filters on the search (in that case, that the video had been uploaded in the past three months).(Pandey et al. 2010)
The Inclusion Criteria
Eight of the 12 reviews stated that only English language videos were included.(Backinger et al. 2011; Keelan et al. 2012; Singh, Singh, and Singh 2012; Lo, Esser, and Gordon 2010; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010) None of the other 4 reported that they were language inclusive. Nine reported that they excluded \"off topic\" videos,(Keelan et al. 2012; Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Ache and Wallace 2008) but few gave clear criteria defining what was \u201con topic\u201d. Eight reported that they removed duplicates.(Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Ache and Wallace 2008; Backinger et al. 2011) Details of the treatment of duplicates were sparse; for instance, none stated how they selected which version to keep, or whether they aggregated view counts across all versions \u2013 although two stated that if videos had multiple parts, only one was kept, and both of these stated that they averaged views across all parts.(Singh, Singh, and Singh 2012; Pant et al. 2012) Two reviews stated that the video must have sound to be eligible.(Pant et al. 2012; Steinberg et al. 2010) One included only videos under 10 minutes in length(Steinberg et al. 2010) and one excluded videos blocked by their institution's internet filters.(Backinger et al. 2011)
Reported Characteristics of the Review
Descriptive characteristics: All reviews reported on some characteristics of the videos. Elements most commonly reported were: number of views,(Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Keelan et al. 2012; Ache and Wallace 2008; Backinger et al. 2011; Lo, Esser, and Gordon 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 2012; Singh, Singh, and Singh 2012; Tian 2010; Kn\u00f6sel and Jung 2011) length in minutes,(Pant et al. 2012; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011; Keelan et al. 2012) and date posted.(Pant et al. 2012; Lo, Esser, and Gordon 2010; Tian 2010; Singh, Singh, and Singh 2012; Pandey et al. 
2010) While most reported median or mean number of views, often with some measure of dispersion, one reported the concentration of views \u2013 five videos accounted for 85% of total views.(Kerber et al. 2012) Other characteristics reported included number of \"likes\",(Pant et al. 2012; Singh, Singh, and Singh 2012; Fat et al. 2012) rating score,(Steinberg et al. 2010; Tian 2010; Keelan et al. 2012) times rated by viewers,(Lo, Esser, and Gordon 2010; Tian 2010) intended audience,(Pant et al. 2012; Steinberg et al. 2010) amateur/professional production quality,(Fat et al. 2012; Lo, Esser, and Gordon 2010) type if non-standard (e.g. song, animation, advertisement),(Pant et al. 2012) and country of origin or address of author.(Tian 2010) Importantly, one review from the original set(Fat et al. 2012) and three from the update set harvested self-reported demographics of viewers.(Mukewar et al. 2012; Stephen and Cumming 2012; Richardson and Vallone 2012) Several classified the videos according to the creating source;(Pant et al. 2012; Ache and Wallace 2008; Singh, Singh, and Singh 2012; Pandey et al. 2010; Kn\u00f6sel and Jung 2011) each used its own typology, but common elements were personal experience/patient, news reports, professional associations, NGOs such as WHO or the Red Cross, pharmaceutical companies, and medical institutions. Three reviews from the update set addressed the issue of covert advertising \u2013 two for a tobacco product,(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) while the third discussed the notion of paid testimonials appearing as consumer-posted videos.(Mukewar et al. 2012)
Sample Size
The number of videos assessed ranged from 10 to 622, with a mean of 145 and a median of 112. Some screened the entire search result (maximum 1634 videos). More common was an approach of screening a fixed number of top-ranked results, retaining those still eligible after duplicates, off-topic and other ineligible material were removed. Two reviews included a fixed sample size.(Kn\u00f6sel, Jung, and Bleckmann 2011; Lo, Esser, and Gordon 2010) Several set a fixed sample size to screen.(Fat et al. 2012; Kn\u00f6sel and Jung 2011; Backinger et al. 2011) No reviews reported a formal sample size calculation.
Review Methods
Two reviews reported saving all eligible videos offline.(Ache and Wallace 2008; Pandey et al. 2010) Some reported viewing, screening or assessing online at the time of discovery. Two (both by Kn\u00f6sel) described the reviewing conditions in some detail; videos were viewed at the same time and under the same conditions by two assessors.(Kn\u00f6sel, Jung, and Bleckmann 2011; Kn\u00f6sel and Jung 2011) Kn\u00f6sel also described opportunities for the reviewers to communicate \u2013 required in one review(Kn\u00f6sel, Jung, and Bleckmann 2011) and prevented in the other.(Kn\u00f6sel and Jung 2011) Eight of the 12 reviews described the reviewers.(Backinger et al. 2011; Lo, Esser, and Gordon 2010; Keelan et al. 2012; Steinberg et al. 2010; Kn\u00f6sel and Jung 2011; Singh, Singh, and Singh 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 
2012) Most were health care professionals; however, one used lay raters \u2013 one potential patient (a youth) and one parent \u2013 to gain their perspective.(Kn\u00f6sel and Jung 2011) Ten of 12 reported on the number of reviewers \u2013 8 reported using 2 reviewers for each video,(Backinger et al. 2011; Tian 2010; Singh, Singh, and Singh 2012; Kn\u00f6sel, Jung, and Bleckmann 2011; Fat et al. 2012; Pandey et al. 2010; Keelan et al. 2012; Steinberg et al. 2010) one reported 3 (professional, parent, youth),(Kn\u00f6sel and Jung 2011) and one implied multiple reviewers without specifying the number.(Ache and Wallace 2008) No review reported having only one reviewer assess content. Four of the 10 with multiple reviewers reported using a third reviewer as arbitrator.(Keelan et al. 2012; Singh, Singh, and Singh 2012; Steinberg et al. 2010; Backinger et al. 2011) Seven of the 10 computed kappa on reviewer agreement. It was not always made clear which rating was used if conflicts occurred \u2013 i.e. neither arbitration nor consensus was described.(Fat et al. 2012; Keelan et al. 2012; Pandey et al. 2010; Kn\u00f6sel, Jung, and Bleckmann 2011; Steinberg et al. 2010; Tian 2010; Backinger et al. 2011) Only two reviews described a training or calibration exercise prior to undertaking assessments. Backinger described 4 hours of training in which definitions were discussed and 5 practice videos were coded.(Backinger et al. 2011) Tian described pre-testing their code book using 20 videos and 40 text comments.(Tian 2010) Blinding was used in two reviews. Pandey reported that reviewers were blind to the purpose of the study.(Pandey et al. 2010) Lim Fat reported, \"The individual who rated the comments was blinded to the classification of the video as being a positive, negative or neutral portrayal of Tourette syndrome. Likewise, the raters for classification of the videos were blinded to the classification of the comments.\"(Fat et al. 2012)
Ethical Oversight
None of the original cohort of twelve reviews stated that IRB approval was obtained. One review explicitly stated that IRB approval was deemed unnecessary due to the nature of the study.(Fat et al. 2012) In the 11 additional studies reviewed, four made explicit statements about IRB approval: one sought approval,(Ehrlich, Richard, and Woodward 2012) and three stated they were exempt.(Richardson and Vallone 2012; Mukewar et al. 2012; Kerber et al. 2012)
Follow-up
Three reviews examined a cohort of videos at two or more points in time, looking for changes in the number of hits. One review from the original set reported a follow-up at one and six months.(Lo, Esser, and Gordon 2010) Two more from the update set included follow-up, one after 7 months(Mukewar et al. 2012) and one after 1 month and 7 months.(Stephen and Cumming 2012)
Outcomes of Interest
The characteristics of the videos that were evaluated varied depending on the study objectives, but videos were commonly assessed on whether they provided true or reliable information, or were positive or negative toward the health issue addressed. One review from the original set(Singh, Singh, and Singh 2012) used a validated scale as part of the assessment \u2013 DISCERN, an instrument for judging the quality of written consumer health information on treatment choices.(Charnock et al. 
1999) One review identified from the update search(Mukewar et al. 2012) used two scales adapted from Inflammatory Bowel Disease patient education web sites: a detailed scale specific to IBD, and a 5-point global quality score. Two looked at knowledge translation \u2013 one of these examined whether videos posted after a change in guidelines for cardiopulmonary resuscitation employed the new or the old standard.(Tourinho et al. 2012) Another investigated the integrity with which parents and carers implement the Picture Exchange Communication System (PECS) in a naturalistic setting.(Jurgens, Anderson, and Moore 2012) One review assessed the findability of the videos \u2013 having identified 33 videos that portrayed complete Epley maneuvers for benign paroxysmal positional vertigo, they searched again using very general terms \u2013 dizzy, dizziness, vertigo, positional dizziness, positional vertigo, dizziness treatment, and vertigo treatment. The investigators then determined whether, and where, one of the videos depicting the Epley maneuver appeared in the list of relevance-ranked results for the less specific terms.(Kerber et al. 2012) discussion : These reviews are recent, and for the most part, clearly reported. There are examples of excellent reporting in most facets of the review \u2013 study question, inclusion and exclusion criteria, search strategy, screening and data extraction methods. Few reviews, however, reported all elements well. Improved reporting would increase transparency and allow the reader to better assess the risk of bias in the study design.(Tricco, Tetzlaff, and Moher 2010) The study design should be strong and reproducible, with methods in line with those of a well-conducted systematic review.(Moher et al. 2009) Through this manuscript, we aim to describe the array of methods and data available to those planning to undertake this type of work, recognizing that, depending on the data examined and the objectives of the review, methods will vary. While it is premature to put forth reporting guidelines \u2013 defined as a minimum set of elements that should be reported to enable the reader to understand the conduct of the study and assess the risk of bias and generalizability of results \u2013 the PRISMA checklist (Moher et al. 2009) and accompanying elaboration and explanation (Liberati et al. 2009) generalize to many aspects of video reviews. The elements that have no real parallel in systematic reviews of research studies warrant the most consideration, and some of these are elaborated in Table 3. A number of factors make video searches less stable, and thus less replicable, than the sorts of database searches used in systematic reviews, where results either match the search criteria or do not and results are sorted by date. In any relevance-ranked search result, the order will change as new entries are added, existing ones removed, or the proprietary ranking algorithm is changed. Thus, a systematic approach that will accomplish the goals of the review (whether they are exhaustive identification of eligible videos or a sample of fixed size that represents the videos that the target audience is most likely to find) is needed, and should be fully described. 
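One mitigation listed in Table 3 is to capture the ranked result list at a single point in time, so that screening can proceed against a stable, dated snapshot. The following is a minimal sketch of that idea; it assumes the YouTube Data API v3 (which postdates most of the studies reviewed here), YOUR_API_KEY is a placeholder, and none of the reviewed studies used this tool.

```python
# Illustrative sketch: snapshotting a relevance-ranked YouTube search
# result so screening can proceed against a stable, dated list.
# Assumes the YouTube Data API v3; YOUR_API_KEY is a placeholder.
import datetime
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def snapshot_search(term: str, api_key: str, pages: int = 4) -> list:
    """Capture up to pages*50 relevance-ranked results for later screening."""
    results, page_token = [], None
    for _ in range(pages):
        params = {
            "part": "snippet",
            "q": term,
            "type": "video",
            "order": "relevance",   # YouTube's default sort order
            "maxResults": 50,
            "key": api_key,
        }
        if page_token:
            params["pageToken"] = page_token
        url = f"{SEARCH_URL}?{urllib.parse.urlencode(params)}"
        with urllib.request.urlopen(url) as resp:
            page = json.load(resp)
        results.extend(page.get("items", []))
        page_token = page.get("nextPageToken")
        if not page_token:
            break
    return results

items = snapshot_search("infant vaccination", "YOUR_API_KEY")
stamp = datetime.date.today().isoformat()
with open(f"search_snapshot_{stamp}.json", "w") as f:
    json.dump(items, f)  # rank order is preserved by list position
```

Storing the snapshot with its capture date also documents the search for reporting purposes, in the spirit of PRISMA elements 7 and 8.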
There was one aspect of our upcoming review of immunization videos that was not informed by this survey of published studies \u2013 a discontinuation rule to stop screening when few additional studies were being found. Our initial search yielded 6,000 videos. Unlike the searches of bibliographic databases used for study identification in systematic reviews, this search result was ranked according to relevance. Spot checks showed that most of the lower ranked videos were irrelevant. Given the size of the list, screening the entire list in one sitting was not feasible. We did not know of a way to \u201cdownload\u201d the list, and had no assurance that the list would remain unchanged on subsequent screening days. Thus, a protocol was needed to discontinue screening when further screening was unlikely to yield additional eligible videos. Discontinuation rules allow one to manage relevance-ranked search results, and are essential when screening web search results that are often large, cannot be easily captured as a whole, and are not static from day to day. Given the absence of empirical guidance, we devised a pragmatic rule: screen until twenty consecutive ineligible videos are reviewed, then assess a margin of 50 more (one possible implementation is sketched in code below). Depending on the number of eligible videos found in the margin, a decision could be taken to stop screening or continue. With this discontinuation rule, one needs to accept that some additional eligible videos might have been found had the entire retrieval been screened; however, the likelihood of missing a large number is low given the relevance ranking. Examples of discontinuation rules can be found in several health care fields. Computerized adaptive testing often uses validated stopping rules to discontinue the test when it becomes statistically unlikely that administering additional items will improve the accuracy of the assessment.(Babcock and Weiss 2009) Clinical trials that have planned interim analyses may have pre-specified stopping rules designed \u201cto ensure trials yield interpretable results while preserving the safety of study participants.\u201d(Cannistra 2004) As search engine ranking algorithms improve, there is increasing opportunity for systematic reviewers to use sources that offer relevance ranking, such as Google Scholar or PubMed Related Citations. Experimental research has demonstrated that ranking algorithms can successfully place eligible records high in a search result.(Cohen et al. 2006; Sampson et al. 2006) Yet, we were unable to identify any practical guidance on stopping rules for screening in the systematic review context, nor any explicit reports of when or if screening was stopped for Internet-based searches in systematic reviews. This is an area requiring more complete reporting on the part of systematic reviewers, as well as a useful area for further research. It should be noted that factors such as screening order, the use of snowballing or other techniques to mimic consumer searching behaviour, and discontinuation rules are relevant only when there is a tacit acceptance that not all potentially relevant videos will be identified. 
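To make the pragmatic rule above concrete, the following sketch implements one possible reading of it: stop after a run of twenty consecutive ineligible videos if a further margin of 50 yields no eligible ones, and otherwise continue. The is_eligible function is a stand-in for whatever screening judgment a review protocol specifies, and the margin decision could be operationalized differently.

```python
# Illustrative sketch of the discontinuation rule described above.
# is_eligible() is a placeholder for the protocol's screening judgment;
# stopping when the 50-item margin contains zero eligible videos is one
# possible operationalization of the margin decision.
def screen_with_stopping_rule(videos, is_eligible, run_limit=20, margin=50):
    """Screen a relevance-ranked list, returning the eligible videos."""
    eligible = []
    consecutive_ineligible = 0
    i = 0
    while i < len(videos):
        if is_eligible(videos[i]):
            eligible.append(videos[i])
            consecutive_ineligible = 0
        else:
            consecutive_ineligible += 1
        i += 1
        if consecutive_ineligible >= run_limit:
            # A long run of ineligible videos: assess a margin of 50 more.
            margin_items = videos[i:i + margin]
            margin_hits = [v for v in margin_items if is_eligible(v)]
            eligible.extend(margin_hits)
            i += len(margin_items)
            if not margin_hits:
                break  # discontinue: the ranked list has likely run dry
            consecutive_ineligible = 0  # hits found: resume screening
    return eligible
```

Because the list is relevance-ranked, the videos forgone by stopping early are concentrated in the low-relevance tail, which is what makes the trade-off acceptable.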
While we have focused our efforts on informing the conduct of systematic reviews, video producers hoping to reach consumers may wish to consider several factors. As the number of views varies 1000-fold across videos, a clear marketing plan is needed for any production effort to be worthwhile. Our review suggests that videos styled as home videos appeal to a broader audience than dyadic videos. Just as reviewers may use empirically defined search terms, producers will want to select titles and keywords that are likely to match what consumers type into the search bar. Producers will also need to consider the factors that rank a video high in the related-videos list, as well as those that make it appear in the search results. Because the algorithms for both search ranking and related-sidebar ranking are proprietary and subject to change, video producers will want to seek up-to-date guidance on “search engine optimization” for YouTube, or any other video channel they intend to use.

Limitations of this systematic review include the fact that there may be additional informative reviews that we did not identify and include. We searched only one traditional bibliographic database, and did not include social media such as blog postings. Also, reviews of consumer health videos are relatively new. YouTube was created in 2005, and given the time needed to conduct and publish reviews, there may be a large number in preparation. Certainly, as many appeared in the course of this project (April to November 2012) as were published prior to its start, suggesting that there may be innovations that we have not yet captured.

methods and materials : PubMed was searched April 20, 2012, using the term “YouTube”, with the search limited to the Systematic Review subset. This yielded only 4 records, only 2 of which appeared to be reviews, so the limit was withdrawn. This expanded the search result to 153 records, with the earliest publication occurring in the spring of 2007. A second approach, a PubMed search of “YouTube and (search or methods)”, yielded 86 records. The sample was augmented with two additional reviews nominated by the review team: an early review focusing on the portrayal of vaccinations that we were already aware of, and a review conducted at our institution, which was in press at the time of the search. Just prior to submission of this manuscript, an update search was conducted in PubMed for the term “YouTube”, and publications added since the first search were identified and examined for novel features seen infrequently or not at all in the original sample (Table 1). The search results were screened by a single reviewer using the following criteria: the videos reviewed focused on consumer health rather than being targeted toward health care providers or trainees, and the study did not focus on adoption or use of social media. No limits were imposed regarding publication date or language. Data extracted from these studies included type of journal (general medical, specialty medical, or internet/social media journal), topic of the review, characteristics of the search, methods of review including number of reviewers and method to achieve consensus between reviewers, inclusion and exclusion criteria, characteristics of the videos reported, ethical oversight, and follow-up.
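To make that extraction form concrete, the sketch below renders the fields just listed as a small data structure. The field names and the example values are our own illustration, not an instrument published with this review.

```python
# Sketch of a data-extraction record mirroring the fields listed above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReviewExtraction:
    journal_type: str                        # "general", "specialty", or "internet/social media"
    topic: str
    search_date: Optional[str] = None        # None when the review did not report it
    search_terms: List[str] = field(default_factory=list)
    n_reviewers: Optional[int] = None
    consensus_method: Optional[str] = None   # e.g. "third-reviewer arbitration"
    inclusion_criteria: List[str] = field(default_factory=list)
    exclusion_criteria: List[str] = field(default_factory=list)
    video_characteristics: List[str] = field(default_factory=list)
    ethical_oversight: Optional[str] = None
    followup: Optional[str] = None

# Illustrative entry; all values are invented examples:
example = ReviewExtraction(
    journal_type="specialty", topic="orthodontics", search_date="2010-06-01",
    search_terms=["braces"], n_reviewers=3,
    video_characteristics=["number of views", "length", "rating score"])
```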
Data were extracted from the published report – we did not contact authors to seek clarification of methods. As we focused on methodological aspects as reported, we did not perform additional risk of bias assessments on the individual studies, did not plan to perform a meta-analysis, and did not publish a protocol.

conclusions : There are many gaps in reporting in these early studies of YouTube videos, and no known reporting standards. Some strong trends are apparent – reviewers use simple searches at the YouTube site, restricted to English, with some process to remove off-topic retrievals and duplicates and to select only one part of multi-part videos. Two reviewers are generally used, and kappa is commonly recorded. Although reviewers often state they are attempting to mimic user behaviour, this is generally limited to including the first few pages of search results – only a few of the most recent reviews have used a more sophisticated snowballing approach. Selection of search terms is typically done by health care professionals, whose searches may be quite different from those consumers would typically run; however, there are examples of empirically determined search terms. As well, health consumers are infrequently included as assessors. Finally, efficiencies can be gained by determining stopping criteria for screening large relevance-ranked search results such as those provided by YouTube. In the absence of formal reporting guidelines (which might be premature), we recommend that those wishing to review consumer health videos use accepted systematic review methods as a starting point, with attention to the elements specific to the video medium that we describe here.

Table 2. Characteristics of the 12 included reviews (values are n and % of 12 unless noted).

Type of journal (G = general/internal medicine, S = specialty, I = internet/social media): S = 8, G = 3, I = 1
Year of publication: median (range): 2011 (2008-2012)

Search [1]
  Search date given                          10   83
  Number of terms searched: median (range)    3 (1-5)
  Direct search of YouTube                   12  100
  Source of terms explained                   1    8
  Used multiple searches or samples           3   25

Videos [2]
  Number of videos included: mean 145, median 112

Inclusion criteria [3]
  English only                                8   67
  “Off topic” excluded                        9   75

Descriptive characteristics collected [4]
  Number of views                            12  100
  Length                                      8   67
  Date posted                                 5   42
  Number of “Likes”                           3   25
  Average rating score                        3   25
  Number rated by viewers                     2   17
  Intended audience                           2   17
  Production quality (amateur/professional)   2   17

Review method [5]
  Qualifications of reviewers described       6   50
  2 or more reviewers                        10   83
  Resolution method described                 6   50
  Kappa reported                              7   58
  Training of reviewers described             2   17
  Blinding of reviewers                       2   17

The reader wishing guidance on these aspects of reporting may wish to consult Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement (Moher et al. 2009) and the accompanying elaboration and explanation.(Liberati et al. 2009) [1] PRISMA elements 7 and 8; [2] PRISMA element 17; [3] PRISMA element 6; [4] PRISMA element 11; [5] PRISMA elements 9 and 10.

Table 3. Some systematic review methodological considerations specific to reviews of consumer health videos, with examples.

Characteristic: Whether the search was intended to identify all consumer-oriented videos or a sample. Examples: we reviewed videos posted on YouTube; on the web.
Characteristic: What video sources were selected. Examples: YouTube; Vimeo; Yahoo Video.
Characteristic: How search terms were derived. Examples: search terms were chosen by the investigator; by soliciting suggestions from consumers; based on search log data such as Google Trends.
Characteristic: Any system preferences that would have influenced the search results. Examples: what sort order was used; the search was limited to videos classified as “educational”; the search was limited to recently added videos.
Characteristic: How the review of the search results was conducted. Examples: sequential screening of search results; snowballing.
Characteristic: Discontinuation rules. Examples: results were screened until a predetermined sample size was obtained (state how the sample size was determined); until the entire search result was considered; until predetermined discontinuation criteria were met (state how those criteria were determined).
Characteristic: How the instability of rankings was addressed. Examples: all screening was done in a single day; search results were captured for later assessment.
Characteristic: Any other measures designed to neutralize bias in the identification of videos [1]. Examples: we used a computer outside the institutional firewall and not previously used to search YouTube; we searched through DuckDuckGo.com to avoid having our location influence the ranking of videos.
[1] Many search sites customize search results based on factors such as geographic location and search history.(Pariser 2013)

Figure 1. PRISMA flow diagram for included studies. Adapted from: Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(6): e1000097. doi:10.1371/journal.pmed.1000097

Figure 2. PRISMA flow diagram for studies from the updating search (supplemental set). Adapted from: Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med 6(6): e1000097. doi:10.1371/journal.pmed.1000097", "v3_text": "methods and materials : PubMed was searched April 20, 2012, using the term “YouTube”, with the search limited to the Systematic Review subset. This yielded only 4 records, only 2 of which appeared to be reviews, so the limit was withdrawn. This expanded the search result to 153 records, with the earliest publication occurring in the spring of 2007. A second approach, a PubMed search of “YouTube and (search or methods)”, yielded 86 records (Table 1), which were screened by a single reviewer using the following criteria: the videos reviewed focused on consumer health rather than being targeted toward health care providers or trainees, and the study did not focus on adoption or use of social media. No limits were imposed regarding publication date or language. The sample was augmented with two additional studies nominated by the review team: an early study focusing on the portrayal of vaccinations that we were already aware of, and a study conducted at our institution, which was in press at the time of the search.
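For readers who want to script a comparable PubMed search, a minimal sketch using Biopython's Entrez utilities follows. The e-mail address is a placeholder (NCBI requires one), systematic[sb] is PubMed's Systematic Review subset filter, and counts retrieved today will not match the 2012 figures because PubMed's content has changed.

```python
# Sketch of the PubMed searches described above, via Biopython's Entrez module.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

def pubmed_count(term):
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

print(pubmed_count("YouTube AND systematic[sb]"))   # Systematic Review subset
print(pubmed_count("YouTube"))                      # subset limit withdrawn
print(pubmed_count("YouTube AND (search OR methods)"))
```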
Data extracted from these studies included type of journal (general medical, specialty medical, or internet/social media journal), topic of the review, characteristics of the search, methods of review including number of reviewers and method to achieve consensus between reviewers, inclusion and exclusion criteria, characteristics of the videos reported, ethical oversight, and follow-up. These are reported in full here, and informed our own baseline examination of YouTube videos. Data were extracted from the published report – we did not contact authors to seek clarification of methods. As we focused on methodological aspects as reported, we did not perform additional risk of bias assessments on the individual studies, did not plan to perform a meta-analysis, and did not publish a protocol. Just prior to submission of this manuscript, an update search was conducted in PubMed for the term “YouTube”, and publications added since the first search were identified and examined for novel features seen infrequently or not at all in the original sample (Table 1).

results and discussion : Twelve eligible studies were identified from the initial search (Figure 1).(Backinger et al. 2011; Pant et al. 2012; Ache and Wallace 2008; Tian 2010; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Knösel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Knösel and Jung 2011; Keelan et al. 2012; Fat et al. 2012) Topics of the 12 initial reviews were: smoking cessation,(Backinger et al. 2011) acute myocardial infarction,(Pant et al. 2012) HPV vaccination,(Ache and Wallace 2008) organ donation,(Tian 2010) epilepsy,(Lo, Esser, and Gordon 2010) prostate cancer,(Steinberg et al. 2010) dentistry,(Knösel, Jung, and Bleckmann 2011) rheumatoid arthritis,(Singh, Singh, and Singh 2012) H1N1,(Pandey et al. 2010) orthodontics,(Knösel and Jung 2011) vaccination,(Keelan et al. 2012) and Tourette syndrome.(Fat et al. 2012) Most (8) were published in specialty journals.(Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Knösel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Knösel and Jung 2011; Fat et al. 2012; Pant et al. 2012; Backinger et al. 2011) Three were in general and internal medicine journals,(Ache and Wallace 2008; Keelan et al. 2012; Pandey et al. 2010) and one was in a health communications journal.(Tian 2010) Most were published from 2010 to 2012; the earliest was 2008, three years after YouTube's inception.(Ache and Wallace 2008) Results are summarized in Table 2. Thirteen additional reviews were identified from the update search (Figure 2). Three were found ineligible when the full text of the article was examined, and ten were found eligible.(Kerson 2012; Richardson and Vallone 2012; Stephen and Cumming 2012; Jurgens, Anderson, and Moore 2012; Thomas, Mackay, and Salsbury 2012; Ehrlich, Richard, and Woodward 2012; Tourinho et al. 2012; Mukewar et al. 2012; Kerber et al. 2012; Clerici et al. 2012) An eleventh(Bromberg, Augustson, and Backinger 2012) was added because it was cited as informing the methods of one of the ten.(Richardson and Vallone 2012)

The Searches: All reviews searched directly on the YouTube site, rather than through a third-party interface such as Google advanced search.
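Searching the YouTube site by hand leaves nothing stable to screen against later. One way to freeze a relevance-ranked result list, sketched below, is to capture it through the YouTube Data API v3 search endpoint and save a dated snapshot; the API key and the four-page depth are illustrative assumptions, and the reviews summarized here predate this interface.

```python
# Sketch: capture a dated snapshot of a relevance-ranked YouTube search so
# screening can proceed later against a stable list (YouTube Data API v3).
import datetime
import json
import requests

SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def snapshot_search(query, api_key, pages=4):
    items, token = [], None
    for _ in range(pages):                  # 4 pages x 50 = top 200 results
        params = {"part": "snippet", "q": query, "type": "video",
                  "order": "relevance", "maxResults": 50, "key": api_key}
        if token:
            params["pageToken"] = token
        data = requests.get(SEARCH_URL, params=params, timeout=30).json()
        items.extend(data.get("items", []))
        token = data.get("nextPageToken")
        if not token:
            break
    stamp = datetime.date.today().isoformat()
    with open(f"search_{stamp}.json", "w") as f:   # frozen list for screening
        json.dump({"query": query, "date": stamp, "items": items}, f, indent=2)
    return items
```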
Ten of the 12 reviews reported the date of the search.(Ache and Wallace 2008; Pandey et al. 2010; Fat et al. 2012; Backinger et al. 2011; Keelan et al. 2012; Pant et al. 2012; Steinberg et al. 2010; Knösel and Jung 2011) Most included several terms in the search, and these were presumably linked with “OR”. Most did not address the sort order, so presumably used the default values; currently, YouTube search results are sorted by relevance as the default. One review sampled the top-ranked items from searches sorted by relevance and by number of views, using the default of searching all of YouTube and then searching again only those videos classified by the person who posted them as “educational” (4 samples in all).(Knösel, Jung, and Bleckmann 2011) Only one review explained how search terms were selected: by using Google Trends to determine which topical terms were most searched.(Backinger et al. 2011) In the updated set, two additional studies used empirically derived search terms, namely the most common brands and common search terms from Google Insights.(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) Several reviews attempted to make the search realistic, that is, to search as consumers might search,(Fat et al. 2012; Pant et al. 2012; Knösel and Jung 2011; Backinger et al. 2011) but all seemed to have worked from the search list rather than using a snowball technique.(Grant 2004) However, three reviews from the update set did use snowballing, describing their methods as follows: “As clips were viewed, additional suggestions were offered by the site and these in turn led to further suggestions.”(Stephen and Cumming 2012) “For each of the top 10 videos, the top three related videos (ranked by YouTube) were also coded.”(Thomas, Mackay, and Salsbury 2012) Finally, “The search was supplemented by also reviewing the list of featured videos that accompany search results.”(Kerber et al. 2012) Only one review imposed filters on the search (in that case, that the video had been uploaded in the past three months).(Pandey et al. 2010)
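The snowballing described in those quotations amounts to a bounded traversal of the related-videos graph. The sketch below is one way to express it; related_videos() is a hypothetical stand-in for however the ranked related list is captured, and the top-3 fan-out and depth limit are illustrative choices echoing the Thomas et al. description.

```python
# Sketch of a snowball search: start from the top search results, then
# repeatedly follow the top-k related videos of everything found, to a
# fixed depth, preserving the order in which videos were encountered.
from collections import deque

def snowball(seed_ids, related_videos, k=3, max_depth=2):
    seen = set(seed_ids)
    order = list(seed_ids)                  # screening order is preserved
    queue = deque((vid, 0) for vid in seed_ids)
    while queue:
        vid, depth = queue.popleft()
        if depth == max_depth:
            continue
        for rel in related_videos(vid)[:k]:    # top-k related, as ranked
            if rel not in seen:
                seen.add(rel)
                order.append(rel)
                queue.append((rel, depth + 1))
    return order
```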
The Inclusion Criteria: Eight of the 12 reviews stated that only English-language videos were included.(Backinger et al. 2011; Keelan et al. 2012; Singh, Singh, and Singh 2012; Lo, Esser, and Gordon 2010; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010) None of the other 4 reported that they were language-inclusive. Nine reported that they excluded “off topic” videos,(Keelan et al. 2012; Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Knösel, Jung, and Bleckmann 2011; Ache and Wallace 2008) but few gave clear criteria defining what was “on topic”. Eight reported that they removed duplicates.(Singh, Singh, and Singh 2012; Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Tian 2010; Fat et al. 2012; Ache and Wallace 2008; Backinger et al. 2011) Details of the treatment of duplicates were sparse; for instance, none stated how they selected which version to keep, or whether they aggregated view counts across all versions, although two stated that if videos had multiple parts, only one was kept, and both of these stated that they averaged views across all parts.(Singh, Singh, and Singh 2012; Pant et al. 2012) Two reviews stated that the video must have sound to be eligible.(Pant et al. 2012; Steinberg et al. 2010) One included only videos under 10 minutes in length(Steinberg et al. 2010) and one excluded videos blocked by their institution's internet filters.(Backinger et al. 2011)

Reported Characteristics of the Review
Descriptive characteristics: All reviews reported on some characteristics of the videos. The number of views,(Pant et al. 2012; Steinberg et al. 2010; Pandey et al. 2010; Keelan et al. 2012; Ache and Wallace 2008; Backinger et al. 2011; Lo, Esser, and Gordon 2010; Knösel, Jung, and Bleckmann 2011; Fat et al. 2012; Singh, Singh, and Singh 2012; Tian 2010; Knösel and Jung 2011) length in minutes,(Pant et al. 2012; Lo, Esser, and Gordon 2010; Steinberg et al. 2010; Knösel, Jung, and Bleckmann 2011; Singh, Singh, and Singh 2012; Pandey et al. 2010; Knösel and Jung 2011; Keelan et al. 2012) and date posted(Pant et al. 2012; Lo, Esser, and Gordon 2010; Tian 2010; Singh, Singh, and Singh 2012; Pandey et al. 2010) were the most common elements. While most reported the median or mean number of views, often with some measure of dispersion, one reported the concentration of views: five videos accounted for 85% of total views.(Kerber et al. 2012) Other characteristics reported included the number of “likes”,(Pant et al. 2012; Singh, Singh, and Singh 2012; Fat et al. 2012) rating score,(Steinberg et al. 2010; Tian 2010; Keelan et al. 2012) times rated by viewers,(Lo, Esser, and Gordon 2010; Tian 2010) intended audience,(Pant et al. 2012; Steinberg et al. 2010) amateur or professional status based on production quality,(Fat et al. 2012; Lo, Esser, and Gordon 2010) type if non-standard (i.e. song, animation, advertisement),(Pant et al. 2012) and country of origin or address of author.(Tian 2010) Importantly, one review from the original set(Fat et al. 2012) and three from the update set harvested self-reported demographics of viewers.(Mukewar et al. 2012; Stephen and Cumming 2012; Richardson and Vallone 2012) Several classified the videos according to the creating source.(Pant et al. 2012; Ache and Wallace 2008; Singh, Singh, and Singh 2012; Pandey et al. 2010; Knösel and Jung 2011) Each used its own typology, but common elements were: personal experience/patient, news reports, professional associations, NGOs such as WHO or the Red Cross, pharmaceutical companies, and medical institutions. Three reviews from the update set addressed the issue of covert advertising: two concerned a tobacco product,(Richardson and Vallone 2012; Bromberg, Augustson, and Backinger 2012) and the other discussed the notion of paid testimonials appearing as consumer-posted videos.(Mukewar et al. 2012)

Sample Size: The number of videos assessed ranged from 10 to 622, with a mean of 145 and a median of 112. Some screened the entire search result (maximum 1,634 videos). More common was an approach of screening a fixed sample, retaining those eligible after duplicates, off-topic material, and other ineligible material were removed. Two reviews used a fixed sample size,(Knösel, Jung, and Bleckmann 2011; Lo, Esser, and Gordon 2010) and several set a fixed sample size to screen.(Fat et al. 2012; Knösel and Jung 2011; Backinger et al. 2011) No review reported a formal sample size calculation.

Review Methods: Two reviews reported saving all eligible videos offline.(Ache and Wallace 2008; Pandey et al. 2010) Some reported viewing, screening or assessing online, at the time of discovery.
Two reviews (both by Knösel) described the reviewing conditions in some detail: videos were viewed at the same time and under the same conditions by two assessors.(Knösel, Jung, and Bleckmann 2011; Knösel and Jung 2011) Knösel also described opportunities for the reviewers to communicate, which were required in one study(Knösel, Jung, and Bleckmann 2011) and prevented in the other.(Knösel and Jung 2011) Eight of the 12 reviews described the reviewers.(Backinger et al. 2011; Lo, Esser, and Gordon 2010; Keelan et al. 2012; Steinberg et al. 2010; Knösel and Jung 2011; Singh, Singh, and Singh 2012; Knösel, Jung, and Bleckmann 2011; Fat et al. 2012) Most were health care professionals; however, one used lay raters, one potential patient (a youth) and one parent, to gain their perspective.(Knösel and Jung 2011) Ten of 12 reported on the number of reviewers: 8 reported using 2 reviewers for each video,(Backinger et al. 2011; Tian 2010; Singh, Singh, and Singh 2012; Knösel, Jung, and Bleckmann 2011; Fat et al. 2012; Pandey et al. 2010; Keelan et al. 2012; Steinberg et al. 2010) one reported 3 (professional, parent, youth),(Knösel and Jung 2011) and one implied multiple reviewers without specifying the number.(Ache and Wallace 2008) No review reported having only one reviewer make assessments of content. Four of the 10 with multiple reviewers reported using a third reviewer as arbitrator.(Keelan et al. 2012; Singh, Singh, and Singh 2012; Steinberg et al. 2010; Backinger et al. 2011) Seven of the 10 computed kappa on reviewer agreement; it was not always made clear which rating was used if conflicts occurred, i.e., neither arbitration nor consensus was described.(Fat et al. 2012; Keelan et al. 2012; Pandey et al. 2010; Knösel, Jung, and Bleckmann 2011; Steinberg et al. 2010; Tian 2010; Backinger et al. 2011) Only two reviews described a training or calibration exercise prior to undertaking assessments: Backinger described 4 hours of training in which definitions were discussed and 5 practice videos were coded,(Backinger et al. 2011) and Tian described pre-testing their code book using 20 videos and 40 text comments.(Tian 2010) Blinding was used in two reviews. Pandey reported that reviewers were blind to the purpose of the study.(Pandey et al. 2010) Lim Fat reported, “The individual who rated the comments was blinded to the classification of the video as being a positive, negative or neutral portrayal of Tourette syndrome. Likewise, the raters for classification of the videos were blinded to the classification of the comments.”(Fat et al. 2012)

Ethical Oversight: None of the original cohort of twelve reviews stated that IRB approval was obtained; one review explicitly stated that IRB review was deemed unnecessary due to the nature of the study.(Fat et al. 2012) Among the 11 additional studies reviewed, four made explicit statements about IRB approval: one sought approval,(Ehrlich, Richard, and Woodward 2012) and three stated they were exempt.(Richardson and Vallone 2012; Mukewar et al. 2012; Kerber et al. 2012)
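Since seven of the ten multi-reviewer studies reported kappa, a worked sketch of Cohen's kappa for two raters may be helpful; the formula is kappa = (p_o - p_e) / (1 - p_e), and the ratings below are invented solely to illustrate the calculation.

```python
# Sketch of Cohen's kappa for two raters over the same set of videos:
# p_o is observed agreement; p_e is chance agreement from each rater's marginals.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented ratings for six videos ("useful" vs "misleading"):
a = ["useful", "useful", "misleading", "useful", "misleading", "useful"]
b = ["useful", "misleading", "misleading", "useful", "misleading", "useful"]
print(round(cohens_kappa(a, b), 2))  # 0.67 on these invented data
```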
Followup: Three reviews examined a cohort of videos at two or more points in time, looking for changes in the number of views. One review from the original set reported a follow-up at one and six months.(Lo, Esser, and Gordon 2010) Two more from the update set included follow-up, one after 7 months(Mukewar et al. 2012) and one after 1 month and 7 months.(Stephen and Cumming 2012)

Outcomes of Interest: The characteristics of the videos that were evaluated varied depending on the study objectives, but videos were commonly assessed on whether they provided true or reliable information, or whether they were positive or negative toward the health issue addressed. One review from the original set(Singh, Singh, and Singh 2012) used a validated scale as part of the assessment: DISCERN, an instrument for judging the quality of written consumer health information on treatment choices.(Charnock et al. 1999) One study identified from the update search(Mukewar et al. 2012) used two scales adapted from inflammatory bowel disease patient education web sites: a detailed scale specific to IBD, and a 5-point global quality score. Two looked at knowledge translation – one of these examined whether videos posted after a change in guidelines for cardiopulmonary resuscitation employed the new or the old standard.(Tourinho et al. 2012) Another investigated the integrity with which parents and carers implement the Picture Exchange Communication System (PECS) in a naturalistic setting.(Jurgens, Anderson, and Moore 2012) One study assessed the findability of the videos – having identified 33 videos that portrayed complete Epley maneuvers for benign paroxysmal positional vertigo, the investigators searched again using very general terms (dizzy, dizziness, vertigo, positional dizziness, positional vertigo, dizziness treatment, and vertigo treatment) and then determined where, or whether, one of the videos depicting the Epley maneuver appeared in the list of relevance-ranked results for the less specific terms.(Kerber et al. 2012)

discussion : These reviews are recent and, for the most part, clearly reported. There are examples of excellent reporting in most facets of the review – study question, inclusion and exclusion criteria, search strategy, screening and data extraction methods. Few reviews, however, reported all elements well. Improved reporting would increase transparency and allow the reader to better assess the risk of bias in the study design.(Tricco, Tetzlaff, and Moher 2010) The study design should be strong and reproducible, with methods in line with those of a well-conducted systematic review.(Moher et al. 2009) Through this manuscript, we aim to describe the array of methods and data available to those planning to undertake this type of work, recognizing that, depending on the data examined and the objectives of the review, methods will vary. There was one aspect of our study that was not informed by this survey of published studies: a discontinuation rule to stop screening when few additional eligible videos were being found. Our initial search yielded 6,000 videos. Unlike the searches of bibliographic databases used for study identification in systematic reviews, this search result was ranked according to relevance. Spot checks showed that most of the lower-ranked videos were irrelevant. Given the size of the list, screening the entire list in one sitting was not feasible.
We did not know of a way to “download” the list, and had no assurance that the list would remain unchanged on subsequent screening days. Thus, we needed a protocol to discontinue screening when further screening was unlikely to yield additional eligible videos. Discontinuation rules allow one to manage relevance-ranked search results, and are essential when screening web search results that are often large, cannot easily be captured as a whole, and are not static from day to day. Given the absence of empirical guidance, we devised a pragmatic rule: screen until twenty consecutive ineligible videos are reviewed, then assess a margin of 50 more. Depending on the number of eligible videos found in the margin, a decision could be taken to stop screening or continue. With this discontinuation rule, one must accept that some additional eligible videos might have been found had the entire retrieval been screened; however, the likelihood of missing a large number is low, given the relevance ranking. Examples of discontinuation rules can be found in several health care fields. Computerized adaptive testing often uses validated stopping rules to discontinue the test when it becomes statistically unlikely that administering additional items would improve the accuracy of the assessment.(Babcock and Weiss 2009) Clinical trials that have planned interim analyses may have pre-specified stopping rules designed “to ensure trials yield interpretable results while preserving the safety of study participants.”(Cannistra 2004) As search engine ranking algorithms improve, there is increasing opportunity for systematic reviewers to use sources that offer relevance ranking, such as Google Scholar or PubMed Related Citations. Experimental research has demonstrated that ranking algorithms can successfully place eligible records high in a search result.(Cohen et al. 2006; Sampson et al. 2006) Yet we were unable to identify any practical guidance on stopping rules for screening in the systematic review context, nor any explicit reports of when or if screening was stopped for Internet-based searches in systematic reviews. This is an area requiring more complete reporting on the part of systematic reviewers, as well as a useful area for further research. Limitations of our study include the fact that there may be additional informative reviews that we did not identify and include. We searched only one traditional bibliographic database, and did not include social media such as blog postings. Also, reviews of consumer health videos are relatively new. YouTube was created in 2005, and given the time needed to conduct and publish reviews, there may be a large number in preparation. Certainly, as many appeared in the course of this study (April to November 2012) as were published prior to its start, suggesting that there may be innovations that we have not yet captured.

conclusions : There are many gaps in reporting in these early studies of YouTube videos, and no known reporting standards. Some strong trends are apparent – reviewers use simple searches at the YouTube site, restricted to English, with some process to remove off-topic retrievals and duplicates and to select only one part of multi-part videos. Two reviewers are generally used, and kappa is commonly recorded.
Although reviewers often state they are attempting to mimic user behaviour, this is generally limited to including the first few pages of search results – only a few of the most recent reviews have used a more sophisticated snowballing approach. Selection of search terms is typically done by health care professionals, whose searches may be quite different from those consumers would typically run; however, there are examples of empirically determined search terms. As well, health consumers are infrequently included as assessors. Finally, efficiencies can be gained by determining stopping criteria for screening large relevance-ranked search results such as those provided by YouTube. In the absence of formal reporting guidelines (which might be premature), we recommend that those wishing to review consumer health videos use accepted systematic review methods as a starting point, with attention to the elements specific to the video medium that we describe here.", "url": "https://peerj.com/articles/148/reviews/", "review_1": "Jafri Abdullah · Aug 12, 2013 · Academic Editor\nACCEPT\nCongratulations on the manuscript being accepted. Do send more high-quality manuscripts to PeerJ again.", "review_2": "Pascale Gaudet · Aug 12, 2013\nBasic reporting\nNA\nExperimental design\nNA\nValidity of the findings\nNA\nAdditional comments\nAll my comments have been addressed.\nCite this review as\nGaudet P (2013) Peer Review #1 of \"On the reproducibility of science: unique identification of research resources in the biomedical literature (v0.2)\". PeerJ https://doi.org/10.7287/peerj.148v0.2/reviews/1", "review_3": "Reviewer 2 · Aug 5, 2013\nBasic reporting\nThe authors have addressed my concerns.\nExperimental design\nNo comments\nValidity of the findings\nNo comments\nCite this review as\nAnonymous Reviewer (2013) Peer Review #2 of \"On the reproducibility of science: unique identification of research resources in the biomedical literature (v0.2)\". PeerJ https://doi.org/10.7287/peerj.148v0.2/reviews/2", "review_4": "Jafri Abdullah · Jul 2, 2013 · Academic Editor\nMAJOR REVISIONS\nDear Authors, I hope all of you can contribute to the necessary revisions needed so that the same peer reviewers can re-review them.", "pdf_1": "https://peerj.com/articles/148v0.2/submission", "pdf_2": "https://peerj.com/articles/148v0.1/submission", "review_5": "Reviewer 1 · Jun 29, 2013\nBasic reporting\nThe manuscript reports findings on a very important problem in biomedical literature and will serve to be very useful in many dimensions.\nExperimental design\nThe authors have carefully chosen 5 areas or reagents that are often not reported accurately in the literature. Here are some experiments that would add value or strengthen the claim of the paper.\n1) In papers where the reagent was not easily identifiable, it would be interesting to know how many other papers have cited this paper and claimed to have prepared their reagents using the methods used by that paper.\n2) The authors don't say much about whether there is a bias in the number of papers they selected.
Is 135+86+17 a good representation, and is that enough? How many of these papers had no relevance to this study, i.e. were not experimental papers requiring reporting of constructs, antibodies, etc.?\n3) Would it have been possible to randomly ask a subset of authors from papers that don't have identifiable resources to see why they did not report the details? Is it because the journals don't have a structured form to report these details? From the data presented, it almost seems like the stringency has no bearing on the # of identified resources.\n4) Is it possible that there are way too many reagents to report for any given paper, and authors report the resources for just a select set of reagents that are most relevant to understanding the point of the paper?\n5) In terms of recommendations, the authors seem to suggest a lot of different options, but none of them are very decisive. So far journals have been successful in requiring submission of sequences to GenBank or structure coordinates to PDB, and authors for the most part stick to this requirement irrespective of the name of the journal. How has this been successful, and is there something to learn and extend for other resources?\nValidity of the findings\nThe findings reported in this paper are valid.\nCite this review as\nAnonymous Reviewer (2013) Peer Review #1 of \"On the reproducibility of science: unique identification of research resources in the biomedical literature (v0.1)\". PeerJ https://doi.org/10.7287/peerj.148v0.1/reviews/1", "review_6": "Pascale Gaudet · Jun 24, 2013\nBasic reporting\nNo comments\nExperimental design\nNo comments\nValidity of the findings\n1. p. 6, on the discussion of the difficulty of identifying cDNA or peptides related to a gene: have the authors also encountered issues with identifying the species in which the sequence was isolated? There have been reports that this is problematic.\n2. I am a bit surprised about the identification of organisms: usually yeast, frogs, worms, and flies are relatively easy to unambiguously identify; moreover, there is no data for human. How was the analysis done? For example, 0% of the yeast were identified, but is there evidence that it should have been the case; in other words, were there any yeast papers in the set?\nAdditional comments\nThis paper addresses the very pertinent issue of the lack of sufficient information provided in papers to allow research to be accurately reproduced. The article is very well written, and the results are interesting and provide some quantitative measure of the extent of the problem. Minor comments:\n1. p. 7, \"Statical analysis\": the section title should be marked with underline the same way as 'Journal selection and classification' above.\n2. p. 9, Section on Cell lines: There seems to be something missing in the sentence \"A source for cell lines was rarely reported and was most common factor for their low identifiability in our study\"; please rephrase.\n3. At the end of the same paragraph the authors write \"see methods section\"; it is not clear what they refer to, since this is already the methods section.\n4. p. 11: \"While it is assuring\" -> \"While it is reassuring\"\nCite this review as\nGaudet P (2013) Peer Review #2 of \"On the reproducibility of science: unique identification of research resources in the biomedical literature (v0.1)\". 
PeerJ https://doi.org/10.7287/peerj.148v0.1/reviews/2"