{ "v1_col_introduction": "introduction : Sharing\u00a0information\u00a0facilitates\u00a0science.\u00a0Publicly\u00a0sharing\u00a0detailed\u00a0research\u00a0data\u2013sample\u00a0attributes,\u00a0 clinical\u00a0factors,\u00a0patient\u00a0outcomes,\u00a0DNA\u00a0sequences,\u00a0raw\u00a0mRNA\u00a0microarray\u00a0measurements\u2013with\u00a0 other\u00a0researchers\u00a0allows\u00a0these\u00a0valuable\u00a0resources\u00a0to\u00a0contribute\u00a0far\u00a0beyond\u00a0their\u00a0original\u00a0analysis.\u00a0 In\u00a0addition\u00a0to\u00a0being\u00a0used\u00a0to\u00a0confirm\u00a0original\u00a0results,\u00a0raw\u00a0data\u00a0can\u00a0be\u00a0used\u00a0to\u00a0explore\u00a0related\u00a0or\u00a0 new\u00a0hypotheses,\u00a0particularly\u00a0when\u00a0combined\u00a0with\u00a0other\u00a0publicly\u00a0available\u00a0data\u00a0sets.\u00a0Real\u00a0data\u00a0is\u00a0 indispensable\u00a0when\u00a0investigating\u00a0and\u00a0developing\u00a0study\u00a0methods,\u00a0analysis\u00a0techniques,\u00a0and\u00a0 software\u00a0implementations.\u00a0The\u00a0larger\u00a0scientific\u00a0community\u00a0also\u00a0benefits:\u00a0sharing\u00a0data\u00a0encourages multiple\u00a0perspectives,\u00a0helps\u00a0to\u00a0identify\u00a0errors,\u00a0discourages\u00a0fraud,\u00a0is\u00a0useful\u00a0for\u00a0training\u00a0new\u00a0 researchers,\u00a0and\u00a0increases\u00a0efficient\u00a0use\u00a0of\u00a0funding\u00a0and\u00a0patient\u00a0population\u00a0resources\u00a0by\u00a0avoiding\u00a0 duplicate\u00a0data\u00a0collection.\nMaking\u00a0research\u00a0data\u00a0publicly\u00a0available\u00a0also\u00a0has\u00a0challenges\u00a0and\u00a0costs.\u00a0Some\u00a0costs\u00a0are\u00a0borne\u00a0by\u00a0 society:\u00a0For\u00a0example,\u00a0data\u00a0archives\u00a0must\u00a0be\u00a0created\u00a0and\u00a0maintained.\u00a0Many\u00a0costs,\u00a0however,\u00a0are\u00a0 
borne by the data-collecting investigators: data must be documented, formatted, and uploaded. Investigators may be afraid that other researchers will find errors in their results, or "scoop" additional analyses they have planned for the future.

Personal incentives are important to balance these personal costs. Scientists report that receiving additional citations is an important motivator for publicly archiving their data (Tenopir et al. 2011).

There is evidence that studies that make their data available do indeed receive more citations than similar studies that do not (Gleditsch & Strand, 2003; Piwowar et al. 2007; Ioannidis et al. 2009; Pienta et al. 2010; Henneken & Accomazzi, 2011; Sears, 2011; Dorch, 2012). These findings have been referenced by new policies that encourage and require data archiving (e.g. Rausher et al. 2010), demonstrating the appetite for evidence of personal benefit.

In order for journals, institutions and funders to craft good data archiving policy, it is important to have an accurate estimate of the citation differential. Estimating an accurate citation differential is
made difficult by the many confounding factors that influence citation rate. In past studies, it has seldom been possible to adequately control these confounders statistically, much less experimentally. Here, we perform a large multivariate analysis of the citation differential for studies in which gene expression microarray data either was or was not made available in a public repository.

Estimating the citation differential is not enough: crafting good data archiving policy requires an understanding of its origins. How quickly do data reuse citations accrue? Do the additional citations arise due to data reuse, as we might expect, or simply from increased exposure or trust in the original study? How often do data reuse studies attribute data from more than one source?

Examining data reuse patterns on a large scale is challenging because it is difficult to automatically isolate reuse that has been attributed through a citation from citations made for other purposes. In this study we approach this issue in two ways. First, we conduct a small-scale manual review
of citation contexts to understand the proportion of citations that are made in the context of data reuse. Second, we use attribution through mentions of data accession numbers, rather than citations, to explore patterns in data reuse on a much larger scale.

PeerJ reviewing PDF | (v2013:04:376:1:0:NEW 8 Sep 2013)

We seek to improve on prior work in two key ways. First, the sample size of this analysis is large – over two orders of magnitude larger than the first citation study of gene expression microarray data (Piwowar et al. 2007), giving us the statistical power to account for a larger number of cofactors in the analyses. Thus, the resulting estimates isolate the association between data availability and citation rate with more accuracy. Second, this report goes beyond citation analysis to include analysis of data reuse attribution directly. We explore how data reuse patterns change over both the lifespan of a data repository and the lifespan of a dataset, as well as examine the
distribution of reuse across datasets in a repository.", "v1_text": "How many articles did the journal publish in 2008? | journal.num.articles.2008.tr | 0.25
How many years had elapsed since the last author published his/her first paper? | last.author.year.first.pub.ago.tr | 0.24
What was the mean citation score of the corresponding author's institution? | institution.mean.norm.citation.score | 0.24
How many citations had been made from PMC to the first author's previous papers? | first.author.num.prev.pmc.cites.tr | 0.24
How many of the journal's studies were identified as having created microarray data? | journal.microarray.creating.count.tr | 0.23
How many years had elapsed since the first author published his/her first paper? | first.author.year.first.pub.ago.tr | 0.22

materials and methods : The primary analysis in this paper addresses the citation count of a gene expression microarray experiment relative to the availability of the experiment's data, accounting for a large number of potential confounders.

Relationship between data availability and citation

Data collection

To begin, we needed to identify a sample of studies that had generated gene expression microarray data in their experimental methods. We used a sample that had been collected previously (Piwowar, 2011d; Piwowar, 2011c); briefly, a full-text query uncovered papers that described wet-lab methods related to gene expression microarray data collection. The full-text query had been characterized as having high precision (90%, with a 95% CI of 86% to 93%) and moderate recall (56%, CI of 52% to 61%) for this task. Running the query in PubMed Central, HighWire Press, and Google Scholar identified 11,603 distinct gene expression microarray papers published between 2000 and 2009. Citation counts for 10,555 of these papers were found in Scopus and exported in November 2011.
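Exporting citation counts from Scopus required batching PubMed IDs into chunked advanced-search queries. A minimal sketch of that batching step; the function name and the use of a 500-ID chunk are illustrative, not the authors' actual code:

```python
# Hypothetical sketch: batch PubMed IDs into Scopus advanced-search
# queries of the form "PMID(1234) OR PMID(5678) OR ...".

def build_scopus_queries(pmids, chunk_size=500):
    """Return one OR-joined query string per chunk of PubMed IDs."""
    queries = []
    for start in range(0, len(pmids), chunk_size):
        chunk = pmids[start:start + chunk_size]
        queries.append(" OR ".join(f"PMID({p})" for p in chunk))
    return queries

# Example: 1,234 IDs yield 3 queries (500 + 500 + 234 IDs).
queries = build_scopus_queries(list(range(1, 1235)))
```

Each resulting string can then be pasted into (or sent to) the search interface, working within a cap on query length and on the number of exportable records per query.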
Although Scopus now has an API that would facilitate easy programmatic access to citation counts, at the time of data collection the authors were not aware of any methods for querying and exporting data other than through the Scopus website. The Scopus website limited the length of a query and the number of citations that could be exported at once. To work within these restrictions we concatenated 500 PubMed IDs at a time into 22 queries, each of the form "PMID(1234) OR PMID(5678) OR ...".

The independent variable of interest was the availability of gene expression microarray data. Data availability had been previously determined for our sample articles in (Piwowar, 2011d), so we directly reused that dataset. Datasets were considered to be publicly available if they were discoverable in either of the two most widely-used gene expression microarray repositories: NCBI's Gene Expression Omnibus (GEO) and EBI's ArrayExpress. GEO was queried for links to the PubMed identifiers in the analysis sample using "pubmed_gds[filter]" and ArrayExpress was queried by searching for each PubMed identifier in a downloaded copy of the ArrayExpress database. An evaluation of this method found that querying GEO and ArrayExpress with PubMed article identifiers recovered 77% of the associated publicly available datasets (Piwowar & Chapman, 2010).

results :

acknowledgements : The authors thank Angus Whyte for suggestions on study design. We thank Jonathan Carlson and Estephanie Sta. Maria for their hard work on data collection and annotation. Michael Whitlock and the Biodiversity Research Centre at the University of British Columbia provided community and resources.
We are grateful to everyone who helped with access to Scopus, particularly Andre Vellino, CISTI, Tom Pollard, and friends at the British Library. Finally, the authors thank their peers for feedback on preliminary and preprint versions of this manuscript. The methods and results were previously discussed on an author's blog (e.g. http://researchremix.wordpress.com/2012/07/16/manydatasetsarereusednotjustanelitefew/) and in presentations (e.g. http://dx.doi.org/10.7287/peerj.preprints.14v1 and http://www.slideshare.net/tjvision/visionievobio12), and were published online as a preprint manuscript (http://dx.doi.org/10.7287/peerj.preprints.1v1). The first paragraph of the introduction is verbatim from (Piwowar et al. 2007); its original publication was under a CC-BY license. Publication references are available in a publicly available Mendeley group to facilitate exploration (http://www.mendeley.com/groups/2223913/11kcitation/papers).

discussion :

The open data citation benefit

One of the primary findings of this analysis is that papers with publicly available microarray data received more citations than similar papers that did not make their data available, even after controlling for many variables known to influence citation rate. We found the open data citation benefit for this sample to be 9% overall (95% confidence interval: 5% to 13%), but the benefit depended heavily on the year the dataset was made available. Datasets deposited very recently have so far received no (or few) additional citations, while those deposited in 2004-2005 showed a clear benefit of about 30% (confidence intervals 15% to 48%). Older datasets also appeared to receive a citation benefit, but the estimate is less precise because relatively little microarray data was collected or archived in the early 2000s. The citation benefit reported here is smaller than that reported in the previous study by (Piwowar et al.
2007), which estimated a citation benefit of 69% for human cancer gene expression microarray studies published before 2003 (95% confidence intervals of 18% to 143%). Our attempt to replicate the (Piwowar et al. 2007) study here suggests that aspects of both the data and the analysis can help to explain the quantitatively different results. It appears that clinically relevant datasets released early in the history of microarray analysis had a particularly strong impact. Importantly, however, the new analysis also suggested that the previous estimate was confounded by significant citation correlates, including the total number of authors and the citation history of the last author. This finding reinforces the importance of accounting for covariates through multivariate analysis and the need for large samples to support full analysis: the 69% estimate is probably too high, even for its high-impact sample. Nonetheless, a 10-30% citation benefit may still be an effective motivator for data deposit, given that prestigious journals have been known to advertise their impact factors to three decimal places (Smith, 2006).

A paper with open data may be cited for reasons other than data reuse, and open data may be reused without citation of the original paper. Ideally, we would like to separate these two phenomena (data reuse and paper citation) and measure how often the latter is driven by the former. In our manual analysis of 138 citations to papers with open data, we observed that 6% (95% CI: 3% to 11%) of citations were in the context of data reuse. Although this methodology and the sample size do not allow us to estimate with any precision the proportion of the citation benefit attributable to data reuse, the result is consistent with data reuse being a major contributor.
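An interval like the 6% (3% to 11%) above can be reproduced with the Wilson score method for a binomial proportion. A minimal sketch, assuming 8 of the 138 citations were in a reuse context; neither the exact count nor the authors' interval method is stated in this excerpt:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt((p * (1 - p) + z ** 2 / (4 * n)) / n) / denom
    return center - half, center + half

# 8/138 is about 6%; the Wilson interval comes out near 3% to 11%.
lo, hi = wilson_ci(8, 138)
```

The Wilson interval is preferred over the naive normal approximation for small proportions like this one, since it cannot produce a negative lower bound.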
Another important result of the citation analysis is that the number of papers based on self data reuse declined steeply after two years, while data reuse papers by third-party authors continued to accumulate even after six years. This finding suggests that although researchers may have some incentive for protecting their own exclusive use of data close to the time of the initial publication, the equation changes dramatically after a short period. It also provides some evidence to guide policy decisions regarding the length of data embargoes allowed by journal archiving policies such as the Joint Data Archiving Policy described by (Rausher et al. 2010). While we cannot generalize from these detailed patterns of data reuse and citation to other datatypes or domains, the cumulative citation benefit seems to be quantitatively similar in a number of different fields (Gleditsch & Strand, 2003; Piwowar et al. 2007; Ioannidis et al. 2009; Pienta et al. 2010; Henneken & Accomazzi, 2011; Sears, 2011; Dorch, 2012).

Challenges collecting citation data

This study required obtaining citation counts for thousands of articles using PubMed IDs. This process was not supported at the time of data collection using either Thomson Reuters' Web of Science or Google Scholar. Although this type of query was supported by Elsevier's Scopus database, we lacked institutional access to Scopus, individual subscriptions were not available, and attempts to request access through Scopus staff were unsuccessful. One of us (HP) attempted to use the British Library's walk-in access of Scopus while visiting the UK.
Unfortunately, the British Library's policies did not permit any method of electronic input of the PubMed identifier list (the list is 10,000 elements long). HP eventually obtained official access to Scopus through a Research Worker agreement with Canada's National Research Library (NRC-CISTI), after being fingerprinted to obtain a police clearance certificate because she had recently lived in the United States. Our understanding of research practice suffers because access to tools and data is so difficult.

Patterns of data reuse

To better understand patterns of data reuse, a larger sample of reuse instances was needed than could easily be assembled through manual classification of citation context. To that end, we used a complementary source of information about reuse of the same datasets: direct mention of GEO or ArrayExpress accession numbers within the body of a full-text research article. The large number of instances of reuse identified this way allowed us to ask questions about the distribution of reuse over time and across datasets.

The results indicate that dataset reuse has been increasing over time (excluding the initial years of GEO and ArrayExpress, when few datasets were deposited and reuse appears to have been atypically broad). Recent reuse analyses include more datasets, on average, than older reuse studies. Also, the fact that reuse was greatest for datasets published between three and six years previously suggests that the lower citation benefit we observed for recent papers is due, at least in part, to a relatively short follow-up time. Extrapolating to all of PubMed, we estimate the number of reuse papers published per year is on the same order of magnitude as, and likely greater than, the number of datasets made available. This data reuse curve is remarkably constant for data deposited between 2004 and 2009.
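Identifying direct mentions of accession numbers in full text can be sketched with regular expressions. The patterns below cover two common accession formats (GEO series such as GSE1133, ArrayExpress such as E-MEXP-123) and are illustrative only, not the authors' actual matching rules:

```python
import re

# Illustrative patterns; real accession formats are broader (GDS, GPL,
# GSM for GEO, etc.), and the authors' exact rules are not given here.
GEO_SERIES = re.compile(r"\bGSE\d+\b")            # e.g. GSE1133
ARRAYEXPRESS = re.compile(r"\bE-[A-Z]{4}-\d+\b")  # e.g. E-MEXP-123

def find_accessions(fulltext):
    """Return all GEO series and ArrayExpress accessions mentioned."""
    return GEO_SERIES.findall(fulltext) + ARRAYEXPRESS.findall(fulltext)

mentions = find_accessions(
    "We reanalyzed GSE1133 together with E-MEXP-123 from ArrayExpress."
)
```

Applied across an open full-text corpus such as PubMed Central, matches like these give a reuse signal that is independent of the citation graph.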
This finding reinforces the conclusions of an earlier analysis: even modest data reuse can provide an impressive return on investment for science funders (Piwowar et al. 2011b).

Finally, we observed a moderate proportion of datasets being reused by third parties (more than 20% of the datasets deposited between 2003 and 2007). It is important to recognize that this is likely a gross underestimate. It includes only those instances of reuse that can be recognized through the mention of an accession number in PubMed Central. No attempt has been made to extrapolate these distribution statistics to all of PubMed, nor to identify additional attributions through paper citations or mentions of the archive name alone. Further, many important instances of data reuse leave no trace in the published literature, such as those in education and training.

Reasons for the data citation benefit

While we cannot exclude that the open data citation benefit is driven entirely by third-party data reuse, there may be other factors contributing to the effect either directly or indirectly. The literature that has considered the possibility of an Open Access citation benefit (e.g. Craig et al. 2007) indicates a number of other factors that may also be relevant to open data. Building upon this work, we suggest several possible sources for an "Open Data citation benefit":

1. Data Reuse. Papers with available datasets can be used in ways that papers without data cannot, and may receive additional citations as a result.

2. Credibility Signalling. The credibility of research findings may be higher for research papers with available data. Such papers may be preferentially chosen as background citations or the foundation of additional research.

3.
Increased Visibility. Third-party researchers may be more likely to encounter a paper with available data, either by a direct link from the data or indirectly through cross-promotion. For example, links from a data repository to a paper may increase the search ranking of the research paper.

4. Early View. When data is made available before a paper is published, some citations may accrue earlier than they would otherwise because of accelerated awareness of the methods, findings, and so on.

5. Selection Bias. Authors may be more likely to publish data for papers they judge to be their best quality work, because they are particularly proud or confident of the results (Wicherts et al. 2011).

Importantly, almost all of these mechanisms are aligned with more efficient and effective scientific progress: increased data use, facilitated credibility determination, earlier access, improved discoverability, and a focus on best work through data availability are good for both investigators and the science community as a whole. In the one area where the scientific good and author incentives do conflict, namely finding weaknesses or faults in published research, mandates may be required. Or, instead, the research community may eventually come to associate withheld data with poor quality research, as it does today for findings that are not disclosed in a peer-reviewed paper (Ware, 2008).

The citation benefit observed in the current study is consistent with the data reuse found in this study and the small-scale annotation reported in (Rung & Brazma, 2013). Nonetheless, it is possible that some of the other sources suggested above may have contributed citations for the studies with available data. Further work will be needed to understand the relative contributions from each source. For example, in-depth analyses of all publications from a set of data-collecting authors could support measurement of selection bias.
Observing search behavior of researchers, and the returned search hit results, could characterize increased visibility because of data availability. Hypothetical examples could be provided to authors to determine whether they would be systematically more likely to cite a paper with available data in situations in which they are considering the credibility of research findings.

Future work

Future work could improve on these results by considering and integrating all methods of data use attribution. This holistic effort would include identifying citations to the paper that describes the data collection; mentions of the dataset identifier itself, whether in full text, the references section, or supplementary information; citations to the dataset as a first-class research object; and even mentions of the data collection investigators in acknowledgement sections. The citations and mentions would need classification based on context to ensure they are in the context of data reuse.

The obstacles encountered in obtaining the citation data needed for this study, as described earlier in the Discussion, demonstrate that improvements in tools and practice are needed to make impact tracking easier and more accurate, for day-to-day analyses as well as studies for evidence-based policy. Such research is hamstrung without programmatic access to the full text of the research literature and to the citation databases that underpin impact assessment. The lack of conventions and tool support for data attribution (Mooney & Newton, 2012) is also a significant obstacle, undoubtedly leading to undercounting in the present study.
There is much room for improvement, and we are hopeful about recent steps toward data citation standards taken by initiatives such as DataCite. Data from current and future studies could begin to be used to estimate the impact of policy decisions. For example, do embargo periods decrease the level of data reuse? Do restrictive or poorly articulated licensing terms decrease data reuse? Which types of data reuse are facilitated by robust data standards, and which types are unaffected?

Qualitative assessment of data reuse is an essential complement to large-scale quantitative analyses. Repeating and extending previous studies will help develop an understanding of the potential of data reuse, areas of progress, and remaining challenges (e.g. Zimmerman, 2003; Wan & Pavlidis, 2007; Wynholds et al. 2012; Rolland & Lee, 2013). Usage statistics from primary data repositories and value-added repositories are also useful sources of insight into reuse patterns (Rung & Brazma, 2013).

Citations are blind to many important types of data reuse. The impact of data on practitioners, educators, data journalists, and industry researchers is not captured by attribution patterns in the scientific literature. Altmetrics indicators reveal discussions in social media, syllabi, patents, and theses: analyzing such indicators for datasets would provide valuable evidence of reuse beyond the scientific literature. As evaluators move away from assessing research based on journal impact factor and toward article-level metrics, post-publication metrics will become increasingly important indicators of research impact (Piwowar, 2013).

conclusions : We found a statistically well-supported citation benefit from open data, although a smaller one than previously reported. We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data.
We further conclude that, at least for gene expression microarray data, a substantial portion of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003.

It is important to remember that the primary rationale for making research data available has nothing to do with evaluation metrics or citation benefits: giving a full account of experimental process and findings is a tenet of science, and publicly funded science is a public resource (Smith, 2006). We also recognize that scientists may weigh a variety of both positive and negative incentives when deciding whether and how to share their data, and the potential for increased citations is only one of these. Nonetheless, evidence of personal benefit will help as science transitions from "data not shown" to a culture that simply expects data to be part of the published record.

author contributions : Both authors contributed to the study design, discussed the methodology, results, and interpretations, and collaboratively revised the manuscript. HP conceived the initial idea, performed the data collection and statistical analysis, and drafted the initial manuscript.

References

• Carl Boettiger (2013) knitcitations: Citations for knitr markdown files. https://github.com/cboettig/knitcitations

• Gregory Warnes, Ben Bolker, Lodewijk Bonebakker, Robert Gentleman, Wolfgang Huber, Andy Liaw, Thomas Lumley, Martin Maechler, Arni Magnusson, Steffen Moeller, Marc Schwartz, Bill Venables (2012) gplots: Various R programming tools for plotting data.
http://CRAN.R-project.org/package=gplots

• I Craig, A Plum, M McVeigh, J Pringle, M Amin (2007) Do open access articles have greater citation impact? A critical review of the literature. Journal of Informetrics 1 (3) 239-248. 10.1016/j.joi.2007.04.001

• Bertil Dorch (2012) On the Citation Advantage of linking to data. hprints. http://hprints.org/hprints-00714715

• John Fox (2010) polycor: Polychoric and Polyserial Correlations. http://CRAN.R-project.org/package=polycor

• Lawrence Fu, Constantin Aliferis (2008) Models for predicting and explaining citation count of biomedical articles. AMIA Symposium 222-226. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2656101&tool=pmcentrez&rendertype=abstract

• Nils Gleditsch, Havard Strand (2003) Posting your data: will you be scooped or will you be famous? International Studies Perspectives 4 (1) 89-97. http://www.prio.no/Research-and-Publications/Publication/?oid=55406

• David Hajage (2011) ascii: Export R objects to several markup languages. http://CRAN.R-project.org/package=ascii

• Frank Harrell Jr (2012) rms: Regression Modeling Strategies. http://CRAN.R-project.org/package=rms

• Edwin Henneken, Alberto Accomazzi (2011) Linking to Data: Effect on Citation Rates in Astronomy. arXiv. http://arxiv.org/abs/1111.3618

• John Ioannidis, David Allison, Catherine Ball, Issa Coulibaly, Xiangqin Cui, Aedín Culhane, Mario Falchi, Cesare Furlanello, Laurence Game, Giuseppe Jurman, Jon Mangion, Tapan Mehta, Michael Nitzberg, Grier Page, Enrico Petretto, Vera van Noort (2009) Repeatability of published microarray gene expression analyses.
Nature Genetics 41 (2) 149-155. 10.1038/ng.295

• Hailey Mooney, Mark Newton (2012) The Anatomy of a Data Citation: Discovery, Reuse, and Credit. Journal of Librarianship and Scholarly Communication 1 (1). http://jlsc-pub.org/jlsc/vol1/iss1/6

• Amy Pienta, George Alter, Jared Lyle (2010) The Enduring Value of Social Science Research: The Use and Reuse of Primary Research Data. The Organisation, Economics and Policy of Scientific Research workshop. http://hdl.handle.net/2027.42/78307

• Heather Piwowar, Roger Day, Douglas Fridsma (2007) Sharing detailed research data is associated with increased citation rate. PLoS ONE 2 (3). http://dx.doi.org/10.1371/journal.pone.0000308

• Heather Piwowar, Wendy Chapman (2010) Recall and bias of retrieving gene expression microarray datasets through PubMed identifiers. Journal of Biomedical Discovery and Collaboration 5 7-20. http://www.ncbi.nlm.nih.gov/pubmed/20349403

• Heather Piwowar, Jonathan Carlson, Todd Vision (2011a) Beginning to track 1000 datasets from public repositories into the published literature. Proceedings of the American Society for Information Science and Technology 48 (1) 1-4. 10.1002/meet.2011.14504801337

• Heather Piwowar, Todd Vision, Michael Whitlock (2011b) Data archiving is a good investment. Nature 473 (7347) 285. 10.1038/473285a

• Heather Piwowar (2011c) Data from: Who shares? Who doesn't? Factors associated with openly archiving raw research data. Dryad Digital Repository. 10.5061/dryad.mf1sd

• Heather Piwowar (2011d) Who Shares? Who Doesn't? Factors Associated with Openly Archiving Raw Research Data. PLoS ONE 6 (7) e18657. 10.1371/journal.pone.0018657

• Heather Piwowar (2013) Value all research products.
Nature 493 (7431) 159 10.1038/493159a \u2022 Mark Rausher, Mark McPeek, Allen Moore, Loren Rieseberg, Michael Whitlock, (2010) Data archiving.. Evolution; international journal of organic evolution 64 (3) 6034 http://www.ncbi.nlm.nih.gov/pubmed/20050907 \u2022 Betsy Rolland, Charlotte Lee, (2013) Beyond trust and reliability. 435 http://dl.acm.org/citation.cfm?id=2441776.2441826 \u2022 Johan Rung, Alvis Brazma, (2013) Reuse of public genomewide gene expression data.. Nature reviews. Genetics 14 (2) 8999 http://www.ncbi.nlm.nih.gov/pubmed/23269463 \u2022 Jon Sears, (2011) Data Sharing Effect on Article Citation Rate in Paleoceanography KomFor. http://www.komfor.net/blog/unbenanntemitteilung \u2022 Richard Smith, (2006) Commentary: the power of the unrelenting impact factoris it a force for good or harm?. International journal of epidemiology 35 (5) 112930 http://ije.oxfordjournals.org/content/35/5/1129.full \u2022 Carol Tenopir, Suzie Allard, Kimberly Douglass, Arsev Aydinoglu, Lei Wu, Eleanor Read, Maribeth Manoff, Mike Frame, (2011) Data sharing by scientists: practices and perceptions.. PLoS one 6 (6) e21101 10.1371/journal.pone.0021101 \u2022 Xiang Wan, Paul Pavlidis, (2007) Sharing and reusing gene expression profiling data in neuroscience.. Neuroinformatics 5 (3) 16175 http://www.pubmedcentral.nih.gov/articlerender.fcgi? artid=2980754&tool=pmcentrez&rendertype=abstract \u2022 Mark Ware, (2008) Peer review: benefits, perceptions and alternatives. PRC Summary Papers 4 http://www.publishingresearch.net/documents/PRCsummary4Warefinal.pdf \u2022 Jelte Wicherts, Marjan Bakker, Dylan Molenaar, (2011) Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results.. 
PloS one 6 (11) e26828 10.1371/journal.pone.0026828 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 PeerJ reviewing PDF | (v2013:04:376:1:0:NEW 8 Sep 2013) R ev ie w in g M an us cr ip t \u2022 Hadley Wickham, (2007) Reshaping Data with the reshape Package. Journal of Statistical Software 21 (12) 120 http://www.jstatsoft.org/v21/i12/ \u2022 Hadley Wickham, (2009) ggplot2: elegant graphics for data analysis. http://had.co.nz/ggplot2/book \u2022 Hadley Wickham, (2011) The SplitApplyCombine Strategy for Data Analysis. Journal of Statistical Software 40 (1) 129 http://www.jstatsoft.org/v40/i01/ \u2022 Laura Wynholds, Jillian Wallis, Christine Borgman, Ashley Sands, Sharon Traweek, (2012) Data, data use, and scientific inquiry. 19 10.1145/2232817.2232822 \u2022 Yihui Xie, (2012) knitr: A generalpurpose package for dynamic report generation in R. http://CRAN.Rproject.org/package=knitr \u2022 Ann Zimmerman, (2003) Data Sharing and Secondary Use of Scientific Data: Experiences of Ecologists. Dissertations and Theses (Ph.D. and Master's) http://hdl.handle.net/2027.42/39373 581 582 583 584 585 586 587 588 589 590 591 592 593 PeerJ reviewing PDF | (v2013:04:376:1:0:NEW 8 Sep 2013) R ev ie w in g M an us cr ip t Table 1(on next page) Univariate correlations between article attributes and number of citations. Citations were log transformed and count variables were square root transformed. Pearson correlations were used for numeric variables and polyserial correlations for binary and categorical variables. PeerJ reviewing PDF | (v2013:04:376:1:0:NEW 8 Sep 2013) R ev ie w in g M an us cr ip t Attribute Variable name Correlation How many citations did the study receive? nCitedBy.log 1.00 What was the impact factor of the journal that published the study? journal.impact.factor.tr 0.45 How many citations had been made from PMC to the last author\u2019s previous papers? 
last.author.num.prev.pmc.cites.tr 0.30 funding : This study was funded by DataONE (OCI0830944), Dryad (DBI0743720), and a Discovery grant to Michael Whitlock from the Natural Sciences and Engineering Research Council of Canada. primary analysis : The core of our analysis was a set of multivariate linear regressions to evaluate the association between the public availability of a study's microarray data and the number of citations received by the study. To explore which variables to include in these regressions, we investigated correlations between the number of citations and a set of candidate variables. We also calculated correlations amongst all variables to investigate collinearity. We explored a subset of the 124 attributes from (Piwowar, 2011d) previously shown or suspected to correlate with citation rate. We selected covariates found to have a strong pairwise correlation (positive or negative) with citation rate, using Pearson correlations for numeric variables and polyserial correlations for binary and categorical variables. These covariates included: date of publication, journal that published the study, journal impact factor, journal citation halflife, number of articles published by the journal, journal open access policy, journal status as a core clinical journal by MEDLINE, number of authors of the study, country of the corresponding author, citation score of the institution of the corresponding author, publishing experience of the first and last author, and subject of the study itself (Table 1). Publishing experience was characterized by the number of years since an author's first paper in PubMed, the number of papers the author had published, and the number of citations the author had received in PubMed Central, estimated using Authority Clusters. 
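The covariate screen described above (pairwise correlations against transformed citation counts) can be sketched in Python as a minimal illustration. The function names, the +1 offset in the log transform, and the inclusion threshold are our assumptions, not taken from the paper; binary and categorical variables used polyserial correlations (the R polycor package), which this numeric-only sketch does not implement.

```python
import math

def log_transform(citations):
    # Log-transform citation counts; the +1 offset to handle
    # zero-citation papers is an assumption, not stated in the text.
    return [math.log(c + 1) for c in citations]

def sqrt_transform(counts):
    # Square-root transform for other count variables, as in the text.
    return [math.sqrt(c) for c in counts]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def screen_covariates(citations, covariates, threshold=0.1):
    """Keep covariates whose |r| with log citations meets the threshold.

    covariates: dict mapping variable name -> list of raw counts.
    """
    y = log_transform(citations)
    kept = {}
    for name, values in covariates.items():
        r = pearson(y, sqrt_transform(values))
        if abs(r) >= threshold:
            kept[name] = r
    return kept
```

The actual analysis screened 124 candidate attributes this way before the regressions; the threshold here simply stands in for "strong pairwise correlation."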
The topic of the study was characterized according to the article's Medical Subject Heading (MeSH) indexing terms assigned by the National Library of Medicine, classifying the article as related to cancer, animals, or plants. For more information on study attributes see (Piwowar, 2011d). Citation count was log transformed to be consistent with prior literature. Other count variables were square-root transformed. Continuous variables were represented with 3-part splines in the regression, using the rcs function in the R rms library. The independent variable of data availability was represented as 0 or 1 in the regression, indicating whether or not associated data had been found in either of the two data repositories. Because citation counts were log transformed, the relationship of data availability to citation count was described with 95% confidence intervals after raising e to the power of the regression coefficient.
Comparison to 2007 study
We ran two modified analyses to attempt to reproduce the findings of (Piwowar et al., 2007) using the larger dataset of the current study. First, we used a subset of studies with roughly the same inclusion criteria as the earlier paper (studies on cancer in humans, published prior to 2003) and the same regression covariates: publication date, impact factor, and whether the corresponding author's address is in the USA. We followed that with a second regression that included several additional important covariates: number of authors and number of previous citations by the last author.
Stratification by year
Because publication date is a strong correlate of both citation rate and data availability, we performed a separate analysis stratifying the sample by publication year, in addition to including publication date as a covariate. Fewer covariates could be included in these yearly regressions because they included fewer data points than the full regression.
The yearly regressions included date of publication, journal that published the study, journal impact factor, journal's open access policy, number of authors of the study, citation score of the institution of the corresponding author, previous number of PubMed Central citations received by the first and last author, whether the topic was cancer, and whether the study used animals.
Manual review of citation context
We manually reviewed the context of citations to data collection papers to estimate how many citations to data collection papers were made in the context of data reuse. We (with Jonathan Carlton; see Acknowledgements) reviewed 50 citations chosen randomly from the set of all citations to 100 data collection papers. Specifically, we randomly selected 100 datasets deposited in GEO in 2005. For each dataset, we located the data collection article within Thomson Reuters' Web of Science based on its title and authors, and exported the list of all articles that cited this data collection article. From this, we selected 50 random citations stratified by the total number of times the data collection article had been cited. By manual review of the relevant full text of each paper, we determined whether the data from the associated dataset had been reused within the study. Web of Science was used to identify citations for this step rather than the Scopus citation database used in previous steps, because extracting citations in this step did not require the (at the time) Scopus-only feature of searching by PubMed ID, and the investigators had access to Web of Science through an institutional subscription.
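The stratified selection of the 50 reviewed citations could be sketched as below. The paper states only that citations were stratified by the total number of times each data collection article had been cited; the number of strata, the contiguous-rank binning, and the helper names are all illustrative assumptions.

```python
import math
import random

def stratified_citation_sample(citations_by_article, n_total=50, n_strata=5, seed=1):
    """Sample citations stratified by how heavily each article is cited:
    rank articles by citation count, cut the ranking into contiguous
    strata, and draw evenly from each stratum's pooled citations.

    citations_by_article: dict mapping article id -> list of citing paper ids.
    """
    rng = random.Random(seed)
    ranked = sorted(citations_by_article, key=lambda a: len(citations_by_article[a]))
    size = math.ceil(len(ranked) / n_strata)   # articles per stratum
    per_stratum = n_total // n_strata          # citations drawn per stratum
    sample = []
    for i in range(0, len(ranked), size):
        stratum = ranked[i:i + size]
        # Pool every (article, citing paper) pair in this stratum, then sample.
        pool = [(a, c) for a in stratum for c in citations_by_article[a]]
        sample.extend(rng.sample(pool, min(per_stratum, len(pool))))
    return sample
```

Stratifying this way keeps heavily cited data collection papers from dominating the review sample.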
Data reuse patterns from accession number attribution
A second, independent dataset was collected to correlate with reuse attributions made through mentions of accession numbers rather than formal citations.
Data collection
Datasets are sometimes attributed directly through mention of the dataset identifier (or accession number) in the full text, in which case the reuse may not contribute to the citation count of the original paper. To capture these instances of reuse, we collected a separate dataset to study reuse patterns based on direct data attribution. We used the NCBI eUtils library and custom Python code to obtain a list of all datasets deposited into the Gene Expression Omnibus data repository, then searched PubMed Central for each of these dataset identifiers (using queries of the form "'GSEnnnn' OR 'GSE nnnn'"). For each hit we recorded the PubMed Central ID of the paper that mentioned the accession number, the year of paper publication, and the author surnames. We also recorded the dataset accession number, the year of dataset publication, and the investigator names associated with the dataset record.
Statistical analysis
To focus on data reuse by third-party investigators (rather than authors attributing datasets they had collected themselves), we excluded papers with author surnames matching those of authors who deposited the original dataset, as in (Piwowar et al., 2011a). PubMed Central contains only a subset of the papers recorded in PubMed. As described in (Piwowar et al., 2011a), to extrapolate from the number of data reuses in PubMed Central to all possible data reuses in PubMed, we divided the yearly number of hits by the ratio of papers in PMC to papers in PubMed for this domain (domain was measured as the number of articles indexed with the MeSH term "gene expression profiling").
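The three steps above — building the quoted accession-number query, excluding author-overlap papers, and extrapolating PMC hit counts to PubMed — can be sketched as follows. The function names are illustrative; only the query shape is quoted from the text, and the surname match is a simple case-insensitive comparison rather than whatever normalization the original Python code used.

```python
def accession_query(gse_id):
    """Build the full-text query of the form quoted in the text,
    e.g. for GSE1234: '"GSE1234" OR "GSE 1234"'."""
    num = gse_id[3:]  # strip the "GSE" prefix
    return f'"GSE{num}" OR "GSE {num}"'

def is_third_party(paper_surnames, depositor_surnames):
    """Treat a hit as third-party reuse only if no author surname on the
    paper matches a surname on the dataset deposit record."""
    overlap = {s.lower() for s in paper_surnames} & {s.lower() for s in depositor_surnames}
    return not overlap

def extrapolate_to_pubmed(pmc_hits, pmc_papers, pubmed_papers):
    """Scale PMC reuse counts by the PMC-to-PubMed coverage ratio for the
    domain (papers indexed with MeSH 'gene expression profiling')."""
    ratio = pmc_papers / pubmed_papers
    return pmc_hits / ratio
```

For example, if one fifth of the domain's PubMed papers appear in PMC, 10 PMC hits extrapolate to roughly 50 PubMed-wide reuses.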
We retained papers published between 2001 and 2010 as reuse candidates. We excluded 2011 because it had a dramatically lower proportion of papers in PubMed Central at the time of our data collection: the NIH requirement to deposit a paper into PMC permits a 12-month embargo. To understand our findings on a per-dataset basis, we stratified reuse estimates by year of dataset submission and normalized our reuse findings by the number of datasets deposited that year.
Data and script availability
Statistical analyses were last run on Wednesday April 3, 2013 with R version 2.15.1 (2012-06-22). Packages used included reshape2 (Wickham, 2007), plyr (Wickham, 2011), rms (Harrell, 2012), polycor (Fox, 2010), ascii (Hajage, 2011), ggplot2 (Wickham, 2009), gplots (Bolker et al., 2012), knitr (Xie, 2012), and knitcitations (Boettiger, 2013). P-values were two-tailed. Raw data and statistical scripts are available in the Dryad data repository at [data uploaded to Dryad at the time of article acceptance; citation will be included once known]. Data collection scripts are on GitHub at https://github.com/hpiwowar/georeuse and https://github.com/hpiwowar/pypub. The Markdown version of this manuscript with interleaved statistical scripts (Xie, 2012) is on GitHub at https://github.com/hpiwowar/citation11k. Publication references are available in a publicly available Mendeley group to facilitate exploration.
description of cohort : We identified 10,557 articles published between 2001 and 2009 as collecting gene expression microarray data. Publicly available datasets in GEO or ArrayExpress had been found for 2,617 of these articles (25%, 95% confidence interval 24% to 26%).
The papers were published in 667 journals, with the top 12 journals accounting for 30% of the papers (Table 2). Microarray papers were published more frequently in later years: 2% of articles in our sample were published in 2001, compared to 15% in 2009 (Table 3). The papers were cited between 0 and 2,643 times, with an average of 32 citations per paper and a median of 16 citations.
Data availability is associated with citation benefit
Without accounting for any confounding factors, the distribution of citations was similar for papers with and without archived data. That said, we hasten to mention several strong confounding factors. For example, the number of citations a paper has received is strongly correlated with the date it was published: older papers have had more time to accumulate citations. Furthermore, the probability of data archiving is also correlated with the age of an article: more recent articles are more likely to archive data (Piwowar, 2011d). Accounting for publication date, the distribution of citations for papers with available data is right-shifted relative to the distribution for those without, as seen in Figure 1. Other variables have been shown to correlate with citation rate (Fu & Aliferis, 2008). Because single-variable correlations can be misleading, we performed multivariate regression to isolate the relationship between data availability and citation rate from confounders.
The multivariate regression included attributes representing an article's journal, journal impact factor, date of publication, number of authors, number of previous citations of the first and last author, number of previous publications of the last author, whether the paper was about animals or plants, and whether the data was made publicly available. Citations were 9% higher for papers with available data, independent of other variables (p < 0.01, 95% confidence interval [5% to 13%]). We also analyzed a subset of manually curated articles. The findings were similar to those of the whole sample, supporting our assumption that errors in automated inclusion criteria determination did not substantially influence the estimate (see Supplementary Article S1).
More covariates led to a more conservative estimate
Our estimate of citation benefit, 9% as per the multivariate regression, is notably smaller than the 69% (95% confidence interval of 18% to 143%) citation advantage found by (Piwowar et al., 2007), even though both studies examined publicly available gene expression microarray data. There are several possible reasons for this difference. First, (Piwowar et al., 2007) concentrated on datasets from high-impact studies: human cancer microarray trials published in the early years of microarray analysis (between 1999 and 2003). By contrast, the current study included gene expression microarray studies on any subject published between 2001 and 2009. Second, because the (Piwowar et al., 2007) sample was small (85 papers), the previous analysis included only a few covariates: publication date, journal impact factor, and country of the corresponding author. We attempted to reproduce the (Piwowar et al.,
2007) methods with the current sample. Limiting the inclusion criteria to studies with MeSH terms "human" and "cancer", and to papers published between 2001 and 2003, reduced the cohort to 308 papers. Running this subsample with the covariates used in the (Piwowar et al., 2007) paper resulted in an estimate comparable to that of the 2007 paper: a citation increase of 47% (95% confidence interval of 6% to 103%). The subsample of 308 papers was large enough to include a few additional covariates: number of authors and citation history of the last author. Including these important covariates decreased the estimated effect to 18%, with a confidence interval that spanned a loss of 17% of citations to a benefit of 66%.
Citation benefit over time
After completing our comparison to prior results, we returned to the whole sample. Because publication date is such a strong correlate of both citation rate and data availability, we ran regressions for each publication year individually. The estimate of citation benefit varied by year of publication. The citation benefit was greatest for data published in 2004 and 2005, at about 30%. Earlier years showed citation benefits with wider confidence intervals due to relatively small sample sizes, while more recently published data showed a less pronounced citation benefit (Figure 2).
Data reuse is a demonstrable component of citation benefit
To estimate the proportion of the citation benefit directly attributable to data reuse, we randomly selected and manually reviewed 138 citations. We classified eight (6%) of the citations as attributions for data reuse (95% CI: 3% to 11%).
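The reported interval for 8 reuse attributions out of 138 reviewed citations is consistent with a Wilson score interval for a binomial proportion; the paper does not name its confidence-interval method, so the choice of the Wilson formula here is an assumption, but the arithmetic reproduces the quoted bounds.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives the usual 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(8, 138)
# 8/138 is about 6%, with an interval of roughly 3% to 11%,
# matching the values reported in the text.
```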
Evidence of reuse from mention of dataset identifiers in full text
A complementary dataset was collected and analyzed to characterize data reuse: direct mention of dataset accession numbers in the full text of papers. In total there were 9,274 mentions of GEO datasets in papers published between 2000 and 2010 within PubMed Central, across 4,543 papers written by author teams whose last names did not match the names of those who deposited the data. Extrapolating this to all of PubMed, we estimated there may be about 1.4081 × 10^4 third-party reuses of GEO data attributed through accession numbers in all of PubMed for papers published between 2000 and 2010. The number of reuse papers started to grow rapidly several years after the data archiving rate started to grow. In recent years both the number of datasets and the number of reuse papers have been growing rapidly, at about the same rate, as shown in Figure 3. The level of third-party data use was high: for 100 datasets deposited in year 0, we estimate that 40 papers in PubMed reused a dataset by year 2, 100 by year 4, and more than 150 by year 5. This data reuse curve is remarkably constant for data deposited between 2004 and 2009. The reuse growth trend for data deposited in 2003 has been slower, perhaps because 2003 data is not as groundbreaking as earlier data, and probably not as standards-compliant and technically relevant as later data. We found that most instances of self-reuse (identified by surname overlap with the data submission record) were published within two years of dataset publication. This pattern contrasts sharply with third-party data reuse, as shown in Figure 4.
The cumulative number of third-party reuse papers is illustrated in Figure 5; separate lines are displayed for different dataset publication years. Because the number of datasets published has grown dramatically with time, it is instructive to consider the cumulative number of third-party reuses normalized by the number of datasets deposited each year (Supplementary Figure 1). In the earliest years for which data is available, 2001-2002, there were relatively few data deposits, but these datasets have been disproportionately reused. We excluded the early years from the plot to examine the pattern of data reuse once gene expression datasets became more common. Since 2003, the rate at which individual datasets were reused has increased with each year of data publication.
Growth in the number of datasets in each reuse paper over time
The number of distinct datasets used in a reuse paper was found to increase over time (Figure 6). From 2002 to 2004 almost all reuse papers used only one or two datasets. By 2010, 25% of reuse papers used 3 or more datasets.
Distribution of reuse across datasets
It is useful to know the distribution of reuse amongst datasets. Because our methods only detect reuse by papers in PubMed Central (a small proportion of the biomedical literature), and only when the accession number is given in the full text, our estimates of reuse are extremely conservative. Despite this, we found that reuse was not limited to only a few papers (Figure 7). Nearly all datasets published in 2001 were reused at least once. The proportion of reused datasets declined in subsequent years, with a plateau of about 20% for data deposited between 2003 and 2007.
The actual rate of reuse across all methods of attribution, and extrapolated to all of PubMed, is probably much higher: 20% of the datasets deposited between 2003 and 2007 had been reused at least once by third parties.
CONCLUSION: After accounting for other factors affecting citation rate, we find a robust citation benefit from open data, although a smaller one than previously reported. We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data. Other factors that may also contribute to the citation boost are considered. We further conclude that, at least for gene expression microarray data, a substantial fraction of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003.
Data reuse and the open data citation advantage
Heather A Piwowar (hpiwowar@gmail.com), National Evolutionary Synthesis Center, Durham, NC, USA; Department of Biology, Duke University, Durham, NC, USA
Todd J Vision (tjv@bio.unc.edu), Department of Biology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; National Evolutionary Synthesis Center, Durham, NC, USA; Department of Biology, Duke University, Durham, NC, USA
distribution of the age of reused data : We found the authors of third-party data reuse papers were most likely to use data that was 3-6 years old by the time their paper was published, normalized for how many datasets were deposited each year (Figure 8). For example, in aggregate, microarray reuse papers from 2005 mentioned the accession numbers of more than 5% of all datasets that had been submitted two years earlier, in 2003. Reuse papers from 2008 mentioned about 7% of the datasets submitted two years earlier (in 2006), more than 10% of the datasets submitted 3 and 4 years previously (2005 and 2004), and about 7% of the datasets submitted 5 years earlier, in 2003.
(Table 1, continued)

| Attribute | Variable name | Correlation |
|---|---|---|
| Was the corresponding author's address in the USA? | country.usa | 0.18 |
| How many authors did the study have? | num.authors.tr | 0.17 |
| Was the study published in a journal considered a core clinical journal by MEDLINE? | pubmed.is.core.clinical.journal | 0.17 |
| How many previous papers had the last author published? | last.author.num.prev.pubs.tr | 0.15 |
| Did the study involve human subjects? | pubmed.is.humans | 0.08 |
| Was the study funded by the NIH? | pubmed.is.funded.nih | 0.07 |
| Was the study funded by an R grant from the NIH? | has.R.funding | 0.07 |
| Did the study involve plants? | pubmed.is.plants | 0.07 |
| How many previous papers had the first author published? | first.author.num.prev.pubs.tr | 0.06 |
| Did the study involve cancer? | pubmed.is.cancer | 0.06 |
| How many cumulative years of NIH funding did the study receive? | nih.cumulative.years.tr | 0.03 |
| Was the corresponding author's address in the UK? | country.uk | 0.03 |
| How many NIH grants did the study receive? | num.grants.via.nih.tr | 0.02 |
| What was the sum of the annual grants received from the NIH? | nih.sum.avg.dollars.tr | 0.01 |
| Did the study involve bacteria? | pubmed.is.bacteria | 0.01 |
| …in GEO or ArrayExpress? | dataset.in.geo.or.ae | 0.01 |
| How many of the last author's previous papers were identified as creating gene expression microarray data? | last.author.num.prev.microarray.creations.tr | 0.01 |
| Did the study use cultured cells? | pubmed.is.cultured.cells | -0.01 |
| How many of the first author's previous papers were identified as creating gene expression microarray data? | first.author.num.prev.microarray.creations.tr | -0.01 |
| …that had reused data from GEO? | pubmed.is.geo.reuse | -0.01 |
| Was the corresponding author's institution a government institution? | institution.is.govnt | -0.01 |
| Was the corresponding author's address in Australia? | country.australia | -0.02 |
| Did the study receive intramural NIH funding? | pubmed.is.funded.nih.intramural | -0.03 |
| Was the corresponding author's address in Canada? | country.canada | -0.05 |
| What is the rank of the corresponding author's institution? | institution.rank | -0.06 |
| Was the last author female? | last.author.female | -0.07 |
| Was the first author female? | first.author.female | -0.08 |
| Was the corresponding author's address in Japan? | country.japan | -0.10 |
| Did the study involve animals? | pubmed.is.animals | -0.11 |
| Was the corresponding author's address in China? | country.china | -0.19 |
| Was the corresponding author's address in Korea? | country.korea | -0.26 |
| Was the journal that published the study considered an open access journal? | pubmed.is.open.access | -0.30 |
| What year was the study published? | pubmed.year.published | -0.58 |
| What date was the study published? | pubmed.date.in.pubmed | -0.59 |

Table 2. Proportion of sample published in most common journals.

| Journal | Proportion |
|---|---|
| Cancer Res | 0.04 |
| Proc Natl Acad Sci U S A | 0.04 |
| J Biol Chem | 0.04 |
| BMC Genomics | 0.03 |
| Physiol Genomics | 0.03 |
| PLoS One | 0.02 |
| J Bacteriol | 0.02 |
| J Immunol | 0.02 |
| Blood | 0.02 |
| Clin Cancer Res | 0.02 |
| Plant Physiol | 0.02 |
| Mol Cell Biol | 0.01 |

Table 3. Proportion of sample published each year.

| 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 |
|------|------|------|------|------|------|------|------|------|
| 0.02 | 0.05 | 0.08 | 0.11 | 0.13 | 0.12 | 0.17 | 0.18 | 0.15 |

Figure 1. Citation density for papers with and without publicly available microarray data, by year of study publication.
Figure 2. Increased citation count for studies with publicly available data, by year of publication. Estimates from multivariate analysis; lines indicate 95% confidence intervals.
Figure 3. Cumulative number of datasets deposited in GEO each year, and cumulative number of third-party reuse papers published that directly attribute GEO data published each year, log scale.
Figure 4. Number of papers mentioning GEO accession numbers. Each panel represents reuse of a particular year of dataset submissions, with the number of mentions on the y axis, years since the initial publication on the x axis, and separate lines for reuses by the data collection team and by third-party investigators.
Figure 5. Cumulative number of third-party reuse papers, by date of reuse paper publication. Separate lines are displayed for different dataset submission years.
Figure 6. Scatterplot of year of publication of third-party reuse paper (with jitter) vs number of GEO datasets mentioned in the paper (log scale). The line connects the mean number of datasets attributed in reuse papers vs publication year.
Figure 7. Proportion of data reused by third-party papers vs year of data submission. This is a lower bound, because it only considers reuse by papers in PubMed Central, and only when reuse is attributed through direct mention of a GEO accession number.
Figure 8. Proportion of data submissions that contributed to data reuse papers, by year of reuse paper publication and dataset submission. Each panel includes a cohort of data reuse papers published in a given year. The lines indicate the proportion of datasets that were mentioned, in aggregate, by the data reuse papers, by the year of dataset publication. The proportion is relative to the total number of datasets submitted in a given year.
Datasets were considered to be publicly available if they were discoverable in either of the two most widely used gene expression microarray repositories: NCBI's Gene Expression Omnibus (GEO) and EBI's ArrayExpress. GEO was queried for links to the PubMed identifiers in the analysis sample using "pubmed_gds[filter]", and ArrayExpress was queried by searching for each PubMed identifier in a downloaded copy of the ArrayExpress database. An evaluation of this method found that querying GEO and ArrayExpress with PubMed article identifiers recovered 77% of the associated publicly available datasets (Piwowar & Chapman, 2010).

Primary analysis

The core of our analysis is a set of multivariate linear regressions to evaluate the association between the public availability of a study's microarray data and the number of citations received by the study. To explore which variables to include in these regressions, we first looked at correlations between the number of citations and a set of candidate variables, using Pearson correlations for numeric variables and polyserial correlations for binary and categorical variables. We also calculated correlations amongst all variables to investigate collinearity.

PeerJ reviewing PDF | (v2013:04:376:0:0:NEW 4 Apr 2013)

We used a subset of the 124 attributes from (Piwowar, 2011d) previously shown or suspected to correlate with citation rate (Table 1). The main analysis was run across all papers in the sample with covariates found to have a significant pairwise correlation with citation rate.
These included: the date of publication, the journal which published the study, the journal impact factor, the journal citation half-life, the number of articles published by the journal, the journal's open access policy, whether the journal is considered a core clinical journal by MEDLINE, the number of authors of the study, the country of the corresponding author, the citation score of the institution of the corresponding author, the publishing experience of the first and last authors, and the subject of the study itself. Publishing experience was characterized by the number of years since the author's first paper in PubMed, the number of papers the author has published, and the number of citations the author has received from PubMed Central, estimated using Author-ity clusters. The topic of the study was characterized by whether its MeSH terms classified it as related to cancer, animals, or plants. For more information on study attributes see (Piwowar, 2011d).

Citation count was log-transformed to be consistent with prior literature. Other count variables were square-root transformed. Continuous variables were represented with three-part splines in the regression, using the rcs function in the R rms library. The independent variable of data availability was represented as 0 or 1 in the regression, describing whether or not associated data had been found in either of the two data repositories. Because citation counts were log-transformed, the relationship of data availability to citation count was described with 95% confidence intervals after exponentiating the regression coefficient (raising e to the power of the coefficient).

Comparison to 2007 study

We ran two modified analyses to attempt to reproduce the findings of (Piwowar et al., 2007) with the larger dataset of the current study.
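Before turning to the replication, the transform-and-exponentiate step above can be made concrete with a minimal sketch on synthetic data. This uses plain ordinary least squares via NumPy; the actual analysis used R's rms package with restricted cubic splines and many more covariates, so every variable and number below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: one confounder (years since publication) and a 0/1
# data-availability flag with a true multiplicative 9% citation boost.
years_old = rng.integers(1, 10, size=n)
data_available = rng.integers(0, 2, size=n)
log_citations = (
    0.25 * years_old                 # older papers accumulate more citations
    + np.log(1.09) * data_available  # a 9% boost is additive on the log scale
    + rng.normal(0.0, 0.05, size=n)  # noise
)

# Ordinary least squares on the log-transformed outcome
X = np.column_stack([np.ones(n), years_old, data_available])
coef, *_ = np.linalg.lstsq(X, log_citations, rcond=None)

# Because citations were logged, e raised to the coefficient gives the
# multiplicative effect of data availability; subtracting 1 gives the boost.
estimated_boost = np.exp(coef[2]) - 1.0
```

The same back-transformation applies to the confidence interval endpoints, which is how the "9% (5% to 13%)" figures reported later are expressed on the citation scale.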
First, we used a subset of studies with roughly the same inclusion criteria as the earlier paper (studies on cancer, in humans, published prior to 2003) and the same regression covariates: publication date, impact factor, and whether the corresponding author's address is in the USA. We followed that with a second regression that included several additional important covariates: the number of authors and the number of previous citations of the last author.

Stratification by year

Because publication date is such a strong correlate of both citation rate and data availability, we performed a separate analysis stratifying the sample by publication year, in addition to including publication date as a covariate. Fewer covariates could be included in these yearly regressions because each year included fewer datapoints than the full regression. The yearly regressions included the date of publication, the journal which published the study, the journal impact factor, the journal's open access policy, the number of authors of the study, the citation score of the institution of the corresponding author, the previous number of PubMed Central citations received by the first and last authors, whether the study was on the topic of cancer, and whether it used animals.

Manual review of citation context

We manually reviewed the context of citations to data collection papers to estimate how many citations to data collection papers were made in the context of data reuse. We (Jonathan Carlson; see acknowledgements) reviewed 50 citations chosen randomly from the set of all citations to 100 data collection papers. Specifically, we randomly selected 100 datasets deposited in GEO in 2005.
For each dataset, we located the data collection article within ISI Web of Science based on its title and authors, and exported the list of all articles that cited this data collection article. From this, we selected 50 random citations, stratified by the total number of times the data collection article had been cited. By manual review of the relevant full text of each citing paper, we determined whether the data from the associated dataset had been reused within the study.

Data reuse patterns from accession number attribution

A second, independent dataset was collected to correlate with reuse attributions made through mentions of accession numbers, rather than formal citations.

Data collection

Datasets are sometimes attributed directly through mention of the dataset identifier (or accession number) in the full text, in which case the reuse may not contribute to the citation count of the original paper. To capture these instances of reuse, we collected a separate dataset to study reuse patterns based on direct data attribution. We used the NCBI eUtils library and custom Python code to obtain a list of all datasets deposited into the Gene Expression Omnibus data repository, then searched PubMed Central for each of these dataset identifiers (using queries of the form "'GSEnnnn' OR 'GSE nnnn'"). For each hit we recorded the PubMed Central ID of the paper that mentioned the accession number, the year of paper publication, and the author surnames. We also recorded the dataset accession number, the year of dataset publication, and the investigator names associated with the dataset record.

Statistical analysis

To focus on data reuse by third-party investigators (rather than authors attributing datasets they had collected themselves), we excluded papers with author surnames in common with the authors who deposited the original dataset, as in (Piwowar et al., 2011a). PubMed Central contains only a subset of papers recorded in PubMed. As described in (Piwowar et al.,
2011a), to extrapolate from the number of data reuses in PubMed Central to all possible data reuses in PubMed, we divided the yearly number of hits by the ratio of papers in PMC to papers in PubMed for this domain (the domain was measured as the number of articles indexed with the MeSH term "gene expression profiling"). We retained papers published between 2001 and 2010 as reuse candidates. We excluded 2011 because it had a dramatically lower proportion of papers in PubMed Central at the time of our data collection: the NIH requirement to deposit a paper into PMC permits a 12-month embargo. To understand our findings on a per-dataset basis, we stratified reuse estimates by year of dataset submission and normalized our reuse findings by the number of datasets deposited that year.

Results

Acknowledgements

The authors thank Angus Whyte for suggestions on study design. We thank Jonathan Carlson and Estephanie Sta. Maria for their hard work on data collection and annotation. Michael Whitlock and the Biodiversity Research Centre at the University of British Columbia provided community and resources. Finally, we are grateful to everyone who helped with access to Scopus, particularly Andre Vellino, CISTI, and friends at the British Library.

Discussion

The open data citation boost

One of the primary findings of this analysis is that papers with publicly available microarray data received more citations than similar papers that did not make their data available, even after controlling for many variables known to influence citation rate. We found the open data citation boost for this sample to be 9% overall (95% confidence interval: 5% to 13%), but the boost depended heavily on the year the dataset was made available.
Datasets deposited very recently have so far received no (or few) additional citations, while those deposited in 2004-2005 showed a clear boost of about 30% (confidence interval: 15% to 48%). Older datasets also appeared to receive a citation boost, but the estimate is less precise because relatively little microarray data was collected or archived in the early 2000s.

The citation boost reported here is smaller than that reported in the previous study by (Piwowar et al., 2007), which estimated a citation boost of 69% for human cancer gene expression microarray studies published before 2003 (95% confidence interval: 18% to 143%). Our attempt to replicate the (Piwowar et al., 2007) study here suggests that aspects of both the data and the analysis can help to explain the quantitatively different results. It appears that clinically relevant datasets released early in the history of microarray analysis were particularly impactful. Importantly, however, the new analysis also suggested that the previous estimate was confounded by significant citation correlates, including the total number of authors and the citation history of the last author. This finding reinforces the importance of accounting for covariates through multivariate analysis and the need for large samples to support full analysis: the 69% estimate is probably too high, even for its high-impact sample. Nonetheless, a 10-30% citation boost may still be an effective motivator for data deposit, given that prestigious journals have been known to advertise their impact factors to three decimal places (Smith, 2006).

A paper with open data may be cited for reasons other than data reuse, and open data may be reused without citation of the original paper.
Ideally, we would like to separate these two phenomena (data reuse and paper citation) and measure how often the latter is driven by the former. In our manual analysis of 138 citations to papers with open data, we observed that 6% (95% CI: 3% to 11%) of citations were in the context of data reuse. While this methodology and sample size do not allow us to estimate with any precision the proportion of the data citation boost that can be attributed to data reuse, the result is consistent with data reuse being a major contributor.

Another important result of the citation analysis is that papers based on self data reuse dropped off steeply after two years, while data reuse papers by third-party authors continued to accumulate even after six years. This suggests that while researchers may have some incentive to protect their own exclusive use of data close to the time of the initial publication, the equation changes dramatically after a short period. This provides some evidence to guide policy decisions regarding the length of data embargoes allowed by journal archiving policies such as the Joint Data Archiving Policy described by (Rausher et al., 2010).

Challenges collecting citation data

This study required obtaining citation counts for thousands of articles using PubMed IDs. This was not supported at the time of data collection by either Thomson Reuters' Web of Science or Google Scholar. While this type of query was (and is) supported by Elsevier's Scopus database, we lacked institutional access to Scopus, individual subscriptions were not available, and attempts to request access through Scopus staff were unsuccessful. One of us (HP) attempted to use the British Library's walk-in access to Scopus while visiting the UK. Unfortunately, the British Library's policies did not permit any method of electronic input of the PubMed identifier list (the list is 10,000 elements long).
HP eventually obtained official access to Scopus through a Research Worker agreement with Canada's National Research Library (NRC-CISTI), after being fingerprinted to obtain a police clearance certificate because she had recently lived in the United States. Our understanding of research practice suffers because access to tools and data is so difficult.

Patterns of data reuse

To better understand patterns of data reuse, a larger sample of reuse instances is needed than can easily be assembled through manual classification of citation context. To that end, we looked at a complementary source of information about reuse of the same datasets: direct mention of GEO or ArrayExpress accession numbers within the body of a full-text research article. The large number of instances of reuse identified this way allowed us to ask questions about the distribution of reuse over time and across datasets. The results indicate that dataset reuse has been increasing over time (excluding the initial years of GEO and ArrayExpress, when few datasets were deposited and reuse appears to have been atypically broad). Recent reuse analyses include more datasets, on average, than older reuse studies. Also, the fact that reuse was greatest for datasets published between three and six years previously suggests that the lower citation boost we observed for recent papers is due, at least in part, to a relatively short follow-up time. Extrapolating to all of PubMed, we estimate that the number of reuse papers published per year is on the same order of magnitude as, and likely greater than, the number of datasets made available. This data reuse curve is remarkably constant for data deposited between 2004 and 2009.
This reinforces the conclusions of an earlier analysis: even modest data reuse can provide an impressive return on investment for science funders (Piwowar et al., 2011b). We have observed a moderate proportion of datasets being reused by third parties (more than 20% of the datasets deposited between 2003 and 2007). It is important to recognize that this is likely a gross underestimate. It includes only those instances of reuse that can be recognized through the mention of an accession number in PubMed Central. No attempt has been made to extrapolate these distribution statistics to all of PubMed, nor to reflect additional attributions through paper citations or mentions of the archive name alone. Further, many important instances of data reuse leave no trace in the published literature, such as those in education and training.

Reasons for the data citation boost

While we cannot exclude the possibility that the open data citation boost is driven entirely by third-party data reuse, there may be other factors contributing to the effect either directly or indirectly. The literature on possible reasons for an "Open Access citation benefit" suggests a number of factors that may also be relevant to open data (Craig et al., 2007). Building upon this work, we suggest several possible sources for an "Open Data citation benefit":

1. Data Reuse. Papers with available datasets can be used in ways that papers without data cannot, and may receive additional citations as a result.

2. Credibility Signalling. The credibility of research findings may be higher for research papers with available data. Such papers may be preferentially chosen as background citations and/or the foundation of additional research.

3. Increased Visibility. Third-party researchers may be more likely to encounter a paper that has available data, either by a direct link from the data or indirectly due to cross-promotion.
For example, links from a data repository to a paper may increase the search ranking of the research paper.

4. Early View. When data is made available before a paper is published, some citations may accrue earlier than they would otherwise because of accelerated awareness of the methods, findings, etc.

5. Selection Bias. Authors may be more likely to publish data for papers they judge to be their best quality work, because they are particularly proud of or confident in the results (Wicherts et al., 2011).

Importantly, almost all of these mechanisms are aligned with more efficient and effective scientific progress: increased data use, facilitated credibility determination, earlier access, improved discoverability, and a focus on best work through data availability are good for both investigators and the science community as a whole. The one area where the scientific good and author incentives conflict, finding weaknesses or faults in published research, may require mandates to work through. Or, instead, perhaps the research community will eventually come to associate withheld data with poor quality research, as it does today for findings that are not disclosed in a peer-reviewed paper (Ware, 2008).

The citation boost in the current study is consistent with the data reuse observed in this study and the small-scale annotation reported in (Rung & Brazma, 2013). Nonetheless, it is possible that some of the other sources postulated above contributed citations for the studies with available data. Further work will be needed to understand the relative contributions from each source. For example, in-depth analyses of all publications from a set of data-collecting authors could support measurement of selection bias.
Observing the search behavior of researchers, and the returned search hits, could characterize increased visibility due to data availability. Hypothetical examples could be provided to authors to determine whether they would be systematically more likely to cite a paper with available data in situations where they are considering the credibility of research findings.

Future work

Future work can improve on these results by considering and integrating all methods of data use attribution. This holistic effort would include identifying citations to the paper that describes the data collection; mentions of the dataset identifier itself, whether in the full text, the references section, or supplementary information; citations to the dataset as a first-class research object; and even mentions of the data collection investigators in acknowledgement sections. The citations and mentions would need to be classified based on context to ensure they occur in the context of data reuse. The obstacles encountered in obtaining the citation data needed for this study, as described earlier in the Discussion, demonstrate that improvements in tools and practice are needed to make impact tracking easier and more accurate, for day-to-day analysis as well as for studies supporting evidence-based policy. Such research is hamstrung without programmatic access to the full text of the research literature and to the citation databases that underpin impact assessment. The lack of conventions and tool support for data attribution (Mooney & Newton, 2012) is also a significant obstacle, and undoubtedly led to undercounting in the present study. There is much room for improvement, and we are hopeful about recent steps toward data citation standards taken by initiatives such as DataCite. Data from current and future studies can start to be used to estimate the impact of policy decisions. For example, do embargo periods decrease the level of data reuse?
Do restrictive or poorly articulated licensing terms decrease data reuse? Which types of data reuse are facilitated by robust data standards, and which types are unaffected?

Qualitative assessment of data reuse is an essential complement to large-scale quantitative analyses. Repeating and extending previous studies will help us to understand the potential of data reuse, areas of progress, and remaining challenges (e.g. (Zimmerman, 2003; Wan & Pavlidis, 2007; Wynholds et al., 2012; Rolland & Lee, 2013)). Usage statistics from primary data repositories and value-added repositories are also useful sources of insight into reuse patterns (Rung & Brazma, 2013). Citations are blind to many important types of data reuse. The impact of data on practitioners, educators, data journalists, and industry researchers is not captured by attribution patterns in the scientific literature. Altmetrics indicators uncover discussions in social media, syllabi, patents, and theses: analyzing such indicators for datasets would provide valuable evidence of reuse beyond the scientific literature. As evaluators move away from assessing research based on journal impact factor and toward article-level metrics, post-publication metrics will become increasingly important indicators of research impact (Piwowar, 2013).

Conclusions

We find a robust citation benefit from open data, although a smaller one than previously reported. We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data.
We further conclude that, at least for gene expression microarray data, a substantial fraction of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003. It is important to remember that the primary rationale for making research data available has nothing to do with evaluation metrics or citation benefits: a full account of experimental process and findings is a tenet of science, and publicly funded science is a public resource (Smith, 2006). Nonetheless, robust evidence of personal benefit will help as science transitions from "data not shown" to a culture that simply expects data to be part of the published record.

Author contributions

Both authors contributed to the study design, discussed the results and implications, and collaboratively revised the manuscript. HP conceived the initial idea, performed the data collection and statistical analysis, and drafted the initial manuscript.

Data and script availability

Statistical analyses were last run on Wed Apr 3 13:14:39 2013 with R version 2.15.1 (2012-06-22). Packages used included reshape2 (Wickham, 2007), plyr (Wickham, 2011), rms (Harrell, 2012), polycor (Fox, 2010), ascii (Hajage, 2011), ggplot2 (Wickham, 2009), gplots (Bolker et al., 2012), knitr (Xie, 2012), and knitcitations (Boettiger, 2013). P-values were two-tailed. Raw data and statistical scripts are available in the Dryad data repository at [data uploaded to Dryad at the time of article acceptance; citation will be included once known]. Data collection scripts are on GitHub at https://github.com/hpiwowar/georeuse and https://github.com/hpiwowar/pypub. The Markdown version of this manuscript with interleaved statistical scripts (Xie, 2012) is on GitHub at https://github.com/hpiwowar/citation11k. Publication references are available in a publicly available Mendeley group to facilitate exploration.
Funding

This study was funded by DataONE (OCI-0830944), Dryad (DBI-0743720), and a Discovery grant to Michael Whitlock from the Natural Sciences and Engineering Research Council of Canada.

Description of cohort

We identified 10,557 articles published between 2001 and 2009 as collecting gene expression microarray data. The papers were published in 667 journals, with the top 12 journals accounting for 30% of the papers (Table 1). Microarray papers were published more frequently in later years: 2% of articles in our sample were published in 2001, compared to 15% in 2009 (Table 2). The papers were cited between 0 and 2,643 times, with an average of 32 citations per paper and a median of 16 citations. The GEO and ArrayExpress repositories had links to associated datasets for 24.8% of these papers.

Data availability is associated with citation boost

Without accounting for any confounding factors, the distribution of citations was similar for papers with and without archived data. That said, we hasten to mention several strong confounding factors. For example, the number of citations a paper has received is strongly correlated with the date it was published: older papers have had more time to accumulate citations. Furthermore, the probability of data archiving is also correlated with the age of an article: more recent articles are more likely to archive data (Piwowar, 2011d). Accounting for publication date, the distribution of citations for papers with available data is right-shifted relative to the distribution for those without, as seen in Figure 1. Other variables have also been shown to correlate with citation rate (Fu & Aliferis, 2008).
Because single-variable correlations can be misleading, we performed multivariate regression to isolate the relationship between data availability and citation rate from confounders. The multivariate regression included attributes to represent an article's journal, journal impact factor, date of publication, number of authors, number of previous citations of the first and last author, number of previous publications of the last author, whether the paper was about animals or plants, and whether the data was made publicly available. Citations were 9% higher for papers with available data, independent of other variables (p < 0.01, 95% confidence interval: 5% to 13%).

We also performed an analysis on a subset of manually curated articles. The findings were similar to those of the whole sample, supporting our assumption that errors in automated inclusion criteria determination did not have a substantial influence on the estimate (see Supplementary Article S1).

More covariates led to a more conservative estimate

Our estimate of citation boost, 9% as per the multivariate regression, is notably smaller than the 69% (95% confidence interval: 18% to 143%) citation advantage found by (Piwowar et al., 2007), even though both studies looked at publicly available gene expression microarray data. There are several possible reasons for this difference. First, (Piwowar et al., 2007) concentrated on datasets from high-impact studies: human cancer microarray trials published in the early years of microarray analysis (between 1999 and 2003). By contrast, the current study included gene expression microarray studies on any subject published between 2001 and 2009. Second, because the (Piwowar et al.,
2007) sample was small (85 papers), the previous analysis included only a few covariates: publication date, journal impact factor, and country of the corresponding author. We attempted to reproduce the (Piwowar et al., 2007) methods with the current sample. Limiting the inclusion criteria to studies with the MeSH terms "human" and "cancer", and to papers published between 2001 and 2003, reduced the cohort to 308 papers. Running this subsample with the covariates used in the (Piwowar et al., 2007) paper resulted in an estimate comparable to the 2007 paper's: a citation increase of 47% (95% confidence interval: 6% to 103%). The subsample of 308 papers was large enough to include a few additional covariates: the number of authors and the citation history of the last author. Including these important covariates decreased the estimated effect to 18%, with a confidence interval that spanned a loss of 17% of citations to a boost of 66%.

Citation boost over time

After completing our comparison to prior results, we returned to the whole sample. Because publication date is such a strong correlate of both citation rate and data availability, we ran regressions for each publication year individually. The estimate of citation boost varied by year of publication. The citation boost was greatest for data published in 2004 and 2005, at about 30%. Earlier years showed citation boosts with wider confidence intervals due to relatively small sample sizes, while more recently published data showed a less pronounced citation boost (Table 3, Figure 2).

Data reuse is a demonstrable component of citation boost

To estimate the proportion of the citation boost directly attributable to data reuse, we randomly selected and manually reviewed 138 citations.
We classified 8 (6%) of the citations as attributions for data reuse (95% CI: 3% to 11%).

Evidence of reuse from mention of dataset identifiers in full text

A complementary dataset was collected and analyzed to characterize data reuse: direct mention of dataset accession numbers in the full text of papers. In total there were 9,274 mentions of GEO datasets in papers published between 2000 and 2010 within PubMed Central, across 4,543 papers written by author teams whose last names did not overlap with those who deposited the data. Extrapolating to all of PubMed, we estimate there may be about 1.4081 x 10^4 third-party reuses of GEO data attributed through accession numbers in all of PubMed for papers published between 2000 and 2010. The number of reuse papers started to grow rapidly several years after the data archiving rate started to grow. In recent years both the number of datasets and the number of reuse papers have been growing rapidly, at about the same rate, as seen in Figure 3. The level of third-party data use was high: for 100 datasets deposited in year 0, we estimate that 40 papers in PubMed had reused a dataset by year 2, 100 by year 4, and more than 150 by year 5. This data reuse curve is remarkably constant for data deposited between 2004 and 2009. The reuse growth trend for data deposited in 2003 has been slower, perhaps because 2003 data is not as groundbreaking as earlier data, and is likely less standards-compliant and technically relevant than later data. We found that almost all instances of self-reuse (identified by surname overlap with the data submission record) were published within two years of dataset publication. This pattern contrasts sharply with third-party data reuse, as seen in Figure 4. The cumulative number of third-party reuse papers is illustrated in Figure 5. Separate lines are displayed for different dataset publication years.
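The per-100-datasets normalization behind these reuse curves can be illustrated with a small sketch. The cohort size and yearly counts below are made up for illustration only, chosen to mimic the shape of the reported curve; they are not the study's data.

```python
# Hypothetical deposit cohort and yearly counts of NEW third-party reuse
# papers, indexed by years since deposit. Illustrative numbers only.
datasets_deposited = 250
new_reuse_papers = [0, 40, 60, 90, 60, 130]

# Accumulate reuse papers over time, then normalize per 100 deposited datasets.
cumulative = []
running = 0
for count in new_reuse_papers:
    running += count
    cumulative.append(running)

per_100_datasets = [round(100 * c / datasets_deposited) for c in cumulative]
# Mimics the reported shape: ~40 reuse papers per 100 datasets by year 2,
# ~100 by year 4, and more than 150 by year 5.
```

Normalizing each deposit-year cohort this way is what allows reuse curves from years with very different archiving volumes to be compared on one plot.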
Because the number of datasets published has grown dramatically with time, it is instructive to consider the cumulative number of third-party reuses normalized by the number of datasets deposited each year (Figure 6). In the earliest years for which data is available, 2001-2002, there were relatively few data deposits, but these datasets have been disproportionately reused. We exclude the early years from the plot to examine the pattern of data reuse once gene expression datasets became more common. Since 2003, the rate at which individual datasets are reused has increased with each year of data publication.
Growth in the number of datasets in each reuse paper over time
The number of distinct datasets used in a reuse paper was found to increase over time (Figure 7). In 2002-2004 almost all reuse papers used only one or two datasets. By 2010, 25% of reuse papers used 3 or more datasets.
Distribution of reuse across datasets
It is useful to know the distribution of reuse amongst datasets. Since our methods only detect reuse by papers in PubMed Central (a small proportion of the biomedical literature) and only when the accession number is given in the full text, our estimates of reuse are extremely conservative. Despite this, we found that reuse was not limited to just a few datasets (Figure 8). Nearly all datasets published in 2001 were reused at least once. The proportion of reused datasets declined in subsequent years, with a plateau of about 20% for data deposited between 2003 and 2007.
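The datasets-per-paper growth above reduces to a simple statistic: the share of reuse papers that mention at least k distinct datasets. A sketch with hypothetical accession counts, chosen so the example lands on the 25% figure quoted for 2010 (not the paper's data):

```python
def share_using_at_least(datasets_per_paper, k=3):
    # Fraction of reuse papers mentioning at least k distinct datasets.
    hits = sum(1 for n in datasets_per_paper if n >= k)
    return hits / len(datasets_per_paper)

# Hypothetical distinct-dataset counts for twelve reuse papers:
share = share_using_at_least([1, 1, 2, 1, 2, 3, 1, 4, 2, 1, 5, 1])
# share -> 0.25
```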
The actual rate of reuse across all methods of attribution, and extrapolated to all of PubMed, is likely much higher.
Distribution of the age of reused data
We found the authors of third-party data reuse papers were most likely to use data that was 3-6 years old by the time their paper was published, normalized for how many datasets were deposited each year (Figure 9). For example, in aggregate, we found that microarray reuse papers from 2005 mentioned the accession numbers of more than 5% of all datasets that had been submitted two years earlier, in 2003. Reuse papers from 2008 mentioned about 7% of the datasets submitted two years prior (in 2006), more than 10% of the datasets submitted 3 and 4 years prior (2005 and 2004), and about 7% of the datasets submitted 5 years earlier, in 2003. Authors published most papers using their own datasets within two years of their first publication on the dataset, whereas data reuse papers published by third-party investigators continued to accumulate for at least six years. To study patterns of data reuse directly, we compiled 9,724 instances of third-party data reuse via mention of GEO or ArrayExpress accession numbers in the full text of papers. The level of third-party data use was high: for 100 datasets deposited in year 0, we estimated that 40 papers in PubMed reused a dataset by year 2, 100 by year 4, and more than 150 data reuse papers had been published by year 5. Data reuse was distributed across a broad base of datasets: a very conservative estimate found that 20% of the datasets deposited between 2003 and 2007 had been reused at least once by third parties.
CONCLUSION: After accounting for other factors affecting citation rate, we find a robust citation benefit from open data, although a smaller one than previously reported.
We conclude there is a direct effect of third-party data reuse that persists for years beyond the time when researchers have published most of the papers reusing their own data. Other factors that may also contribute to the citation boost are considered. We further conclude that, at least for gene expression microarray data, a substantial fraction of archived datasets are reused, and that the intensity of dataset reuse has been steadily increasing since 2003.
Introduction
\"Sharing information facilitates science. Publicly sharing detailed research data\u2013sample attributes, clinical factors, patient outcomes, DNA sequences, raw mRNA microarray measurements\u2013with other researchers allows these valuable resources to contribute far beyond their original analysis. In addition to being used to confirm original results, raw data can be used to explore related or new hypotheses, particularly when combined with other publicly available data sets. Real data is indispensable when investigating and developing study methods, analysis techniques, and software implementations. The larger scientific community also benefits: sharing data encourages multiple perspectives, helps to identify errors, discourages fraud, is useful for training new researchers, and increases efficient use of funding and patient population resources by avoiding duplicate data collection.\" (Piwowar et al. 2007)
Making research data publicly available also has costs. Some of these costs are borne by society: For example, data archives must be created and maintained. Many costs, however, are borne by the data-collecting investigators: Data must be documented, formatted, and uploaded.
Investigators may be afraid that other researchers will find errors in their results, or \"scoop\" additional analyses they have planned for the future. Personal incentives are important to balance these personal costs. Scientists report that receiving additional citations is an important motivator for publicly archiving their data (Tenopir et al. 2011). There is evidence that studies that make their data available do indeed receive more citations than similar studies that do not (Gleditsch & Strand 2003; Piwowar et al. 2007; Ioannidis et al. 2009; Pienta et al. 2010; Henneken & Accomazzi 2011; Sears 2011; Dorch 2012). These findings have been referenced by new policies that encourage and require data archiving (e.g. Rausher et al. 2010), demonstrating the appetite for evidence of personal benefit. In order for journals, institutions, and funders to craft good data archiving policy, it is important to have an accurate estimate of the citation differential. Estimating an accurate citation differential is made difficult by the many confounding factors that influence citation rate. In past studies, it has seldom been possible to adequately control these confounders statistically, much less experimentally. Here, we perform a large multivariate analysis of the citation differential for studies in which gene expression microarray data either was or was not made available in a public repository. We seek to improve on prior work in two key ways. First, the sample size of this analysis is large \u2013 over two orders of magnitude larger than the first citation study of gene expression microarray data (Piwowar et al. 2007), which gives us the statistical power to account for a larger number of cofactors in the analyses. The resulting estimates thus isolate the association between data availability and citation rate more accurately. Second, this report goes beyond citation analysis to include analysis of data reuse attribution directly.
We explore how data reuse patterns change over both the lifespan of a data repository and the lifespan of a dataset, as well as the distribution of reuse across datasets in a repository.", "url": "https://peerj.com/articles/176/reviews/", "review_1": "Valeria Souza \u00b7 Sep 13, 2013 \u00b7 Academic Editor\nACCEPT\nThe paper is now much more focused and it reads better. All the reviewer suggestions have been complied with, resulting in a great improvement in the structure of the manuscript. It should be accepted for publication.", "review_2": "Valeria Souza \u00b7 Jul 9, 2013 \u00b7 Academic Editor\nMAJOR REVISIONS\nPlease follow all the comments, in particular those of Reviewer 1, who suggested more important changes than Reviewer 2.", "review_3": "Reviewer 1 \u00b7 Jul 8, 2013\nBasic reporting\nSome of the references used in the Introduction and Discussion should be revised because they are not the most appropriate.\nExperimental design\nThe authors should better describe how the samples were obtained and transported, and where the different methods were used or applied.\nValidity of the findings\nPlease check general comments\nAdditional comments\nI think that it is essential that the authors separate MRSA from MSSA and then describe the characterization of each of the classes. Antibiotypes, ST and spa types, and presence of PVL are meaningful if presented in the way I suggest. Both in the text and in the Tables and figures this separation is required. The authors can then compare the data obtained with results from other countries, namely China.\n\nConcerning the frequency of sarX among ST239, the authors need to revise the numbers.
For example, in the paper quoted from Holden et al. 2010, I do not find any mention of the frequency of sarX.\nCite this review as\nAnonymous Reviewer (2013) Peer Review #1 of \"Antimicrobial resistance and molecular epidemiology of Staphylococcus aureus from Ulaanbaatar, Mongolia (v0.1)\". PeerJ https://doi.org/10.7287/peerj.176v0.1/reviews/1", "review_4": "Reviewer 2 \u00b7 Jun 12, 2013\nBasic reporting\nNo Comments\nExperimental design\nNo Comments\nValidity of the findings\nSee below for specifics.\nAdditional comments\nThis manuscript describes a characterization of Staphylococcus aureus from Mongolia. The study was well-designed and aware of limitations. The authors need to clarify some statements and results, and they need to reconsider some of their interpretations, as described below.\n\n1) It is not clear whether the association between mecA (or methicillin or antimicrobials?) and agr function is positive or negative? The wording is poor. Egs, pg7, ln181; pg8, ln199; pg10, ln245-249. How does their finding compare with that described by Rudkin et al. 2012 J Infect Dis 205:798?\n\n2) Pg8, ln204, says 11 \"confirmed\" non-typeable spa sequences. Does this mean they got no amplicon upon repeated PCR attempts, or they got a novel spa sequence? If the former, then this is an unexpectedly high number of isolates, and may call for a more thorough examination of the result (eg different PCR conditions, different primers).\n\n3) Pg9, ln210 and Pg10, ln240, the authors note a \"greater diversity\" among the MSSAs. Three points here: 1) Are they simply equating number of spa types with diversity? They have not statistically compared diversity between MSSA and MRSA. Number of types can follow sample size, so it may be better to quantify diversity with an \"index\" (eg Simpson's) that incorporates number of types and frequency of occurrence, along with an appropriate confidence interval.
2) Many references exist that describe more \"diversity\" among MSSA; some should be cited. 3) Finding more diversity among MSSAs than MRSAs does not necessarily support their statement about MRSAs arising from existing MSSAs (ln241-242) - it depends on whether the MRSA types are a subset of the MSSA types. They could be completely different types yet of different \"diversity\".\n\n4) Pg10, ln252-254, Because they find some of the same MLST-STs in the two time periods they suggest there may be minimal or absent mutations in S. aureus core genome? This logic is flawed. MLST examines only a small part of the genome, so there could be dozens to hundreds of SNPs outside of MLST genes that are not detected. In addition, spa t589 was noted to include both ST45 and \"new ST\" - is the \"new ST\" simply a single bp variant of ST45 or is it something quite different? If it is a new, single bp variant of ST45, then they have evidence against their statement of minimal/absent mutations.\n\n5) It is also not clear what the strategy was for selecting isolates for MLST. Eg the spa t8677 group was the 3rd most common, yet no MLST was done; several rare spa types were done with MLST. One of the more interesting results is that ST45 and ST121 might be quite common in Mongolia; they are not so common elsewhere. Also could the spa types in Table 2 that include MRSAs be indicated with asterisks (or 2 asterisks if solely MRSA)?\n\n\nComments of a minor nature:\n\n6) In the abstract, when you say \"a high multidrug resistance profile\" do you mean high level MICs to at least 3 antibiotics or resistance to many more than 3 antibiotics?\n\n7) The quality of the Mongolia 2013 reference is not clear; suggest delete from the paper (pg3, ln72, as you already have 2 quality references for that statement).\n\n8) On pg 5, ln135, when you say \">= 3 discrete antimicrobial categories\" do you include oxacillin and imipenem in the same category? They are both beta-lactams but different subclasses.
All the other antibiotics are different classes.\n\n9) On pg6, ln150, it is probably worth indicating that the study reporting a high prevalence of sasX focused on isolates from (part of) China. We know very little about the geographic distribution of this gene.\n\n10) Pg6, ln153, indicate how many replicates were done for the agr functionality assay per isolate.\n\n11) Pg7, ln170, it is not clear that patients were \"enrolled\" as this study did not have human subjects involvement with consent forms, etc. Possibly the authors mean to say \"The patients from which the isolates had been collected as part of routine microbiological work...\" or something like that.\n\n12) Pg9, ln209 and Table 2 and Fig 2 legends. It is better to call it a \"putative\" founder, as this BURP algorithm has not been rigorously examined for accuracy in identifying founders.\n\n13) Pg11, ln277, does \"unable to identify any duplicate isolates\" mean they did not have the information to do so (eg which isolates are from which patients) or does it mean they did not identify any duplicates?\n\n14) Pg12, ln292, 38.8% should be 8.8% as stated in the Abstract?\nCite this review as\nAnonymous Reviewer (2013) Peer Review #2 of \"Antimicrobial resistance and molecular epidemiology of Staphylococcus aureus from Ulaanbaatar, Mongolia (v0.1)\". PeerJ https://doi.org/10.7287/peerj.176v0.1/reviews/2", "pdf_1": "https://peerj.com/articles/176v0.2/submission", "pdf_2": "https://peerj.com/articles/176v0.1/submission", "LLM_extracted_review": "['Review 1: The paper is now much more focused and it reads better. All the reviewer suggestions have been complied resulting in a great improvement on the structure of the manuscript. It should be accepted for publication.\\n\\nReview 2: Please follow all the comments, in particular of Reviewer 1, who suggested more important changes than reviewer 2.\\n\\nReview 3: Some of the references used in the Introduction and Discussion should be revised because there are not the most appropriate. The authors should better describe how the samples were obtained and transported. It is essential that the authors separate MRSA from MSSA and then describe the characterization of each of the classes. Concerning frequency of sarX among ST239, the authors need to revise the numbers.\\n\\nReview 4: The study was well-designed and aware of limitations. The authors need to clarify some statements and results, and they need to reconsider some of their interpretations. It is not clear whether the association between mecA and agr function is positive or negative.
The authors need to clarify the meaning of \"confirmed\" non-typeable spa sequences. The authors have not statistically compared diversity between MSSA and MRSA. The logic regarding minimal or absent mutations in the S. aureus core genome is flawed. It is also not clear what the strategy was for selecting isolates for MLST. Comments of a minor nature include clarifications on multidrug resistance profiles, the quality of references, and specific details about the study methodology.\\n\\nReview 5: [No review provided]\\n\\nReview 6: [No review provided]']" }