| { |
| "v1_col_introduction": "introduction : The assessment of the clinical competence of a medical student is challenging. A\ncompetency is \"... an observable ability of a health professional related to a specific activity that integrates knowledge, skills, values, and attitudes. Since they are observable, they can be measured and assessed.\" Although seemingly straight forward, competency based education is of limited usefulness in guiding the design and implementation of educational experiences if they are not tied to specific learning objectives.(1) Additionally, learning objectives are of limited usefulness if they are not available to students and faculty when interacting with patients. Finally, observation and assessment help neither students nor patients if they are not captured and documented in a way that facilitates learner specific plans for improvement and excellence. We present a generalizable initiative that makes national curricula functional in local learning environments, and improves and simplifies observation based assessments and performancebased data tracking for faculty and learners.\nMaterials\u00a0&\u00a0Methods Content\u00a0Manager\nWe developed a mobile, cloud-based application called just in time medicine (or JIT) that\nfunctions effectively on smart phones, tablets and laptop computers. The mobile application is supported by a self-service web-based content management system designed with the explicit aim of enabling users with average computing skills to build their own customizable content, including criterion-based checklists that can then be delivered to any internet enabled device such as a smart phone or tablet.\nFor this project, we utilized nineteen core training problems from the nationally validated\nClerkship Directors in Internal Medicine (CDIM) curriculum and combined these training\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27 28\n29\n30\n31\n32\n33\n34\n35\n36\nPeerJ reviewing PDF | (v2013:05:520:1:0:NEW 17 Aug 2013)\nR ev ie w in g M an\nus cr ip t\nproblems with the observable competencies of communication skills, history taking and physical examination to create problem and task specific checklists. For each assessment, the software calculates the students\u2019 performance by determining the percentage of all potential items performed correctly, and an algorithm generated grade of \u201cnot done/unsatisfactory\u201d, \u201cneeds improvement\u201d or \u201cwell done\u201d is calculated depending on the percentage of items performed correctly. In general if a student achieved 80% of the expected items correctly they received a \u201cwell done grade\u201d; performing 30 \u2013 79% of the expected items resulted in a \u201cneeds improvement\u201d grade, and < 30% an unsatisfactory grade. Figures 1 - 2 present screen shots for the process of building checklists using our content manager for the problem altered mental status and the competency history taking. Additionally, Figures 3 - 4 show how the assessment tools are displayed on the user\u2019s device. Figures 5 - 7 show the permanent cloud-based reporting options associated with individual assessments. 
A fully functional version of JIT can be accessed at www.justintimemedicine.com/mobile; the login username is testuser@journal.com, and the password is test.\nGoals and hypotheses\nIn introducing JIT in our clerkship, we hypothesized that JIT would: 1) facilitate the direct observation and provision of feedback to trainees on their clinical competencies; 2) generally be accepted by faculty; 3) provide a means for recording the observations of trainee performance, and 4) possess adequate reliability and validity.\nSetting\nThe College of Human Medicine (CHM) at Michigan State University is a community-based medical school with clinical training in 7 communities throughout Michigan. Between July 2010 and October 2012 we implemented JIT as an integral part of the internal medicine clerkship among 367 students. Each student was required to complete ten directly observed clinical evaluation exercises (i.e. CEXs) with real patients in authentic clinical settings. A CEX is a short (generally < 20 minutes) directly observed trainee \u2013 patient interaction (e.g. history taking, examination, counseling, etc.); the faculty member observes the interaction, rates it, and provides written comments. Students received an orientation to the CEX application and were required to become familiar with the software. Evaluators (attending faculty and residents) received an email on the importance of direct observation and the basic functionality of the CEX application.\nIn general, students chose the patient, problem and competency upon which to be assessed. At the time of the assessment, students handed their mobile device, with the checklists displayed, to the evaluator for use during the assessed interaction. A total of 516 evaluators subsequently used JIT to guide their observations and assessments of students interacting with patients.\nData Collection\nWe collected the following data: the specific training problems and competencies observed and assessed by the evaluators, the grades associated with the observations, and descriptive data from faculty on the use of JIT. Descriptive data were collected from the faculty via \u201cpull-down\u201d menus located on the last screen of each assessment. A screen shot of the interface is displayed in figure 4.\nReliability and validity assessments\nA group of 17 evaluators (9 internal medicine residents and 8 general internist faculty members) viewed and rated six scripted videotaped encounters using JIT. Each case was scripted for both satisfactory and unsatisfactory performance. These cases have been previously validated by Holmboe as representing levels of competence ranging from unequivocally poor to satisfactory.(2) The sample of raters reflected the number we could reasonably obtain given our small general internal medicine faculty and residency program; we felt it was adequate to provide a stable estimate of the inter-rater reliability of the assessment process.
We calculated the inter-rater reliability using a formula developed by Ebel, implemented in software developed by one of the authors.(3, 4) All other statistical analyses were performed with SPSS version 21.", |
| "v2_col_introduction": "introduction : The assessment of the clinical competence of a medical student is challenging. A\ncompetency is \"... an observable ability of a health professional related to a specific activity that integrates knowledge, skills, values, and attitudes. Since they are observable, they can be measured and assessed.\" Although seemingly straight forward, competency based education is of limited usefulness in guiding the design and implementation of educational experiences if they are not tied to specific learning objectives.(1) Additionally, learning objectives are of limited usefulness if they are not available to students and faculty when interacting with patients. Finally, observation and assessment help neither students nor patients if they are not captured and documented in a way that facilitates learner specific plans for improvement and excellence. We present a generalizable initiative that makes national curricula functional in local learning environments, and improves and simplifies observation based assessments and performance-based data tracking for faculty and learners.\nMaterials\u00a0&\u00a0Methods Content\u00a0Manager\nWe developed a mobile, Cloud-based application called just in time medicine (or JIT)\nthat functions effectively on smart phones, tablets and laptop computers. The mobile application is supported by a self-service web-based content management system designed with the explicit aim of enabling users with average computing skills to build their own customizable content, including criterion-based checklists that can then be delivered to any internet enabled device such as a smart phone or tablet.\n2\n16 17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29 30 31 32\n33\n34\n35\n36\n37\nPeerJ reviewing PDF | (v2013:05:520:0:1:NEW 26 May 2013)\nR ev ie w in g M an\nus cr ip t\nFor this project, we utilized nineteen core training problems from the nationally\nvalidated Clerkship Directors in Internal Medicine (CDIM) curriculum and combined these training problems with the observable competencies of communication skills, history taking and physical exam to create problem and task specific checklists. For each assessment, the software calculates the students\u2019 performance by determining the percentage of all potential items performed correctly, and an algorithm generated grade of \u201cnot done/unsatisfactory\u201d, \u201cneeds improvement\u201d or \u201cwell done\u201d is calculated depending on the percentage of items performed correctly. Figures 1 - 3 present screen shots for the process of building checklists using our content manager for the problem altered mental status and the competency history taking. Additionally, Figures 4 \u2013 6b show the permanent Cloud-based reports associated with the assessments. Access JIT at www.justintimemedicine.com/mobile; UN: testuser@msu.edu PW: testuser.\nGoals\u00a0and\u00a0hypotheses \u00a0 In introducing the JIT in our clerkship, we hypothesized that JIT would: 1) facilitate the direct observation and provision of feedback to trainees on their clinical competencies; 2) generally be accepted by faculty; 3) provide a means for recording the observations of trainee performance, and 4) possess adequate reliability and validity. Setting\nThe College of Human Medicine (CHM) at Michigan State University is a\ncommunity-based medical school with clinical training in 7 communities throughout Michigan. 
Between July 2010 and October 2012 we implemented JIT as an integral part of the internal medicine clerkship among 367 students. Each student was required to complete ten directly observed clinical evaluation exercises (i.e. CEXs) with real patients in authentic clinical settings. Students received an orientation to the CEX app and were required to become familiar with the software. Evaluators (attending faculty and residents) received an email on the importance of direct observation and the basic functionality of the CEX app.\nIn general, students chose the patient, problem and competency upon which to be assessed. At the time of the assessment, students handed their mobile device, with the checklists displayed, to the evaluator for use during the assessed interaction. A total of 516 evaluators subsequently used JIT to guide their observations and assessments of students interacting with patients.\nData Collection\nWe collected the following data: the specific training problems and competencies observed and assessed by the evaluators, the grades associated with the observations, and descriptive data from faculty on the use of JIT.\nReliability and validity assessments\nA group of 17 evaluators viewed and rated six scripted videotaped encounters using JIT. Each case was scripted for both satisfactory and unsatisfactory performance. These cases have been previously validated by Holmboe as representing levels of competence ranging from unequivocally poor to satisfactory.(2) To assess predictive validity, we also correlated \u201cgateway\u201d performance assessment examinations taken by 282 students at the end of their third-year required clerkships with the CEX assessments obtained by JIT.\nHuman Use\nOur medical school has created an \u201cHonest Broker System\u201d for conducting research on student performance data that are collected as a regular part of the students\u2019 educational activities. An employee of the medical school with access to these data has been designated as the \u201cHonest Broker\u201d. In an honest broker system, a person or agency that has access to multiple human subject datasets collected for non-research purposes creates a de-identified dataset that can be used for research purposes without posing risk to the subjects.(3) This approach has been used in various types of clinical research, though, to the best of our knowledge, it has not been applied in educational research at other institutions.\nAt Michigan State University this designated individual created the dataset used in this study and made it available to our research team after removing all identifiers. The Social Science/Behavioral/Education Institutional Review Board (SIRB) of Michigan State University reviewed the study and determined, based on 45 CFR 46(f), that these data do not involve human subjects and do not require IRB review.", |
| "v1_text": "results : Number and types of evaluations Five hundred sixteen evaluators used the application to assess 367 students for a total of 3567 separate assessments. The number of CEX\u2019s completed per student was 9.7 (\u00b1 1.8) and the average number of CEX\u2019s completed per faculty was 6.9 (\u00b1 15.8). The average number of training problems a student was assessed on was 6.7; of the three competency domains of communication skills, history taking, and physical examination 68% of the students had at least one evaluation in each of the three categories. In terms of the grades, time variables and satisfaction, ~ 83% of the encounters were associated with a \u201cwell done\u201d grade, and on average students were credited with performing ~ 86% of the items correctly. (Figure 8) Between 43 \u2013 50% of the CEX assessments took < 10 minutes as estimated by the faculty, and in ~ 69% of the encounters feedback was estimated to occur in less than 10 minutes. In 92% of the encounters, faculty rated that they were either satisfied or highly satisfied with the CEX. The estimated inter-rater reliability of a single rater observing the videotaped encounters was 0.69 (slightly higher for faculty at 0.74 vs. residents at .64). In judging the same simulated patient case scripted to be satisfactory and non-satisfactory, the residents and faculty using JIT discriminated between the satisfactory and non-satisfactory performance. The mean number of items checked for the videotapes scripted for unsatisfactory performance was 35% vs. 59% for those scripted for more satisfactory performance. We believe this provides evidence supporting the construct validity of JIT. 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 PeerJ reviewing PDF | (v2013:05:520:1:0:NEW 17 Aug 2013) R ev ie w in g M an us cr ip t To assess predictive validity, we calculated a Pearson product moment correlation between a \u201cgateway\u201d performance assessment examinations taken by 282 students at the end of their third year required clerkships with the CEX assessments obtained by JIT. There was a small (but statistically significant 0.144, p = .008) correlation between students\u2019 CEX scores and communications skills in the gateway performance assessment exam. discussion : Although national learning objectives have been published for all core clerkships, their usefulness for assessing learning outcomes has been limited. As an example, the core competency gathering essential and accurate information seems relatively straight forward. However, when applied to a single condition such as chronic obstructive pulmonary disease, there are at least 28 specified clinical tasks related to history taking and performing a physical examination that a student should demonstrate to meet the expected outcomes as defined in the Clerkship Directors in Internal Medicine (CDIM) curricular objectives for that problem. Of these 28, how many will a faculty evaluator remember when assessing the student? More importantly how many can they remember and what level of consistency will there be among preceptors providing feedback to students? If we take almost any clinical skill and start to dissect it, we find very quickly that existing human memory is insufficient in recalling all of the explicit steps related to potentially hundreds of conditions that help frame the expected outcomes of a trainee\u2019s educational experience and curricula. 
As the expectations for assessment of discrete competencies increase, the evaluation process for educators, students and administrators becomes progressively more educationally incomplete and logistically unmanageable. The inability of faculty to remember and accurately assess outcomes related to potentially hundreds of discrete educational objectives while evaluating trainees in clinical settings is one of the major reasons faculty have a hard time reliably discriminating unsatisfactory from satisfactory performance, as has been noted by many authors over the past decade using paper-based systems.(2, 5) For example, in a study of mini-CEX evaluations among 300 medical students, Hill noted that problems existed \u201cin trying to ensure that everyone was working to the same or similar standards.\u201d(6) In another study of 400 mini-CEX assessments, Fernando concluded that faculty evaluators were unsure of the level of performance expected of the learners.(7) Hasnain noted that poor agreement among faculty evaluating medical students on a Family Medicine clerkship was due to the fact that \u201cStandards for judging clinical competence were not explicit\u201d.(8) In a randomized trial of a faculty development effort, Holmboe studied the accuracy of faculty ratings by having them view videotaped trainee-patient encounters that were scripted to portray three levels of proficiency: unsatisfactory, marginal or satisfactory. Faculty viewing the exact same encounter varied widely in their assessment of trainee competence, with ratings from unequivocally unsatisfactory (CEX scores of 1 \u2013 3) to unequivocally superior (CEX scores of 7 \u2013 9), regardless of whether the video was scripted to be unsatisfactory or not. After an intensive 4-day faculty development workshop in which participants were tasked with developing a shared mental model of what specific competencies should look like, problems still existed among faculty in discriminating satisfactory from unsatisfactory performance in these scripted encounters.(2) Kogan noted that in the absence of easily accessible frameworks, faculty evaluators default to a myriad of highly variable evaluation strategies, including such idiosyncratic features as instinct, \u201cgut feelings\u201d, \u201cunsubstantiated assumptions\u201d and the faculty members\u2019 emotional response to providing feedback. She also noted that faculty raters commonly fail to use existing frameworks or external standards in guiding their evaluations of trainees, which explains much of the well-recognized problem of poor validity and inter-rater reliability associated with clinical evaluations.(5) Given these realities, it is not surprising that medical trainees commonly do not view the feedback received from faculty as credible or influential in learning, especially if the feedback was not immediate and tied to the trainees\u2019 clinical workplace performance.(9)
Enhancing the effectiveness of clinical assessments, the delivery of feedback related to learning objectives, and the creation of better systems for documenting faculty observations are commonly cited needs in medical education.(8, 10-13) Given these and other trends, systems that are capable of disseminating curricular objectives to students and faculty and that also enable criterion-based assessment have become key educational needs. We believe that cloud-based technology, appropriately applied to maximize efficiency, can contribute to optimizing the learning environment by directly aligning learning objectives from national disciplinary curricula with assessment tools for use by students and faculty anywhere and anytime, especially at the bedside. In our first feasibility study, we demonstrated our ability to deliver national educational objectives published by the CDIM to electronic hand-held personal digital assistants (PDAs) such as Palm\u00ae and PocketPC\u00ae devices.(14) In a second feasibility study, we subsequently demonstrated that this system could be used to deliver and successfully implement competency-based checklists for student assessment related to the CDIM curricular objectives using PDAs.(15) Data from these studies helped us determine that the distribution and use of curricular objectives and related assessment tools by students and faculty in our geographically dispersed medical school could be facilitated with just in time mobile technology. Importantly, we also determined that students and preceptors valued the fact that the content and expected competencies were transparent, and that such transparency facilitated learner assessment.(15) However, technical issues with PDAs -- such as the lack of a direct internet connection and the requirement to \u201csynchronize\u201d data from PDAs to the web using desktop computers -- limited the practicality of PDA-based assessment; such synchronization is not needed with contemporary internet-enabled devices such as iPads, iPhones and other smart phones. These devices have become almost ubiquitous in the past four years, and we have leveraged this trend to evolve JIT into a platform-neutral, cloud-based system. The displayed assessment tools function like an \u201capplication\u201d on mobile devices, but are device-agnostic in that they function on all internet-enabled devices, including smart phones. Our study, like most others, has several inherent limitations. First, this is a single-institution study and these results may not be generalizable; future studies should focus on the use of this technology in other settings. Second, establishing the reliability of all of the customized checklists within the CEX application is needed, as is establishing its reliability in real clinical settings such as the hospital wards. Third, we have not established the validity of the electronic grading algorithm. Fourth, like many tools for direct observation, we have not established the effect of this tool on learning, the transfer of acquired clinical skills to other areas, or the effect that such direct observation has on the most important outcome: patient care.\nconclusions : We have established that just in time cloud-based mobile technology has great potential in competency-based medical education.
Although not an objective of this study, we believe such technology holds great promise for use in authentic clinical settings for measuring student achievement related to educational milestones. Additionally, given the time and cost constraints associated with traditional faculty development efforts, we believe that systems such as JIT have great potential for operationalizing \u201cjust in time\u201d faculty development.\nReferences\n1. Whitcomb ME. More on competency-based education. Acad Med. 2004;79(6):493-4.\n2. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874-81.\n3. Ebel RL. Estimation of the reliability of ratings. Psychometrika. 1951;16:407-24.\n4. Solomon DJ. The rating reliability calculator. BMC Med Res Methodol. 2004;4:11.\n5. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ. 2011;45(10):1048-60.\n6. Hill F, Kendall K, Galbraith K, Crossley J. Implementing the undergraduate mini-CEX: a tailored approach at Southampton University. Med Educ. 2009;43(4):326-34.\n7. Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42(1):89-95.\n8. Hasnain M, Connell KJ, Downing SM, Olthoff A, Yudkowsky R. Toward meaningful evaluation of clinical competence: the role of direct observation in clerkship ratings. Acad Med. 2004;79(10 Suppl):S21-4.\n9. Watling C, Driessen E, van der Vleuten CP, Lingard L. Learning from clinical work: the roles of learning cues and credibility judgements. Med Educ. 2012;46(2):192-200.\n10. Howley LD, Wilson WG. Direct observation of students during clerkship rotations: a multiyear descriptive study. Acad Med. 2004;79(3):276-80.\n11. Torre DM, Simpson DE, Elnicki DM, Sebastian JL, Holmboe ES. Feasibility, reliability and user satisfaction with a PDA-based mini-CEX to evaluate the clinical skills of third-year medical students. Teach Learn Med. 2007;19(3):271-7.\n12. Hauer KE, Kogan JR. Realising the potential value of feedback. Med Educ. 2012;46(2):140-2.\n13. Whitcomb ME. Competency-based graduate medical education? Of course! But how should competency be assessed? Acad Med. 2002;77(5):359-60.\n14. Ferenchick G, Fetters M, Carse AM. Just in time: technology to disseminate curriculum and manage educational requirements with mobile technology. Teach Learn Med. 2008;20(1):44-52.\n15. Ferenchick GS, Foreback J, Towfiq B, Kavanaugh K, Solomon D, Mohmand A. The implementation of a mobile problem-specific electronic CEX for assessing directly observed student-patient encounters. Med Educ Online. 2010;15.\nFigure 1 Step 1.
Content Manager for Development of Assessment Tools. Using simple interfaces, faculty add content (e.g. the problem altered mental status) and the specific competency to be assessed (e.g. history taking).\nFigure 2 Step 2. Content Manager for Development of Assessment Tools. Using the self-service web-based content management system, faculty then add assessment questions reflecting specific criterion-based outcomes (e.g. The student started the interview with open-ended questions).\nFigure 3 Criterion-based assessment for altered mental status and history taking as displayed on the mobile device for use anytime and anywhere. Screen shot A displays how the specific checklist is accessed on the device; screen shot B displays the criterion-based tasks, which default to No and change to Yes (screen shot C) once the task is completed by the learner. Screen shot D displays the algorithm-generated grade.\nFigure 4 Evaluator information is collected using simple interfaces on the device after the assessment is completed, including open-ended qualitative comments. Faculty enter information concerning their observation (screen shot A) and their feedback and action plans (screen shot B). A color-coded competency registry is displayed on the learner\u2019s device (screen shot C). Note that in screen shot B, the evaluator has the option to have an email link sent to him/her to complete the qualitative assessment at a later time. All evaluations become part of the learner\u2019s cloud-based permanent record.\nFigure 5 Detailed cloud-based reporting options: one of the web-based permanent records of the student\u2019s performance, displaying the item(s) assessed, the percentage of potential items correctly performed, the algorithm-generated grade, and the evaluator\u2019s written comments on the learner\u2019s performance (note that all of these features are editable, based upon the users\u2019 needs).\nFigure 6 JIT detailed cloud-based reporting options: with the click of a hyperlink, a detailed list of all the items that were either performed or not performed by the student is displayed.\nFigure 7 Another option for a cloud-based record or registry of the learner\u2019s performance. This image represents a milestone-based report with the identified milestones (A), the milestone subcompetencies (B), and a color-coded table of all of the learner\u2019s assessments (C). A roll-over option (D) identifies which specific assessment is represented in each cell. This table shows the ACGME competency taxonomy for internal medicine.\nFigure 8 Bar chart of grade distribution comparing resident to faculty raters.", |
| "v2_text": "results : Number and types of evaluations Five hundred sixteen evaluators used the app to assess 367 students for a total of 3567 separate assessments. The number of CEX\u2019s completed per student was 9.7 (\u00b1 1.8) and the average number of CEX\u2019s completed per faculty was 6.9 (\u00b1 15.8). The average number of training problems a student was assessed on was 6.7; of the three competency domains of communication skills, history taking, and physical examination 68% of the students had at least one evaluation in each of the three categories. In terms of the grades, time variables and satisfaction, ~ 83% of the encounters were associated with a \u201cwell done\u201d grade, and on average students were credited with performing ~ 86% of the items correctly. Between 43 \u2013 50% of the CEX assessments took < 10 minutes as estimated by the faculty, and in ~ 69% of the encounters feedback was estimated to occur in less 5 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 PeerJ reviewing PDF | (v2013:05:520:0:1:NEW 26 May 2013) R ev ie w in g M an us cr ip t than 10 minutes. In 92% of the encounters, faculty rated that they were either satisfied or highly satisfied with the CEX The inter-rater reliability among faculty observing the videotaped encounters was 0.69 (slightly higher for faculty at 0.74 vs. residents at .64). In judging the exact same clinical performance, these ratings discriminated between satisfactory and non-satisfactory performance, as the mean number of items captured for the performance on the videotapes scripted for unsatisfactory performance was 35% vs. 59% for those scripted for more satisfactory performance. In terms of predictive validity, there was a small (but statistically significant, correlation 0.144, p = .008) correlation between students CEX scores and communications skills in the gateway performance assessment exam. discussion : Although national learning objectives have been published for all core clerkships, their usefulness for assessing learning outcomes has been limited. As an example, the core competency gathering essential and accurate information seems relatively straight forward. However, when applied to a single condition such as chronic obstructive pulmonary disease, there are at least 28 specified clinical tasks related to history taking and performing a physical examination that a student should demonstrate to meet the expected outcomes as defined in the Clerkship Directors in Internal Medicine (CDIM) curricular objectives for that problem. Of these 28, how many will a faculty evaluator remember when assessing the student? More importantly how many can they remember and what level of consistency will there be among preceptors providing feedback to students? 6 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 PeerJ reviewing PDF | (v2013:05:520:0:1:NEW 26 May 2013) R ev ie w in g M an us cr ip t If we take almost any clinical skill and start to dissect it, we find very quickly that existing human memory is insufficient in recalling all of the explicit steps related to potentially hundreds of conditions that help frame the expected outcomes of a trainee\u2019s educational experience and curricula. As the expectations for assessment of discrete competencies increases, the evaluation burden for educators, students and administrators becomes progressively more educationally incomplete and logistically unmanageable. 
The inability of faculty to remember and accurately assess outcomes related to potentially hundreds of discrete educational objectives while evaluating trainees in clinical settings is one of the major reasons faculty have a hard time reliably discriminating unsatisfactory from satisfactory performance, as has been noted by many authors over the past decade.(2, 4) For example, in a study of the mini-CEX among 300 medical students, Hill noted that problems existed \u201cin trying to ensure that everyone was working to the same or similar standards.\u201d(5) In another study of 400 mini-CEX assessments, Fernando concluded that faculty evaluators were unsure of the level of performance expected of the learners.(6) Hasnain noted that poor agreement among faculty evaluating medical students on a Family Medicine clerkship was due to the fact that \u201cStandards for judging clinical competence were not explicit\u201d.(7) In a randomized trial of a faculty development effort, Holmboe studied the accuracy of faculty ratings by having them view videotaped trainee-patient encounters that were scripted to portray three levels of proficiency: unsatisfactory, marginal or satisfactory. Faculty viewing the exact same encounter varied widely in their assessment of trainee competence, with ratings from unequivocally unsatisfactory (CEX scores of 1 \u2013 3) to unequivocally superior (CEX scores of 7 \u2013 9), regardless of whether the video was scripted to be unsatisfactory or not. After an intensive 4-day faculty development workshop in which participants were tasked with developing a shared mental model of what specific competencies should look like, problems still existed among faculty in discriminating satisfactory from unsatisfactory performance in these scripted encounters.(2) Kogan noted that in the absence of easily accessible frameworks, faculty evaluators default to a myriad of highly variable evaluation strategies, including such idiosyncratic features as instinct, \u201cgut feelings\u201d, \u201cunsubstantiated assumptions\u201d and the faculty members\u2019 emotional response to providing feedback. She also noted that faculty raters commonly fail to use existing frameworks or external standards in guiding their evaluations of trainees, which explains much of the well-recognized problem of poor validity and inter-rater reliability associated with clinical evaluations.(4) Given these realities, it is not surprising that medical trainees commonly do not view the feedback received from faculty as credible or influential in learning, especially if the feedback was not immediate and tied to the trainees\u2019 clinical workplace performance.(8) Enhancing the effectiveness of clinical assessments, the delivery of feedback related to learning objectives, and the creation of better systems for documenting faculty observations are commonly cited needs in medical education.(7, 9-12) Given these and other trends, systems that are capable of disseminating curricular objectives to students and faculty and that also enable criterion-based assessment have become key educational needs.
We believe that cloud-based technology, appropriately applied to maximize efficiency, can contribute to optimizing the learning environment by directly aligning learning objectives from national disciplinary curricula with assessment tools for use by students and faculty anywhere and anytime, especially at the bedside. In our first feasibility study, we demonstrated our ability to deliver national educational objectives published by the CDIM to electronic hand-held personal digital assistants (PDAs) such as Palm\u00ae and PocketPC\u00ae devices.(13) In a second feasibility study, we subsequently demonstrated that this system could be used to deliver and successfully implement competency-based checklists for student assessment related to the CDIM curricular objectives using PDAs.(14) Data from these studies helped us determine that the distribution and use of curricular objectives and related assessment tools by students and faculty in our geographically dispersed medical school could be facilitated with just in time mobile technology. Importantly, we also determined that students and preceptors valued the fact that the content and expected competencies were transparent, and that such transparency facilitated learner assessment.(14) However, technical issues with PDAs -- such as the lack of a direct internet connection and the requirement to \u201csynchronize\u201d data from PDAs to the web using desktop computers -- limited the practicality of PDA-based assessment; such synchronization is not needed with contemporary internet-enabled devices such as iPads, iPhones and other smart phones. These devices have become almost ubiquitous in the past four years, and we have leveraged this trend to evolve JIT into a platform-neutral, cloud-based system. The displayed assessment tools function like an \u201capp\u201d on mobile devices, but are device-agnostic in that they function on all internet-enabled devices, including smart phones. Our study, like most others, has several inherent limitations. First, this is a single-institution study and these results may not be generalizable; future studies should focus on the use of this technology in other settings. Second, establishing the reliability of all of the customized checklists within the CEX app is needed, as is establishing its reliability in real clinical settings such as the hospital wards. Third, we have not established the validity of the electronic grading algorithm. Fourth, like many tools for direct observation, we have not established the effect of this tool on learning, the transfer of acquired clinical skills to other areas, or the effect that such direct observation has on the most important outcome: patient care.\nA cloud-based record or registry of the learner\u2019s performance is created.
The image below demonstrates registry reporting organized using the current ACGME competency taxonomy, followed by specific tasks and milestones for an individual learner.\nFigure 5 drills down to more detail on the assessed item (next image).\nFigure 6\nconclusions : We have established that just in time cloud-based mobile technology has great potential in competency-based medical education. Although not an objective of this study, we believe such technology holds great promise for use in authentic clinical settings for measuring student achievement related to educational milestones. Additionally, given the time and cost constraints associated with traditional faculty development efforts, we believe that systems such as JIT have great potential for operationalizing \u201cjust in time\u201d faculty development.\nReferences\n1. Whitcomb ME. More on competency-based education. Acad Med. 2004;79(6):493-4.\n2. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 2004;140(11):874-81.\n3. Boyd AD, Hosner C, Hunscher DA, Athey BD, Clauw DJ, Green LA. An 'Honest Broker' mechanism to maintain privacy for patient care and academic medical research. Int J Med Inform. 2007;76(5-6):407-11.\n4. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ. 2011;45(10):1048-60.\n5. Hill F, Kendall K, Galbraith K, Crossley J. Implementing the undergraduate mini-CEX: a tailored approach at Southampton University. Med Educ. 2009;43(4):326-34.\n6. Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42(1):89-95.\n7. Hasnain M, Connell KJ, Downing SM, Olthoff A, Yudkowsky R. Toward meaningful evaluation of clinical competence: the role of direct observation in clerkship ratings. Acad Med. 2004;79(10 Suppl):S21-4.\n8. Watling C, Driessen E, van der Vleuten CP, Lingard L. Learning from clinical work: the roles of learning cues and credibility judgements. Med Educ. 2012;46(2):192-200.\n9. Howley LD, Wilson WG. Direct observation of students during clerkship rotations: a multiyear descriptive study. Acad Med. 2004;79(3):276-80.\n10. Torre DM, Simpson DE, Elnicki DM, Sebastian JL, Holmboe ES. Feasibility, reliability and user satisfaction with a PDA-based mini-CEX to evaluate the clinical skills of third-year medical students. Teach Learn Med. 2007;19(3):271-7.\n11. Hauer KE, Kogan JR. Realising the potential value of feedback. Med Educ. 2012;46(2):140-2.\n12. Whitcomb ME. Competency-based graduate medical education? Of course! But how should competency be assessed? Acad Med. 2002;77(5):359-60.\n13. Ferenchick G, Fetters M, Carse AM. Just in time: technology to disseminate curriculum and manage educational requirements with mobile technology.
Teach Learn Med. 2008;20(1):44-52.\n14. Ferenchick GS, Foreback J, Towfiq B, Kavanaugh K, Solomon D, Mohmand A. The implementation of a mobile problem-specific electronic CEX for assessing directly observed student-patient encounters. Med Educ Online. 2010;15.\nFigure 1 JIT content manager: simple web-based interfaces allow faculty of average computing skills to enter content for any type of assessment.\nFigure 2 Content as displayed on an internet-enabled device (e.g. iPhone) for use anywhere, anytime.\nFigure 3 Faculty enter information concerning their observation, their feedback and action plans. A color-coded competency registry is displayed on the learner\u2019s device.\nFigure 4 Detailed reporting options: one of the web-based permanent records of the student\u2019s performance, with the item assessed, the percentage of potential items correctly performed, the algorithm-generated grade, and written comments (note that all of these features are editable, based upon the users\u2019 needs).\nFigure 7 With the click of a hyperlink, a detailed list of all the items that were either performed or not performed by the student is displayed.", |
| "url": "https://peerj.com/articles/167/reviews/", |
| "review_1": "Gerard Lazo \u00b7 Aug 31, 2013 \u00b7 Academic Editor\nACCEPT\nThank you for your feedback on the suggested revisions. My read-through went very smoothly and I feel you have addressed all concerns addressed in the initial review. The manuscript was in good shape in the first iteration of review, and it is now even better. Having worked in the area of plant pathology I feel this work can have impact in serving the newest concerns of the science field, and especially to serve well with the latest advances in technology. I applaud your efforts and I expect the feedback to match accordingly when published. Congratulations.", |
| "review_2": "Gerard Lazo \u00b7 Aug 2, 2013 \u00b7 Academic Editor\nMINOR REVISIONS\nThe manuscript appears well written and is poised to provide Galaxy work-flows for bench scientists wishing to conduct transcript and peptide analyses for plant pathology related studies. Both reviewers felt the manuscript was appropriate for publication with what I consider to be minor modifications. The Galaxy environment has gained wide acceptance and I feel your application topic may help aid analyses in other plant pathology systems. This would potentially lead toward building common data connections between diverse host-pathogen interactions, and may even extend beyond the limited area of focus. Your presentation is centered around plant pathology based studies; however, it mainly focuses on the tools and not the research findings. Given that the introduction went well into describing the importance of plant pathology studies, perhaps a mention of some plant pathology revelations uncovered from your work may strengthen the impact of this effort. I will forward this to you with a suggestion of minor modifications. I would like you to try to address the points suggested in the reviews; it does not seem to be a major hurdle to accomplish this in a short period of time. Thank you for submitting this manuscript and I expect it to be well received. Congratulations on your efforts.\n\nOther comments which may be useful are to mention software alternatives to Galaxy and to mention to what extent the tools contained within the work-flows can also be used via the command-line. When mentioning third-party software it is best to note their availability; whether a license is required or not. Since the target audience appears the general bench scientist a description of the system requirements in terms of memory and processor requirements would be helpful. A sample data-set may also serve to let the target audience use and test for the expected outcomes.\n\nAdditional edits suggested :\nExample of annotation:\nLINE NO.: / PREVIOUS FORM / SUGGESTED FORM / [ADDITIONAL NOTES]\n\n68: / Apple\u2019s Mac OS X / the Apple OS X / [Mac is semi-redundant; see wikipedia]\n78: / to support / for support / []\n84: / offers is to offer / offers is / []\n114: / server can made / server can be made / []", |
| "review_3": "Mick Watson \u00b7 Jul 29, 2013\nBasic reporting\nThe paper meets the requirements\nExperimental design\nThe paper meets the requirements\nValidity of the findings\nThe paper meets the requirements\nAdditional comments\nThe authors describe a number of tools and tool wrappers that have been integrated into Galaxy, and provide a use-case in molecular plant pathology\n\nThere could be more mention of alternatives to Galaxy, e.g. Taverna and Anvaya\n\nWhilst MIRA has been integrated, no mention is used of the memory requirements - many are reluctant to integrate assemblers into their Galaxy instances for fear that several large memory jobs are launched by users\n\nOn page 5, two workflows are mentioned that are essentially identical, except one uses GetOrfs for gene finding and the second uses Augustus and Glimmer3. Doesn't the second workflow make the first redundant? Why include the first?\n\nOn page 6, technically I feel orthology should be the basis for transferring functional information, not sequence similarity. Similarly on page 7, isn't it more standard to use reciprocal best hit to define orthologues before transferring annotation?\n\nBottom of page 7, GetOrfs is used again - why not use the aforementioned gene predictors?\n\nWas any attempt made to wrap the InterProScan web-service (rather than standalone)?\n\nTop of page 8, I am curious whether the SignalP licence allows for it to be integrated into a public Galaxy?\n\nThe RXLR prediction tools: as I understand it, the authors have implemented several published methods for RXLR motif prediction, and released these into the Galaxy tool shed. Does this paper serve as notice of their publication? Has any testing been done on these implementations to demonstrate their accuracy and efficacy?\n\nOverall the paper is well written and should be published. The above suggestions can be dealt with by adding text to various parts of the manuscript and do not represent a large body of work, therefore I recommend minor revisions\n\nMick Watson\nCite this review as\nWatson M (2013) Peer Review #1 of \"Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology (v0.1)\". PeerJ https://doi.org/10.7287/peerj.167v0.1/reviews/1", |
| "review_4": "Mikel Ega\u00f1a Aranguren \u00b7 Jul 26, 2013\nBasic reporting\nThe paper is very well written and presents the ideas clearly.\n\nSome minor (Discretionary) comments regarding the style:\n\n* The title is too long, how about \"A Galaxy framework for sequence analysis with applications in molecular plant pathology\"?.\n* In the abstract, NCBI BLAST+ is mentioned and then BLAST is mentioned again, but as an example. It id confusing.\n* In the abstract, in the sentence \"The motivating research theme ... \" it is not clear whether the research theme mentioned refers to Galaxy as a whole or the content of this paper. Also, the abstract reads like a presentation of Galaxy, rather than presenting the authors' work (Specific Galaxy tools).\n* The sentence in lines 148-151 is very difficult to understand.\n* The last part of the sentence in lines 244-245 may be clearer written as follows: \"despite being phylogenetically distant\"\n\nPossible mistakes:\n\nLine 112: computING cluster?\nLine 114: can BE made\nLine 115: extra space after \"e.g.\"? Perhaps the authors can use the LaTex command \\newcommand{\\eg}{\\emph{e.g.}\\xspace} (and the xspace package)\nLine 244: sequenceS\nExperimental design\nThe main objection is that the work presented in this paper is not completely reproducible.\n\nThe authors present a set of Galaxy tools and workflows that exploit such tools. However, only the \"backbones\" of the workflows are stored in the Galaxy tool shed. Therefore, if a user wants to reproduce the workflow, she needs to import it into a Galaxy server and run the workflow with datasets of her choice: since the datasets will be different, the workflows are not completely reproducible.\n\nThe authors should publish the workflows with the datasets they used to test them. Since the authors mention in the acknowledgements that they maintain an in-house Galaxy server, they can easily make the workflows mentioned in the paper public, and also publish a history with the datasets used, with clear instructions mapping the datasets to the corresponding workflow steps. This way any reader can run precisely the workflows presented in the paper, with the actual datasets, and judge the results. If the authors are worried about the computational burden for their server, they can set up accounts for the reviewers only, without making their Galaxy server public.\nValidity of the findings\nAs already mentioned, the datasets used to test the workflows have not been made available.\nCite this review as\nEga\u00f1a Aranguren M (2013) Peer Review #2 of \"Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology (v0.1)\". PeerJ https://doi.org/10.7287/peerj.167v0.1/reviews/2", |
| "pdf_1": "https://peerj.com/articles/167v0.2/submission", |
| "pdf_2": "https://peerj.com/articles/167v0.1/submission", |
| "all_reviews": "Review 1: Gerard Lazo \u00b7 Aug 31, 2013 \u00b7 Academic Editor\nACCEPT\nThank you for your feedback on the suggested revisions. My read-through went very smoothly and I feel you have addressed all concerns addressed in the initial review. The manuscript was in good shape in the first iteration of review, and it is now even better. Having worked in the area of plant pathology I feel this work can have impact in serving the newest concerns of the science field, and especially to serve well with the latest advances in technology. I applaud your efforts and I expect the feedback to match accordingly when published. Congratulations.\nReview 2: Gerard Lazo \u00b7 Aug 2, 2013 \u00b7 Academic Editor\nMINOR REVISIONS\nThe manuscript appears well written and is poised to provide Galaxy work-flows for bench scientists wishing to conduct transcript and peptide analyses for plant pathology related studies. Both reviewers felt the manuscript was appropriate for publication with what I consider to be minor modifications. The Galaxy environment has gained wide acceptance and I feel your application topic may help aid analyses in other plant pathology systems. This would potentially lead toward building common data connections between diverse host-pathogen interactions, and may even extend beyond the limited area of focus. Your presentation is centered around plant pathology based studies; however, it mainly focuses on the tools and not the research findings. Given that the introduction went well into describing the importance of plant pathology studies, perhaps a mention of some plant pathology revelations uncovered from your work may strengthen the impact of this effort. I will forward this to you with a suggestion of minor modifications. I would like you to try to address the points suggested in the reviews; it does not seem to be a major hurdle to accomplish this in a short period of time. Thank you for submitting this manuscript and I expect it to be well received. Congratulations on your efforts.\n\nOther comments which may be useful are to mention software alternatives to Galaxy and to mention to what extent the tools contained within the work-flows can also be used via the command-line. When mentioning third-party software it is best to note their availability; whether a license is required or not. Since the target audience appears the general bench scientist a description of the system requirements in terms of memory and processor requirements would be helpful. A sample data-set may also serve to let the target audience use and test for the expected outcomes.\n\nAdditional edits suggested :\nExample of annotation:\nLINE NO.: / PREVIOUS FORM / SUGGESTED FORM / [ADDITIONAL NOTES]\n\n68: / Apple\u2019s Mac OS X / the Apple OS X / [Mac is semi-redundant; see wikipedia]\n78: / to support / for support / []\n84: / offers is to offer / offers is / []\n114: / server can made / server can be made / []\nReview 3: Mick Watson \u00b7 Jul 29, 2013\nBasic reporting\nThe paper meets the requirements\nExperimental design\nThe paper meets the requirements\nValidity of the findings\nThe paper meets the requirements\nAdditional comments\nThe authors describe a number of tools and tool wrappers that have been integrated into Galaxy, and provide a use-case in molecular plant pathology\n\nThere could be more mention of alternatives to Galaxy, e.g. 
Taverna and Anvaya\n\nWhilst MIRA has been integrated, no mention is made of the memory requirements - many are reluctant to integrate assemblers into their Galaxy instances for fear that several large memory jobs are launched by users\n\nOn page 5, two workflows are mentioned that are essentially identical, except one uses GetOrfs for gene finding and the second uses Augustus and Glimmer3. Doesn't the second workflow make the first redundant? Why include the first?\n\nOn page 6, technically I feel orthology should be the basis for transferring functional information, not sequence similarity. Similarly on page 7, isn't it more standard to use reciprocal best hit to define orthologues before transferring annotation?\n\nBottom of page 7, GetOrfs is used again - why not use the aforementioned gene predictors?\n\nWas any attempt made to wrap the InterProScan web-service (rather than standalone)?\n\nTop of page 8, I am curious whether the SignalP licence allows for it to be integrated into a public Galaxy.\n\nThe RXLR prediction tools: as I understand it, the authors have implemented several published methods for RXLR motif prediction, and released these into the Galaxy tool shed. Does this paper serve as notice of their publication? Has any testing been done on these implementations to demonstrate their accuracy and efficacy?\n\nOverall the paper is well written and should be published. The above suggestions can be dealt with by adding text to various parts of the manuscript and do not represent a large body of work, therefore I recommend minor revisions.\n\nMick Watson\nCite this review as\nWatson M (2013) Peer Review #1 of \"Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology (v0.1)\". PeerJ https://doi.org/10.7287/peerj.167v0.1/reviews/1\nReview 4: Mikel Ega\u00f1a Aranguren \u00b7 Jul 26, 2013\nBasic reporting\nThe paper is very well written and presents the ideas clearly.\n\nSome minor (Discretionary) comments regarding the style:\n\n* The title is too long, how about \"A Galaxy framework for sequence analysis with applications in molecular plant pathology\"?\n* In the abstract, NCBI BLAST+ is mentioned and then BLAST is mentioned again, but as an example. It is confusing.\n* In the abstract, in the sentence \"The motivating research theme ... \" it is not clear whether the research theme mentioned refers to Galaxy as a whole or the content of this paper. Also, the abstract reads like a presentation of Galaxy, rather than presenting the authors' work (Specific Galaxy tools).\n* The sentence in lines 148-151 is very difficult to understand.\n* The last part of the sentence in lines 244-245 may be clearer written as follows: \"despite being phylogenetically distant\"\n\nPossible mistakes:\n\nLine 112: computING cluster?\nLine 114: can BE made\nLine 115: extra space after \"e.g.\"? Perhaps the authors can use the LaTeX command \newcommand{\eg}{\emph{e.g.}\xspace} (and the xspace package)\nLine 244: sequenceS\nExperimental design\nThe main objection is that the work presented in this paper is not completely reproducible.\n\nThe authors present a set of Galaxy tools and workflows that exploit such tools. However, only the \"backbones\" of the workflows are stored in the Galaxy tool shed. 
Therefore, if a user wants to reproduce the workflow, she needs to import it into a Galaxy server and run the workflow with datasets of her choice: since the datasets will be different, the workflows are not completely reproducible.\n\nThe authors should publish the workflows with the datasets they used to test them. Since the authors mention in the acknowledgements that they maintain an in-house Galaxy server, they can easily make the workflows mentioned in the paper public, and also publish a history with the datasets used, with clear instructions mapping the datasets to the corresponding workflow steps. This way any reader can run precisely the workflows presented in the paper, with the actual datasets, and judge the results. If the authors are worried about the computational burden for their server, they can set up accounts for the reviewers only, without making their Galaxy server public.\nValidity of the findings\nAs already mentioned, the datasets used to test the workflows have not been made available.\nCite this review as\nEga\u00f1a Aranguren M (2013) Peer Review #2 of \"Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology (v0.1)\". PeerJ https://doi.org/10.7287/peerj.167v0.1/reviews/2", |
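Review 3 above argues that orthology, usually established by reciprocal best hits rather than one-way sequence similarity, is the safer basis for transferring functional annotation. As a purely illustrative note (not part of the reviewed tools or the manuscript), the sketch below shows one common way to collect reciprocal best hits from two tabular BLAST outputs in the default -outfmt 6 layout; the file names and the choice of bit score as the ranking criterion are assumptions for the example.

# Illustrative sketch only: reciprocal best hit (RBH) orthologue pairing from
# two tabular BLAST outputs (default -outfmt 6, bit score in column 12).
# File names below are hypothetical.

def best_hits(blast_tabular_path):
    """Return {query_id: best_subject_id}, keeping the highest bit-score hit per query."""
    best = {}  # query -> (subject, bit_score)
    with open(blast_tabular_path) as handle:
        for line in handle:
            if not line.strip() or line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            query, subject, bit_score = fields[0], fields[1], float(fields[11])
            if query not in best or bit_score > best[query][1]:
                best[query] = (subject, bit_score)
    return {query: subject for query, (subject, _) in best.items()}


def reciprocal_best_hits(a_vs_b_path, b_vs_a_path):
    """Yield (gene_a, gene_b) pairs that are each other's best hit in both directions."""
    a_best = best_hits(a_vs_b_path)  # species A queries searched against species B
    b_best = best_hits(b_vs_a_path)  # species B queries searched against species A
    for gene_a, gene_b in a_best.items():
        if b_best.get(gene_b) == gene_a:
            yield gene_a, gene_b


if __name__ == "__main__":
    for gene_a, gene_b in reciprocal_best_hits("speciesA_vs_speciesB.tsv",
                                               "speciesB_vs_speciesA.tsv"):
        print(gene_a + "\t" + gene_b)

Pairs produced this way could then serve as the mapping for annotation transfer, in place of single-direction top hits.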
| "LLM_extracted_review": "['Review 1: The manuscript was in good shape in the first iteration of review, and it is now even better. I feel this work can have impact in serving the newest concerns of the science field, especially to serve well with the latest advances in technology.\\n\\nReview 2: The manuscript appears well written and is poised to provide Galaxy work-flows for bench scientists wishing to conduct transcript and peptide analyses for plant pathology related studies. The presentation is centered around plant pathology based studies; however, it mainly focuses on the tools and not the research findings. A mention of some plant pathology revelations uncovered from your work may strengthen the impact of this effort. Other comments which may be useful are to mention software alternatives to Galaxy and to mention to what extent the tools contained within the work-flows can also be used via the command-line. A description of the system requirements in terms of memory and processor requirements would be helpful. A sample data-set may also serve to let the target audience use and test for the expected outcomes.\\n\\nReview 3: The authors describe a number of tools and tool wrappers that have been integrated into Galaxy, and provide a use-case in molecular plant pathology. There could be more mention of alternatives to Galaxy, e.g. Taverna and Anvaya. No mention is used of the memory requirements for MIRA, which may deter users from integrating assemblers into their Galaxy instances. Two workflows mentioned are essentially identical; one uses GetOrfs for gene finding and the second uses Augustus and Glimmer3, making the first redundant. It is more standard to use reciprocal best hit to define orthologues before transferring annotation. The paper should clarify whether the SignalP licence allows for integration into a public Galaxy. The RXLR prediction tools have been implemented, but it is unclear if this paper serves as notice of their publication or if testing has been done to demonstrate their accuracy and efficacy.\\n\\nReview 4: The title is too long; a suggestion is \"A Galaxy framework for sequence analysis with applications in molecular plant pathology.\" The abstract reads like a presentation of Galaxy rather than the authors\\' work. The main objection is that the work presented is not completely reproducible. The authors should publish the workflows with the datasets they used to test them. If the authors are worried about the computational burden for their server, they can set up accounts for the reviewers only, without making their Galaxy server public. The datasets used to test the workflows have not been made available.']" |
| } |
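Review 3 also asks how the re-implemented RXLR motif predictors were tested. For readers unfamiliar with this class of method, the following is a minimal, hypothetical sketch of the simplest string-matching style of RXLR screening (an RxLR core motif near the N-terminus followed by an EER-like motif shortly after). The regular expressions and window sizes are illustrative assumptions, not the parameters of any published method or of the tools discussed in the reviews.

import re

# Illustrative sketch only: naive string-matching screen for candidate RXLR effectors.
# The motif windows below (RxLR in the N-terminal region, an EER-like motif shortly
# after it) are assumptions for illustration.

RXLR = re.compile(r"R.LR")          # RxLR core motif: Arg, any residue, Leu, Arg
EER = re.compile(r"[ED][ED][KR]")   # EER-like motif

def looks_like_rxlr_candidate(protein_sequence, rxlr_window=60, eer_window=40):
    """Return True if an RxLR match occurs within the first rxlr_window residues
    and an EER-like match follows within eer_window residues of it."""
    rxlr_match = RXLR.search(protein_sequence, 0, rxlr_window)
    if rxlr_match is None:
        return False
    start = rxlr_match.end()
    return EER.search(protein_sequence, start, start + eer_window) is not None

if __name__ == "__main__":
    # Toy protein sequence for illustration only.
    toy = "MRVSLLALAAVAATSAHAKSDERASLRFLRQAPDKVEERGF" + "A" * 60
    print(looks_like_rxlr_candidate(toy))

Published approaches differ in the exact motif definitions, the windows used, and whether signal peptide prediction or HMM scoring is also required, which is why the reviewer's question about validating the re-implementations matters.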