ABSTRACT BOOK: SESSION 4 MONDAY 26 AUGUST: 1400-1530


Simone Scheffer (Charite Universitaetsmedizin Berlin, Dieter Scheffner Center for Medical Teaching and Educational Research, Berlin, Germany)
Stefan Schauber (Charite Universitaetsmedizin Berlin, Dieter Scheffner Center for Medical Teaching and Educational Research, Berlin, Germany)

Background: At the Charite-Universitaetsmedizin Berlin, OSCEs are used for high-stakes examinations. Performance of skills is scored with a checklist, and communication skills with a set of global ratings [Scheffer et al. 2008].

A preliminary evaluation study indicated that students considered neither the standard report nor the assessor's feedback useful. Written comments in particular asked for more detailed information on their performance. We therefore looked for a way to provide an enhanced report while avoiding the obstacles of immediate assessor feedback and without publishing the checklist items. The procedure needed to integrate smoothly into our routine data processing and psychometric analysis. Moreover, it had to be appropriate for the criterion-referenced nature of our OSCEs.

Summary of work: Our aim was to give detailed feedback on individual performance on the particular tasks within each OSCE station. Hence, we generated between 1 and 4 item sets per station, mapping each item to a corresponding task within the station. We chose the median as the measure of central tendency and classified task performance as "insufficient", "doubtful", "sufficient" or "well done". Data preparation, analysis and report generation were done using free, open-source software. The style was aligned with the feedback of the Progress Test Medizin, which is well known to our students.

Summary of results: The enhanced student report adds information on individual achievement at both station and task level. In addition, it presents average peer-group performance.
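As a rough illustration only, the task-level classification and peer-group median described above might be implemented as in the following Python/pandas sketch (the abstract says only "free, open source software"); the column names, the 0-100% score scale and the cut scores are assumptions, not details from the authors' pipeline.

```python
# Hypothetical sketch of the task-level feedback classification; column
# names, the percentage scale and the cut scores are invented assumptions.
import pandas as pd

# One row per student, station and task: percentage of item points achieved.
scores = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2"],
    "station": [1, 1, 1, 1],
    "task":    ["history", "exam", "history", "exam"],
    "pct":     [35.0, 80.0, 55.0, 95.0],
})

# Bin each task performance into the four feedback categories.
bins = [0, 40, 60, 80, 100]                  # assumed cut scores
labels = ["insufficient", "doubtful", "sufficient", "well done"]
scores["category"] = pd.cut(scores["pct"], bins=bins,
                            labels=labels, include_lowest=True)

# Median as the peer-group measure of central tendency, per station and task.
peer_median = scores.groupby(["station", "task"])["pct"].median()
print(scores, peer_median, sep="\n")
```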

Conclusions: Implementation of an enhanced report of results is feasible. Acceptance is being evaluated at the moment.

Take-home messages: Providing a detailed student report of the OSCE is feasible.

4FF/9

Objective Structured Clinical Examination for Post-Graduate Training

Chaur-Jong Hu (Shuang Ho Hospital, Taipei Medical University, Education and Research, Neurology, New Taipei City, Taiwan)

Mei-Yi Wu (Shuang Ho Hospital, Taipei Medical University, Education and Research, Nephrology, New Taipei City, Taiwan)

Tzu-Tao Chen (Shuang Ho Hospital, Taipei Medical University, Education and Research, Chest, 291, JhongJheng Road, Jhonghe District New Taipei City 235, Taiwan)

Bei-Wen Wu (Shuang Ho Hospital, Taipei Medical University, Education and Research, New Taipei City, Taiwan)

Nai-Fang Chi (Shuang Ho Hospital, Taipei Medical University, Education and Research, Neurology, New Taipei City, Taiwan)

Yuh-Feng Lin (Shuang Ho Hospital, Taipei Medical University, Education and Research, Nephrology, New Taipei City, Taiwan)

Background: One-year post-graduate (PGY) training has been implemented in Taiwan since 2011. The training course consists of 4 months of internal medicine, 2 months of surgery, 2 months of community medicine, and 1 month each of emergency medicine, pediatrics, obstetrics and gynecology (OBS/GYN) and an elective course. In addition, an objective structured clinical examination (OSCE) has been included in the national board examination since 2013. To assess the performance of the resident trainees, an OSCE was implemented.

Summary of work: A committee was responsible for preparing the OSCE, which differs from the national board examination for medical students: the OSCE for PGY trainees is designed with a longer time, more tasks and more complicated situations at every station. The PGY trainees were asked to do more clinical reasoning and decision making and to perform more precise physical examinations. This is a formative assessment accompanied by feedback at every station.


Summary of results: We have developed 12 OSCE stations, with reliability and validity analysis, and conducted a 9-station OSCE for PGY trainees. We found that the PGY trainees were familiar with the OSCE. OSCE performance correlated with the evaluations of the trainees' mentors. We also found that the PGY trainees could not handle psychiatric patients well; this finding alerts us to modify the training course.

Conclusions: OSCE could provide important information not only for PGY learning but also for the training curriculum.

Take-home messages: OSCE could be a good formative assessment for PGY. OSCE could provide information for training curriculum improvement.

4FF/10

Does experience of public performance relate to students' results in the OSCE examination?

Michael Chan (University of Sheffield, Medical School, Medical School, Beech Hill Road, Sheffield S10 2RX, United Kingdom)

Caroline Woodley (University of Sheffield, Medical School, Sheffield, United Kingdom)
Michael Jennings (University of Sheffield, Medical School, Sheffield, United Kingdom)
Nigel Bax (University of Sheffield, Medical School, Sheffield, United Kingdom)

Philip Chan (University of Sheffield, Medical School, Sheffield, United Kingdom)

Background: Personal qualities have been shown to affect students' exam results. We studied the effect of experience, and level, of public performance in music, drama, dance, sport and debate (MDDSD) at the time of admission to medical school as a predictor of student achievement in their first OSCE examination.

Summary of work: A single medical school cohort (n=265) sitting their third-year clinical exam in 2011 was studied. Pre-admission statements made at the time of application were coded for stated achievements in public performance; participation in each MDDSD area was scored 0-3, where 0 = no record, 1 = leisure-time activity, 2 = activity at school or local level, and 3 = activity at district, regional or national level. These scores were correlated with OSCE results by linear regression and t-test.

Summary of results: There was a bell-shaped distribution of performance scores in this cohort. There was no significant linear regression relationship between OSCE results and the overall performance score, or for any subgroup. There was a non-significant trend for students who failed the OSCE to have lower performance scores than students who passed or achieved excellent results (p=0.50 and 0.46, respectively).

Conclusions: We found no compelling evidence that experience of public performance, or level of excellence, in the MDDSD areas was related to OSCE results. These areas are often seen as enrichments to secondary education and used as surrogates for desirable qualities in demanding courses such as medicine.
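The scoring-and-analysis step described in the Summary of work above could look like the following minimal sketch. All data, the pass mark and the score layout are invented for illustration; this is not the authors' analysis code.

```python
# Hypothetical sketch: per-area MDDSD scores (0-3) summed to an overall
# performance score, then related to OSCE results by linear regression
# and a t-test. Everything here is simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 265
mddsd = rng.integers(0, 4, size=(n, 5))    # music, drama, dance, sport, debate
performance = mddsd.sum(axis=1)            # overall public-performance score
osce = rng.normal(70, 8, size=n)           # OSCE result (%)

# Linear regression of OSCE result on the overall performance score.
res = stats.linregress(performance, osce)
print(f"slope={res.slope:.3f}, r={res.rvalue:.3f}, p={res.pvalue:.3f}")

# t-test: performance scores of students who passed vs those who failed.
passed = osce >= 60                        # assumed pass mark
t, p = stats.ttest_ind(performance[passed], performance[~passed])
print(f"t={t:.2f}, p={p:.3f}")
```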

Take-home messages: Other factors seem more important in accounting for variability in OSCE results.

4FF/11

Assessing the Validity of Multidisciplinary Mini Clinical Evaluation Exercises (mini-CEX): A Comparison to a Multidisciplinary OSCE

Mylene Cote (University of Ottawa, Department of Medicine, The Ottawa Hospital - Civic Campus, Department of Medicine, 407A - 737 Parkdale Ave., Ottawa K1Y 1J8, Canada)
Debra Pugh (University of Ottawa, Department of Medicine, Ottawa, Canada)
Timothy Wood (University of Ottawa, Academy for Innovation in Medical Education, Ottawa, Canada)
Melissa Forgie (University of Ottawa, Department of Medicine, Ottawa, Canada)
Susan Humphrey-Murto (University of Ottawa, Department of Medicine, Ottawa, Canada)

Background: The mini-CEX is a workplace-based tool designed to assess competency based on direct observation. As with any assessment tool, it is important to collect validity evidence. Although some evidence associated with the mini-CEX has been reported, the purpose of this study was to gather further validity evidence by comparing student performance on the mini-CEX to their performance on other examinations.

Summary of work: Data from clinical rotations of third-year medical students were collected. Each mini-CEX form included six items and a global rating. For each form, the average rating of the items (mean-items) was calculated, as well as the mean score for the global rating (mean-GR). Using correlations, mini-CEX ratings were compared with scores on two multidisciplinary Objective Structured Clinical Examinations (OSCEs) and five written clerkship exams.

Summary of results: There were 1262 mini-CEX forms available for analysis from 147 students. Correlations between the overall OSCE scores and the mini-CEX were 0.30 to 0.34 (mean-items) and 0.30 to 0.35 (mean-GR) (p<0.01). Correlations between the communication component of the OSCE score and the mini-CEX were 0.26 to 0.30 (mean-items) and 0.33 and 0.36 (mean-GR) (p<0.01). Correlations between the mini-CEX and two of the written exams were significant: family medicine 0.21 and surgery 0.22 (p<0.01).
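A minimal sketch of how the mean-items and mean-GR aggregates and their correlations could be computed is given below; the data layout, rating scale and column names are invented, and this is not the authors' actual pipeline.

```python
# Hypothetical sketch of the mini-CEX aggregation and correlation analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_forms, n_students = 1262, 147

# One row per mini-CEX form: six item ratings plus one global rating.
forms = pd.DataFrame(rng.integers(4, 10, size=(n_forms, 7)),
                     columns=[f"item{i}" for i in range(1, 7)] + ["global"])
forms["student"] = rng.integers(0, n_students, size=n_forms)

# mean-items per form, then averaged per student; same for the global rating.
item_cols = [f"item{i}" for i in range(1, 7)]
forms["mean_items"] = forms[item_cols].mean(axis=1)
per_student = forms.groupby("student")[["mean_items", "global"]].mean()
per_student.columns = ["mean_items", "mean_gr"]

# Pearson correlation with an (invented) overall OSCE score per student.
per_student["osce_total"] = rng.normal(70, 8, size=len(per_student))
print(per_student[["mean_items", "mean_gr"]].corrwith(per_student["osce_total"]))
```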

Conclusions: Student performance on the mini-CEX is significantly correlated with multidisciplinary OSCEs, but not consistently with written exam scores.

Take-home messages: This study provides further validity evidence for the use of the mini-CEX as a clinical skills assessment tool.


4FF/12

Evaluating trends of students' performance and quality of an OSCE: Three years of experience in Tehran University of Medical Sciences

Sara Mortaz Hejri (Tehran University of Medical Sciences, Medical Education Department, Tehran, Iran)
Mohammad Jalili (Tehran University of Medical Sciences, Center for Educational Research in Medical Sciences, Third Floor, Ghods Street, Keshavarz Boulevard, Tehran 14138-43941, Iran)

Ali Labaf (Tehran University of Medical Sciences, Clinical Skill Center, Tehran, Iran)

Background: Until 2009, all medical students at Tehran University of Medical Sciences had to take a comprehensive written examination before internship. An OSCE was added in 2009 to fill the gap in clinical skills assessment at this stage. We aimed to evaluate the educational impact of these OSCEs by checking whether students' scores have improved over the years.

Summary of work: To compare students' scores over the years, 6 station categories were defined. The candidates' scores in each category were calculated and trends over the years were evaluated.

Summary of results: Six OSCEs, each comprising 11 to 14 stations, have been held for a total of 945 candidates. Mean scores ranged from 49.11 (±7.92) to 67.48 (±7.82), with pass rates from 48.1% to 98.4%. Cronbach's alpha ranged from 0.52 to 0.71. Over these years, the ranges of mean scores in the categories of history taking, physical examination, communication skills, performing procedures, diagnosis, and patient management were 50.27 to 64.37, 49.10 to 73.86, 31.29 to 66.56, 25.86 to 70.67, 33.18 to 65.80, and 41.45 to 61.45, respectively.
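For illustration, Cronbach's alpha for a single OSCE (treating stations as the "items" of the exam) can be computed as in the sketch below; the score matrix is simulated, so the resulting value is illustrative only and not a figure from the study.

```python
# Minimal sketch of Cronbach's alpha with stations as items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: candidates x stations matrix of station scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of station variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(60, 8, size=(150, 1))           # candidate "true" level
demo = ability + rng.normal(0, 10, size=(150, 12))   # 12 stations with noise
print(f"alpha = {cronbach_alpha(demo):.2f}")
```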

Conclusions: As illustrated by the scores and pass rates, students' performance in most categories improved during this period. This may be attributed to the establishment of the exam drawing students' attention to the importance of clinical skills, a desirable educational impact.

4FF/13

Use of objective structured clinical examination for evaluation of medical student communication skills

Vitaliy Koikov (Republican Center of healthcare development, Centre for Research, Expertise and Health Innovation Development, Imanova str.13, Astana 010000, Kazakhstan)

Gulmira Derbissalina (Astana Medical University, General Medicine Department, Astana, Kazakhstan)
Zhanagul Bekbergenova (Astana Medical University, General Medicine Department, Astana, Kazakhstan)
Lazat Karsakbaeyeva (Astana Medical University, General Medicine Department, Astana, Kazakhstan)
Zarema Gabdilashimova (Astana Medical University, Obstetrics and Gynecology Department, Astana, Kazakhstan)

Background: Communication skills are an important professional competence for physicians. One method of evaluating a student's communicative competence is to conduct an objective structured clinical examination (OSCE) involving standardized patients (volunteers).

Summary of work: According to the State educational standards of Kazakhstan, 5th-year students (specialty "General Medicine") have to pass a two-stage exam at the end of their study of clinical subjects. This exam consists of integrated testing and skills evaluation by OSCE. The subject "General practice" includes four modules: "Internal Medicine", "Childhood diseases", "Obstetrics and Gynecology" and "Surgical diseases". Skills were evaluated at 10 OSCE stations, with 6th-year student volunteers acting as standardized patients. Analysis of these stations allows us to assess the students' level of communication skills. Students who had passed the exam completed an anonymous questionnaire evaluating the OSCE and their communication skills.

Summary of results: We analyzed 51 questionnaires. The analysis showed that 4% of the students mentioned unfriendly volunteers, 27% wrote that it was difficult to take a medical history, 2% had never faced such a situation before, and 12% noted an improvement in the volunteers' communication skills. Only 10% of examinees felt that the OSCE does not develop their communication skills, and 24% named stations that caused them some difficulty. However, the students themselves admitted their poor communication skills.

Conclusions: An OSCE with volunteers and systematic feedback from both students and volunteers can improve, as well as assess, students' communicative competence. The questionnaire analysis showed that students are self-critical about their communication skills and recognize the need for continuous improvement.

Take-home messages: Use of the OSCE makes it possible to achieve an objective, unbiased assessment of students' knowledge and skills, and to evaluate teaching and curriculum quality.

4FF/14

Student perception of revision modalities in preparation for final Objective Structured Clinical Examinations (OSCE)

Alex Bonner (University of Manchester, Undergraduate Medicine, Central Manchester University Hospitals NHS Foundation Trust Education North (Academic Campus), Oxford Road, Manchester M13 9WL, United Kingdom)
Melanie Dowling (University of Manchester, Undergraduate Medicine, Manchester, United Kingdom)
Katy Hinchcliffe (University of Manchester, Undergraduate Medicine, Manchester, United Kingdom)
Nick Smith (University of Manchester, Undergraduate Medicine, Manchester, United Kingdom)

Background: Medical students preparing for their final OSCE turn to a variety of sources to aid their revision.


Summary of work: Via an electronic survey, we asked 128 students who had passed the final OSCE of the MBChB course to rate the perceived value of a variety of revision strategies in contributing to their success; 64 responses were obtained.

Summary of results: The mock OSCE was the most highly rated modality; 64.3% of respondents rated it as "essential" and 83.9% as either "essential" or "important". Expert-led small-group workshops (covering specialist subjects, e.g. dermatology and ophthalmology) were rated as "essential" by 42.2% and as either "essential" or "important" by 71.1%. A revision weekend delivered by junior doctors (non-experts) was similarly highly rated (70.3% as either "essential" or "important"). However, non-exam-orientated tutorials held throughout the 5th year were rated by only 9.4% as "essential" and by 46.9% as either "essential" or "important"; 15.6% of respondents cited this modality as having either "no impact" or "little impact" on whether or not they passed their final OSCE.

Conclusions: The data suggest that mock examinations are highly rated by students preparing for OSCEs. Small-group tutorials led by both experts and non-experts were highly rated. In the run-up to their examinations, students do not value unfocused tutorials.

Take-home messages: Non-expert revision sessions led by junior doctors may be valued as much as expert-led tutorials, and have the potential to benefit students and junior doctors. Mock examinations may be the most valuable revision tool and we propose further research to explore this.

4FF/15

Better Judgement: Improving assessors' management of factors affecting their judgement

Lisa Schmidt (Flinders University, Centre for University Teaching, GPO Box 2100, Adelaide 5001, Australia)
Lambert Schuwirth (Flinders University, Flinders Innovation in Clinical Education/HPE, Adelaide, Australia)

Background: It is increasingly clear that human judgement is indispensable in the assessment of students, especially in competency-based education. This requires a focus on assessor expertise, because assessing students requires expertise similar to clinical decision making. Managing judgement biases is important in developing such expertise, because biases are incomplete (or incorrect) representations in the assessor's mind of what occurred during the assessment. But biases are very hard, if not impossible, to train away, so a viable approach is to focus assessor training on developing biases into full-blown person and performance scripts, because possessing such scripts is associated with expertise.

Summary of work: We have developed an assessor-training package containing video presentations for the theoretical background, scripted video vignettes of assessment situations to practice recognition in well-defined situations, YouTube clips for ill-defined situations, activities to apply all this to the participants' professional contexts, and a compendium of practical strategies to prevent biases from unduly influencing the overall judgement.

Summary of results: The workshops demonstrated a need and demand in this area, led to an increased understanding of the issue, and produced a plethora of helpful strategies. We also gained a deeper understanding of how individual biases influence assessors' judgements and interact with each other, and how this can be counteracted.

Conclusions: Assessors want and need training to manage judgement biases.

Take-home messages: Assessors need to understand, recognise and manage the impact judgement biases may have on them and there is training available for this.

4GG ePosters: Simulation

Location: North Hall, PCC

4GG/1

Evaluation of an innovative scenario based training concept in spine surgery

Susanne Kotzsch (HTWK Leipzig - University of Applied Sciences, Innovative Surgical Training Technologies (ISTT), Eilenburger Str. 13, Leipzig 04317, Germany)

Background: The study was integrated into an interdisciplinary research project to develop a new haptic high-fidelity simulation system for spine surgery. The substantial demands for a change in surgical training in Germany, and the high rating and support of the innovation idea by medical experts and trainees, were the driving forces behind it. A central hypothesis was that surgical training can be improved by standardizing surgical actions and thus creating effective training modules.

Summary of work: A Cognitive Task Analysis (CTA) was conducted in a multicentre study by a team of social scientists. 36 observations in the OR as well as 17 partially standardized interviews were analyzed. Consultants and residents were asked to validate the observed surgical workflows and the first draft of a scenario-based training. Based on these data, the training concept was developed.

Summary of results: The CTA showed that effective simulation training should not only place value on technical skills, but also on non-technical competences, e.g. understanding of the complex anatomy, diagnostic skills, perioperative decision making and communication with the patient. Accompanying feedback techniques were seen as essential by all participants. In 2012 a pilot training was conducted with 4 consultants as trainees, 1 surgical expert as master presenter and 2 advanced consultants as trainers. All participating surgeons rated the training as very good in all defined categories.

Conclusions: Interdisciplinary research on surgical training is required to improve educational competence for surgical trainers.

Take-home messages: Combining simulator training with a training concept is necessary.

4GG/2

Development and initial evaluation of a minimally invasive spine surgery simulator

Monique J Boomsaad (University of Michigan, Neurosurgery, 1500 E. Medical Center Dr., Taubman Center 3552, Ann Arbor 48109, United States)
Deborah Rooney (University of Michigan, Medical Education, Ann Arbor, United States)
Thomas J Wilson (University of Michigan, Neurosurgery, Ann Arbor, United States)
Jorge Sanz-Guerrero Cosulich (University of Michigan, Ann Arbor, United States)
Bruce Tai (University of Michigan, Ann Arbor, United States)


Stephen E Sullivan (University of Michigan, Neurosurgery, Ann Arbor, United States) (Presenter: Tony Tsai, University of Michigan, Ann Arbor, United States)

Background: We developed a high-fidelity lumbar spine simulator for teaching neurosurgical residents to perform minimally invasive microdiscectomies. Feedback from the first prototype guided refinements of the second prototype. We evaluated both prototypes for content validity.

Summary of work: Four neurosurgery residents and one attending neurosurgeon (n=5) evaluated the first prototype. Eight residents and five attendings (n=13) evaluated the second prototype. Evaluators incised, dissected, placed tubular retractors, and reviewed fluoroscopic images of the model. Participants then completed a survey rated on a four-point scale, ranging from "Not at all realistic" [1] to "Highly realistic" [4], in several domains including physical attributes (PA), realism of experience (RE) and ability to perform tasks (A). Participants rated the simulator's value (V) and relevance (R) on a five-point scale.

Summary of results: The observed averages for the first and second prototypes, respectively, were 3.2 and 3.4 (PA), 3.1 and 3.6 (RE), 3.1 and 3.8 (A), 4.6 and 4.9 (V), and 4.8 and 4.7 (R). Significant rating differences were identified for realism (p=0.047) and value (p=0.005). The final global rating of 3.2 indicated that the simulator "can be used as is for training, but could be improved slightly". Final inter-rater reliability was high, ICC(2,13) = 0.81, 95% CI [0.61, 0.93], suggesting high participant rating agreement.
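The ICC(2,13) reported above is the average-measures form of the two-way random-effects intraclass correlation (in Shrout and Fleiss's notation). A self-contained sketch of the computation is below; the ratings matrix is invented, so the output is not the study's value.

```python
# Sketch of ICC(2,k): two-way random effects, average-measures agreement.
import numpy as np

def icc2k(x: np.ndarray) -> float:
    """x: targets (rows) x raters (columns); returns ICC(2,k)."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows (targets)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # cols (raters)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

rng = np.random.default_rng(2)
true_q = rng.uniform(2.5, 3.8, size=(10, 1))    # 10 survey items, "true" level
ratings = np.clip(np.round(true_q + rng.normal(0, 0.4, size=(10, 13))), 1, 4)
print(f"ICC(2,13) = {icc2k(ratings):.2f}")      # 13 raters, as in the study
```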
