Author unknown - MededWorld and AMEE 2013 Conference Connect - page 16


Background: Passing step 1 of the national medical licensing test (NT) is important for Thai medical students. This study aimed to compare learning strategies between medical students who passed and those who failed NT step 1 at Thammasat University. Summary of work: Third-year medical students at Thammasat University were studied. A self-report questionnaire of learning strategies, consisting of 10 subscales with 44 items on 5-point Likert scales, was completed. The questionnaire was modified from the Learning and Study Strategies Inventory (LASSI) to assess students' thoughts, behaviors, attitudes and beliefs related to learning. Mean scores of each subscale were compared by independent t-test between students who passed and those who failed the test. Summary of results: Of 113 students completing the questionnaire, 87 passed the NT. The successful students showed significantly higher mean scores on 5 subscales: attitude toward learning (p=0.003), time management in academic situations (p=0.036), concentration in class (p=0.024), ability to process information (p=0.043) and skill in identifying important material (p=0.013). The successful students also had higher mean scores on the remaining subscales, including motivation, anxiety when approaching a task, use of study aids, self-testing and test preparation strategies, but these differences did not reach statistical significance.

Conclusions: Better learning strategies were confirmed to result in passing NT step 1 in pre-clinical medical students.

Take-home messages: Learning strategies are important for medical students to pass NT step 1.


Relations between results of oral and written forms of exam on pathophysiology in 3rd year medical students

Jan Hanacek (Comenius University, Jessenius Faculty of Medicine, Department of Pathophysiology, Sklabinska 26, 0, Martin 03601, Slovakia)

Background: As summative forms of assessment of knowledge of pathophysiology we use classic methods: a multiple-choice question (MCQ) test and an oral examination (OE). We had not yet analysed whether there is any relation between the results of these two forms of assessment. Such an analysis may provide useful information on the quality of examination. Summary of work: We analysed the results of students who underwent examination in pathophysiology during 2004-2012. The written MCQ test consists of 60 questions, with one point given for each correct answer. The oral exam consists of 3 questions; the marks A, B, C, D, E and Fx were used to assess the presented knowledge. Summary of results: Over the observed period we found a tendency toward improving marks on the oral exam, while the results of the MCQ test showed a slight tendency to worsen. Generally, there is a positive relation between the average mark obtained at the oral exam and the number of points obtained on the test in each year.

Conclusions: Both the MCQ test and the oral exam should be used for objective evaluation of knowledge of pathophysiology in medical students.

Take-home messages: Regularly evaluate the assessment methods used in medical education, and their results, to improve fairness to students.

Supported by Project ITMS: 26110230031
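The positive relation reported between oral-exam marks and MCQ points can be checked by mapping the letter marks to numbers and computing an ordinary correlation coefficient. A minimal Python sketch (the A-to-Fx mapping and the data are illustrative assumptions, not the study's actual scale or results):

```python
import numpy as np

def mark_score_correlation(oral_marks, test_points):
    """Pearson correlation between oral-exam marks (A best ... Fx fail)
    and MCQ test points; marks are mapped to numbers so that a positive
    coefficient means better marks go together with more points."""
    scale = {'A': 6, 'B': 5, 'C': 4, 'D': 3, 'E': 2, 'Fx': 1}
    y = np.array([scale[m] for m in oral_marks], dtype=float)
    x = np.asarray(test_points, dtype=float)
    return np.corrcoef(x, y)[0, 1]
```

With illustrative data such as marks ['A', 'B', 'C', 'D'] against points [58, 50, 42, 30], the coefficient is strongly positive, matching the relation described above.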


Fun and Formative Assessment in a Medical School with a Large Class Size

Elisabeth F. M. Schlegel (Ross University School of Medicine, Medical Microbiology and Immunology, P.O. Box 266, Roseau Commonwealth of Dominica 00152, Dominica)

Nancy J. Selfridge (Ross University School of Medicine, Integrated Medical Education, Roseau, Dominica)

Background: In medical education, IT-supported games are stimulating instructional tools useful for formative assessments, providing valuable feedback to students and teachers. We show that they can accommodate large classes divided into teams. Having students compete in groups also emphasizes collaborative skills. Summary of work: At Ross University School of Medicine, classes entering Biomedical Sciences education are large. A fast-paced, competitive, interactive quiz game on dermatology was developed for class-wide participation. Held on the last day of class, it prepares students for the high-stakes exams required to continue their medical education. Summary of results: Teaching and assessing dermatology rely heavily on images. Audience response technology allows questions about diverse skin diseases, images, and applied knowledge to be posed quickly. Faculty deliver the questions, most of which are second- and third-order. Up to 48 groups compete for prizes, making each gaming competition an exciting event. Competitions provide motivational settings that allow students to assess their own knowledge and become aware of any gaps. Information technology lets students take ownership of their knowledge as they interact with their teammates and gauge their exam readiness. Prudent educational management is required to implement such a session.

Conclusions: Educational game competitions provide two-way assessments for students and faculty alike, deepening learning and teaching processes. They can accommodate large numbers of students. Take-home messages: Educational game competitions are useful tools for formative assessment for large groups of students.


Strategies for Evaluation Used in Undergraduate Health Courses (Rehabilitation): Considerations for Educational Practice

Luciana Costa Silva (Medical School of Ribeirao Preto University Of Sao Paulo, Medical Clinic, Rua Tibiriga, n° 1457, Ribeirao Preto 14025-009, Brazil)

Maria Paula Panuncio-Pinto (Medical School of Ribeirao Preto University Of Sao Paulo, Neuro and behavioral sciences, Ribeirao Preto, Brazil) Luis Ernesto Almeida Troncon (Medical School of Ribeirao Preto University Of Sao Paulo, Medical Clinic, Ribeirao Preto, Brazil)

Felipe Alves de Oliveira (Medical School of Ribeirao Preto University Of Sao Paulo, Ribeirao Preto, Brazil)


Background: At a time when teachers are called on to rethink curricula, educational projects, and teaching and assessment practices, it is an important challenge to implement new assessment strategies that overcome the traditional educational model. The assessment of learning in health courses takes an important place: evaluating the development of skills and competencies demands the adoption of progressive assessment, involving different strategies that contemplate both acquiring knowledge and developing attitudes. Summary of work: This is an initial exploratory study aimed at identifying the assessment strategies used in the undergraduate courses of Physiotherapy, Speech Therapy and Occupational Therapy (FMRP-USP). It is a documentary study covering 208 disciplines of these courses, analyzing summaries and timetables available online, following a reading guide. Summary of results: There was a high prevalence of strategies considered "traditional": 73.92% of the disciplines used theoretical tests, followed by seminars (46.44%), practical exams (20.62%), clinical case discussions (14.88%), papers (13.31%) and portfolios (5%).

Conclusions: These results indicate a very strong presence of strategies considered "traditional" in the undergraduate courses studied. This approach requires confirmation and complementation, hearing students and teachers in order to better assess the significance of these findings.

Take-home messages: While it is possible to identify the slight emergence of "non-traditional" assessment, measures to overcome the traditional educational model can be recommended.


Learning from your mistakes after an exam: related to learning styles, reflection or insight?

Debra Sibbald (University of Toronto, Leslie Dan Faculty of Pharmacy, Toronto, Canada)

Matt Sibbald (University Health Network, Interventional Cardiology, University of Toronto, Leslie Dan Faculty of Pharmacy, 144 College Street, Toronto M5S 3M2, Canada)

Background: Collaborative learning after summative exams needs further study. Student ability to detect mistakes may relate to their ability to self-reflect or learning style.

Summary of work: 200 students chose 5 questions they thought they had got wrong in a midterm multiple-choice exam. Immediately afterwards, randomized groups discussed the questions in a reflective exercise. At home, students researched the questions, completed a learning style inventory and a validated tool assessing self-reflection and insight (SRSI) behavioral measures. During the final exam, students self-selected 5 questions with no group discussion. Actual performance on these 5 questions and opinions on the reflective exercise were compared with learning styles and SRSI. Summary of results: Students actually got 3.1±1.1 of these 5 questions wrong. Only the insight SRSI subscore predicted ability to detect mistakes. Neither learning style nor the reflection subscales of the SRSI were predictive. Perceived value of the reflective exercise varied by learning style and correlated with the engagement-in-reflection and need-for-reflection SRSI subscores. Doers scored lower than thinkers or feelers on both. Thinkers and feelers valued the exercise more than doers or watchers on most measures. Conclusions: Student characteristics correlate poorly with ability to detect mistakes. However, the perceived value of a post-exam reflective exercise designed to improve this ability did correlate with learning style and SRSI reflection subscores.

Take-home messages: Understanding characteristics which enhance important skills of reflection and error detection may improve curricular design. Not all students perceive their value: focus efforts to engage doers and watchers.


Are Supplementary/Re-sit Examinations Valid?

Nadia Al Wardy (Sultan Qaboos University, Medical Education Unit, P.O. Box 35, College of Medicine & Health Sciences, Al Khod 123, Oman)

Background: In phase II of the MD degree programme, students who fail one or two courses/modules in a semester are allowed to re-sit the examinations after a short remedial period. Students' grades can be revised from F to any grade on the normal A to F scale, depending on the in-course assessment marks and the marks obtained on the second sitting of the failed component. The main purpose of these re-sits was to ensure that students had achieved minimum competence in a module before being allowed to progress to higher-level modules in the next semester, and would thus progress normally and without delay in their degree programme. Summary of work: The progression of students who were allowed re-sit examinations from fall 2009 to fall 2012 was studied.

Summary of results: The number of students granted re-sit examinations was 49. Seventy-three percent of them are now on probation and/or are repeating a year, regardless of the grade obtained from the re-sit, due to low grades obtained in other modules. Only 1 student has, so far, progressed without delay.

Conclusions: Re-sit examinations in the current system do not seem to fulfil their purpose of preventing students from being delayed in their degree programme.

Take-home messages: The regulations of granting re-sit examinations in the College need to be looked at.



Benefits of negative marking at the European Board of Ophthalmology Diploma (EBOD) examination, both for organiser and candidates

Danny Mathysen (Antwerp University Hospital & University of Antwerp, Department of Ophthalmology, Wilrijkstraat 10, Edegem (Antwerp) 2650, Belgium)

Background: European postgraduate medical assessment has developed over the last 25 years. Currently all European medical speciality examinations use MCQs. The EBOD examination uses multiple independent true/false MCQs for the written part of the examination. Since true/false MCQs may be prone to guessing, thorough statistical evaluation has been set up over the last five years to monitor performance. Summary of work: In 2010, the European Board of Ophthalmology (EBO) decided to introduce negative marking for the MCQs. To study the influence of negative marking on the performance/reliability of the examination, the following statistical performance parameters of test items were compared: P-statistics, Rit-statistics, Cronbach's alpha and 3-parameter item-response analysis.

Summary of results: A decrease in average P-statistics (P<0.66) and an increase in average Rit-statistics (Rit>0.15) and Cronbach's alpha (>0.85) were observed compared with the situation without negative marking (P>0.75; Rit<0.15; alpha<0.80). 3-parameter item-response analysis revealed that the questions were less influenced by correct guessing (average c<0.33). It was also confirmed that female candidates are not disadvantaged regarding their chances of passing EBOD. Conclusions: The introduction of negative marking at EBOD led to better item and examination performance values, without any negative side-effects (such as discrimination against female candidates). Take-home messages: Negative marking at the European Board of Ophthalmology Diploma (EBOD) examination has proven successful both for the organiser and for candidates.
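For readers unfamiliar with the parameters above, the classical item statistics (P, Rit and Cronbach's alpha) can all be computed from a scored response matrix. The following Python sketch is illustrative only, assuming simple 0/1 scoring; it does not reproduce the actual EBOD analysis or its marking scheme:

```python
import numpy as np

def item_statistics(responses):
    """Classical test statistics for a (candidates x items) 0/1 score matrix:
    P per item (proportion correct), Rit per item (item-total correlation)
    and Cronbach's alpha for the whole test."""
    X = np.asarray(responses, dtype=float)
    n_items = X.shape[1]
    p = X.mean(axis=0)                      # P-statistic (facility) per item
    total = X.sum(axis=1)                   # total score per candidate
    rit = np.array([np.corrcoef(X[:, i], total)[0, 1] for i in range(n_items)])
    alpha = n_items / (n_items - 1) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    return p, rit, alpha
```

Under negative marking the matrix would hold -1/0/+1 entries instead of 0/1; the formulas themselves are unchanged.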


The effect of reused multiple-choice questions - a case study in a fifth-year pediatrics rotation

Milton Severo (Faculty of Medicine, University of Porto, Center for Medical Education, Porto, Portugal) Fernanda Silva-Pereira (Faculty of Medicine, University of Porto, Center for Medical Education, Alameda Prof. Hernani Monteiro, Porto 4200-319, Portugal) Tiago Henriques-Coelho (Faculty of Medicine, University of Porto, Department of Pediatrics, Porto, Portugal) Herdlia Guimaraes (Faculty of Medicine, University of Porto, Department of Pediatrics, Porto, Portugal)

Background: Reuse of multiple-choice questions (MCQs) on an examination is a concern because teachers do not want to unfairly aid examinees, for example through earlier examinees sharing item content with later ones.

Summary of work: 238 examinees were assessed within 8 pediatrics rotations in 2011/12. Each examination had 40 MCQs. Of the 161 MCQs designed over the year, 36 (22.4%) were used twice, 15 (9.3%) three times, 13 (8.1%) four times and 12 (7.4%) five or six times. Mixed effects models were used to estimate the effect of MCQ reuse.

Summary of results: The mean number of correct answers increased by 1.4 answers per rotation (p<0.001). The facility index of the MCQs increased by 6.0% per repetition (p<0.001). After adjusting for MCQ repetitions, the increase in the mean number of correct answers dropped to 0.7 answers per rotation (p<0.001). Conclusions: The reuse of MCQs in rotations within the year inflated the mean number of correct answers per rotation. The remaining increase may nevertheless be explained by real differences in examinees' ability or by differences in the difficulty of the examination forms used across rotations. The use of a non-equivalent groups equating design to adjust for differences in the difficulty of test forms across rotations is called into question if multiple reuses of MCQs are needed.

Take-home messages: Multiple reuse of MCQs in rotations within the year is an unfair aid to later examinees and should therefore be avoided, all the more so if teachers want to correct for possible differences in the difficulty of different test forms across rotations.
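The reported 6.0% rise in facility index per repetition is, in essence, a slope over successive administrations of the same item. As a toy illustration (the study itself used mixed effects models, which handle items nested within rotations far more appropriately), a per-item drift can be sketched as:

```python
import numpy as np

def facility_drift(facilities):
    """Estimate the per-repetition change in an item's facility index
    (proportion of correct answers) across successive administrations,
    using an ordinary least-squares slope."""
    y = np.asarray(facilities, dtype=float)
    x = np.arange(len(y), dtype=float)   # administration 0, 1, 2, ...
    return np.polyfit(x, y, 1)[0]        # slope of the fitted line
```

A hypothetical item answered correctly by 50%, 56%, 62% and 68% of examinees over four administrations drifts by 0.06 (6 percentage points) per repetition.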


Cueing Effects of Item Writing Flaws in Multiple-Choice Questions

Marie Tarrant (University of Hong Kong, School of Nursing, 4/F, William M. W. Mong Block, Li Ka Shing Faculty of Medicine, 21 Sassoon Road, Hong Kong) James Ware (The Saudi Commission for Health Specialties, Department of Medical Education and Postgraduate Studies, Riyadh, Saudi Arabia)

Background: Multiple-choice questions are frequently used in high-stakes assessments across health science disciplines. Many test items, however, contain cues to the correct answer, allowing students without the requisite knowledge to answer the test item correctly. The purpose of this study was to examine the cueing effect of five common item-writing flaws: word repeats between the question stem and the correct option; the longest option being correct; use of absolute terms in the options; use of 'all of the above' as an option; and use of 'none of the above' as an option.

Summary of work: We reviewed 3623 test items used in one school of nursing over a 7-year period. Questions were evaluated for 19 frequently occurring item-writing flaws, including five of the above identified flaws. We compared the proportion correct and the item discrimination indices of the flawed items with unflawed items.

Summary of results: Items containing the identified item-writing flaws were significantly less difficult and less discriminating than unflawed items. Students were more likely to select the correct answer in questions that contained cues.

Conclusions: Cueing in multiple-choice questions is common, and the presence of cueing flaws in test items can enhance student guessing on multiple-choice questions.


Take-home messages: Adequate training in item-writing is recommended for all faculty members who are responsible for developing tests. Peer review prior to test administration to identify cueing flaws may also improve the quality of test items and reduce cueing effects.


Comparing functioning and non-functioning distractors in progress tests with four- or five-option multiple-choice questions

Joelcio Francisco Abbade (UNESP - Universidade Estadual Paulista, Gynecology and Obstetrics, Departamento de Ginecologia e Obstetricia - Faculdade de Medicina de Botucatu/UNESP Rubiao Jr, s/n, Botucatu - Sao Paulo 18607-340, Brazil) Angelica Maria Bicudo (UNICAMP - Universidade Estadual de Campinas, Pediatrics, Campinas - Sao Paulo, Brazil)

Claudia Maria Maffei (USP - Universidade de Sao Paulo, Basics Science, Ribeirao Preto - Sao Paulo, Brazil) Sue Yazaki Sun (UNIFESP - Universidade Federal de Sao Paulo, Gynecology and Obstetrics, Sao Paulo, Brazil) Maria de Lourdes Hafner (FAMEMA - Faculdade de Medicina de Marilia, Public Health, Marilia - Sao Paulo, Brazil)

Moacyr Fernandes Godoy (FAMERP - Faculdade de Medicina de Sao Jose do Rio Preto, Cardiology, Sao Jose do Rio Preto, Brazil)

Background: Tests with multiple-choice questions are one way of assessing students' knowledge in progress testing. There are still doubts about the number of options needed to adequately test the knowledge of undergraduate medical students. Our aim was to evaluate the functioning of distractors in tests with 4 and 5 options.

Summary of work: We evaluated the proportion of nonfunctioning distractors on seven progress tests applied to undergraduate medical students. Five tests were prepared with five-option MCQs (5o-MCQ) and two tests with four-option MCQs (4o-MCQ). We selected questions with a difficulty index between 0.25 and 0.75 and a point-biserial correlation coefficient (RPB) greater than 0.2. Nonfunctioning distractors were defined as those chosen by less than 5% of students and/or those with an RPB equal to or greater than zero. We compared functioning and nonfunctioning distractors between tests with 4o-MCQ and 5o-MCQ. Summary of results: The tests prepared with 4o-MCQ had a lower frequency of distractors chosen by less than 5% of students or with an RPB equal to or greater than zero, and a higher frequency of questions with all distractors functioning. There was no difference in reliability when comparing tests with 4o-MCQ and 5o-MCQ.

Conclusions: Distractors were more plausible and attractive to students in tests with 4o-MCQ. Our results suggest that teachers have difficulty preparing a fifth plausible distractor for 5o-MCQ. Take-home messages: Do not spend time preparing a distractor that does not function well; work hard at preparing tests of very good four-option MCQs.
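The two screening criteria used above (a distractor chosen by fewer than 5% of students, or with an RPB of zero or above) are straightforward to automate. A minimal Python sketch, illustrative rather than the authors' actual analysis code, with hypothetical option labels and data:

```python
import numpy as np

def nonfunctioning_distractors(choices, total_scores, keyed, threshold=0.05):
    """Flag distractors chosen by fewer than `threshold` of examinees,
    or whose point-biserial correlation (RPB) with the total score is
    zero or positive (i.e. stronger students attracted to a wrong answer).
    `choices`: option picked by each examinee; `keyed`: the correct option."""
    choices = np.asarray(choices)
    scores = np.asarray(total_scores, dtype=float)
    flagged = []
    for option in sorted(set(choices) - {keyed}):
        picked = (choices == option).astype(float)
        frac = picked.mean()
        # RPB is undefined when nobody (or everybody) picked the option
        rpb = np.corrcoef(picked, scores)[0, 1] if 0 < frac < 1 else 0.0
        if frac < threshold or rpb >= 0:
            flagged.append(option)
    return flagged
```

Note that a distractor picked mainly by high scorers (positive RPB) is flagged even when it is chosen often, matching the definition used in the study.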


Reliability of multimethod assessment of medical students rotating in internal medicine

Krittiya Korphaisarn (Siriraj Hospital, Mahidol University, Medicine, 2 Prannok, Bangkoknoi, Bangkok 10700, Thailand)

Ranittha Ratanarat (Siriraj Hospital, Mahidol University, Medicine, Bangkok, Thailand)

Praveena Chiowchanwisawakit (Siriraj Hospital, Mahidol University, Medicine, Bangkok, Thailand) Varalak Srinonprasert (Siriraj Hospital, Mahidol University, Medicine, Bangkok, Thailand)

Background: Multimethod assessment has been increasingly suggested for the evaluation of medical students in order to overcome the limitations of individual formats. The components of such combinations and their reliability have not been extensively reported. This study aimed to determine the reliability of the multimethod assessments used to evaluate medical students. Summary of work: Assessment methods in the internal medicine rotation for 5th-year medical students at the Department of Medicine, Siriraj Hospital were divided into performance evaluations (PE) and examinations. Performance evaluations were carried out by teachers during teaching sessions, including outpatient clinic, bedside teaching, small-group teaching, patient reports and ward evaluation. Examinations comprised modified essay question (MEQ), lab and long-case exams. Evaluation scores were analyzed using multiple linear regression models to determine the correlation with students' grade point average (GPA), which reflects the students' overall performance. Summary of results: The final composite score from the multimethod assessment showed good correlation with GPA (R² = 0.56). The correlation was attributable to a greater extent to the examination scores. Performance evaluation, however, also showed a significant correlation with students' overall performance. It could therefore be integrated with examinations in evaluating students' performance during their clerkship, as performance evaluations are crucial for determining medical students' clinical competency when encountering patients.

Conclusions: Multiple methods of evaluation, comprising several assessments during teaching activities and examinations focused on clinical diagnosis, show good reliability for evaluating medical students. Take-home messages: Assessment of medical students should include performance evaluation in multiple domains of competency.

2FF ePosters: eLearning 1

Location: North Hall, PCC


Smartphone and tablet usage among medical students in Prince of Songkla University

Polathep Vichitkunakorn (PSU, Faculty of Medicine, Prince of Songkla University, Songkhla 90110, Thailand)

Pleuk Kulrintip (PSU, Community Medicine (5th year medical student), Songkhla, Thailand)

Nanida Tirachet (PSU, Community Medicine (5th year medical student), Songkhla, Thailand)

Parin Boonthum (PSU, Community Medicine (5th year medical student), Songkhla)

Piyada Kongkamol (PSU, Community Medicine, Songkhla, Thailand)

Somchai Suntornlohanakul (PSU, Pediatrics, Songkhla, Thailand)

Background: Smartphones and tablets have many benefits: users can carry them everywhere and communicate with others, and they facilitate learning in medical students. However, there is little information about the usage of smartphones and tablets among medical students, especially in Thailand.

