ABSTRACT BOOK: SESSION 4 MONDAY 26 AUGUST: 1400-1530

Conclusions: TBL was an effective learning strategy that produced learning outcomes equal to those of LBL, but it resulted in high medical student satisfaction.

4DD/14

Assessment of the effectiveness of team based learning in Pathology

Amitabha Basu (St Matthews University School of Medicine, Pathology, Grand Cayman, Cayman Islands) Anthony Lyons (St Matthews University School of Medicine Leeward III, Regatta Office Park West Bay Road, PO Box 30992, Grand Cayman, Cayman Islands) S K Biswas (St Matthews University School of Medicine Leeward III, Grand Cayman, Cayman Islands)

Background: We undertook this project to increase awareness of the value of team effort, to motivate students to learn independently, and to strengthen their problem-solving skills.

Summary of work: TBL is used at the end of each session of didactic lectures. It starts with group selection, which is random and unalterable. Each session begins with 10 case-based questions, which students answer online individually. They then assemble in groups, re-answer the same questions as a team, and self-grade using a self-scoring scantron. Summary of results: Students felt TBL was effective. They could understand their mistakes while discussing the questions in a group and also learned how to solve a case. Some of them made new friends. Long-term improvement was noted among 43%. Conclusions: Real-time feedback motivates students to put in more effort to learn. Group discussions help students to look at a concept from different perspectives and to learn the value of team effort. TBL is an effective method in a highly condensed, concept-based course like pathology.

Take-home messages: Medical care is delivered not by an individual but by a trained team. TBL gives students an opportunity to coach others in how to learn and to take an active role.

4DD/15

Comparison of the effectiveness and satisfaction between lecture based and team based learning program of medical students in gynecology

Panya Sananpanichkul (Prapokklao Hospital, Obstetrics and Gynecology, Leap Neon Road, Tambon Wat Mai, Ampur Mueung, Chanthaburi 22000, Thailand)

Background: To compare the knowledge gain and satisfaction with learning between a lecture-based and a team-based learning program. Summary of work: The students were divided by stratified sampling into two groups: one received the lecture-based learning program and the other received the team-based learning program. Both groups were tested with multiple-choice examinations covering fifteen topics. Satisfaction with learning was rated by the students on a five-point scale. Summary of results: 1) The posttest score was statistically higher than the pretest score in both learning programs (p < .01); 2) Pretest scores did not differ significantly between the lecture-based and team-based groups, nor did posttest scores, although the posttest tended to show a significant difference; 3) Satisfaction scores differed significantly between the two groups for team skill, team communication, team unity and readiness assurance (p < .05 and p < .01), but showed no significant difference for knowledge, verbal communication, group responsibility, timing and data support from the teacher. Conclusions: There was no difference in knowledge gain between the lecture-based and team-based learning programs. Team-based learning received higher satisfaction scores for team skill, communication, unity and readiness assurance.

Take-home messages: There was no difference in knowledge gain between the lecture-based and team-based learning programs. Further research with a larger sample size and a well-designed protocol is required.

4DD/16

Does active participation in TBL promote individual learning?

Masanaga Yamawaki (Kyoto Prefectural University of Medicine, Medical Education & General Medicine, 465 Kajiicyo, Nakagyo-ku, Kyoto 6028566, Japan)

Jin Irie (Kyoto Prefectural University of Medicine, Medical Education & General Medicine, Kyoto, Japan) Kensuke Shiga (Kyoto Prefectural University of Medicine, Medical Education & General Medicine, Kyoto, Japan) Hiroko Mori (Kyoto Prefectural University of Medicine, Medical Education & General Medicine, Kyoto, Japan)

Background: Team-based learning (TBL) is a learner-centered teaching strategy designed to promote active engagement and deep learning. Our research question was whether contribution to a team is correlated with the final examination score in TBL. Summary of work: One hundred and one 5th-grade medical students at our university took their first TBL class in general internal medicine. We used the response analyzer "LENON" (Terada Electric Works Co. Ltd.), which can analyze class members' opinions face to face in real time and includes a "PC Scratch Card" function. We analyzed the relationship between the extent of participation in a group and the individual score on the final examination.

Summary of results: The degree of participation and discussion in a group was closely related to individual understanding of clinical reasoning (p<0.01). iAPP (individual application) was not correlated with tAPP (team application). tAPP, but not iAPP, was related to the proportion of participation (p<0.05). Conclusions: Our results indicate that the final examination score is related to individual understanding of the clinical case rather than to group activity. Take-home messages: In addition to encouraging participation and good teamwork, TBL sessions should be carefully designed to improve students' ability to apply important concepts.
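As a purely illustrative sketch (the abstract does not state which software was used), the fragment below correlates invented participation shares with invented examination scores, the kind of relationship the study examined:

```python
# Hypothetical illustration only: correlate each student's share of group participation
# with their individual final-examination score. All values are invented; the names
# "participation" and "final_exam" are not taken from the study.
from scipy import stats

participation = [0.35, 0.50, 0.62, 0.44, 0.71, 0.28, 0.55, 0.66, 0.48, 0.59]
final_exam = [58, 64, 72, 61, 78, 52, 66, 74, 60, 70]

r, p_value = stats.pearsonr(participation, final_exam)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```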

4DD/17

A feedback model to promote both educator's teaching strategy and learner's learning effectiveness of Team-Based Learning class in a hybrid course

Wei-Te Hung (Chung Shan Medical University, Department of Anesthesiology and Center of Faculty Development, College of Medicine, No.110, Sec I, Jian Guo N. Rd, Taichung 403, Taiwan) TH Chen (Chung Shan Medical University, Department of Anesthesiology, College of Medicine, Taichung, Taiwan) CY Chan (Chung Shan Medical University, Department of Anesthesiology, College of Medicine, Taichung, Taiwan) KC Ueng (Chung Shan Medical University, College of Medicine, Taichung, Taiwan)

Background: Team-based learning (TBL) was introduced into our college as part of a medical education reform. In our department we used a feedback model combining quantitative feedback (course design, learning goals, self-learning, group interaction, learning effectiveness) and qualitative feedback on influential factors (educator's preparation and MCQ discussion, handout material, MCQ design, group interaction) to accelerate the incorporation of the TBL class into a hybrid course.

Summary of work: Quantitative data were collected using an immediate response system (IRS), with ratings from 1 (worst) to 5 (best), and qualitative data were collected as free-text learner feedback immediately after the class. Based on the results of the feedback model, we adjusted the course and the structure of the TBL class in the second and third years. Data were compared using unpaired t-tests and Chi-square tests.

Summary of results: TBL was introduced into two topics (2/10, 20%) in the first year, one topic (1/10, 10%) in the second year and two topics (2/10, 20%) in the third year. The outcomes of the first and third years were compared and showed significant improvements: course design 3.5 vs 4.7, learning goals 3.4 vs 4.6, group interaction 2.5 vs 4.7, self-learning effectiveness 3.2 vs 4.6 and learning effectiveness 3.1 vs 4.7. Qualitative feedback on the influential factors indicated that the educator's preparation and the MCQ discussion played an important role in learners' learning effectiveness in the TBL class (P<0.05).

Conclusions: The feedback model helped us to identify weak areas among the influential factors, improved our teaching strategy and promoted the incorporation of the TBL class into a hybrid course.
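A minimal sketch, with invented ratings, of the two comparisons named in this abstract: an unpaired t-test on first-year versus third-year IRS ratings and a Chi-square test on a derived categorical outcome. Neither the data nor the variable names come from the study.

```python
# Hypothetical sketch only: compare first-year and third-year IRS ratings (1 = worst,
# 5 = best) with an unpaired t-test, and a derived satisfied/not-satisfied split with
# a Chi-square test. All numbers are invented.
from scipy import stats

year1 = [3, 4, 3, 3, 4, 3, 2, 4, 3, 3]
year3 = [5, 4, 5, 5, 4, 5, 4, 5, 5, 4]

t_stat, p_t = stats.ttest_ind(year1, year3)           # unpaired t-test
table = [[sum(r >= 4 for r in year1), sum(r < 4 for r in year1)],
         [sum(r >= 4 for r in year3), sum(r < 4 for r in year3)]]
chi2, p_chi, dof, _ = stats.chi2_contingency(table)   # Chi-square test

print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"chi2 = {chi2:.2f} (df = {dof}), p = {p_chi:.4f}")
```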

4FF ePosters: Clinical Assessment and the OSCE

Location: North Hall, PCC

4FF/1

R-C-T or ethics? How to assess a rater-training for OSCE without discriminating against students

K Schuttpelz-Brauns (Medical Faculty Mannheim, Heidelberg University, Division for Study and Teaching Development, Theodor-Kutzer-Ufer 1-3, Mannheim 68167, Germany)

E Narcifi (Medical Faculty Mannheim, Heidelberg University, Division for Study and Teaching Development / Competence Center of the Practical Year, Mannheim, Germany)

J Kaden (Medical Faculty Mannheim, Heidelberg University, TheSiMa, Mannheim, Germany) U Obertacke (Medical Faculty Mannheim, Heidelberg University, Competence Center of the Practical Year, Mannheim, Germany)

H Fritz (Medical Faculty Mannheim, Heidelberg University, Division for Study and Teaching Development, Mannheim, Germany)

N Deis (Medical Faculty Mannheim, Heidelberg University, Division for Study and Teaching Development, Mannheim, Germany)

Background: To prove the effectiveness of a training, a randomised controlled trial (R-C-T) is the best design to conduct. In the case of rater training for summative OSCEs, however, there are ethical reasons for not conducting an R-C-T within the assessment process. Furthermore, it is rather expensive to implement an OSCE just to test the effectiveness of rater training. Several authors, such as Bloch & Norman (2012) and Tavakol & Dennick (2012), use G-theory to analyze OSCE data. For evaluating training effectiveness, G-theory seems to be a solution as well: effective rater training should, among other facets, increase the explained variance of stations.

Summary of work: During a summative 12-station OSCE with 5 rounds, assessors and simulation patients rated 114 students using the Berlin Global Rating Scale (BGR; Scheffer, 2009) as a formative tool for communicative competencies. All 26 simulation patients received one hour of training in rating with the BGR; 19 assessors received a short briefing on the BGR and 20 assessors missed the training. To validate the results, they additionally rated one multiple true-false item (history taking). We used G-theory to analyze the data.

Summary of results: When no training or only a short briefing was conducted, 0-6% of the variance was explained by station. The most variance (17-35%) was explained when no training was necessary, as shown, for example, by the history-taking item. Conclusions: G-theory is a possible way to show the effectiveness of rater training without conducting an R-C-T. Further analyses are needed to explore this methodology.

Take-home messages: See G-theory as you have never seen it before!
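A full G-study estimates variance components with dedicated software; the fragment below is only a simplified, hypothetical sketch of the underlying idea, estimating roughly what share of simulated rating variance lies between stations. The simulated effect sizes and all numbers are invented.

```python
# Simplified, hypothetical sketch: simulate station-by-student ratings and estimate
# roughly what share of the total variance lies between stations. This stands in for
# the station variance component a full G-study would estimate; it is not G-theory proper.
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_students = 12, 114
station_effect = rng.normal(0.0, 0.4, size=(n_stations, 1))   # between-station spread
student_effect = rng.normal(0.0, 1.0, size=(1, n_students))   # between-student spread
noise = rng.normal(0.0, 0.8, size=(n_stations, n_students))   # residual error
scores = 3.0 + station_effect + student_effect + noise        # station x student ratings

total_var = scores.var(ddof=1)
between_station_var = scores.mean(axis=1).var(ddof=1)         # variance of station means
print(f"~{100 * between_station_var / total_var:.0f}% of total variance lies between stations")
```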


4FF/2

The effects of role-player-candidate interactions on fairness in the Clinical Skills Assessment

Pauline Foreman (Royal College of General Practitioners, CSA Core Group, Baldwins Lane Surgery, Croxley Green WD3 3LG, United Kingdom)

Kamila Hawthorne (Cardiff University, Institute of Medical Education, Cardiff, United Kingdom)

Background: The Clinical Skills Assessment (CSA) is a high-stakes licensing examination for UK General Practice. It is an OSCE-style exam comprising a simulated surgery of 13 cases. In common with other postgraduate examinations, the CSA has differential pass rates between some subgroups of candidates, particularly between international medical graduates (IMGs) and UK-trained graduates (34.7% vs 90.1% pass rates). Some candidates who fail allege that the CSA is unfair because of roleplayer bias against them. Summary of work: An observational study of roleplayer performance, using a semi-structured observation tool, was conducted during the Spring 2013 assessments. Video recordings of consultations identified by individual assessors as showing possible roleplayer differences were analysed in more detail by groups of assessors to establish whether there was agreement that there had been a genuine difference in roleplayer performance, whether it affected the subsequent roleplayer-candidate interaction, and whether it altered the overall challenge of the case.

Summary of results: 461 consultations were observed. 66 consultations were reviewed on DVD. Initial results suggest that 25 consultations (5.4%) appear to demonstrate significant differences in roleplayer performance. These differences do not appear to be related to ethnicity or gender characteristics of the candidates, and most commonly appear to be a form of 'saving' behaviour.

Conclusions: Variations in roleplayer performance between candidates can largely be explained by variations in the candidates' performance. Take-home messages: There is no evidence of systematic bias by roleplayers against any candidate subgroup in the CSA.

4FF/3

360-degree evaluation of residents on communication and interpersonal skills: inter-rater variation in judgment

Muhammad Tariq (Aga Khan University, Medicine, Stadium Road, POBox 3500, Karachi 74800, Pakistan)

John Boulet (FAIMER, ECFMG, Philadelphia, United States)

Background: Effective communication and interpersonal skills are key components of the optimal performance of any health care professional. To assess residents' performance, particularly with respect to their communication and interpersonal skills, and to identify potential areas needing improvement, we conducted 360-degree evaluations of residents as a novel step in our setting.

Summary of work: A cross-sectional survey of 49 residents was conducted. Using a 360-degree evaluation technique, every resident was evaluated by eight co-workers; a self-evaluation was also completed. The data were analyzed using SAS version 9.1. Analysis of variance (ANOVA) was employed to test for differences in mean scores, both by rater type and by residency year. Summary of results: We received a total of 367 completed forms for the 360-degree evaluations (response rate 83.2%). There was a significant effect attributable to rater type (F=5.2, p<0.01). There were significant differences in mean ratings (p<.05) between the unit staff (M=6.2, SD=1.3) and self-evaluations (M=5.4, SD=1.0), and between unit staff and nurses (M=5.4, SD=1.3). There was a statistically significant difference between the ratings by the nurses and the faculty (p<0.01). The mean resident self-assessment scores were significantly lower than those provided by faculty (p<0.01).

Conclusions: The 360-degree evaluation technique is effective for measuring the communication skills of trainees. Individuals who interact with trainees on a regular basis can provide meaningful judgments of their abilities. A diverse group of raters provides perspectives from different angles, based on their varying interactions with the trainees.

Take-home messages: The 360-degree evaluation technique is an effective tool for assessing communication skills.
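A minimal sketch, with invented ratings, of the rater-type comparison this abstract reports: a one-way ANOVA across rating sources. The study itself used SAS 9.1; the rating scale and group labels below are assumptions for illustration only.

```python
# Hypothetical sketch only: one-way ANOVA testing whether mean communication ratings
# differ by rater type in a 360-degree evaluation. All ratings are invented and the
# rating scale is assumed for illustration.
from scipy import stats

self_ratings = [5.4, 5.1, 5.8, 5.2, 5.6]
nurse_ratings = [5.2, 5.6, 5.1, 5.7, 5.3]
faculty_ratings = [6.0, 5.8, 6.3, 5.9, 6.1]
unit_staff_ratings = [6.2, 6.0, 6.4, 6.1, 6.3]

f_stat, p_value = stats.f_oneway(self_ratings, nurse_ratings,
                                 faculty_ratings, unit_staff_ratings)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```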

4FF/4

MasterCase or MasterChef?

Daniel Lin (University of Sydney, Medicine, Parramatta Rd, Sydney 2006, Australia)

Background: Long Case MasterClass is an innovative one-day educational program which covers the basic clinical skills of the medical consultation. It is part of a broader program called "CASE: Clinical Approach to Structured Examinations", which assists Year 3 medical students in preparing for their medical long case examination. Summary of work: The program has been developed to help students develop specific clinical skills through a structured approach. At the end of the day, the two medical students who score best in the specific skills sessions are challenged to perform a long case and then present it to three judges. The winner is crowned Long Case MasterClass Champion! Summary of results: Topics: History in the making - History Taking; Let's get physical! - Physical Examination; What's your problem? - Problem Listing: Problem Identification and Prioritising; Dr House - Diagnostic dilemmas - Symptoms and Signs: Differentials; Testing times - Investigation; My Management Plan - Management Plan; Perfect Presentations - Presentation; Let's talk about SEX - Discussion. Conclusions: Students are driven most strongly by assessment, and a range of teaching methods and resources is needed to support their learning. Having some fun along the way is also important.


4FF/5

Examining the function of Mini-CEX as an assessment and learning tool: factors associated with the quality of written feedback within the Mini-CEX

Diantha Soemantri (Faculty of Medicine, Universitas Indonesia, Department of Medical Education, Salemba Raya no 6, Central Jakarta, Jakarta 10430, Indonesia) Agnes Dodds (Melbourne Medical School, University of Melbourne, Medical Education Unit, Melbourne, Australia)

Geoff McColl (Melbourne Medical School, University of Melbourne, Medical Education Unit, Melbourne, Australia)

Background: The Mini Clinical Evaluation Exercise (Mini-CEX) has been implemented as a means of providing feedback in undergraduate medical education, despite limited evidence to support its use in prevocational training. This study aimed to examine factors associated with the quality of written feedback provided during Mini-CEX sessions. Summary of work: Assessment forms collected from 1427 Mini-CEX sessions in a large Australian medical school were analysed. The written feedback, both on students' strengths and on suggestions for development, was categorized and related to the ratings of clinical performance, clinical case complexity and assessors' clinical position.

Summary of results: There was a relationship between the type of feedback and the assessor's position; for example, consultants were more likely to give specific feedback (χ²=34.72, df=9, p<.01, z=2.2). The higher the clinical case complexity, the more likely feedback was to be provided (χ²=17.48, df=3, p<.01, z=-3.2). The scores on the 7 domains of clinical performance were intercorrelated (r=.59 to .79). Higher scores were less likely to be associated with specific feedback (r=-.22 to -.28). The quality of written feedback was influenced by the assessors' clinical position and the clinical case complexity. The lack of variation between domain scores may indicate that the Mini-CEX is not making fine distinctions between specific areas of clinical performance.

Conclusions: Standardization of Mini-CEX assessors and adequate exposure to clinical cases of varying complexity are required to improve the quality of written feedback.

Take-home messages: Assessors' clinical position and clinical case complexity are two influential factors affecting feedback in the Mini-CEX.
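A minimal sketch, with invented counts and hypothetical position/feedback categories, of the kind of Chi-square test of independence reported above between assessor position and type of written feedback:

```python
# Hypothetical sketch only: Chi-square test of independence between assessor position
# and type of written feedback. The counts and category labels are invented; the study's
# own contingency table is not reproduced here.
from scipy import stats

# rows: assessor position (hypothetical); columns: feedback type (specific, general, none)
observed = [[120, 60, 20],   # consultants
            [ 80, 90, 30],   # registrars
            [ 50, 70, 40]]   # junior medical staff

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```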

4FF/6

Assessing the Long case based on SLICE (Structured Long Interview And Clinical Examination): an Action Research Approach

Rehan Ahmed Khan (Islamic International Medical College, Riphah University, Surgery, Rawalpindi, Pakistan)

Khalid Farooq Danish (Islamic International Medical College, Riphah University, Surgery, Rawalpindi) Masood Anwar (Islamic International Medical College, Riphah University, Pathology, 274, Peshawar Road, Rawalpindi 46000, Pakistan)

Background: The OSLER is a valid and reliable tool for assessing the long case. In our local setting, examiners in the undergraduate examination are not used to structured long-case assessment, and a single examiner has to assess at least 15-20 students on a long case within two hours. To meet our objectives, we needed a tool tailor-made for local needs. Summary of work: An action research approach was used. The problem of the lack of a structured long-case examination was identified. The OSLER was found not to cater for the needs of the local context: it was felt that more time would be required to train the examiners to conduct the OSLER and to complete the long-case assessment in the specified time. Hence focus group discussions and interviews were held with senior faculty and medical educationists to develop a modified tool named SLICE.

Summary of results: SLICE was used to assess long cases and was found to be a feasible, valid and reliable tool. It was also found to be easier to teach to the examiners.

Conclusions: The OSLER is a very good tool for assessing the long case; however, by modifying and rewriting it, an instrument was developed with better feasibility and applicability in the local context. Take-home messages: Established tools may need modification or change so that new or adapted instruments can be designed while keeping the spirit of structured, systematic examination alive.

4FF/7

Student-led mock clinical assessment successfully prepares medical students for their first OSCE

Ben Holden (University of Sheffield, Medical Society, Medical School, Beech Hill road, Sheffield S10 2RX, United Kingdom)

Steve Churchill (University of Sheffield, Medical Society, Sheffield, United Kingdom)

Matthew Livesey (University of Sheffield, Medical Society, Sheffield, United Kingdom)

Alexander Burnett (University of Sheffield, Medical Society, Sheffield, United Kingdom)

Kabir Nepal (University of Sheffield, Medical Society, Sheffield, United Kingdom)

Philip Chan (University of Sheffield, Academic Unit of Medical Education, Medical School, Sheffield, United Kingdom)

Background: The first objective structured clinical examination (OSCE) can be a daunting experience. Sheffield Medical Society (MedSoc) organises a mock OSCE for third-year students. We surveyed the 2012 mock OSCE in order to assess its effectiveness. Summary of work: The mock examination was written and organised by senior students; all patients and examiners were trained student volunteers. The exam consisted of eight 8-minute stations: six minutes for the history and examination followed by two minutes for questions and individualised feedback (including mark schemes) from the patient and examiner. An online questionnaire was distributed to third-year students at three separate time points: before and after the mock OSCE and after the real OSCE. Students were asked to rate themselves on a 5-point Likert scale across eight domains, including time management and overall preparedness. Statistical analysis was by paired and unpaired t-tests as appropriate.

Summary of results: The mean overall level of preparedness before the students sat the mock OSCE was 2.60±0.13 (n=126), improving to 3.61±0.15 (n=75) after the mock OSCE (p<0.0001) and 4.21±0.19 (n=58) after the real OSCE (p<0.0001).

Conclusions: We found that the peer-organised mock OSCE prepared students well for the real OSCE. Whilst there was a statistically significant improvement in all domains, students found the mock particularly useful for familiarising themselves with the type of questions asked and the marking process used in the OSCE. Take-home messages: Student-organised mock clinical assessment successfully prepares medical students for their first summative OSCE.
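A minimal sketch, with invented 5-point ratings, of the unpaired t-test comparisons described above between time points (respondent numbers differed at each time point, as in the study):

```python
# Hypothetical sketch only: compare mean preparedness ratings (5-point Likert scale)
# between time points with unpaired t-tests. All ratings are invented; the study's
# raw questionnaire data are not reproduced here.
from scipy import stats

before_mock = [2, 3, 2, 3, 2, 3, 3, 2, 2, 3]
after_mock = [4, 3, 4, 4, 3, 4, 4, 3, 4, 4]
after_real = [4, 5, 4, 5, 4, 4, 5, 4, 5, 4]

t1, p1 = stats.ttest_ind(before_mock, after_mock)
t2, p2 = stats.ttest_ind(after_mock, after_real)
print(f"before vs after mock OSCE:     t = {t1:.2f}, p = {p1:.4f}")
print(f"after mock vs after real OSCE: t = {t2:.2f}, p = {p2:.4f}")
```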

4FF/8

Giving Feedback after the OSCE

Maren März (Charité Universitätsmedizin Berlin, Dieter Scheffner Center for Medical Teaching and Educational Research, Charitéplatz 1, Virchowweg 24, Berlin 10117, Germany)
