458 - Improved quality of written evaluations of residents after modifications to the faculty evaluation form
Friday, April 28, 2023
5:15 PM – 7:15 PM ET
Poster Number: 458 Publication Number: 458.122
Kristin Maletsky, Children's Hospital of Philadelphia, Philadelphia, PA, United States; Jessica Hart, Children's Hospital of Philadelphia, Philadelphia, PA, United States; Daniel C. West, Children's Hospital of Philadelphia/U Penn, Philadelphia, PA, United States
Pediatric Hospital Medicine Fellow, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
Background: Regular evaluation of trainees by faculty is essential to furthering the development of future pediatricians, improving the quality and effectiveness of a residency program, and maintaining accreditation. Beyond low completion rates of written evaluations, the quality of written comments varies widely. In July 2021, the faculty evaluation form for residents in our large pediatric training program was modified to focus on entrustable professional activities (EPAs) and to prompt for free-text comments. Suggestions of observed skills and/or behaviors to comment on were also added to the top of the electronic evaluation form.
Objective: To compare the quality of faculty members' written free-text comments on resident evaluations before and after modification of the electronic evaluation form.
Design/Methods: A member of our team de-identified 500 faculty evaluations of residents rotating on general pediatrics and oncology inpatient rotations: 250 pre-intervention (March-April 2021) and 250 post-intervention (March-April 2022). Two other team members independently scored each evaluation using a previously published 7-point scale.1 For the few scoring discrepancies, the raters met and discussed each evaluation until reaching a consensus score. Scores were compared using an independent-samples t-test.
Results: The mean quality score increased by 1.78 points between 2021 and 2022 (p < 0.0001), from 2.98 (SD 1.43) pre-intervention to 4.76 (SD 1.57) post-intervention. Inter-rater reliability, calculated before consensus discussion, was 74% agreement. Examples of free-text responses for each numerical quality score are included in Table 1.
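For readers who wish to see how the pre/post comparison could be run, the following is a minimal sketch in Python. The score arrays are hypothetical stand-ins generated to match the reported means and standard deviations (the real consensus scores are not reproduced here), and SciPy's independent-samples t-test is assumed; the abstract does not specify the statistical software actually used.

import numpy as np
from scipy import stats

# Hypothetical placeholders for the 250 consensus quality scores per phase,
# drawn to match the reported means/SDs and clipped to the 1-7 scale.
rng = np.random.default_rng(0)
pre_scores = rng.normal(loc=2.98, scale=1.43, size=250).clip(1, 7)   # pre-intervention
post_scores = rng.normal(loc=4.76, scale=1.57, size=250).clip(1, 7)  # post-intervention

# Independent-samples t-test comparing mean quality scores.
t_stat, p_value = stats.ttest_ind(post_scores, pre_scores)

print(f"mean difference: {post_scores.mean() - pre_scores.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")

# Percent agreement between the two raters before consensus could be
# computed analogously from hypothetical per-rater arrays rater_a, rater_b:
# agreement = np.mean(rater_a == rater_b)  # the abstract reports 74%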
Conclusion(s): We demonstrated a statistically significant improvement in the quality of written feedback for residents after modifying our faculty evaluation form to include tips for documenting specific observations of practice and mapping to EPAs. The modified form yielded more descriptive comments and tangible recommendations to help trainees improve their practice. Although there is still room for further gains in quality, this guidance has clearly led to richer and more meaningful written feedback. Next steps include improving completion rates of trainee evaluations and applying similar strategies to resident evaluations of faculty.