Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Academic Medicine. Litzelman, D. K., Stratos, G. A., Marriott, D. J., Skeff, K. M. 1998; 73(6): 688-695

Abstract

Purpose: To examine an instrument for evaluating clinical teaching using factor analysis and to refine the validated instrument to a practical length.

Method: Factor analysis was performed on a split sample of 1,581 student evaluations rating 178 teachers. The instrument was based on the seven-category Stanford Faculty Development Program's (SFDP's) clinical teaching framework and contained 58 Likert-scaled items, with at least seven items per category plus five items measuring "teacher's knowledge." Standard survey item-reduction methodology was used to remove items with low or complex factor loadings and to iteratively remove items with low item-scale correlations. Results were replicated on the second sample.

Results: The seven original categories emerged, and items originally categorized under "knowledge" combined statistically with "promoting self-directed learning." More than 73% of the variance was explained. Item reduction yielded 25 items with overall internal consistency above .97 and internal consistency of constructs ranging from .82 to .95.

Conclusions: Factor analysis of student ratings validated the seven-category SFDP framework. An abbreviated instrument to measure the seven categories is described. Results suggest that students may not systematically distinguish between their teachers' knowledge and their teachers' ability to promote self-directed learning, an important finding for both administrators and faculty development programs.
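The item-reduction steps summarized above (iteratively dropping items with low item-scale correlation and checking internal consistency with Cronbach's alpha) can be sketched as follows. This is a minimal illustration on simulated Likert-style data; the 0.40 threshold, the data layout, and all function names are assumptions for demonstration, not the authors' actual analysis code.

```python
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency (Cronbach's alpha) of a set of items."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the scale formed by the other items."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1))
         for col in items.columns}
    )


def reduce_items(items: pd.DataFrame, min_r: float = 0.40) -> pd.DataFrame:
    """Iteratively drop the weakest item until all corrected item-total
    correlations clear the (assumed) threshold, analogous to the iterative
    removal of items with low item-scale correlation described in the study."""
    while items.shape[1] > 2:
        r = corrected_item_total(items)
        if r.min() >= min_r:
            break
        items = items.drop(columns=r.idxmin())
    return items


# Simulated example: six items driven by one latent trait plus one noisy item.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
data = pd.DataFrame(
    {f"item{i}": np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=200)), 1, 5)
     for i in range(1, 7)}
)
data["noisy_item"] = rng.integers(1, 6, size=200)  # weak item, should be dropped

kept = reduce_items(data)
print(kept.columns.tolist(), round(cronbach_alpha(kept), 2))
```

In this toy run the unrelated "noisy_item" is removed and the remaining items show high internal consistency, mirroring the kind of construct-level alpha values (.82 to .95) reported for the abbreviated instrument.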

Web of Science ID: 000074383900026

PubMed ID: 9653408