Objective: To validate a sensor-fitted simulator for upper airway examination as an academic tool for learning how to conduct examination of the upper airway and for evaluating that learning.

Study Design: Validation study.

Setting: Undergraduate medical education.

Subjects and Methods: A group of 18 fifth-year medical students and a group of 6 otorhinolaryngology specialists each conducted 6 examinations with the simulator. To investigate concurrent validity, we calculated the correlation between the damage scores provided by the simulator and the damage assessment by a specialist. To evaluate construct validity, we compared the two groups with regard to damage scores, technical procedure, and time spent. To examine content and face validity, we used questionnaires based on a 5-point Likert scale.

Results: For concurrent validity, the correlation between the simulator's damage scores and the specialist's damage assessment was high: Spearman's rho was 0.828 (P < .001). For construct validity, the group of students differed from the group of specialists in damage scores (P = .027) and in technical procedures (P < .001) but not in time spent. For content validity, all questionnaire statements were scored highly, and both groups had similar average scores. For face validity, the group of specialists considered the simulator realistic, and all statements on the questionnaire were rated at least 4 out of 5.

Conclusion: Concurrent, construct, content, and face validity have been demonstrated for a sensor-fitted simulator for upper airway examination, which is therefore accurate enough to be used as an academic tool for learning and for the evaluation of learning.