Building Team Agreement on Large Population Surveys through Inter-rater Reliability among Oral Health Survey Examiners

Susilawati, Sri and Monica, Grace and Fadilah, R. Putri N. and Bramantoro, Taufan and Setijanto, Darmawan and Sadho, Gilang Rasuna and Palupi, Retno (2018) Building Team Agreement on Large Population Surveys through Inter-rater Reliability among Oral Health Survey Examiners. Dental Journal (Majalah Kedokteran Gigi), 51 (1). pp. 42-46. ISSN 2442-9740


Abstract

Background: Oral health surveys conducted on very large populations involve many examiners, who must score different levels of oral disease consistently. Prior to survey implementation, inter-rater reliability (IRR) must be measured to establish the level of agreement among examiners, or raters. Purpose: This study aimed to assess IRR using consensus and consistency estimates in large-population oral health surveys. Methods: A total of 58 dentists participated as raters. A benchmarker presented clinical samples for dental caries and the community periodontal index (CPI) score, and the raters were trained through a calibration exercise on a dental phantom. The consensus estimate of IRR was measured by percent agreement and Cohen's kappa statistic; the consistency estimate was measured by Cronbach's alpha coefficient and intraclass correlation. Results: Percent agreement was 65.50% for photographic slides of dental caries, 73.13% for photographic slides of CPI, and 78.78% for calibration of dental caries using the phantom. The difference between dental caries calibration using photographic slides and the phantom was statistically significant (p<0.000), while the consistency of IRR between multiple raters was strong (Cronbach's alpha > 0.9). Conclusion: Percent agreement across multiple raters is acceptable for the diagnosis of dental caries. Consistency between multiple raters is reliable when diagnosing dental caries and CPI.
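The two consensus estimates named in the abstract, percent agreement and Cohen's kappa, can be sketched in a few lines of Python. The sample ratings below are hypothetical (not data from the study); they simply show how paired scores from two raters feed each formula.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Percentage of items on which two raters give the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal score frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical caries scores (1 = carious, 0 = sound) for six teeth
benchmark = [1, 1, 0, 1, 0, 0]
rater     = [1, 0, 0, 1, 0, 1]

print(f"percent agreement: {percent_agreement(benchmark, rater):.2f}%")
print(f"Cohen's kappa:     {cohens_kappa(benchmark, rater):.3f}")
```

Kappa is usually preferred over raw percent agreement because two raters scoring mostly sound teeth will agree often by chance alone; kappa discounts that baseline.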

Item Type: Article
Uncontrolled Keywords: inter-rater reliability; calibration; training; oral health survey
Subjects: R Medicine > RK Dentistry
Depositing User: Perpustakaan Maranatha
Date Deposited: 19 Oct 2021 01:51
Last Modified: 19 Oct 2021 01:51
URI: http://repository.maranatha.edu/id/eprint/27950
