TY - JOUR
A1 - Vollmer, Andreas
A1 - Vollmer, Michael
A1 - Lang, Gernot
A1 - Straub, Anton
A1 - Kübler, Alexander
A1 - Gubik, Sebastian
A1 - Brands, Roman C.
A1 - Hartmann, Stefan
A1 - Saravi, Babak
T1 - Automated assessment of radiographic bone loss in the posterior maxilla utilizing a multi-object detection artificial intelligence algorithm
JF - Applied Sciences
N2 - Periodontitis is one of the most prevalent diseases worldwide. The degree of radiographic bone loss can be used to assess the course of therapy or the severity of the disease. Since automated bone loss detection has many benefits, our goal was to develop a multi-object detection algorithm based on artificial intelligence that can detect and quantify radiographic bone loss in the maxillary posterior region on standard two-dimensional radiographic images. The study combined three recent online databases, and the results were validated on an external validation dataset from our institution. The final dataset comprised 1414 images for training and testing and 341 images for external validation. We applied a Keypoint R-CNN with a ResNet-50-FPN backbone network for both bounding box and keypoint detection. The intersection over union (IoU) and the object keypoint similarity (OKS) were used for model evaluation. The evaluation of the bounding box metrics showed moderate overlap with the ground truth, with an average precision of up to 0.758. The average precision and recall over all five folds were 0.694 and 0.611, respectively. Mean average precision and recall for the keypoint detection were 0.632 and 0.579, respectively. Although only a small and heterogeneous set of images was available for training, our results indicate that the algorithm is able to learn the objects of interest, albeit without sufficient accuracy, owing to the limited number of images and the large amount of information contained in panoramic radiographs. Considering the widespread availability of panoramic radiographs and the increasing use of online databases, the presented model can be further improved in the future to facilitate its implementation in clinics.
KW - radiographic bone loss
KW - alveolar bone loss
KW - maxillofacial surgery
KW - deep learning
KW - classification
KW - artificial intelligence
KW - object detection
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-305050
SN - 2076-3417
VL - 13
IS - 3
ER -

TY - JOUR
A1 - Vollmer, Andreas
A1 - Vollmer, Michael
A1 - Lang, Gernot
A1 - Straub, Anton
A1 - Kübler, Alexander
A1 - Gubik, Sebastian
A1 - Brands, Roman C.
A1 - Hartmann, Stefan
A1 - Saravi, Babak
T1 - Performance analysis of supervised machine learning algorithms for automatized radiographical classification of maxillary third molar impaction
JF - Applied Sciences
N2 - Background: Oro-antral communication (OAC) is a common complication following the extraction of upper molar teeth. The Archer and the Root Sinus (RS) systems can be used to classify impacted teeth in panoramic radiographs. The Archer classes B-D and the Root Sinus classes III and IV have been associated with an increased risk of OAC following tooth extraction in the upper molar region. In our previous study, we found that panoramic radiographs are not reliable for predicting OAC. This study aimed to (1) determine the feasibility of automating the classification (Archer/RS classes) of impacted teeth from panoramic radiographs, (2) determine the distribution of OAC stratified by the classes of these classification systems for the purposes of decision tree construction, and (3) determine the feasibility of automating the prediction of OAC utilizing the mentioned classification systems. Methods: We used multiple supervised pre-trained machine learning models (VGG16, ResNet50, Inceptionv3, EfficientNet, MobileNetV2), a custom-made convolutional neural network (CNN) model, and a Bag of Visual Words (BoVW) technique to evaluate their performance in predicting the clinical classification systems RS and Archer from panoramic radiographs (Aim 1). We then used Chi-squared Automatic Interaction Detection (CHAID) to determine the distribution of OAC stratified by the Archer/RS classes and to introduce a decision tree for simple use in clinics (Aim 2). Lastly, we tested the ability of a multilayer perceptron artificial neural network (MLP) and a radial basis function neural network (RBNN) to predict OAC based on the high-risk classes RS III and IV and Archer B-D (Aim 3). Results: We achieved accuracies of up to 0.771 for EfficientNet and MobileNetV2 on the Archer classification. For the AUC, we obtained values of up to 0.902 with our custom-made CNN. In comparison, detection of the RS classification achieved accuracies of up to 0.792 for the BoVW and an AUC of up to 0.716 for our custom-made CNN. Overall, the Archer classification was detected more reliably than the RS classification across all algorithms. CHAID classified 77.4% of cases correctly for the Archer classification and 81.4% for the RS classification. Neither the MLP (AUC: 0.590) and RBNN (AUC: 0.590) for the Archer classification nor the MLP (AUC: 0.638) and RBNN (AUC: 0.630) for the RS classification showed sufficient predictive capability for OAC. Conclusions: The results reveal that impacted teeth can be classified from panoramic radiographs (best AUC: 0.902) and that the classification systems can be stratified according to their relationship to OAC (81.4% correct for the RS classification). However, the Archer and RS classes did not achieve satisfactory AUCs for predicting OAC (best AUC: 0.638). Additional research is needed to validate the results externally and to develop a reliable risk stratification tool based on the present findings.
KW - oro-antral communication
KW - oro-antral fistula
KW - prediction
KW - machine learning
KW - teeth extraction
KW - complications
KW - classification
KW - artificial intelligence
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-281662
SN - 2076-3417
VL - 12
IS - 13
ER -