
Density-based weighting for imbalanced regression

Please always cite this URN: urn:nbn:de:bvb:20-opus-269177
In many real-world settings, imbalanced data impedes the performance of learning algorithms such as neural networks, mostly for rare cases. This is especially problematic for tasks focusing on these rare occurrences. For example, when estimating precipitation, extreme rainfall events are scarce but important considering their potential consequences. While there are numerous well-studied solutions for classification settings, most of them cannot be applied to regression easily. Of the few solutions for regression tasks, barely any have explored cost-sensitive learning, which is known to have advantages over sampling-based methods in classification tasks. In this work, we propose a sample weighting approach for imbalanced regression datasets called DenseWeight and a cost-sensitive learning approach for neural network regression with imbalanced data called DenseLoss, based on our weighting scheme. DenseWeight weights data points according to the rarity of their target values, estimated through kernel density estimation (KDE). DenseLoss adjusts each data point's influence on the loss according to DenseWeight, giving rare data points more influence on model training than common data points. We show on multiple differently distributed datasets that DenseLoss significantly improves model performance for rare data points through its density-based weighting scheme.
Additionally, we compare DenseLoss to the state-of-the-art method SMOGN, finding that our method mostly yields better performance. Our approach provides more control over model training, as it enables us to actively decide on the trade-off between focusing on common or rare cases through a single hyperparameter, allowing the training of better models for rare data points.
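The weighting idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: it assumes a Gaussian KDE over the target values, a min-max normalization of the densities, and the weighting form w = max(1 - alpha * p', eps) with a single strength hyperparameter `alpha`; the function name `dense_weight` and the defaults are chosen here for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

def dense_weight(y, alpha=1.0, eps=1e-6):
    """Density-based sample weights for imbalanced regression (sketch).

    Rare target values (low estimated density) receive weights close to 1;
    common target values receive weights close to eps. alpha controls how
    strongly common cases are down-weighted (alpha=0 gives uniform weights).
    """
    y = np.asarray(y, dtype=float)
    # Estimate the density of the target distribution with a Gaussian KDE
    dens = gaussian_kde(y)(y)
    # Min-max normalize densities to [0, 1]
    p = (dens - dens.min()) / (dens.max() - dens.min())
    # Down-weight common (high-density) points; eps keeps weights positive
    w = np.maximum(1.0 - alpha * p, eps)
    # Normalize so the mean weight is 1, keeping the overall loss scale stable
    return w / w.mean()

def weighted_mse(y_true, y_pred, w):
    """Cost-sensitive squared-error loss: each residual scaled by its weight."""
    return np.mean(w * (np.asarray(y_true) - np.asarray(y_pred)) ** 2)
```

In a neural-network setting, the per-sample weights would be computed once from the training targets and multiplied into the per-sample loss during training, so that gradient updates emphasize the rare cases; the trade-off between common and rare cases is then governed entirely by `alpha`.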

Download full-text files

Metadata
Author(s): Michael Steininger, Konstantin Kobs, Padraig Davidson, Anna Krause, Andreas Hotho
URN: urn:nbn:de:bvb:20-opus-269177
Document type: Journal article
University institute: Faculty of Mathematics and Computer Science / Institute of Computer Science
Language of publication: English
Parent work / journal title (English): Machine Learning
ISSN: 1573-0565
Year of publication: 2021
Volume: 110
Issue: 8
Pages: 2187–2211
Original publication / source: Machine Learning 2021, 110(8):2187–2211. DOI: 10.1007/s10994-021-06023-5
DOI: https://doi.org/10.1007/s10994-021-06023-5
General subject classification (DDC): 0 Computer science, information and general works / 00 Computer science, knowledge, systems / 004 Data processing; computer science
Keywords: Kernel density estimation; cost-sensitive learning; imbalanced regression; sample weighting; supervised learning
Release date: 13.06.2022
License: CC BY: Creative Commons Attribution 4.0 International