Advanced Analytics in Operations Management and Information Systems: Methods and Applications (2019)
The digital transformation of business and society presents enormous potential for companies across all sectors. Fueled by massive advances in data generation, computing power, and connectivity, modern organizations have access to gigantic amounts of data. Companies seek to establish data-driven decision cultures to gain competitive advantages in terms of efficiency and effectiveness. While most companies focus on descriptive tools such as reporting, dashboards, and advanced visualization, only a small fraction leverages advanced analytics (i.e., predictive and prescriptive analytics) to foster data-driven decision-making today. Therefore, this thesis sets out to investigate opportunities to leverage prescriptive analytics in four independent parts.
As predictive models are an essential prerequisite for prescriptive analytics, the first two parts of this work focus on predictive analytics. Building on state-of-the-art machine learning techniques, we showcase the development of a predictive model in the context of capacity planning and staffing at an IT consulting company. Subsequently, we focus on predictive analytics applications in the manufacturing sector. More specifically, we present a data science toolbox providing guidelines and best practices for modeling, feature engineering, and model interpretation to manufacturing decision-makers. We showcase the application of this toolbox on a large data set from a German manufacturing company.
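To give a flavor of the kind of workflow such a toolbox covers, the following minimal sketch fits a predictive model and inspects which features drive it via permutation importance. The feature names and data are hypothetical placeholders, not the toolbox's actual code:

```python
# Sketch: fit a predictive model, then interpret it with
# permutation importance on held-out data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. sensor readings per produced part
y = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Model interpretation: how much does shuffling each feature hurt?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(["temp", "pressure", "speed", "humidity"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```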
In some situations, merely using the improved forecasts provided by powerful predictive models already enables decision-makers to generate additional business value. However, many complex tasks require elaborate operational planning procedures, and transforming additional information into valuable actions then requires new planning algorithms. Therefore, the latter two parts of this thesis focus on prescriptive analytics. To this end, we analyze how prescriptive analytics can be utilized to determine policies for an optimal searcher path problem based on predictive models. While rapid advances in artificial intelligence research boost the predictive power of machine learning models, model uncertainty remains in most settings. The last part of this work proposes a prescriptive approach that accounts for the fact that predictions are imperfect and that the arising uncertainty needs to be considered. More specifically, it presents a data-driven approach to sales-force scheduling. Based on a large data set, a model to predict the benefit of additional sales effort is trained. Subsequently, the predictions, as well as the prediction quality, are embedded into the underlying team orienteering problem to determine optimized schedules.
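As a toy illustration of this idea (not the thesis's actual formulation), the sketch below discounts each customer's predicted benefit by its prediction uncertainty and builds a schedule greedily under a time budget. All names, numbers, and the risk weight are invented:

```python
# Sketch: embed predictions and prediction quality into a simple
# orienteering-style scheduling heuristic.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    predicted_benefit: float   # model prediction
    prediction_std: float      # estimate of prediction quality
    visit_time: float          # hours

LAMBDA = 0.5   # how strongly to penalize uncertain predictions
BUDGET = 8.0   # available working hours

customers = [
    Customer("A", 10.0, 1.0, 2.0),
    Customer("B",  9.0, 4.0, 2.0),
    Customer("C",  6.0, 0.5, 1.5),
    Customer("D",  8.0, 2.0, 3.0),
]

def score(c: Customer) -> float:
    # Risk-adjusted benefit: discount customers whose predicted
    # benefit the model is unsure about.
    return c.predicted_benefit - LAMBDA * c.prediction_std

# Greedy schedule: repeatedly add the best score-per-hour customer
# that still fits into the time budget.
schedule, used = [], 0.0
for c in sorted(customers, key=lambda c: score(c) / c.visit_time, reverse=True):
    if used + c.visit_time <= BUDGET:
        schedule.append(c.name)
        used += c.visit_time

print(schedule)  # ['A', 'C', 'B']
```

Note how the uncertain customer B only makes the schedule because time remains; with a larger risk weight it would be dropped in favor of safer visits.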
Active galactic nuclei (AGN) are among the brightest and most frequent sources in the extragalactic X-ray and gamma-ray sky. Their central supermassive black hole generates an enormous luminosity through accretion of the surrounding gas. A few AGN harbor highly collimated, powerful jets which are observed across the entire electromagnetic spectrum. If the jet axis is seen at a small angle to our line of sight (these objects are then called blazars), the jet emission can outshine any other emission component of the system. Synchrotron emission from electrons and positrons clearly proves the existence of a relativistic leptonic component in the jet plasma. But to this day it remains an open question whether heavier particles, especially protons, are accelerated as well. If this is the case, AGN would be prime candidates for the sources of the extragalactic PeV neutrinos observed on Earth. Characteristic signatures of protons can be hidden in the variable high-energy emission of these objects. In this thesis I investigated the broadband emission, particularly the high-energy X-ray and gamma-ray emission, of jetted AGN to address open questions regarding the particle acceleration and particle content of AGN jets, and the evolutionary state of the AGN itself. For this purpose I analyzed various multiwavelength observations from the optical to gamma-rays over extended periods of time, using a combination of state-of-the-art spectroscopy and timing analysis. By nature, AGN are highly variable. Time-resolved spectral analysis provided a new, dynamic view of these sources, which helped to identify distinct emission processes that are difficult to disentangle with spectral or timing methods alone.
Firstly, this thesis tackles the problem of source classification in order to facilitate the search for interesting sources in large data archives and to characterize new transient sources. I use spectral and timing analysis methods together with supervised machine learning algorithms to design an automated source classification pipeline. The test and training samples were based on the third XMM-Newton point source catalog (3XMM-DR6). The set of input features for the machine learning algorithm was derived from an automated spectral modeling of all sources in 3XMM-DR6, summing up to 137,200 individual detections. The spectral features were complemented by the results of a basic timing analysis as well as multiwavelength information provided by catalog cross-matches. The training of the algorithm and its application to a test sample showed that the definition of the training sample was crucial: despite oversampling minority source types with synthetic data to balance the training sample, the algorithm preferentially predicted majority source types for unclassified objects. In general, the training process showed that the combination of spectral, timing, and multiwavelength features performed best, with the lowest misclassification rate of ~2.4%.
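The balancing step described above can be sketched generically as follows; this uses synthetic data, SMOTE-style oversampling, and a random forest as stand-ins, not the pipeline's actual features or configuration:

```python
# Sketch: oversample minority classes with synthetic data, then
# train a supervised classifier; evaluate on the unbalanced test set.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced toy data: 90% / 9% / 1% class mix (e.g. AGN, stars, XRBs)
X, y = make_classification(n_samples=5000, n_classes=3, n_informative=6,
                           weights=[0.90, 0.09, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Balance the training sample only; the test sample keeps its real skew
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_test, clf.predict(X_test)))
```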
The method of time-resolved spectroscopy was then used in two studies to investigate the properties of two individual AGN, Mrk 421 and PKS 2004-447, in detail. Both objects belong to the class of gamma-ray emitting AGN. A particularly elusive sub-class are gamma-ray emitting Narrow-Line Seyfert 1 (gNLS1) galaxies. These sources were only recently discovered as gamma-ray emitters, in 2010, and a connection to young radio galaxies, especially compact steep spectrum (CSS) radio sources, has been proposed. The only gNLS1 in the Southern Hemisphere so far is PKS 2004-447, which lies at the lower end of the luminosity distribution of gNLS1s. The source is part of the TANAMI VLBI program and is regularly monitored at radio frequencies. In this thesis, I presented and analyzed data from a dedicated multiwavelength campaign on PKS 2004-447, which my collaborators and I performed during 2012 and which was complemented by individual observations between 2013 and 2016. I focused on a detailed analysis of the X-ray emission and a first analysis of its broadband spectrum from radio to gamma-rays. Thanks to the dynamic SED, I could show that earlier studies had misinterpreted the optical spectrum of the source, which had led to an underestimation of the high-energy emission and had ignited a discussion on the source class. I showed that the overall spectral properties are consistent with dominating jet emission composed of synchrotron radiation and inverse Compton scattering from accelerated leptons. The broadband emission is very similar to that of typical flat-spectrum radio quasars, a sub-class of blazars, and does not present any unusual properties in comparison. Interestingly, the VLBI data showed a compact jet structure and a steep radio spectrum consistent with a compact steep spectrum source. This classified PKS 2004-447 as a young radio galaxy, in which the jet is still developing.
The investigation of Mrk 421 introduced the blazar monitoring program which my collaborators and I started in 2014. By observing a blazar simultaneously in the optical, X-ray, and gamma-ray bands during a VHE outburst, the program aims at providing extraordinary data sets that allow for the generation of a series of dynamical SEDs of high spectral and temporal resolution. The program makes use of the dense VHE monitoring by the FACT telescope. So far, there are three sources in our sample, which we have been monitoring since 2014. I presented the data and the first analysis for one of the brightest and most variable blazars, Mrk 421, which had a moderate outburst in 2015 and triggered our program for the first time. With spectral timing analysis, I confirmed a tight correlation between the X-ray and TeV energy bands, which indicates that these jet emission components are causally connected. I discovered that the variations of the optical band were both correlated and anti-correlated with the high-energy emission, which suggests an independent emission component. Furthermore, the dynamic SEDs showed two different flaring behaviors, which differed in the presence or absence of a peak shift of the low-energy emission hump. These results further support the hypothesis that more than one emission region contributed to the broadband emission of Mrk 421 during the observations.
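As a toy illustration of such a band-to-band correlation analysis, the sketch below correlates two simultaneous light curves; the light curves here are synthetic stand-ins, not the Mrk 421 data:

```python
# Sketch: quantify the X-ray/TeV correlation of two simultaneous
# light curves with a simple Pearson coefficient.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)                                  # observation epochs
flare = np.exp(-0.5 * ((t - 50) / 10) ** 2)         # common flare shape
xray = flare + 0.05 * rng.normal(size=t.size)
tev = flare ** 1.5 + 0.05 * rng.normal(size=t.size) # nonlinear response

r = np.corrcoef(xray, tev)[0, 1]
print(f"Pearson r = {r:.2f}")   # close to 1 for causally connected bands
```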
Overall, the studies presented in this thesis demonstrate that time-resolved spectroscopy is a powerful tool for classifying both source types and emission processes of astronomical objects, especially relativistic jets in AGN, and thus provides a deeper understanding of, and new insights into, their physics and properties.
Making machines understand natural language is a long-standing dream of mankind. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate answers, as they do not consider the specific meaning of the conversation. Thus, if we want to enable machines to actually understand language, we need access to semantically relevant background knowledge. For this, it is possible to query so-called ontologies, which are large networks containing knowledge about real-world entities and their semantic relations. However, creating such ontologies is a tedious task that often requires extensive expert knowledge. Thus, we need to find ways to automatically construct and update ontologies that fit human intuition of semantics and semantic relations. More specifically, we need to determine semantic entities and find relations between them. While this is usually done on large corpora of unstructured text, previous work has shown that we can at least facilitate the first step of extracting entities by considering special data such as tagging data or human navigational paths. Here, we do not need to detect the actual semantic entities, as they are already provided by the way those data are collected. Thus, we can focus mainly on the problem of assessing the degree of semantic relatedness between tags or web pages. However, several issues need to be overcome if we want to approximate human intuition of semantic relatedness. For this, it is necessary to represent words and concepts in a way that allows easy and highly precise semantic characterization. This also largely depends on the quality of the data from which these representations are constructed.
In this thesis, we extract semantic information both from tagging data created by users of social tagging systems and from human navigation data in different semantically driven social web systems. Our main goal is to construct high-quality and robust vector representations of words which can then be used to measure the relatedness of semantic concepts. First, we show that navigation in the social media systems Wikipedia and BibSonomy is driven by a semantic component. After this, we discuss and extend methods to model the semantic information in tagging data as low-dimensional vectors. Furthermore, we show that tagging pragmatics influences different facets of tagging semantics. We then investigate the usefulness of human navigational paths in several different settings on Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning based algorithm to adapt pre-trained word embeddings to datasets containing human judgments of semantic relatedness.
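A minimal sketch of this metric-learning idea follows, assuming random placeholder embeddings and ratings rather than the actual pre-trained vectors and human-judgment datasets: a linear map W is learned so that similarities of transformed embeddings match the given relatedness scores.

```python
# Sketch: adapt embeddings to human relatedness judgments by
# learning a linear transformation with gradient descent.
import torch

torch.manual_seed(0)
d = 50
emb = torch.randn(200, d)                # placeholder "pre-trained" vectors
pairs = torch.randint(0, 200, (500, 2))  # word pairs with human ratings
ratings = torch.rand(500)                # placeholder relatedness in [0, 1]

W = torch.eye(d, requires_grad=True)     # start from the identity map
opt = torch.optim.Adam([W], lr=1e-2)

for step in range(200):
    a = emb[pairs[:, 0]] @ W
    b = emb[pairs[:, 1]] @ W
    sim = torch.nn.functional.cosine_similarity(a, b)
    loss = torch.mean((sim - ratings) ** 2)  # fit similarities to judgments
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())
```

Starting from the identity keeps the adapted space close to the original embeddings, so structure not covered by the human judgments is largely preserved.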
This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality and low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications lie first and foremost in ontology learning for the Semantic Web, but also in semantic search and query expansion.
The aim of this thesis is to present a visual body weight estimation approach suitable for medical applications. A typical scenario where the estimation of the body weight is essential is the emergency treatment of stroke patients: in case of an ischemic stroke, the patient has to receive a drug dosed according to body weight in order to dissolve a blood clot in a vessel. The accuracy of the estimated weight directly influences the outcome of the therapy. However, the treatment has to start as early as possible after arrival at the trauma room to be effective. Weighing a patient takes time, and the patient has to be moved. Furthermore, patients are often unable to communicate their body weight due to their stroke symptoms. Therefore, it is state of the art that physicians guess the body weight. A patient receiving too low a dose has an increased risk that the blood clot does not dissolve and brain tissue is permanently damaged; today, about one-third of patients receive an insufficient dose. In contrast, an overdose can cause bleeding and further complications. Physicians are aware of this issue, but a reliable alternative is missing.
The thesis presents state-of-the-art principles and devices for the measurement and estimation of body weight in the context of medical applications. While scales are common and available at hospitals, the process of weighing takes too long and can hardly be integrated into the stroke treatment workflow. Sensor systems and algorithms are presented in the related-work section, providing an overview of different approaches.
The system presented here, called Libra3D, consists of a computer installed in a real trauma room and visual sensors integrated into the ceiling. For the estimation of the body weight, the patient lies on a stretcher placed in the field of view of the sensors. The three sensors, two RGB-D cameras and a thermal camera, are calibrated intrinsically and extrinsically. Furthermore, algorithms for sensor fusion are presented that align the data from all sensors, which is the basis for a reliable segmentation of the patient.
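The alignment step can be illustrated with a minimal sketch: one sensor's point cloud is mapped into a common frame with a 4x4 extrinsic matrix obtained from calibration. The matrix below is a made-up example, not Libra3D's actual calibration:

```python
# Sketch: transform points between sensor coordinate frames using
# a homogeneous extrinsic matrix (rotation R, translation t).
import numpy as np

T = np.eye(4)
T[:3, :3] = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])   # 90-degree rotation about z
T[:3, 3] = [0.5, 0.0, 0.0]                 # 0.5 m offset along x

points = np.random.rand(1000, 3)           # point cloud of the second camera
homogeneous = np.c_[points, np.ones(len(points))]
fused = (T @ homogeneous.T).T[:, :3]       # points in the common frame
```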
A combination of state-of-the-art image and point cloud algorithms is used to localize the patient on the stretcher. The main challenge in this scenario is the dynamic environment, including other people and medical devices in the field of view.
After successful segmentation, a set of hand-crafted features is extracted from the patient's point cloud. These features rely on geometric and statistical values and provide a robust input to a subsequent machine learning approach. The final estimation is done with a previously trained artificial neural network.
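A minimal sketch of this estimation stage follows, with illustrative features and mock data rather than the exact Libra3D feature set:

```python
# Sketch: geometric/statistical features from a segmented point
# cloud, fed to a small neural network regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

def features(cloud: np.ndarray) -> np.ndarray:
    """cloud: (N, 3) points of the segmented patient."""
    extent = cloud.max(axis=0) - cloud.min(axis=0)   # bounding-box size
    return np.concatenate([
        extent,                # length, width, thickness
        cloud.mean(axis=0),    # centroid
        cloud.std(axis=0),     # spread per axis
        [len(cloud)],          # number of surface points
    ])

rng = np.random.default_rng(0)
clouds = [rng.normal(size=(2000, 3)) * rng.uniform(0.5, 1.5, 3)
          for _ in range(100)]
X = np.stack([features(c) for c in clouds])
y = 10 * X[:, 0] + rng.normal(scale=2, size=100)     # mock weights (kg)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0)
model.fit(X, y)
```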
The experiments section evaluates different configurations of the extracted feature vector. Additionally, the presented approach is compared to state-of-the-art methods: the patient's own assessment, the physician's guess, and an anthropometric estimation. Apart from the patient's own estimate, Libra3D outperforms all state-of-the-art estimation methods: 95 percent of all patients are estimated with a relative error of less than 10 percent with respect to the ground truth body weight. The measurement itself takes only a minimal amount of time, and the approach can easily be integrated into the treatment of stroke patients without hindering the physicians.
Furthermore, the experiments section demonstrates two additional applications: the extracted features can also be used to estimate the body weight of people standing, or even walking, in front of a 3D camera, and it is possible to determine or classify the BMI of a subject on a stretcher. A potential application of the latter is the reduction of the radiation dose for patients exposed to X-rays during a CT examination.
During the course of this thesis, several data sets were recorded. These data sets contain the ground truth body weight as well as the data from the sensors, and are available for collaboration in the field of body weight estimation for medical applications.