Faculty Profile

Gudmund Hermansen

Associate Professor - Department of Economics

Publications

Cunen, Celine Marie Løken; Hermansen, Gudmund Horn & Hjort, Nils Lid (2018)

Confidence distributions for change-points and regime shifts

Journal of Statistical Planning and Inference, 195, pp. 14-34. DOI: 10.1016/j.jspi.2017.09.009

Olsen, Håvard Goodwin & Hermansen, Gudmund Horn (2017)

Recent Advancements to Nonparametric Modeling of Interactions Between Reservoir Parameters

Quantitative Geology and Geostatistics, 19, pp. 653-669. DOI: 10.1007/978-3-319-46819-8_44

We demonstrate recent advances in nonparametric density estimation and illustrate their potential in the petroleum industry. Here, traditional parametric models and standard kernel methodology may often prove too limited. This is especially the case for data possessing certain complex structures, such as pinch-outs, nonlinearity, and heteroscedasticity. In this paper, we focus on the Cloud Transform (CT) with directional smoothing and the Local Gaussian Density Estimator (LGDE). These are flexible nonparametric methods for density (and conditional distribution) estimation that are well suited for the data types commonly encountered in reservoir modeling. Both methods are illustrated with real and synthetic data sets.
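
The gist of the cloud transform, estimating a conditional distribution from a point cloud and simulating from it, can be sketched in a few lines. The Python sketch below uses a Gaussian-kernel-weighted empirical CDF with inverse-transform sampling on synthetic heteroscedastic data; the function names are hypothetical, and this is a minimal illustration rather than the authors' implementation, which adds directional smoothing.

import numpy as np

def conditional_cdf(x0, x, y, h):
    # Kernel-weighted empirical CDF of y given x = x0 (Nadaraya-Watson weights).
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    w = w / w.sum()
    order = np.argsort(y)
    return y[order], np.cumsum(w[order])

def sample_conditional(x0, x, y, h, size=1, rng=None):
    # Draw from the estimated conditional distribution by inverse transform.
    rng = np.random.default_rng() if rng is None else rng
    ys, F = conditional_cdf(x0, x, y, h)
    u = rng.uniform(size=size)
    return ys[np.clip(np.searchsorted(F, u), 0, len(ys) - 1)]

# Synthetic heteroscedastic cloud: the spread of y grows with x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)
y = np.sin(2 * np.pi * x) + (0.1 + 0.5 * x) * rng.normal(size=2000)
print(sample_conditional(0.8, x, y, h=0.05, size=5, rng=rng))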

Hauge, Vera Louise & Hermansen, Gudmund Horn (2017)

Machine Learning Methods for Sweet Spot Detection: A Case Study

Quantitative Geology and Geostatistics, 19, pp. 573-588. DOI: 10.1007/978-3-319-46819-8_38

In the geosciences, sweet spots are defined as areas of a reservoir that represent the best production potential. From the outset, it is not always obvious which reservoir characteristics best determine the location, and influence the likelihood, of a sweet spot. Here, we view the detection of sweet spots as a supervised learning problem and use tools and methodology from machine learning to build data-driven sweet spot classifiers. We discuss several popular machine learning methods for classification, including logistic regression, k-nearest neighbours, support vector machines, and random forests, highlighting the strengths and shortcomings of each. In particular, we draw attention to a complex setting and focus on a smaller real data study with limited evidence for sweet spots, where most of these methods struggle. We illustrate a simple remedy that aims to increase their performance by optimizing for precision. In conclusion, we observe that all of the methods considered need some form of preprocessing or additional tuning to attain practical utility. While support vector machines and random forests show a fair degree of promise, we stress the need for caution in the naive use of machine learning methodology in the geosciences.
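
A minimal sketch of the classifier comparison and the precision-oriented tuning, assuming scikit-learn and using a synthetic imbalanced data set as a stand-in for the real study; the thresholds and settings are illustrative, not those of the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced synthetic stand-in: few "sweet spot" cells among many ordinary ones.
X, y = make_classification(n_samples=1000, n_features=8, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression()),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "svm": make_pipeline(StandardScaler(), SVC(probability=True)),
    "forest": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    # Raising the decision threshold above 0.5 trades recall for precision.
    for t in (0.5, 0.8):
        prec = precision_score(y_te, (p >= t).astype(int), zero_division=0)
        print(f"{name:>8s}  threshold={t:.1f}  precision={prec:.2f}")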

Hermansen, Gudmund Horn; Hjort, Nils Lid & Kjesbu, Olav Sigurd (2016)

Recent advances in statistical methodology applied to the Hjort liver index time series (1859-2012) and associated influential factors

Canadian Journal of Fisheries and Aquatic Sciences, 73(2), pp. 279-295. DOI: 10.1139/cjfas-2015-0086

Walker, Sam-Erik; Hermansen, Gudmund Horn & Hjort, Nils Lid (2015)

Model selection and verification for ensemble based probabilistic forecasting of air pollution in Oslo, Norway

Samaniego, Francisco J. (ed.). Proceedings of the 60th World Statistics Congress of the International Statistical Institute, ISI2015

In this paper, we discuss building time series models for forecasting air pollution during wintertime conditions in Oslo, Norway, using ensembles of air pollution model data. Since such ensembles become increasingly available as part of regular air quality forecast modelling, it is important to build properly calibrated statistical models that utilise such data. In particular, we focus on model selection using the Akaike and Bayesian information criteria, and on verification of the forecasts using Probability Integral Transform (PIT) histograms and Brier scores. Three time series models are considered, each using the ensemble mean as the primary covariate in a linear regression explaining the observations, and modelling the residual errors as an autoregressive process with either a constant variance, a time-varying (heteroscedastic) variance depending only on the ensemble variances, or a combination of both. We show that for the limited, although representative, data analysed, the model incorporating both terms seems to have an edge according to the model selection criteria and forecast verification tools used. Finally, we briefly discuss the possibility of introducing more focused model selection criteria for these types of models and data.
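
The three variance structures lend themselves to a compact illustration. The sketch below (NumPy/SciPy, synthetic data, illustrative parameter values) fits each candidate by maximum likelihood and reports AIC and BIC; it is a simplified stand-in for the paper's model selection step, not the authors' exact models.

import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in: observations driven by the ensemble mean, AR(1) residuals
# whose innovation variance has a constant part and an ensemble-variance part.
rng = np.random.default_rng(1)
n = 300
m = rng.normal(3.0, 1.0, n)           # ensemble mean (covariate)
s2 = rng.uniform(0.2, 1.0, n)         # ensemble variance
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + np.sqrt(0.1 + 0.5 * s2[t]) * rng.normal()
y = 1.0 + 0.8 * m + e                 # "observed" pollution level

def nll(theta, use_const, use_ens):
    # Gaussian negative log-likelihood of the AR(1) innovations.
    a, b, rho = theta[:3]
    c = np.exp(theta[3:])             # variance terms, kept positive
    var = (c[0] if use_const else 0.0) + (c[-1] * s2[1:] if use_ens else 0.0)
    r = y - a - b * m                 # regression residuals
    u = r[1:] - rho * r[:-1]          # AR(1) innovations
    return 0.5 * np.sum(np.log(2 * np.pi * var) + u**2 / var)

for label, (uc, ue, k) in {"constant": (True, False, 4),
                           "ensemble": (False, True, 4),
                           "combined": (True, True, 5)}.items():
    fit = minimize(nll, np.zeros(k), args=(uc, ue),
                   method="Nelder-Mead", options={"maxiter": 5000})
    aic = 2 * k + 2 * fit.fun
    bic = k * np.log(n - 1) + 2 * fit.fun
    print(f"{label:>8s}  AIC={aic:8.1f}  BIC={bic:8.1f}")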

Hermansen, Gudmund Horn; Hjort, Nils Lid & Jullum, Martin (2015)

Parametric or nonparametric: the FIC approach for stationary time series

Samaniego, Francisco J. (ed.). Proceedings of the 60th World Statistics Congress of the International Statistical Institute, ISI2015

We seek to narrow the gap between parametric and nonparametric modelling of stationary time series processes. The approach is inspired by recent advances in focused inference and model selection techniques. Our paper generalises and extends current work by developing a new version of the focused information criterion (FIC) that directly compares the performance of parametric and nonparametric time series models. This is achieved by comparing, for each candidate model, the mean squared error for estimating the focus parameter under consideration. In particular, this yields FIC formulae for covariances or correlations at specified lags, for the probability of reaching a threshold, and so on. Suitable weighted average versions, the AFIC, also lead to model selection strategies for finding the best model for the purpose of estimating, e.g., a sequence of correlations.
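
The underlying comparison, estimating the same focus parameter under a parametric and a nonparametric candidate and judging by mean squared error, can be illustrated by simulation. The FIC estimates this risk from a single observed series; the Monte Carlo loop below is only a stand-in for those formulae, with the lag-2 autocorrelation as focus parameter, an AR(1) working model that is mildly misspecified, and hypothetical helper names.

import numpy as np

rng = np.random.default_rng(2)
phi, theta = 0.5, 0.3                 # true process: ARMA(1,1)

def simulate_arma(n):
    # ARMA(1,1) series, so an AR(1) working model is mildly misspecified.
    z = rng.normal(size=n + 1)
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        x[t] = phi * x[t - 1] + z[t] + theta * z[t - 1]
    return x[1:]

def acf(x, lag):
    # Sample autocorrelation at the given lag.
    xc = x - x.mean()
    return np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc)

# Focus parameter: rho(2); for ARMA(1,1), rho(2) = phi * rho(1).
rho1 = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta**2)
rho2 = phi * rho1

err_par, err_np = [], []
for _ in range(1000):
    x = simulate_arma(200)
    err_par.append((acf(x, 1) ** 2 - rho2) ** 2)  # AR(1) plug-in: rho(2) = phi_hat^2
    err_np.append((acf(x, 2) - rho2) ** 2)        # direct nonparametric estimate
print(f"parametric AR(1) MSE: {np.mean(err_par):.5f}")
print(f"nonparametric    MSE: {np.mean(err_np):.5f}")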

Hermansen, Gudmund Horn & Hjort, Nils Lid (2015)

Bernshteĭn-von Mises theorems for nonparametric function analysis via locally constant modelling: A unified approach

Journal of Statistical Planning and Inference, 166, pp. 138-157. DOI: 10.1016/j.jspi.2015.02.007

Various statistical models involve a certain function, say f: the mean regression as a function of a covariate, the hazard rate as a function of time, the spectral density of a time series as a function of frequency, or an intensity as a function of geographical position. Such functions are often modelled parametrically, whether for frequentist or Bayesian uses, and under weak conditions there are so-called Bernshteĭn–von Mises theorems implying that these two approaches are large-sample equivalent. Results of this nature do not necessarily hold up in nonparametric and high-dimensional setups, however. The aim of the present paper is to exhibit a unified framework and methodology for both frequentist and Bayesian nonparametric analysis, involving priors that set f constant over windows, and where the number m of such windows grows with the sample size n. Applications include nonparametric regression, maximum likelihood with nonparametrically varying parameter functions, hazard rates as functions of covariates, and nonparametric analysis of stationary time series. We work out conditions on the number and sizes of the windows under which Bernshteĭn–von Mises type theorems can be established, with the prior changing with sample size via the growing number of windows. These conditions entail, e.g., that if m ∝ n^α, then α must lie in a suitable range. Illustrations of the general methodology are given, including setups with nonparametric regression, hazard rate estimation, and inference about frequency spectra for stationary time series.
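
A miniature version of the locally constant device: hold f constant over m windows, let m grow like n^α, and update each window conjugately under a normal prior, so that the posterior means approach the windowed averages as data accumulate. The choice α = 0.4, the priors, and the function name below are illustrative assumptions, not the paper's actual conditions.

import numpy as np

def locally_constant_fit(x, y, m, prior_mean=0.0, prior_prec=1.0, noise_var=1.0):
    # Posterior mean of f within each of m windows, under a normal prior on the
    # constant level and a known noise variance (conjugate normal update).
    edges = np.linspace(0.0, 1.0, m + 1)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, m - 1)
    post = np.empty(m)
    for j in range(m):
        yj = y[idx == j]
        prec = prior_prec + len(yj) / noise_var
        post[j] = (prior_prec * prior_mean + yj.sum() / noise_var) / prec
    return edges, post

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(size=n)
m = int(n ** 0.4)                     # number of windows growing like n^alpha
edges, post = locally_constant_fit(x, y, m)
print(f"m = {m} windows; first posterior levels: {np.round(post[:5], 3)}")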

Vonnet, Julie & Hermansen, Gudmund Horn (2015)

Using predictive analytics to unlock unconventional plays

First Break, 33(2), pp. 87-92.

Julie Vonnet and Gudmund Hermansen introduce a multi-disciplinary, integrated reservoir modelling workflow that gathers data of different natures, sources, and scales in order to predict sweet spot locations. The dramatic expansion in computing power over the past two decades and the huge amount of data generated within organisations have led to a proliferation of new methods for identifying patterns and trends in these large datasets. Data mining and predictive analytics are cross-disciplinary approaches consisting of advanced mathematical and statistical methods that uncover patterns in petabytes of data. Such methods and algorithms are able to extract information, correlations, and interplays and turn them into structured sets of interactions for predicting the behaviour of a system, even under unknown conditions.

Bjørvik, Linn Marie; Dale, Svein; Hermansen, Gudmund Horn; Munishi, Pantaleo K. T. & Moe, Stein Ragnar (2015)

Bird flight initiation distances in relation to distance from human settlements in a Tanzanian floodplain habitat

Journal of Ornithology, 156, pp. 239-246. DOI: 10.1007/s10336-014-1121-1

Foldnes, Njål; Grønneberg, Steffen & Hermansen, Gudmund Horn (2018)

Statistikk og Dataanalyse (Statistics and Data Analysis)

[Textbook]. Cappelen Damm Akademisk.

Olsen, Håvard Goodwin & Hermansen, Gudmund Horn (2016)

Recent advancements to non-parametric modelling of interactions between reservoir parameters

[Academic lecture]. The 10th International Geostatistical Congress.

Hauge, Vera Louise & Hermansen, Gudmund Horn (2016)

Machine learning methods for sweet spot detection: a case study

[Academic lecture]. The 10th International Geostatistical Congress.

Hermansen, Gudmund Horn (2016)

Focused Information Criteria for Time Series

[Academic lecture]. FICology - FocuStat International Workshop.

Almendral-Vazquez, Ariel; Abrahamsen, Petter; Dahle, Pål & Hermansen, Gudmund Horn (2015)

A continuous model for well depths: theory and application to well repositioning

[Academic lecture]. Det 18. norske statistikarmøtet (The 18th Norwegian Meeting of Statisticians), Solstrand 2015.

Academic Degrees
Year  Institution (Department)                          Degree
2014  University of Oslo (Department of Mathematics)    PhD