A time series is a series of data points indexed in time order. It can represent real-world processes, such as demand for groceries, electricity usage, and stock prices. Machine Learning (ML) models that accurately forecast these processes enable improved decision-making, reducing waste and increasing efficiency. Previous research has produced an enormous number of ML model classes, each performing well on a different forecasting task domain, and each written in its own paradigm's mathematical language.
For a new forecasting task in business, the job of a data scientist is to select, tune, and evaluate existing ML models. Because data scientists are scarce and expensive, considerable resources are spent on replacing this human job with an automated approach, referred to as AutoML. In current practice, the many existing ML models are used by tuning some of them and combining their separate forecasts. An alternative is to merge their intrinsic components and tune them all together, in search of a single hybrid ML model with better performance. This is possible only if the ambiguous language between forecasting paradigms is consolidated into a unifying framework.
The first aim of this research is to introduce this framework, and thereby replace the human job with a computational job. The complete list of instructions to create a hybrid ML model (data cleaning excluded) is presented in parameter format: a superparameter configuration. Its components are feature engineering, training set formation, and hypothesis training. An example shows how superparameters from different paradigms can constitute a hybrid model. The computational job is presented as superoptimization: optimizing the superparameters for performance on the task at hand. The problem with superoptimization is that it requires too much runtime.
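A minimal sketch of what such a superparameter configuration could look like, assuming a simple dictionary layout. The component names follow the text; all concrete keys and values are illustrative assumptions, not the actual schema of this research, and the mix of paradigms (autoregressive lags, smoothing features, a gradient-boosting hypothesis) is only one possible hybrid.

```python
# Hypothetical superparameter configuration for one hybrid forecasting model.
# The three components mirror the framework: feature engineering,
# training set formation, and hypothesis training.
superparameters = {
    "feature_engineering": {
        "lags": [1, 7, 14],          # autoregressive lags (ARIMA-style paradigm)
        "rolling_mean_window": 7,    # smoothing feature (exponential-smoothing paradigm)
    },
    "training_set_formation": {
        "window_size": 365,          # how much history each training example sees
        "horizon": 14,               # how far ahead the model must forecast
    },
    "hypothesis_training": {
        "model": "gradient_boosting",  # hypothesis class from the ML paradigm
        "max_depth": 4,                # complexity superparameter
        "learning_rate": 0.05,         # overfitting superparameter
    },
}
```

Superoptimization then amounts to searching this joint space for the configuration that performs best on the task at hand, which is what makes the runtime prohibitive.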
The second aim of this research is to reduce the runtime of the computational job by learning from previous tasks. This research proposes metafeatures for warmstarted Bayesian optimization: promising hypothesis training superparameters (governing complexity and overfitting) are suggested from previous tasks that are similar in size and input richness. In the experiment, the proposed method reduced computational complexity by 50% with respect to both a naïve and a (proposed) coldstart benchmark method, providing evidence for its potential. The weight of evidence for the metafeatures is increased by the performance improvement being maintained even when the method is badly tuned. The open-source Python package warmstart is published as a foundation for future experiments that focus on the other superparameter components, in the pursuit of an AutoML for hybrid forecasting models.
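A hedged sketch of the warmstarting idea, not the actual API of the published warmstart package. The two metafeatures follow the text (task size and input richness); their concrete definitions here, and the names metafeatures, warmstart_candidates, and the previous_tasks structure, are assumptions made for illustration.

```python
import numpy as np

def metafeatures(X: np.ndarray) -> np.ndarray:
    """Summarize a task's training matrix by the two metafeatures in the text."""
    n_rows, n_cols = X.shape
    size = np.log10(n_rows)   # task size (log-scaled number of observations)
    richness = n_cols         # input richness (number of input features)
    return np.array([size, richness])

def warmstart_candidates(new_task_X, previous_tasks, k=3):
    """Return the best hypothesis training superparameters of the k previous
    tasks whose metafeatures lie closest to the new task's metafeatures."""
    target = metafeatures(new_task_X)
    ranked = sorted(
        previous_tasks,
        key=lambda task: np.linalg.norm(metafeatures(task["X"]) - target),
    )
    # These configurations seed Bayesian optimization in place of random
    # initial points, which is where the runtime reduction comes from.
    return [task["best_superparameters"] for task in ranked[:k]]
```

The design choice is that tasks of similar size and input richness tend to reward similar complexity and overfitting settings, so the optimizer starts near good regions of the search space rather than exploring from scratch.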