
1 Introduction

Probabilistic forecasting has attracted increasing attention in sports, finance, weather and energy. While the initial focus was on deterministic forecasting, probabilistic prediction provides richer information, which is essential for optimal planning and management in these fields. Probabilistic forecasts quantify the uncertainty in a prediction, and they are an essential ingredient of optimal decision making [4]. An overview of state-of-the-art methods and scoring rules in probabilistic forecasting can be found in [4]. Quantile regression is one such method: it models a quantile of the response variable conditional on the explanatory variables [6].

Due to its ability to provide interval predictions, quantile regression has found its niche in renewable energy forecasting. Wind power is one of the fastest growing renewable energy sources [3]. As there is no efficient way to store wind power, producing accurate wind power forecasts is essential for the reliable operation of wind turbines. Due to the uncertainty in wind power generation, there have been studies aimed at improving the reliability of power forecasts to ensure the balance between supply and demand in the electricity market. Quantile regression has been extensively used to produce wind power quantile forecasts, using a variety of explanatory variables such as wind speed, temperature and atmospheric pressure [7].

The Global Energy Forecasting Competition 2014 showed that combining the predictions of several regressors can produce better results than a single model. It is shown in [9] that a voted ensemble of several quantile predictors could produce good results in probabilistic solar and wind power forecasting. In [1] the analogue ensemble technique is applied to the prediction of solar power and slightly outperforms the quantile regression model.

In this paper we apply a different approach to combining the predictions of several models, based on the method of online prediction with expert advice. Contrary to batch mode, where the algorithm is trained on a training set and makes predictions on a test set, in the online setting we learn as soon as new observations become available. One may wonder why not use the predictions of only the single best expert from the beginning and ignore the predictions of the others. First, we may not have enough data to identify the best expert from the start. Second, good performance in the past does not necessarily lead to good performance in the future. In addition, previous research shows that combining the predictions of multiple regressors often produces better results than a single model [11].

We consider the adversarial setting, where no stochastic assumptions are made about the data generating process. Our approach is based on the Weak Aggregating Algorithm (WAA), which was first introduced in [5]. The WAA works as follows: we assign initial weights to the experts, and at each step the weights of the experts are updated according to their performance. The approach is similar to the Bayesian method, where the prediction is the average over all models based on the likelihood of the available data. The WAA gives a guarantee ensuring that the learner's loss is as small as the best expert's loss up to an additive term of the form \(C \sqrt{T}\), where T is the number of steps and C is some constant. It is possible to apply the WAA to combine the predictions of an infinite pool of experts. In [8] the WAA was applied to the multi-period, distribution-free perishable inventory problem, and it was shown that the asymptotic average performance of the proposed method is as good as that of any time-dependent stocking rule up to an additive term of the form \(C \sqrt{T} \ln T\).

The WAA was proposed as an alternative to the Aggregating Algorithm (AA), which was first introduced in [12]. The AA gives a guarantee ensuring that the learner's loss is as small as the best expert's loss up to a constant in the case of finitely many experts. The AA provides better theoretical guarantees; however, it works only with mixable loss functions and is therefore not applicable to our task. An interesting application of the method of prediction with expert advice for the Brier loss function to the forecasting of football outcomes can be found in [14]; it was shown that the proposed strategy that follows the AA is as good as any bookmaker. The Aggregating Algorithm for Regression (AAR), which competes with any expert from an infinite pool of linear regressions under the square loss, was proposed in [13].

The contribution of our paper is two-fold. First, as a proof of concept, we apply the WAA to a finite pool of experts to show that this method is applicable to the problem. As our experts we pick several models that provide quantile forecasts and then combine their predictions using the WAA. To the best of our knowledge, prediction with expert advice has not been applied before to the prediction of quantiles. Second, we propose a new competitive online algorithm, the Weak Aggregating Algorithm for Quantile Regression (WAAQR), which is as good as any quantile regression up to an additive term of the form \(C \sqrt{T} \ln T\). For this purpose, we apply the WAA to an infinite pool of quantile regressions. While the bound for the finite case can be straightforwardly applied to finite or countable sets of experts, every case of a continuous pool needs to be dealt with separately. We listed above a few results for different specific pools of experts; however, there is no generic procedure for deriving a theoretical bound on the cumulative loss of the algorithm. WAAQR can be implemented using the Markov chain Monte Carlo (MCMC) method in a way similar to the algorithm introduced in [15], where the AAR was applied to the class of generalised linear regression functions for making predictions in a fixed interval. We derive a theoretical bound on the cumulative loss of our algorithm which is approximate (in the number of MCMC steps). MCMC is only a method for evaluating the integral, and it can be replaced by a different numerical method. Theoretical convergence of the Metropolis-Hastings method in this case follows from Theorems 1 and 3 in [10]. Estimating the convergence speed is more difficult. With the experiments provided we show that by tuning the parameters online, our algorithm moves quickly to the area of high values of the probability density and gives a good approximation of the prediction.

We apply both methods to the problem of probabilistic forecasting of wind and solar power. Experimental results show a good performance of both methods. The WAA applied to a finite set of models performs as well as or better than the retrospectively best model, whereas WAAQR outperforms the best quantile regression model trained on the historical data.

2 Framework

In the framework of prediction with expert advice we need to specify a game which contains three components: a space of outcomes \(\varOmega \), a decision space \(\varGamma \), and a loss function \(\lambda : \varOmega \times \varGamma \rightarrow \mathbb {R}\). We consider a game with the space of outcomes \(\varOmega = [A, B]\) and decision space \(\varGamma = \mathbb {R}\), and as a loss function we take the pinball loss for \(q \in (0, 1)\)

$$\begin{aligned} \lambda (y, \gamma ) := {\left\{ \begin{array}{ll} (1-q) |y - \gamma |, &{} \text {if } y < \gamma ,\\ q |y - \gamma |, &{} \text {if } y \ge \gamma . \end{array}\right. } \end{aligned}$$
(1)

This loss function is appropriate for quantile regression because on average it is minimized by the q-th quantile. Namely, if Y is a real-valued random variable with a cumulative distribution function \(F_Y(x) = \Pr (Y \le x)\), then the expectation \(\mathbb {E} \lambda (Y, \gamma )\) is minimized by \(\gamma = \inf \{x: F_Y(x) \ge q\}\) (see Sect. 1.3 in [6] for a discussion).
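As an illustration, here is a minimal NumPy sketch of the pinball loss (1) together with a numerical check of this minimisation property; the function name and the example distribution are ours, not part of the paper.

```python
import numpy as np

def pinball_loss(y, gamma, q):
    """Pinball loss (1): (1-q)|y - gamma| if y < gamma, and q|y - gamma| otherwise."""
    diff = y - gamma
    return np.where(diff < 0, (1 - q) * np.abs(diff), q * np.abs(diff))

# The expected pinball loss is minimised by the q-th quantile of Y.
rng = np.random.default_rng(0)
y_sample = rng.normal(size=100_000)                 # Y ~ N(0, 1)
candidates = np.linspace(-3, 3, 601)
mean_losses = [pinball_loss(y_sample, g, q=0.75).mean() for g in candidates]
print(candidates[np.argmin(mean_losses)])           # close to the 0.75-quantile of N(0, 1), about 0.674
```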

In many tasks predicted outcomes are bounded. For example, wind and solar power cannot reach infinity. Therefore, it is possible to have a sensible estimate for the outcome space \(\varOmega \) based on the historical information.

The learner works according to the following protocol:

Protocol 1  

FOR t = 1, 2, …
  the experts \(\mathcal {E}_{\theta }\) announce predictions \(\xi _t (\theta ) \in \varGamma \);
  the learner announces a prediction \(\gamma _t \in \varGamma \);
  the outcome \(y_t \in \varOmega \) is revealed;
  the learner suffers loss \(\lambda (y_t, \gamma _t)\) and each expert suffers loss \(\lambda (y_t, \xi _t (\theta ))\).
END FOR

The cumulative loss of the learner at the step T is:

$$\begin{aligned} L_T := \sum _{\begin{array}{c} t = 1, \dots , T:\\ y_t < \gamma _t \end{array}} (1-q) |y_t - \gamma _t| + \sum _{\begin{array}{c} t = 1, \dots , T:\\ {y_t > \gamma _t} \end{array}} q |y_t - \gamma _t|. \end{aligned}$$
(2)

We want to find a strategy which is capable of competing, in terms of cumulative loss, with all prediction strategies \(\mathcal {E}_{\theta }\), \(\theta \in \mathbb {R}^n\) (called experts) from a given pool, which output \(\xi _t (\theta )\) at step t. In the finite case we denote the experts by \(\mathcal {E}_i, ~i=1,\dots ,N\).

Let us denote by \(L^{\theta }_T\) the cumulative loss of expert \(\mathcal {E}_{\theta }\) at step T:

$$\begin{aligned} L^{\theta }_T := \sum _{\begin{array}{c} t = 1, \dots , T:\\ y_t < \xi _t (\theta ) \end{array}} (1-q) |y_t - \xi _t (\theta )| + \sum _{\begin{array}{c} t = 1, \dots , T:\\ {y_t > \xi _t (\theta )} \end{array}} q |y_t - \xi _t (\theta )|. \end{aligned}$$
(3)

3 Weak Aggregating Algorithm

In the framework of prediction with expert advice we have access to the experts' predictions at each time step, and the learner has to make a prediction based on the experts' past performance. We use an approach based on the WAA since the pinball loss function \(\lambda (y, \gamma )\) is convex in \(\gamma \). The WAA maintains experts' weights \(P_t(d\theta ), ~t=1,\dots , T\). After each step t the WAA updates the weights of the experts according to their losses:

$$\begin{aligned} P_t(d\theta ) = \exp \left( -\frac{c L_{t-1}^{\theta }}{\sqrt{t}} \right) P_0 (d\theta ), \end{aligned}$$
(4)

where \(P_0 (d\theta )\) is the initial distribution of the experts' weights and c is a positive parameter.

Experts that suffer large losses will have smaller weights and less influence on further predictions.

The prediction of WAA is a weighted average of the experts’ predictions:

$$\begin{aligned} \gamma _t = \int _{\varTheta } \xi _t(\theta ) P_{t-1}^{*}(d\theta ), \end{aligned}$$
(5)

where \(P_{t-1}^{*}(d\theta )\) are normalized weights:

$$\begin{aligned} P_{t-1}^{*}(d\theta ) = \frac{P_{t-1}(d\theta )}{P_{t-1}(\varTheta )}, \end{aligned}$$

where \(\varTheta \) is a parameter space, i.e.  \(\theta \in \varTheta \).

In the finite case, the integral in (5) is replaced by a weighted sum of the experts' predictions \(\xi _t(i), ~i = 1,\dots ,N\).
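The following sketch (ours, with hypothetical variable names) shows one step of the finite-pool version: the weights update (4) with equal initial weights and the prediction (5) as a weighted average.

```python
import numpy as np

def waa_predict(expert_preds, cum_losses, t, c):
    """One WAA step for a finite pool of N experts.

    expert_preds: predictions xi_t(i) of the N experts at step t,
    cum_losses:   cumulative losses of the experts over the previous steps,
    t:            current step (1-based), c: positive parameter of the WAA.
    Returns the prediction gamma_t (5) and the normalised weights.
    """
    log_w = -c * cum_losses / np.sqrt(t)   # unnormalised log-weights, cf. (4)
    log_w -= log_w.max()                   # shift for numerical stability
    w = np.exp(log_w)
    w /= w.sum()                           # normalised weights
    return float(w @ expert_preds), w
```

With equal initial weights the prior term vanishes from the log-weights; unequal initial weights \(p_i\) would add \(\ln p_i\) to them.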

In particular, when there are finitely many experts \(\mathcal {E}_i, ~i=1,\dots ,N\) for bounded games the following lemma holds.

Lemma 1

(Lemma 11 in [5]). For every \(L > 0\), every game \(\langle \varOmega , \varGamma , \lambda \rangle \) such that \(|\varOmega | < +\infty \) with \(\lambda (y, \gamma ) \le L\) for all \(y \in \varOmega \) and \(\gamma \in \varGamma \), and every \(N=1, 2, \dots \), for the merging strategy for N experts that follows the WAA with initial weights \(p_1, p_2,\dots , p_N \in [0, 1]\) such that \(\sum _{i=1}^N p_i =1\) and \(c > 0\), the bound

$$\begin{aligned} L_T \le L_T^i+ \sqrt{T} \left( \frac{1}{c} \ln \frac{1}{p_i}+ c L^2 \right) , \end{aligned}$$

is guaranteed for every \(T = 1, 2, \dots \) and every \(i = 1, 2,\dots , N.\)

Taking equal initial weights \(p_1 = p_2 = \dots = p_N = 1 / N\) in the WAA, the additive term reduces to \((c L^2 + (\ln N) / c) \sqrt{T}\). This expression reaches its minimum at \(c = \sqrt{\ln N} / L\). The following corollary shows that the WAA allows us to obtain additive terms of the form \(C \sqrt{T}\).
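Spelling out this minimisation (a standard balancing step, our computation): substituting \(c = \sqrt{\ln N} / L\) makes the two summands equal,

$$\begin{aligned} \left( c L^2 + \frac{\ln N}{c} \right) \sqrt{T} \, \Big |_{c = \sqrt{\ln N}/L} = \left( L \sqrt{\ln N} + L \sqrt{\ln N} \right) \sqrt{T} = 2 L \sqrt{T \ln N}, \end{aligned}$$

which is exactly the additive term appearing in Corollary 1 below.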

Corollary 1

(Corollary 14 in [5]). Under the conditions of Lemma 1, there is a merging strategy such that the bound

$$\begin{aligned} L_T \le L_T^i + 2L \sqrt{T \ln N} \end{aligned}$$

is guaranteed.

Applying Lemma 1 to an infinite pool of experts and taking the constant \(c = 1\), we get the following lemma.

Lemma 2

(Lemma 2 in [8]). Let \(\lambda (y, \gamma ) \le L\) for all \(y \in \varOmega \) and \(\gamma \in \varGamma \). The WAA guarantees that, for all T

$$\begin{aligned} L_T \le \sqrt{T} \left( -\ln \int _{\varTheta } \exp \left( -\frac{L_T^{\theta }}{\sqrt{T}} \right) P_0 (d\theta ) + L^2 \right) . \end{aligned}$$

4 Theoretical Bounds for WAAQR

In this section we formulate the theoretical bounds of our algorithm.

We want to find a strategy which is capable of competing in terms of cumulative loss with all prediction strategies \(\mathcal {E}_{\theta }\), \(\theta \in \varTheta = \mathbb {R}^n\), which at step t output:

$$\begin{aligned} \xi _t (\theta ) = x_t^\prime \theta , \end{aligned}$$
(6)

where \(x_t\) is a signal at time t. The cumulative loss of expert \(\mathcal {E}_{\theta }\) is defined in (3).

Theorem 1

Let \(a > 0\), \(y \in \varOmega = [A, B]\) and \(\gamma \in \varGamma \). There exists a prediction strategy for Learner such that for every positive integer T, every sequence of outcomes of length T, and every \(\theta \in \mathbb {R}^n\) with initial distribution of parameters

$$\begin{aligned} P_0(d\theta ) = \left( \frac{a}{2} \right) ^n e^{-a \Vert \theta \Vert _1}d\theta , \end{aligned}$$
(7)

the cumulative loss \(L_T\) of Learner satisfies

$$\begin{aligned} L_T \le L_T^{\theta } + \sqrt{T} a \Vert \theta \Vert _1 + \sqrt{T} \left( n \ln \left( 1 + \frac{\sqrt{T}}{a} \smash {\displaystyle \max _{t=1,\dots ,T}} \Vert x_t\Vert _{\infty } \right) + (B-A)^2 \right) . \end{aligned}$$

The theorem states that the algorithm predicts as well as the best quantile regression, defined in (6), up to an additive regret of order \(\sqrt{T} \ln {T}\). The choice of the regularisation parameter a is important as it affects the behaviour of the theoretical bound of our algorithm. A large regularisation parameter increases the bound by the additive term \(\sqrt{T} a \Vert \theta \Vert _1\); however, the regret term then has a smaller growth rate as time increases. As the maximum time T is usually not known in advance, the regularisation parameter a cannot be optimised, and its choice depends on the particular task. We discuss the choice of the parameter a in Sect. 6.2.

Proof

We assume that outcomes come from the interval \([A, B]\), which is known in advance. Let us define the truncated expert \(\tilde{\mathcal {E}}_{\theta }\), which at step t outputs:

$$\begin{aligned} \tilde{\xi }_t (\theta ) := {\left\{ \begin{array}{ll} A, &{} \text {if } x_t^\prime \theta < A,\\ x_t^\prime \theta , &{} \text {if } A \le x_t^\prime \theta \le B,\\ B, &{} \text {if } x_t^\prime \theta > B. \end{array}\right. } \end{aligned}$$
(8)

Let us denote by \(\tilde{L}^{\theta }_T\) the cumulative loss of expert \(\tilde{\mathcal {E}}_{\theta }\) at step T:

$$\begin{aligned} \tilde{L}^{\theta }_T := \sum _{t=1}^T \lambda (y_t, \tilde{\xi }_t (\theta )). \end{aligned}$$
(9)

We apply the WAA to the truncated experts \(\tilde{\mathcal {E}}_{\theta }\). As the experts \(\tilde{\mathcal {E}}_{\theta }\) output predictions inside the interval \([A, B]\) and the prediction of the WAA is a weighted average (5) of the experts' predictions, each \(\gamma _t\) lies in the interval \([A, B]\).

We can bound the maximum loss at each time step:

$$\begin{aligned} L:= \max _{y \in [A,B],~\gamma \in [A, B]} \lambda (y, \gamma ) \le (B-A) \max (q, 1-q) \le B-A. \end{aligned}$$
(10)

Applying Lemma 2 with the initial distribution (7) and using the bound (10) on the loss, we obtain:

$$\begin{aligned} L_T \le \sqrt{T} \left( -\ln \left( \left( \frac{a}{2} \right) ^n \int _{\mathbb {R}^n} e^{- \tilde{J}(\theta )} d\theta \right) + (B-A)^2 \right) , \end{aligned}$$
(11)

where

$$\begin{aligned} \tilde{J}(\theta ) := \frac{\tilde{L}_T^{\theta }}{\sqrt{T}} + a \Vert \theta \Vert _1. \end{aligned}$$
(12)

For all \(\theta , \theta _0 \in \mathbb {R}^n\) we have:

$$\begin{aligned} \sum _{\begin{array}{c} t = 1, \dots , T:\\ y_t< x_t^\prime \theta \end{array}} |x_t^\prime \theta - y_t| \le \sum _{\begin{array}{c} t = 1, \dots , T:\\ {y_t < x_t^\prime \theta } \end{array}} |x_t^\prime \theta _0 - y_t| + T \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } \Vert \theta - \theta _0\Vert _1. \end{aligned}$$
(13)

Analogously, we have:

$$\begin{aligned} \sum _{\begin{array}{c} t = 1, \dots , T:\\ y_t> x_t^\prime \theta \end{array}} |x_t^\prime \theta - y_t| \le \sum _{\begin{array}{c} t = 1, \dots , T:\\ y_t > x_t^\prime \theta \end{array}} |x_t^\prime \theta _0 - y_t| + T \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } \Vert \theta - \theta _0\Vert _1. \end{aligned}$$
(14)

By multiplying inequality (13) by \((1-q)\), inequality (14) by q and summing them, we have:

$$\begin{aligned} L_T^{\theta } \le L_T^{\theta _0} + T \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } \Vert \theta - \theta _0\Vert _1. \end{aligned}$$
(15)

The cumulative loss of truncated expert \(\tilde{\mathcal {E}}_{\theta }\) cannot exceed the cumulative loss of non-truncated expert \(\mathcal {E}_{\theta }\) for all \(\theta \in \mathbb {R}^n\):

$$\begin{aligned} \tilde{L}_T^{\theta } \le L_T^{\theta } . \end{aligned}$$

By dividing (15) by \(\sqrt{T}\), adding \(a \Vert \theta \Vert _1\) to both parts and using \(\Vert \theta \Vert _1 \le \Vert \theta _0\Vert _1 + \Vert \theta - \theta _0\Vert _1\) together with \(\tilde{L}_T^{\theta } \le L_T^{\theta }\), we have:

$$\begin{aligned} \tilde{J}(\theta ) \le J(\theta ) \le J(\theta _0) + \left( \sqrt{T} \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } + a \right) \Vert \theta - \theta _0\Vert _1, \end{aligned}$$

where

$$\begin{aligned} J(\theta ) := \frac{L_T^{\theta }}{\sqrt{T}} + a \Vert \theta \Vert _1. \end{aligned}$$

Let us denote \(b_T =\sqrt{T} \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } + a\). We evaluate the integral:

$$\begin{aligned} \int _{\mathbb {R}^n} e^{-\tilde{J}(\theta )} d\theta \ge e^{-J(\theta _0)} \int _{\mathbb {R}^n} e^{-b_T \Vert \theta - \theta _0\Vert _1} d\theta = e^{-J(\theta _0)} \left( \frac{2}{b_T} \right) ^n. \end{aligned}$$

By putting this expression in (11) we obtain the theoretical bound.
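In more detail (our spelled-out step), the last equality uses the factorisation \(\int _{\mathbb {R}^n} e^{-b_T \Vert \theta \Vert _1} d\theta = \left( \int _{-\infty }^{+\infty } e^{-b_T |u|} du \right) ^n = (2/b_T)^n\), and substituting the lower bound into (11) with \(b_T / a = 1 + \sqrt{T} \max _{t} \Vert x_t\Vert _{\infty } / a\) gives

$$\begin{aligned} -\ln \left( \left( \frac{a}{2} \right) ^n e^{-J(\theta _0)} \left( \frac{2}{b_T} \right) ^n \right) = J(\theta _0) + n \ln \frac{b_T}{a} = \frac{L_T^{\theta _0}}{\sqrt{T}} + a \Vert \theta _0\Vert _1 + n \ln \left( 1 + \frac{\sqrt{T}}{a} \max _{t=1,\dots ,T} \Vert x_t\Vert _{\infty } \right) , \end{aligned}$$

which, multiplied by \(\sqrt{T}\) and combined with the term \((B-A)^2\), yields the bound of Theorem 1 with \(\theta _0\) playing the role of \(\theta \).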

Note that even though we apply the WAA to the truncated experts (8), we achieve the theoretical bound for a prediction strategy that competes with the class of experts (6).

5 Prediction Strategy

The prediction of the WAA (5) can be rewritten as follows:

$$\begin{aligned} \gamma _T = \int _{\varTheta } \tilde{\xi }_T(\theta ) w_{T-1}^{*}(\theta ) d\theta , \end{aligned}$$
(16)

where

$$\begin{aligned} w_{T}^{*}(\theta ) = \frac{w_T(\theta )}{Z}, \qquad w_T(\theta ) = \exp \left( -\frac{\tilde{L}_{T}^{\theta }}{\sqrt{T}} - a \Vert \theta \Vert _1 \right) , \end{aligned}$$
(17)

and Z is the normalising constant ensuring that \(\int _\varTheta w_T^{*}(\theta )d\theta = 1\).

Integral (16) is a Bayesian mixture, where the function \(\tilde{\xi }_T(\theta )\) is integrated with respect to the normalised distribution \(w_T^{*} (\theta )\). It is possible to avoid the computation of the normalising constant Z, which is computationally expensive, and to integrate \(\tilde{\xi }_T(\theta )\) with respect to the unnormalised distribution \(w_T (\theta )\). The integral (16) can be calculated using MCMC algorithms. A good introduction to MCMC for machine learning can be found in [2].

We use the Metropolis-Hastings algorithm to sample parameters \(\theta \) from the posterior distribution \(\mathcal {P}\). As the proposal distribution we choose the Gaussian distribution \(\mathcal {N}(0, \sigma ^2)\) with some chosen parameter \(\sigma \). We start with some initial parameter \(\theta ^0\) and at each step m we update:

$$\begin{aligned} \theta ^m = \theta ^{m-1} + \mathcal {N} (0, \sigma ^2),~m=1,\dots , M, \end{aligned}$$

where M is the maximum number of iterations of the MCMC method.

The updated parameter \(\theta ^m\) at step m is accepted with probability \(\min \left( 1, \frac{f_{\mathcal {P}}(\theta ^m)}{f_{\mathcal {P}}(\theta ^{m-1})} \right) \), where \(f_{\mathcal {P}}(\theta )\) is the density function of the distribution \(\mathcal {P}\) at point \(\theta \). By accepting and rejecting the updates of the parameters \(\theta \) at each step, we move closer to the maximum of the density function. At the beginning it is common to use a 'burn-in' stage, during which the integral is not calculated, until we reach the area of high values of the density function \(f_{\mathcal {P}}\). Thus, we perform the integration only over the area of high density of \(\mathcal {P}\). Some values of \(\theta \) are accepted even when the calculated ratio is less than 1, which allows the algorithm to move away from local maxima of the density function. Because we are interested only in the ratio of the density functions of the generated parameters, we can generate new parameters \(\theta \) from the unnormalised posterior distribution \(w_T (\theta )\) and avoid normalising the weights at each step, which is more computationally efficient.

At time \(t=0\) the algorithm starts with the initial estimate of the parameters \(\theta _0 = 0\). At each step \(t > 0\) we start with the parameter \(\theta _{t-1}^M\) calculated at the previous step \(t-1\). This allows the algorithm to converge faster to the correct location of the main mass of the distribution.

(Algorithm: WAAQR — the Metropolis-Hastings procedure described above, applied at every time step to produce the prediction (16).)
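To make the procedure concrete, here is a minimal sketch (ours, not the paper's pseudo-code) of one WAAQR prediction step, assuming the truncation interval \([A, B]\), the unnormalised log-weight \(\ln w_t(\theta ) = -\tilde{L}_t^{\theta }/\sqrt{t} - a\Vert \theta \Vert _1\) as in (17), Gaussian proposals and a warm start from the previous step; the exact indexing and normalisation may differ from the published algorithm.

```python
import numpy as np

def pinball(y, gamma, q):
    return np.where(y < gamma, (1 - q) * (gamma - y), q * (y - gamma))

def log_weight(theta, X, y, q, a, t, A, B):
    """Unnormalised log-weight of expert theta: -tilde{L}_t^theta / sqrt(t) - a * ||theta||_1."""
    preds = np.clip(X @ theta, A, B)                  # truncated expert (8)
    return -pinball(y, preds, q).sum() / np.sqrt(t) - a * np.abs(theta).sum()

def waaqr_predict(theta_prev, X_past, y_past, x_new, q, a, A, B, sigma, M, M0, rng):
    """One WAAQR prediction by Metropolis-Hastings: average the clipped predictions
    x_new' theta over the theta sampled (after burn-in) from the unnormalised posterior."""
    t = max(len(y_past), 1)
    theta = theta_prev.copy()                         # warm start from the previous time step
    logw = log_weight(theta, X_past, y_past, q, a, t, A, B)
    preds = []
    for m in range(M):
        proposal = theta + rng.normal(0.0, sigma, size=theta.shape)   # Gaussian proposal
        logw_prop = log_weight(proposal, X_past, y_past, q, a, t, A, B)
        if np.log(rng.uniform()) < logw_prop - logw:  # accept with probability min(1, ratio)
            theta, logw = proposal, logw_prop
        if m >= M0:                                   # discard the 'burn-in' samples
            preds.append(np.clip(x_new @ theta, A, B))
    return float(np.mean(preds)), theta               # theta is reused at the next time step
```

The acceptance ratio discussed in Sect. 6.2 can be tracked by counting accepted proposals inside the loop.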

6 Experiments

In this section we apply the WAA and WAAQR to the prediction of wind and solar power and compare their performance with other predictive models. The data set is downloaded from Open Power System Data, which provides a free and open data platform for power system modelling. The platform contains hourly measurements of geographically aggregated weather data across Europe and time series of wind and solar power. Our training data are measurements in Austria from January to December 2015; the test set contains data from January to July 2016.

6.1 WAA

We apply the WAA to three models: Quantile Regression (QR), Quantile Random Forests (QRF) and Gradient Boosting Decision Trees (GBDT). These models were used in the GEFCom 2014 energy forecasting competition by a team on the final leaderboard [9]. The authors of [9] argue that using multiple regressors is often better than using only one, and therefore combine multiple model outputs. They note that voting was found to be particularly useful for averaging the quantile forecasts of different models.

We propose an alternative approach to combining the predictions of different models by using the WAA. We work according to Protocol 1: at each step t, before seeing the outcome \(y_t\), we output our prediction \(\gamma _t\) according to (5). After observing the outcome \(y_t\), we update the experts' weights according to (4).
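A self-contained sketch of this online loop (ours; `model_preds` and `y` are hypothetical arrays holding the three models' quantile forecasts and the observed power on the test set):

```python
import numpy as np

def pinball(y, gamma, q):
    return np.where(y < gamma, (1 - q) * (gamma - y), q * (y - gamma))

def run_waa(model_preds, y, q, c):
    """Online WAA over a finite pool: model_preds has shape (T, N), y has shape (T,)."""
    T, N = model_preds.shape
    cum_losses = np.zeros(N)                       # equal initial weights <=> zero initial losses
    total_loss = 0.0
    for t in range(1, T + 1):
        # weights update (4); shifting by the minimum loss keeps the exponentials stable
        w = np.exp(-c * (cum_losses - cum_losses.min()) / np.sqrt(t))
        w /= w.sum()
        gamma_t = float(w @ model_preds[t - 1])    # prediction (5): weighted average
        total_loss += float(pinball(y[t - 1], gamma_t, q))
        cum_losses += pinball(y[t - 1], model_preds[t - 1], q)
    return total_loss
```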

To build the models for wind power forecasting we use wind speed and temperature as explanatory variables. These variables have been extensively used to produce wind power quantile forecasts [7]. We train the three models QR, QRF and GBDT on the training data set, and then apply the WAA to the forecasts of these models on the test data set. We start with equal initial weights for each model and then update the weights according to the models' current performance. We estimate the WAA constant \(c = 0.01\) using information about the maximum losses on the training set.

Figure 1 shows the weights of each model for different quantiles depending on the current time step. We can see from the graph that for most quantiles GBDT obtains the largest weights, which indicates that it suffers smaller losses than the other models. However, the picture changes for \(q = 0.95\), where the largest weights are acquired by QR. This shows that sometimes we cannot use past information to identify the best model. The retrospectively best model can perform worse in the future as the underlying nature of the data generation can change. In addition, different models can perform better on different quantiles.

Table 1 shows the total losses of the QR, QRF, GBDT, WAA and Average methods, where Average is a simple average of QR, QRF and GBDT. For the prediction of wind power, for \(q = 0.25\) and \(q = 0.50\) the total loss of the WAA is slightly higher than the total loss of GBDT, whereas for \(q = 0.75\) and \(q = 0.95\) the WAA has the smallest loss. In most cases, the WAA outperforms the Average method.

We perform similar experiments for the prediction of solar power. We choose measurements of direct and diffuse radiation as our explanatory variables. In a similar way, QR, QRF and GBDT are trained on the training set, and the WAA is applied on the test data. Figure 2 shows the weights of the models depending on the current step. In contrast to the previous experiments, GBDT has smaller weights than the other models for \(q = 0.25\) and \(q = 0.50\). However, for \(q = 0.75\) and \(q = 0.95\) the weights of the experts become very close to each other; therefore, the predictions of the WAA should become close to the Average method. Table 1 shows the total losses of the methods. For \(q = 0.25\) and \(q = 0.5\) both QR and QRF have small losses compared to GBDT, and the WAA follows their predictions. However, for \(q = 0.75\) and \(q = 0.95\) it is not clear which model performs better, and the predictions of the WAA almost coincide with the Average method. This again illustrates that the retrospectively best model can change with time, and one should be cautious about choosing the single retrospectively best model for future forecasts.

Table 1. Total losses (\(\times 10^3\))
Fig. 1. Weights update for wind power

6.2 WAAQR

In this section we demonstrate the performance of our algorithm for the prediction of wind power and compare it with the quantile regression model. We train QR on the training data set and apply WAAQR on the test set. First, we use the training set to choose the parameters of our algorithm. Table 2 shows the acceptance ratio of newly sampled parameters of our algorithm for \(q = 0.5\). Increasing values of \(\sigma \) result in decreasing acceptance ratios of the newly sampled parameters \(\theta \). With large values of \(\sigma \) we move faster to the area of high values of the density function, while smaller values of \(\sigma \) can lead to more expensive computations, as the algorithm requires more iterations to find the optimal parameters. Figure 3 shows the logarithm of the parameters' likelihood \(w(\theta )\) defined in (17) for \(a = 0.1\) and \(\sigma = 0.5\) and 3.0. We can see from the graphs that for \(\sigma = 3.0\) the algorithm reaches the maximum value of the log-likelihood after around 800 iterations, while for \(\sigma = 0.5\) it is still searching for the maximum after 1500 iterations. Table 2 also shows the total losses of WAAQR for different parameters a and \(\sigma \). We can see that choosing the right parameters is very important, as it notably affects the performance of WAAQR. It is important to keep track of the acceptance ratio of the algorithm, as a high acceptance ratio means that we move too slowly and need more iterations and a longer 'burn-in' period to find the optimal parameters.

Now we compare the performance of our algorithm and QR. We choose the parameters of WAAQR to be the number of iterations \(M = 1500\), 'burn-in' stage \(M_0 = 300\), regularisation parameter \(a = 0.1\) and standard deviation \(\sigma = 3\). Note that even though we use prior knowledge to choose the parameters of WAAQR, we start with the initial \(\theta _0 = 0\) and train our algorithm only on the test set. Figure 4 shows the difference between the cumulative losses of QR and WAAQR. If the difference is greater than zero, our algorithm shows better results than QR. For \(q = 0.25\) WAAQR performs better at the beginning, but after around 1000 steps its performance becomes worse, and by the end of the period the cumulative losses of QR and WAAQR are almost the same. We observe a different picture for \(q = 0.5\) and \(q = 0.75\): most of the time the difference between the cumulative losses is positive, which indicates that WAAQR performs better than QR.

Fig. 2. Weights update for solar power

Figure 5 shows the predictions of WAAQR and QR with the \([25\%, 75\%]\) confidence interval for the first and last 100 steps. We can see from the graph that initially the predictions of WAAQR are very different from the predictions of QR. However, by the end of the period the predictions of both methods become very close to each other.

One of the disadvantages of WAAQR is that it might perform much worse with non-optimal input parameters: the regularisation a and the standard deviation \(\sigma \). If no prior knowledge is available, one can start with some reasonable values of the input parameters and keep track of the acceptance ratio of the newly generated \(\theta \). If the acceptance ratio is too high, it might indicate that the algorithm moves too slowly towards the area of high values of the probability density of \(\theta \), and the standard deviation \(\sigma \) should be increased. Another option is to take a very large number of steps and a longer 'burn-in' period.

Table 2. Acceptance ratio (AR) and total losses of WAAQR on training set
Fig. 3. Log-likelihood of parameters for \(a = 0.1\).

Fig. 4. Cumulative loss difference between QR and WAAQR

Fig. 5. Predictions with \([25\%, 75\%]\) confidence interval for WAAQR and QR

7 Conclusions

We proposed two ways of applying the framework of prediction with expert advice to the problem of probabilistic forecasting of renewable energy. The first approach is to apply the WAA to a finite number of models and combine their predictions by updating the weight of each model online based on its performance. Experimental results show that the WAA performs as well as or better than the best model in terms of the cumulative pinball loss. It also outperforms the simple average of the models' predictions. With this approach we show that it is reasonable to apply the WAA to the prediction of quantiles.

Second, we proposed a new competitive online algorithm, WAAQR, which combines the predictions of an infinite pool of quantile regressions. We derived a theoretical bound which guarantees that WAAQR asymptotically performs as well as any quantile regression up to an additive term of the form \(C \sqrt{T} \ln T\). Experimental results show that WAAQR can outperform the best quantile regression model trained on the historical data.