A Guide to Classical Time Series Models

Author: Andrew McQueen

Time series forecasting has applications in many disciplines. From business and finance to public health, meteorology, and other scientific fields, the target of the analysis often depends on time. This post, the second in a three-part series, covers some of the classical statistical models used for time series forecasting. Offered as a machine learning service, time series forecasting provides data-backed insights to help businesses make informed decisions. For simplicity’s sake, the use of exogenous (i.e., external) variables will not be covered.

Models

Though not a comprehensive list, here are some classical models I have personally found useful when working with time series:

  • Naïve
  • Seasonal Naïve
  • Exponential Smoothing
    • Single
    • Double
    • Triple
  • Autoregressive
  • Moving Average
  • Autoregressive Integrated Moving Average (ARIMA)

Naïve

Beginning with the simplest model, naïve forecasting does not take trend or seasonality into account. It is useful as a baseline and for cases in which we might not see a large difference from one period to the next. A naïve model simply predicts that the next observation will be the same as the previous one. While meteorologists have more advanced methodologies, predicting tomorrow’s temperature is a common example of a naïve forecast. Without using information contained in exogenous variables, trends, or seasonality, a fair assumption may be that if today’s temperature is 50 degrees Fahrenheit, tomorrow’s will likely be the same. Because the model contains no other information, it won’t pick up on any patterns in the historical data.
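As a minimal sketch of the idea (the temperature values and the pandas usage here are purely illustrative):

```python
import pandas as pd

# Hypothetical daily temperatures in degrees Fahrenheit.
temps = pd.Series([48.0, 51.0, 49.0, 52.0, 50.0],
                  index=pd.date_range("2024-01-01", periods=5, freq="D"))

# Naive forecast: every future value equals the most recent observation.
naive_forecast = temps.iloc[-1]
print(naive_forecast)  # 50.0
```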

Seasonal Naïve

Seasonality plays an important role in time series forecasting, and the seasonal naïve model is our first step toward models that use more information from the historical data. The seasonal naïve model indexes the previous season to forecast the next one. A simple example is the sales of any good that is more common during a particular part of the year. A grocer’s full-sized turkey sales likely show a seasonal effect: an increase in November, followed by a drop after the holidays.
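A minimal sketch of the idea, using hypothetical monthly sales numbers:

```python
import pandas as pd

# Hypothetical monthly turkey sales (units) over two years,
# with a spike each November.
pattern = [20, 18, 19, 21, 22, 24, 23, 25, 30, 45, 120, 60]
sales = pd.Series(pattern * 2,
                  index=pd.date_range("2022-01-01", periods=24, freq="MS"))

# Seasonal naive: next year's forecast repeats the last observed season.
m = 12  # seasonal period
forecast = pd.Series(sales.iloc[-m:].to_numpy(),
                     index=pd.date_range("2024-01-01", periods=m, freq="MS"))
print(forecast["2024-11-01"])  # 120, matching last November's spike
```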

Information that this model notably misses is macro trends, i.e., changes that persist across seasons. In our example, we miss the underlying change in the price of the good, so we need to be aware of any relevant factors the model does not use.

Exponential Smoothing

Exponential smoothing is a family of models with a few variations. We start with single (or simple) exponential smoothing (SES), which is essentially a weighted moving average. While historical observations receive equal weights in a moving average, the SES model assigns weights based on recency, so recent data is weighted more heavily than older data. The model’s parameter is the smoothing constant, which controls how quickly the weights assigned to historical data decay. SES models are good for short-term forecasts.
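A minimal sketch using statsmodels (the smoothing constant of 0.3 is arbitrary here; in practice it is usually estimated from the data):

```python
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Hypothetical series with no clear trend or seasonality.
y = pd.Series([50.2, 52.1, 49.8, 51.3, 50.6, 51.0, 50.4, 51.8],
              index=pd.date_range("2024-01-01", periods=8, freq="D"))

# A smoothing constant near 1 weights recent observations heavily;
# near 0, the forecast behaves more like a long flat average.
fit = SimpleExpSmoothing(y).fit(smoothing_level=0.3, optimized=False)
print(fit.forecast(3))  # SES produces a flat short-term forecast
```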

Double exponential smoothing, or Holt’s method, extends simple exponential smoothing to account for the presence of a trend. Accounting for the trend adds a second smoothing constant to be estimated. The default trend is linear, meaning forecasts maintain a constant change over time; damping methods exist to relax that assumption and are useful when we expect the target variable to flatten out in the future. Holt’s method is a reasonable approach when the historical data contains a trend but no seasonality.
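A minimal sketch, again with statsmodels (the damped_trend keyword assumes a reasonably recent release):

```python
import pandas as pd
from statsmodels.tsa.holtwinters import Holt

# Hypothetical monthly series with an upward trend, no seasonality.
y = pd.Series([100, 104, 109, 115, 118, 124, 130, 133, 139, 144],
              index=pd.date_range("2024-01-01", periods=10, freq="MS"))

# damped_trend=True flattens the projected trend over longer horizons.
fit = Holt(y, damped_trend=True).fit()
print(fit.forecast(6))
```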

Triple exponential smoothing, or Holt-Winters’ seasonal method, accounts for both trend and seasonality. Holt-Winters’ has additive and multiplicative methods, which handle two kinds of seasonal variation. The additive method is preferred when the seasonal variation is roughly constant over time, while the multiplicative method is preferred when it grows with the level of the series. Like Holt’s method, Holt-Winters’ can use damping to control the trend of a forecast. The method is useful when the historical data contains both trend and seasonality, a common combination, so this model tends to be an option for a variety of use cases.
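A minimal sketch with statsmodels’ ExponentialSmoothing, on simulated data whose seasonal swings grow with the level (hence the multiplicative setting):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Simulated monthly series: upward trend with growing seasonal swings.
rng = np.random.default_rng(0)
t = np.arange(48)
values = (100 + 2 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 12))
y = pd.Series(values + rng.normal(0, 2, 48),
              index=pd.date_range("2020-01-01", periods=48, freq="MS"))

# seasonal="mul" suits swings that grow with the level;
# use seasonal="add" when the swings stay roughly constant.
fit = ExponentialSmoothing(y, trend="add", seasonal="mul",
                           seasonal_periods=12).fit()
print(fit.forecast(12))
```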

Looking over these methods, you can start to see that deciding on a model will depend on the nature of the historical data. Decomposing the time series helps in understanding its trend and seasonality, and ultimately in choosing a model.
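For example, statsmodels can decompose a series into trend, seasonal, and residual components (the series below is simulated purely for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Simulated monthly series with a linear trend and yearly seasonality.
t = np.arange(60)
y = pd.Series(100 + t + 10 * np.sin(2 * np.pi * t / 12),
              index=pd.date_range("2019-01-01", periods=60, freq="MS"))

# period=12 assumes yearly seasonality in monthly data.
result = seasonal_decompose(y, model="additive", period=12)
result.plot()  # panels for observed, trend, seasonal, and residual
plt.show()
```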

Autoregression

Autoregressive models depend on the previous (i.e., lagged) observed values of our target (dependent) variable. The parameter, p, is the order of the autoregression, or the number of previous values on which the current value is based. This is one of the main components used in ARIMA models.
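A minimal sketch: simulate an AR(2) process and recover its coefficients with statsmodels (the coefficients 0.6 and -0.3 are arbitrary choices):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# AR(p): y_t = c + phi_1 * y_{t-1} + ... + phi_p * y_{t-p} + noise.
rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# lags=2 sets the order p; the fit estimates c, phi_1, and phi_2.
fit = AutoReg(y, lags=2).fit()
print(fit.params)
```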

This is our first encounter with stationarity. A stationary time series does not exhibit a trend or seasonality and has constant variance over time. Autoregressive models assume the series is stationary, which is why the concept matters here.

Autocorrelation function (ACF) and partial autocorrelation function (PACF) plots can be used to determine the order of the autoregression.
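For instance, on the simulated AR(2) series from the sketch above, the PACF should cut off after lag 2 while the ACF tails off (a rough heuristic, not a guarantee):

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Same simulated AR(2) series as in the previous sketch.
rng = np.random.default_rng(1)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# For AR(p), the PACF typically cuts off after lag p while the ACF decays;
# the pattern reverses for MA(q) models.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])
plot_pacf(y, lags=20, ax=axes[1])
plt.show()
```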

Moving Average

The moving average model depends on historical data in a different way: it forecasts future values based on the errors of previous forecasts. Like autoregressive models, moving average models have one parameter, q, which is again the order of the model. Changing this parameter changes how many previous forecast errors a forecast depends on. ACF and PACF plots are also useful for determining the order of moving average models.
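A minimal sketch: an MA(q) model can be fit as an ARIMA model with p = 0 and d = 0 (the theta value of 0.7 is an arbitrary simulation choice):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# MA(q): y_t = mu + eps_t + theta_1 * eps_{t-1} + ... + theta_q * eps_{t-q}.
rng = np.random.default_rng(2)
eps = rng.normal(size=201)
y = 5 + eps[1:] + 0.7 * eps[:-1]  # simulated MA(1) around a mean of 5

# order=(0, 0, 1) fits an MA(1); q controls how many error lags are used.
fit = ARIMA(y, order=(0, 0, 1)).fit()
print(fit.params)  # estimated mean, theta_1, and noise variance
```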

Autoregressive Integrated Moving Average (ARIMA)

ARIMA is a versatile model with three components. Combining the autoregressive and moving average models gives an ARMA(p, q) model, which applies to stationary series. The three components of ARIMA are:

  • AR: autoregression
  • I: differencing
  • MA: moving average

The model can be further extended to account for seasonality, which adds a seasonal component, and for exogenous variables, which add one or more other time series as predictors. For a nonseasonal ARIMA model, there are three parameters (see the sketch after this list):

  • p: the order of AR
  • d: the order of differencing
  • q: the order of the MA
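As a minimal fitting sketch (the order (1, 1, 1) is illustrative, not a recommendation):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical trending series: a random walk with drift.
rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(loc=0.5, scale=1.0, size=120))

# p=1 autoregressive lag, d=1 difference to remove the trend, q=1 error lag.
fit = ARIMA(y, order=(1, 1, 1)).fit()
print(fit.forecast(steps=5))
```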

Starting with a time series, we first need to check whether it is stationary. This can be done with ACF and PACF plots and with a unit root test, commonly the Augmented Dickey-Fuller (ADF) test, whose null hypothesis is that a unit root exists in the time series (i.e., that it is non-stationary). Differencing and other transformations are used to make a non-stationary time series stationary. Recall that a stationary series has moments (mean and variance) that are constant over time: differencing helps stabilize the mean, while other transformations help stabilize the variance. After transformations are applied, we recheck the ACF and PACF plots and the unit root test statistic. To determine the orders of the autoregressive and moving average components, we can use the ACF and PACF plots once more, or run a grid search to minimize Akaike’s Information Criterion (AIC) or the Bayesian Information Criterion (BIC).
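A minimal sketch of that workflow, assuming statsmodels: run the ADF test, difference if needed, then grid-search small orders by AIC.

```python
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

# Hypothetical trending (non-stationary) series.
rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(loc=0.2, size=150))

# ADF null hypothesis: a unit root exists (the series is non-stationary).
# A small p-value rejects the null; otherwise, difference the series.
p_value = adfuller(y)[1]
d = 0 if p_value < 0.05 else 1

# Grid search over small AR and MA orders, minimizing AIC.
best_aic, best_order = float("inf"), None
for p, q in itertools.product(range(3), repeat=2):
    try:
        aic = ARIMA(y, order=(p, d, q)).fit().aic
    except Exception:
        continue  # skip orders that fail to converge
    if aic < best_aic:
        best_aic, best_order = aic, (p, d, q)

print(best_order, round(best_aic, 1))
```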

ARIMA is a versatile option because adjustments to the autoregressive and moving average parameters fit a variety of time series, while the differencing component helps in ensuring stationarity.

Conclusion

In conclusion, time series forecasting using classical models provides a robust framework for analyzing and predicting data with a temporal component. Throughout this blog, we’ve delved into various models, from the simplest like naïve to more complex ones like ARIMA.

Each model offers its own insights and applicability depending on the data at hand. Naïve forecasting can serve as a baseline or suit noisy data without much of a trend, while the seasonal naïve model incorporates seasonality and simple exponential smoothing weights recent observations more heavily. Double and triple exponential smoothing extend the family by adding a trend component and then a seasonal one. Autoregressive models introduce the concept of lagged variables, helping to better capture dependencies within the historical data.

Finally, ARIMA is a versatile and comprehensive model, integrating autoregressive, moving average, and differencing components to handle a wide range of time series characteristics. Through ACF and PACF plots, along with techniques like grid search for parameter optimization, ARIMA enables robust forecasting even in the presence of non-stationarity.

The choice of model depends on the specific characteristics of the data, so model selection through experimentation leads us toward more accurate and reliable forecasts for the given historical data. Classical time series models enable analysts and practitioners to gain valuable insights into historical trends and make informed decisions for the future.

Elevate your time series forecasting with Xorbix Technologies. Our classical models unlock valuable insights for informed decision-making. Contact us today to optimize your forecasting strategies!
