Apologies for cross-posting, but I thought this might interest people here.
The abstract and a link to the full paper are below.
Machine Learning (ML) methods have been proposed in the academic literature
as alternatives to statistical ones for time series forecasting. Yet, scant
evidence is available about their relative performance in terms of accuracy
and computational requirements. The purpose of this paper is to evaluate
such performance across multiple forecasting horizons using a large subset
of 1045 monthly time series used in the M3 Competition. After comparing the
post-sample accuracy of popular ML methods with that of eight traditional
statistical ones, we found that the former are dominated by the latter across both
accuracy measures used and for all forecasting horizons examined. Moreover,
we observed that their computational requirements are considerably greater
than those of statistical methods. The paper discusses the results,
explains why the accuracy of ML models falls below that of statistical ones,
and proposes possible ways forward. The empirical results of our research
stress the need for objective and unbiased ways of testing the performance of
forecasting methods, which can be achieved through sizable, open competitions
that allow meaningful comparisons and definite conclusions.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0194889
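For anyone curious what "post-sample accuracy across multiple forecasting horizons" looks like in practice, here is a minimal sketch. The abstract doesn't spell out the protocol, so this is an assumption on my part, not the paper's code: it uses sMAPE (a standard M-competition accuracy measure) and the 18-step horizon used for M3 monthly series, holding out the last 18 observations of each series and averaging the error across series at each horizon. The `naive` benchmark and the toy data are just placeholders for illustration.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE (%), a standard accuracy measure in the M-competitions."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(a - f) / (np.abs(a) + np.abs(f)))

def per_horizon_smape(series_list, forecaster, max_h=18):
    """Hold out the last max_h points of each series, forecast them from the
    remaining history, and average sMAPE across series at each horizon."""
    scores = np.zeros((len(series_list), max_h))
    for i, y in enumerate(series_list):
        train, test = y[:-max_h], y[-max_h:]
        fc = forecaster(train, max_h)          # length-max_h forecast path
        for h in range(max_h):                 # score each step separately
            scores[i, h] = smape(test[h:h+1], fc[h:h+1])
    return scores.mean(axis=0)                 # mean sMAPE per horizon 1..max_h

# Hypothetical benchmark: repeat the last observed value (naive forecast)
def naive(train, h):
    return np.repeat(train[-1], h)

# Toy data standing in for the real monthly series
rng = np.random.default_rng(0)
toy = [100 + np.cumsum(rng.normal(size=60)) for _ in range(5)]
print(per_horizon_smape(toy, naive, max_h=18).round(2))
```

Plugging an ML model and a statistical model into the `forecaster` slot and comparing the resulting per-horizon curves is, in spirit, the kind of comparison the paper reports.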