Conclusions
We have presented the development and assessment of a tool that issues probabilistic predictions of the number of future falls in a cohort of community-dwelling older subjects. We have trained this model on a dataset that is large in terms of both the number of samples and the number of variables related to mobility and falls, and we have benchmarked it against other risk indicators. Both the trained model and FRAT-up outperformed simple fall risk indicators. Despite the breadth of the dataset and the use of state-of-the-art statistical learning tools, the trained model did not achieve better discriminative ability than FRAT-up. This finding supports the validity of the literature-based approach used to develop FRAT-up. Both the data-driven and the literature-based approach estimate fall risk better than commonly used fall risk indicators.
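As a rough illustration of how such a benchmark of discriminative ability can be set up (not the study's actual procedure), the sketch below compares a probabilistic risk score against a crude single-variable indicator on synthetic data; the outcome labels, the two risk scores, and the use of ROC AUC from scikit-learn as the discrimination metric are all placeholder assumptions.

```python
# Hypothetical sketch: comparing discriminative ability of a probabilistic
# fall-risk score against a simple single-variable indicator.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Synthetic placeholder outcome: 1 = at least one fall during follow-up.
fell = rng.integers(0, 2, size=n)

# Placeholder risk scores: a model-based probability and a crude
# single-variable indicator (e.g. a timed mobility test), both noisy.
model_risk = np.clip(0.5 * fell + rng.normal(0.3, 0.2, size=n), 0, 1)
simple_indicator = 0.2 * fell + rng.normal(0.5, 0.3, size=n)

print("model AUC:    ", roc_auc_score(fell, model_risk))
print("indicator AUC:", roc_auc_score(fell, simple_indicator))
```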
The accuracy-parsimony analysis has shown that predictive accuracy improves as the number of variables increases, up to 20–30 variables. This suggests that fall prediction is more accurate when it is based on multiple fall risk factors and indicators, and that simplistic screening tests relying on only three to six variables are suboptimal in terms of predictive accuracy. Since common risk factors and indicators are already collected in comprehensive geriatric assessments, integrating fall prognostic tools into these assessments could improve prediction without compromising usability. A sketch of how such an accuracy-parsimony curve can be produced follows.
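The following sketch is only an assumed reconstruction of an accuracy-parsimony analysis: it scores models restricted to increasing numbers of variables via cross-validated AUC on synthetic data. The dataset, the univariate feature selector, and the logistic regression classifier are illustrative choices, not the methods reported in the study.

```python
# Hypothetical sketch of an accuracy-parsimony curve: cross-validated AUC
# as a function of the number of predictor variables included.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a fall dataset: 50 candidate risk factors.
X, y = make_classification(n_samples=600, n_features=50,
                           n_informative=25, random_state=0)

for k in (3, 6, 10, 20, 30, 50):
    model = make_pipeline(SelectKBest(f_classif, k=k),
                          LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{k:2d} variables: mean AUC = {auc:.3f}")
```

On data with this structure, the curve typically rises steeply for the first handful of variables and then flattens, which is the pattern the parsimony argument in the text relies on.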