Objective: To demonstrate the feasibility of data-driven models for discovering interpretable movement features relevant to Essential Tremor (ET) vs Cortical Myoclonus (CM) diagnosis using video.
Background: Advances in Artificial Intelligence (AI) have enabled video-based diagnosis of movement disorders versus healthy controls [1], [2], [3], [4], but these models still face challenges in interpretability. Furthermore, only a few have attempted to differentiate between movement disorders [5], [6], [7], [8], and none between ET and CM. While video analysis methods have advanced greatly outside of medicine [9], data constraints [10] and reliance on non-interpretable models [11] make many of them infeasible for clinical use.
Method: Our dataset [12], [13] includes 18 patients diagnosed with CM and 18 patients diagnosed with ET by three movement disorder experts. Video recordings of multiple motor tasks, in clips of 8-30 seconds, are processed with MediaPipe [14] to extract movement coordinates. For this case study, we cut all clips to 8 seconds and focus on finger tapping.
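The abstract does not specify which MediaPipe landmarks or preprocessing were used; the following is a minimal Python sketch, assuming the index-fingertip trajectory from MediaPipe Hands and an assumed frame rate of 30 fps, of how 8-second finger-tapping coordinates could be extracted from a clip.

```python
import cv2
import mediapipe as mp
import numpy as np

FPS = 30           # assumed frame rate; the actual recordings may differ
CLIP_SECONDS = 8   # clips are cut to 8 seconds, as described above

def extract_fingertip_trajectory(video_path: str) -> np.ndarray:
    """Return an (n_frames, 2) array of index-fingertip (x, y) positions."""
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap = cv2.VideoCapture(video_path)
    coords = []
    while cap.isOpened() and len(coords) < FPS * CLIP_SECONDS:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            tip = result.multi_hand_landmarks[0].landmark[
                mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
            coords.append((tip.x, tip.y))
        else:
            coords.append((np.nan, np.nan))  # hand not detected in this frame
    cap.release()
    hands.close()
    return np.asarray(coords)
```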
Two models are evaluated: Random Forest (RF) and XGBoost (XGB) [15]. For both, we extract features by applying TSFRESH [16], generating 10,934 candidate features. This quantity is impractical for interpretation, so we apply two feature selection algorithms: Recursive Feature Elimination and FeatBoost [17]. We validate all models using Leave-One-Out Cross-Validation; their performance is shown in [table1].
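A minimal sketch of this pipeline (TSFRESH feature extraction, Recursive Feature Elimination, and Leave-One-Out evaluation of RF and XGB) is shown below. The long-format dataframe `long_df`, the label series `labels`, and the choice of 10 retained features are illustrative assumptions not specified in the abstract; the FeatBoost variant would replace the RFE step analogously.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute
from xgboost import XGBClassifier

# `long_df` holds the fingertip time series in tsfresh's long format:
# columns = ["patient_id", "time", "x", "y"], one row per video frame.
features = impute(extract_features(long_df, column_id="patient_id", column_sort="time"))
# `labels` is a pandas Series (0 = ET, 1 = CM) indexed by patient_id.

for name, clf in [("RF", RandomForestClassifier(random_state=0)), ("XGB", XGBClassifier())]:
    # Feature selection is nested inside each Leave-One-Out fold to avoid leakage.
    pipe = make_pipeline(
        RFE(RandomForestClassifier(n_estimators=100, random_state=0),
            n_features_to_select=10),
        clf,
    )
    acc = cross_val_score(pipe, features, labels, cv=LeaveOneOut()).mean()
    print(f"{name} + RFE: LOO accuracy = {acc:.2f}")
```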
Results: RF achieves 71% accuracy, compared to 82% for XGB. As shown in [figure1], our approach identifies which features contributed most to each model's decisions. All models selected features capable of separating sporadic from oscillatory movements, matching the clinical presentation of CM and ET, respectively.
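Continuing the sketch above, a feature ranking of the kind shown in [figure1] can be obtained by refitting the pipeline on all patients and reading the classifier's importance scores over the selected features; step names below follow scikit-learn's make_pipeline naming convention and are assumptions about the implementation, not the study's exact code.

```python
import pandas as pd

# Fit on all patients, then rank the features that survived selection.
pipe.fit(features, labels)
rfe_step = pipe.named_steps["rfe"]
clf_step = pipe.named_steps["xgbclassifier"]   # or "randomforestclassifier" for RF
selected = features.columns[rfe_step.support_]
importances = pd.Series(clf_step.feature_importances_, index=selected)
print(importances.sort_values(ascending=False).head(5))
```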
XGB with FeatBoost identifies a single key feature, whether the variance of the signal is larger than its standard deviation, which correctly classifies 30/36 patients. This matches the performance obtained without feature selection while being fully interpretable.
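This feature corresponds to tsfresh's `variance_larger_than_standard_deviation` calculator, a binary check of whether the signal's variance exceeds its standard deviation (equivalently, whether the variance exceeds 1 in the coordinate's units). A one-feature rule built on it could look like the sketch below; the choice of signal channel and the direction of the mapping to ET versus CM are illustrative assumptions, not the study's fitted rule.

```python
import numpy as np
from tsfresh.feature_extraction.feature_calculators import (
    variance_larger_than_standard_deviation,
)

def single_feature_diagnosis(fingertip_y: np.ndarray) -> str:
    """Classify one clip from the single selected binary feature."""
    if variance_larger_than_standard_deviation(fingertip_y):
        return "CM"  # assumption: larger spread interpreted as sporadic jerks
    return "ET"      # assumption: variance <= std interpreted as low-amplitude oscillation
```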
Conclusion: Our model using FeatBoost achieves high performance while remaining interpretable, enabling objective measurement of ET and CM from non-invasive, accessible consumer video. Since the approach is data-driven, future work can extend it to larger datasets and additional movement disorders.
Figure 1
Table 1
References: [1] M. U. Friedrich et al., “Validation and application of computer vision algorithms for video-based tremor analysis,” Npj Digit. Med., vol. 7, no. 1, pp. 1–13, Jun. 2024, doi: 10.1038/s41746-024-01153-1.
[2] X. Wang, S. Garg, S. N. Tran, Q. Bai, and J. Alty, “Hand tremor detection in videos with cluttered background using neural network based approaches,” Health Inf. Sci. Syst., vol. 9, no. 1, p. 30, Jul. 2021, doi: 10.1007/s13755-021-00159-3.
[3] J. Hyppönen et al., “Automatic assessment of the myoclonus severity from videos recorded according to standardized Unified Myoclonus Rating Scale protocol and using human pose and body movement analysis,” Seizure – Eur. J. Epilepsy, vol. 76, pp. 72–78, Mar. 2020, doi: 10.1016/j.seizure.2020.01.014.
[4] H. Zhang, E. S. L. Ho, F. X. Zhang, S. Del Din, and H. P. H. Shum, “Pose-based tremor type and level analysis for Parkinson’s disease from video,” Int. J. Comput. Assist. Radiol. Surg., vol. 19, no. 5, pp. 831–840, 2024, doi: 10.1007/s11548-023-03052-4.
[5] E. Kovalenko et al., “Distinguishing Between Parkinson’s Disease and Essential Tremor Through Video Analytics Using Machine Learning: A Pilot Study,” IEEE Sens. J., vol. 21, no. 10, pp. 11916–11925, May 2021, doi: 10.1109/JSEN.2020.3035240.
[6] H. Haberfehlner et al., “A Novel Video-Based Methodology for Automated Classification of Dystonia and Choreoathetosis in Dyskinetic Cerebral Palsy During a Lower Extremity Task,” Neurorehabil. Neural Repair, vol. 38, no. 7, pp. 479–492, Jul. 2024, doi: 10.1177/15459683241257522.
[7] J. Spann, S. A. Chen, T. Ashizawa, and E. Hoque, “Getting on the Right Foot: Using Observational and Quantitative Methods to Evaluate Movement Disorders,” in Proceedings of the 29th International Conference on Intelligent User Interfaces, in IUI ’24. New York, NY, USA: Association for Computing Machinery, Apr. 2024, pp. 742–749. doi: 10.1145/3640543.3645160.
[8] K. Eguchi et al., “Feasibility of differentiating gait in Parkinson’s disease and spinocerebellar degeneration using a pose estimation algorithm in two-dimensional video,” J. Neurol. Sci., vol. 464, p. 123158, Sep. 2024, doi: 10.1016/j.jns.2024.123158.
[9] Z. Sun, Q. Ke, H. Rahmani, M. Bennamoun, G. Wang, and J. Liu, “Human Action Recognition From Various Data Modalities: A Review,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 3, pp. 3200–3225, Mar. 2023, doi: 10.1109/TPAMI.2022.3183112.
[10] W. Tang, P. M. A. van Ooijen, D. A. Sival, and N. M. Maurits, “Automatic two-dimensional & three-dimensional video analysis with deep learning for movement disorders: A systematic review,” Artif. Intell. Med., vol. 156, p. 102952, Oct. 2024, doi: 10.1016/j.artmed.2024.102952.
[11] J. Amann, A. Blasimme, E. Vayena, D. Frey, V. I. Madai, and the Precise4Q consortium, “Explainability for artificial intelligence in healthcare: a multidisciplinary perspective,” BMC Med. Inform. Decis. Mak., vol. 20, no. 1, p. 310, Nov. 2020, doi: 10.1186/s12911-020-01332-6.
[12] A. M. M. van der Stouwe et al., “Next move in movement disorders (NEMO): developing a computer-aided classification tool for hyperkinetic movement disorders,” BMJ Open, vol. 11, no. 10, p. e055068, Oct. 2021, doi: 10.1136/bmjopen-2021-055068.
[13] J. R. Dalenberg, D. E. Peretti, L. R. Marapin, A. M. M. van der Stouwe, R. J. Renken, and M. A. J. Tijssen, “Next move in movement disorders: neuroimaging protocols for hyperkinetic movement disorders,” Front. Hum. Neurosci., vol. 18, Aug. 2024, doi: 10.3389/fnhum.2024.1406786.
[14] C. Lugaresi et al., “MediaPipe: A Framework for Building Perception Pipelines,” arXiv preprint arXiv:1906.08172, Jun. 2019, doi: 10.48550/arXiv.1906.08172.
[15] T. Chen and C. Guestrin, “XGBoost: A Scalable Tree Boosting System,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2016, pp. 785–794. doi: 10.1145/2939672.2939785.
[16] M. Christ, N. Braun, J. Neuffer, and A. W. Kempa-Liehr, “Time Series FeatuRe Extraction on basis of Scalable Hypothesis tests (tsfresh – A Python package),” Neurocomputing, vol. 307, pp. 72–77, Sep. 2018, doi: 10.1016/j.neucom.2018.03.067.
[17] A. Alsahaf, N. Petkov, V. Shenoy, and G. Azzopardi, “A framework for feature selection through boosting,” Expert Syst. Appl., vol. 187, p. 115895, Jan. 2022, doi: 10.1016/j.eswa.2021.115895.
To cite this abstract in AMA style:
R. Martínez-García-Peña, L. Koens, M. Tijssen, G. Azzopardi. Interpretable Video Analysis for Movement Disorders, A Case Study of Essential Tremor vs. Cortical Myoclonus [abstract]. Mov Disord. 2025; 40 (suppl 1). https://www.mdsabstracts.org/abstract/interpretable-video-analysis-for-movement-disorders-a-case-study-of-essential-tremor-vs-cortical-myoclonus/. Accessed October 5, 2025.