Efficient and Secure μ-Training and μ-Fine-Tuning for TinyML Optimization and Personalization at the Edge
Huang, Z.; Yu, L.; Herbozo Contreras, L. F.; Kavehei, O.
This study presents a novel, computationally efficient training framework demonstrated through bio-signal processing on edge medical devices. The approach integrates conventional full training with an innovative μ-Training technique, wherein the encoder and decoder of a compact model remain frozen while only the middle layer is updated. This design is further enhanced by a novel Future-Guided Self-Distillation mechanism that leverages the model's anticipated future state during training to boost performance and improve generalization on unseen data, using electrocardiogram (ECG) signals as the primary case study. Additionally, μ-Fine-Tuning facilitates on-device adaptation under resource-constrained conditions. We validate our framework using in-sample data from the Telehealth Network of Minas Gerais (TNMG) and out-of-sample testing on the China Physiological Signal Challenge 2018 (CPSC) dataset. Experimental results demonstrate that our integrated strategy (combining full training, self-distilled μ-Training, and μ-Fine-Tuning) consistently matches or surpasses conventional methods while significantly improving computational efficiency and mitigating catastrophic forgetting. Deployment on Radxa Zero hardware underscores the approach's practical applicability and scalability. Moreover, a demonstration incorporating the proposed self-distilled μ-Training into standard training procedures reveals performance improvements. This highlights the technique's potential for broader applications beyond medical diagnostics and TinyML systems, paving the way for its integration into existing training mechanisms to elevate overall model performance.
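To make the μ-Training idea concrete, the following is a minimal PyTorch sketch of the freezing scheme the abstract describes: the encoder and decoder parameters are frozen and only the middle layer receives gradient updates. The architecture (layer sizes, class count, lead count) is purely illustrative and not the authors' actual model, and the Future-Guided Self-Distillation loss is omitted because the abstract does not specify its exact form.

import torch
import torch.nn as nn

# Hypothetical compact encoder-middle-decoder network for multi-lead
# ECG classification; dimensions are illustrative assumptions only.
class CompactECGNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.middle = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # x: (batch, n_leads, signal_length)
        return self.decoder(self.middle(self.encoder(x)))

model = CompactECGNet()

# u-Training step 1: freeze the encoder and decoder so their weights
# are excluded from gradient computation and updates.
for module in (model.encoder, model.decoder):
    for p in module.parameters():
        p.requires_grad = False

# u-Training step 2: optimize only the parameters that remain
# trainable, i.e. the middle layer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

Because only the middle layer's gradients and optimizer state are kept, the per-step memory and compute footprint shrinks substantially, which is what makes this scheme plausible for on-device μ-Fine-Tuning on hardware such as the Radxa Zero.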