Chi Harold Liu
FedViT
Federated continual learning of vision transformer at edge
Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of having to continually retrain ...
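The continual-retraining challenge described above is, at its core, catastrophic forgetting. Below is a rough, hypothetical sketch (toy logistic regression, not the FedViT method): naively retraining on a new task erases most of what the model learned on the old one.

```python
import numpy as np

# Toy demonstration of catastrophic forgetting under naive sequential retraining.
rng = np.random.default_rng(6)

def make_task(feature):
    """Two binary tasks that differ only in which feature decides the label."""
    x = rng.normal(size=(300, 2))
    y = (x[:, feature] > 0).astype(int)
    return x, y

def train(w, x, y, lr=0.5, steps=300):
    """Plain logistic-regression gradient descent."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w = w - lr * x.T @ (p - y) / len(y)
    return w

def acc(w, x, y):
    return (((x @ w) > 0).astype(int) == y).mean()

task_a, task_b = make_task(0), make_task(1)
w = train(np.zeros(2), *task_a)
print("after task A: acc on A =", round(acc(w, *task_a), 2))
w = train(w, *task_b)                      # naive retraining on the new task
print("after task B: acc on A =", round(acc(w, *task_a), 2),
      "| acc on B =", round(acc(w, *task_b), 2))
```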
ElasticDNN
On-Device Neural Network Remodeling for Adapting Evolving Vision Domains at Edge
Executing deep neural network (DNN) based vision tasks on edge devices encounters challenging scenarios of significant and continually evolving data domains (e.g., background or subpopulation shift). With limited resources, state-of-the-art domain adaptation (DA) methods either ...
Edge-cloud applications have been rapidly prevailing in recent years, posing the challenge of using both resource-strenuous edge devices and elastic cloud resources under dynamic workloads. Efficient resource allocation for edge-cloud jobs via cluster schedulers (e.g., Kubernetes/Volcano) ...
EdgeVisionBench
A Benchmark of Evolving Input Domains for Vision Applications at Edge
Vision applications powered by deep neural networks (DNNs) are widely deployed on edge devices and must solve learning tasks over incoming data streams whose class labels and input features continuously evolve, a phenomenon known as domain shift. Despite its prominent presence in real-world edge ...
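To make "evolving input domains" concrete, here is a hypothetical toy sketch (not the EdgeVisionBench API): a linear probe fitted on the first domain of a stream degrades as later domains drift away from it.

```python
import numpy as np

# Toy stream whose feature distribution shifts over time ("domain shift").
rng = np.random.default_rng(0)

def make_domain(shift):
    """Two-class data whose feature mean drifts by `shift` between domains."""
    x = rng.normal(loc=shift, scale=1.0, size=(200, 8))
    y = (x.sum(axis=1) > shift * 8).astype(int)
    return x, y

def accuracy(w, x, y):
    return (((x @ w) > 0).astype(int) == y).mean()

# Fit a linear probe on the first domain only, then watch accuracy degrade
# as the stream moves to shifted domains -- the failure mode the benchmark targets.
x0, y0 = make_domain(shift=0.0)
w = np.linalg.lstsq(x0, 2.0 * y0 - 1.0, rcond=None)[0]

for t, shift in enumerate([0.0, 0.5, 1.0, 2.0]):
    xt, yt = make_domain(shift)
    print(f"domain {t} (shift={shift}): accuracy = {accuracy(w, xt, yt):.2f}")
```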
FedKNOW
Federated Continual Learning with Signature Task Knowledge Integration at Edge
Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of having to continually retrain ...
EdgeTuner
Fast Scheduling Algorithm Tuning for Dynamic Edge-Cloud Workloads and Resources
Edge-cloud jobs are rapidly prevailing in many application domains, posing the challenge of using both resource-strenuous edge devices and elastic cloud resources. Efficient resource allocation for such jobs via scheduling algorithms is essential to guarantee their performance ...
With the massive amount of data generated by mobile devices and the increasing computing power of edge devices, the paradigm of Federated Learning has gained great momentum. In federated learning, distributed and heterogeneous nodes collaborate to learn model parameters. ...
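A minimal sketch of the standard federated-averaging loop this snippet alludes to: nodes train locally on their private, heterogeneous data and a server averages the resulting parameters. This is generic FedAvg on a toy linear model, not the specific method of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, x, y, lr=0.1, steps=10):
    """A few local gradient steps on one node's private data (linear regression)."""
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

# Each node's data comes from a slightly different distribution (heterogeneity).
true_w = np.array([1.0, -2.0, 0.5])
nodes = []
for k in range(5):
    x = rng.normal(loc=0.2 * k, scale=1.0, size=(50, 3))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    nodes.append((x, y))

w_global = np.zeros(3)
for _ in range(20):
    # Every node trains locally from the current global model ...
    local_models = [local_sgd(w_global.copy(), x, y) for x, y in nodes]
    # ... and the server averages the results (equal data sizes -> plain mean).
    w_global = np.mean(local_models, axis=0)

print("recovered weights:", np.round(w_global, 2))   # close to true_w
```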
Deep neural networks (DNNs) have shown significant success in various anomaly detection applications such as smart surveillance and industrial quality control. It is increasingly important to detect anomalies directly on edge devices because of the high responsiveness ...
Deep learning (DL) models are increasingly built on federated edge participants holding local data. To enable insight extraction without the risk of information leakage, DL training is usually combined with differential privacy (DP). The core theme is to trade off learning accuracy ...
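For illustration, a sketch of the common DP-SGD recipe (per-example gradient clipping plus Gaussian noise), which makes the accuracy/privacy trade-off visible: more noise means stronger privacy but larger parameter error. The concrete mechanism in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=0.0):
    """One DP-SGD step on a toy least-squares problem."""
    grads = []
    for xi, yi in zip(X, y):                      # per-example gradients
        g = 2 * xi * (xi @ w - yi)
        norm = np.linalg.norm(g)
        if norm > clip:                           # clip each example's gradient
            g = g * (clip / norm)
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    g_sum = g_sum + rng.normal(scale=noise_mult * clip, size=w.shape)  # DP noise
    return w - lr * g_sum / len(y)

true_w = np.array([0.5, -1.0])
X = rng.normal(size=(50, 2))
y = X @ true_w

for noise in [0.0, 2.0, 10.0]:                    # larger multiplier = more privacy
    w = np.zeros(2)
    for _ in range(300):
        w = dp_sgd_step(w, X, y, noise_mult=noise)
    print(f"noise multiplier {noise}: parameter error = {np.linalg.norm(w - true_w):.3f}")
```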
With the exponential growth of data created at the network edge, decentralized, gossip-based training of deep learning (DL) models on edge computing (EC) has gained tremendous research momentum, owing to its capability to learn from resource-strenuous edge nodes with limited network ...
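A toy gossip-averaging round on a ring topology (a generic decentralized primitive, not the protocol proposed in the paper): each node exchanges parameters only with its two neighbors, yet every local copy converges to the network-wide average without any central server.

```python
import numpy as np

rng = np.random.default_rng(3)

n_nodes = 8
models = rng.normal(size=(n_nodes, 4))       # each node holds its own parameter vector
target = models.mean(axis=0)                 # what a centralized average would give

for step in range(30):
    new_models = models.copy()
    for i in range(n_nodes):
        left, right = (i - 1) % n_nodes, (i + 1) % n_nodes
        # Mix only with the two ring neighbors (sparse, peer-to-peer communication).
        new_models[i] = (models[i] + models[left] + models[right]) / 3.0
    models = new_models
    if step % 10 == 0:
        spread = np.abs(models - target).max()
        print(f"step {step}: max deviation from the global average = {spread:.4f}")
```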
The core of many large-scale machine learning (ML) applications, such as neural networks (NNs), support vector machines (SVMs), and convolutional neural networks (CNNs), is the training algorithm that iteratively updates model parameters by processing massive datasets. From a plethora ...
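The "iteratively update model parameters while processing a massive dataset" pattern is, in its simplest form, a mini-batch SGD loop. A generic sketch on a toy regression problem (not the system described in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

true_w = rng.normal(size=10)
X = rng.normal(size=(5000, 10))              # stand-in for a large dataset
y = X @ true_w + rng.normal(scale=0.1, size=5000)

w = np.zeros(10)
lr, batch = 0.05, 64
for epoch in range(3):
    for start in range(0, len(X), batch):    # stream over the data in mini-batches
        xb, yb = X[start:start + batch], y[start:start + batch]
        grad = 2 * xb.T @ (xb @ w - yb) / len(yb)
        w -= lr * grad                       # the iterative parameter update
    print(f"epoch {epoch}: parameter error = {np.linalg.norm(w - true_w):.4f}")
```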
Cluster schedulers provide a flexible resource-sharing mechanism for best-effort cloud jobs, which account for the majority of jobs in modern datacenters. Properly tuning a scheduler's configurations is key to these jobs' performance because it determines how resources are allocated among them. ...
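A toy illustration of why a scheduler's configuration matters (hypothetical simulator and knob, not the tuning approach of the paper): sweeping a single knob, the CPU share given to each best-effort job, noticeably changes mean job completion time.

```python
import random

random.seed(5)
jobs = [random.uniform(1.0, 10.0) for _ in range(40)]   # CPU-seconds of work per job
TOTAL_CPUS = 16

def mean_completion_time(cpus_per_job):
    """Greedy list scheduling: each job runs on a fixed CPU share until it finishes."""
    slots = TOTAL_CPUS // cpus_per_job                   # how many jobs run at once
    finish_times, busy_until = [], [0.0] * slots
    for work in jobs:
        slot = min(range(slots), key=lambda s: busy_until[s])
        busy_until[slot] += work / cpus_per_job          # more CPUs -> shorter run
        finish_times.append(busy_until[slot])
    return sum(finish_times) / len(finish_times)

for share in [1, 2, 4, 8]:                               # candidate configurations
    print(f"cpus_per_job={share}: mean completion time = "
          f"{mean_completion_time(share):.1f}s")
```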