Ahmed Salem, Theodoros Salonidis, et al.
MASS 2017
Deep learning techniques achieve state-of-the-art performance on many computer vision tasks, e.g., large-scale object recognition. In this paper we show that recognition accuracy degrades in daily mobile scenarios due to context variations caused by different locations, times of day, etc. To solve this problem, we present DeepCham, the first adaptive mobile object recognition framework that allows deep learning techniques to be used successfully in mobile environments. Specifically, DeepCham is mediated by an edge master server that coordinates with participating mobile users to collaboratively train a domain-aware adaptation model, which yields much better object recognition accuracy when used together with a domain-constrained deep model. DeepCham generates high-quality domain-aware training instances for adaptation from in-situ mobile photos in two major steps: (i) a distributed algorithm that identifies qualifying images stored on each mobile device for training, and (ii) a user labeling process for recognizable objects identified in qualifying images, using suggestions automatically generated by a generic deep model. Using a newly collected dataset of smartphone images taken at different locations, at different times of day, and with different device types, we show that DeepCham improves object recognition accuracy by 150% compared to using a generic deep model alone. In addition, we investigate how major design factors affect DeepCham's performance. Finally, we demonstrate the feasibility of DeepCham with an implemented prototype.
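The labeling step (ii) above can be sketched as follows: a generic deep model's class scores for a photo become top-k label suggestions that the user confirms or rejects, and confirmed pairs become training instances for the adaptation model. This is a minimal illustration under assumed semantics; the function names (`top_k_suggestions`, `build_training_instance`) and score values are hypothetical, not from the paper.

```python
# Hedged sketch of DeepCham's training-instance generation (step ii).
# The generic model's per-class scores are assumed to be given as a dict.

def top_k_suggestions(scores, k=3):
    """Return the k highest-scoring labels as suggestions for the user."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:k]]

def build_training_instance(image_id, scores, user_choice, k=3):
    """Pair an image with a user-confirmed label, or skip it on rejection."""
    suggestions = top_k_suggestions(scores, k)
    if user_choice in suggestions:
        return {"image": image_id, "label": user_choice}
    return None  # user rejected all suggestions; image is not used

# Example: generic-model scores for one in-situ photo (made-up numbers).
scores = {"mug": 0.61, "cup": 0.22, "bowl": 0.09, "plate": 0.05}
print(top_k_suggestions(scores))                        # ['mug', 'cup', 'bowl']
print(build_training_instance("img_001", scores, "cup"))
```

The user's confirmation, rather than the model's top prediction, supplies the label, which is how in-situ photos become domain-aware training data even when the generic model's best guess is wrong.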
Nirmit Desai, Anuradha Bhamidipaty, et al.
SCC 2010
Nirmit Desai, Pietro Mazzoleni, et al.
DEST 2007
Bongjun Ko, Kin K. Leung, et al.
SPIE Defense + Security 2018