Abstract : Autonomous driving relies heavily on deep learning to comprehend the surroundings and activities of road systems. Learning models are traditionally trained offline and then used during driving. However, recent research on federated learning has enabled distributed deep learning for model adaptation with new data inputs from end users. Similarly, recent research on continual learning has enabled upgrading a learned model with newer rounds of training without losing previously acquired knowledge. For autonomous and/or connected vehicles, this means it is possible to take new, possibly real-time, data inputs from various sensors on multiple vehicles in the vicinity to train and update the preloaded models. The updated models can improve safety and reduce human involvement when driving through unfamiliar situations. This paper tackles one important issue, namely, model data dissemination for enabling such distributed learning. Model data dissemination consists of the steps of soliciting workers, transmitting the model, and collecting the updates. In a mixed autonomous-vehicle and connected-vehicle network scenario, communication overhead and network dynamics are the major challenges. We present analyses of critical latency issues pertaining to the various aspects of model data dissemination to gain insights into the feasibility of federated learning in such a scenario. We also introduce a communication architecture and a publish-subscribe system for model data dissemination. Our system is built on an information-centric networking paradigm over a tiered edge network architecture. Such a system organizes the steps of model data dissemination clearly and easily manages the dynamics of the participating vehicles. The results show that the presented system reduces overall communication overhead and delay, and provides high resiliency to packet losses in the mixed wireless connected and autonomous vehicular network.
Index terms : Autonomous vehicles, connected vehicles, continual learning, distributed learning, federated learning, model distribution, named data networking, synchronization groups.