Social learning is almost as good as centralized detection with slight global knowledge
Journal
2020 IEEE Information Theory Workshop, ITW 2020
Date Issued
2021
Author(s)
Huang, Y.-C.
Abstract
Fundamental limits on the error probability of the social learning rule proposed by Lalitha et al. [1] and its variants for decentralized detection over a directed graph are investigated. We show that while social learning algorithms enjoy the benefits that each node in the graph locally weights the messages received from its neighbors to form its private belief and only requires knowledge of the data-generating distributions of its own observation, they suffer a gap in the achievable error exponent compared to the centralized case. We propose a generalization of the social learning rule that achieves the error exponent of centralized detection with the aid of slight global knowledge. A further analysis reveals that the price of decentralization is at most a constant term in the higher-order asymptotics. To obtain the slight global knowledge needed at each node for achieving the centralized error exponent, we develop a decentralized estimation method that allows each node to form a local estimate of that piece of global knowledge. An extended version of this paper is available at: http://homepage.ntu.edu.tw/~ihwang/Eprint/itw20soclrn.pdf © 2021 IEEE
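For readers unfamiliar with the rule being analyzed, the sketch below illustrates the log-linear (geometric-averaging) belief update typical of the social learning rule of Lalitha et al. [1]: each node performs a local Bayesian update using only its own observation model and then combines its neighbors' beliefs with local weights. The graph, weight matrix, and likelihoods here are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np

# Minimal sketch of a log-linear social learning update (in the spirit of
# Lalitha et al. [1]). The 3-node graph, weight matrix W, and observation
# likelihoods are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

n_nodes, n_hyp = 3, 2
# Row-stochastic weight matrix: W[i, j] is the weight node i puts on node j.
W = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# lik[i, k] = distribution of node i's binary observation under hypothesis k.
# Each node only needs the distributions of its own observation.
lik = np.array([[[0.7, 0.3], [0.4, 0.6]],
                [[0.6, 0.4], [0.5, 0.5]],
                [[0.8, 0.2], [0.3, 0.7]]])  # shape (node, hypothesis, symbol)

beliefs = np.full((n_nodes, n_hyp), 1.0 / n_hyp)  # uniform initial beliefs
true_hyp = 0

for t in range(200):
    # Each node draws a private observation from its true distribution.
    obs = [rng.choice(2, p=lik[i, true_hyp]) for i in range(n_nodes)]
    # Local Bayesian update on the private observation.
    local = beliefs * np.array([lik[i, :, obs[i]] for i in range(n_nodes)])
    local /= local.sum(axis=1, keepdims=True)
    # Consensus step: geometric (log-linear) averaging of neighbors' beliefs.
    beliefs = np.exp(W @ np.log(local))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs)  # beliefs concentrate on the true hypothesis as t grows
```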
Subjects
Directed graphs
Errors
Graph algorithms
Information theory
Centralized detection
Decentralized detection
Decentralized estimation
Error probabilities
Global knowledge
Higher order asymptotics
Social learning
Learning algorithms
Type
conference paper
