Abstract: We analyze a simple algorithm for network embedding, explicitly characterizing the conditions under which the learned representation encodes the graph's generative model fully, partially, or not at all. In cases where the embedding loses information (i.e., is not invertible), we describe the equivalence classes of graphons that map to the same embedding, finding that these classes preserve community structure but lose substantial density information. Finally, we show implications for community detection and link prediction. Our results suggest strong limitations on the effectiveness of link prediction based on embeddings alone, and we identify common conditions under which naive link prediction adds edges disproportionately across communities, in a way that can either mitigate or exacerbate structural biases.
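As a minimal sketch of the kind of pipeline this abstract refers to (the block model, embedding rank, and all parameters below are our own assumptions, not the paper's), the following Python snippet samples a two-block stochastic block model, which is a piecewise-constant graphon, computes a rank-2 spectral embedding of the adjacency matrix, and scores non-edges by embedding inner products. It then counts how many of the top-scoring predicted edges fall inside the denser community.

```python
# Hypothetical illustration only: spectral embedding of an SBM sample and
# naive dot-product link prediction. Model and parameters are our choices.
import numpy as np
import networkx as nx

# Two-community SBM (a piecewise-constant graphon); community 0 is denser.
sizes = [100, 100]
probs = [[0.10, 0.02],
         [0.02, 0.05]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
A = nx.to_numpy_array(G)

# Rank-2 spectral embedding: top-2 eigenpairs by magnitude, scaled by
# the square roots of the eigenvalues.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[-2:]
X = vecs[:, top] * np.sqrt(np.abs(vals[top]))

# Naive link prediction: score every non-edge by the embedding inner
# product and take the 200 highest-scoring pairs.
scores = X @ X.T
iu = np.triu_indices(A.shape[0], k=1)
nonedge = A[iu] == 0
best = np.argsort(scores[iu][nonedge])[::-1][:200]
rows, cols = iu[0][nonedge][best], iu[1][nonedge][best]

# How many predicted edges land inside the denser community 0?
in_dense = np.sum((rows < 100) & (cols < 100))
print(f"{in_dense}/200 predicted edges fall within the denser community")
```

In runs of this sketch, most top-scoring non-edges tend to fall inside the denser block, so repeatedly adding predicted edges would densify it further; this is one concrete instance of the disproportionate edge addition the abstract describes.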
Abstract: In this paper, we investigate the conditions under which link analysis algorithms prevent minority groups from reaching high-ranking slots. We find that the most common link-based algorithms using centrality metrics, such as PageRank and HITS, can reproduce and even amplify bias against minority groups in networks. Yet their behavior differs: on the one hand, we show empirically that PageRank mirrors the degree distribution for most ranking positions and can equalize the representation of minorities among the top-ranked nodes; on the other hand, through a novel theoretical analysis supported by empirical results, we find that HITS amplifies pre-existing bias in homophilic networks. We identify the root cause of bias amplification in HITS as the level of homophily present in the network, which we model through an evolving network model with two communities. We illustrate our theoretical analysis on both synthetic and real datasets, and we present directions for future work.
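As an illustrative sketch (the graph model, group sizes, and homophily parameters below are our own choices, not the paper's evolving network model), the following Python snippet builds a homophilic two-community graph with a 20% minority and compares the minority's share of the top-ranked nodes under networkx's PageRank and HITS implementations.

```python
# Hypothetical illustration only: minority representation in the top ranks
# under PageRank vs. HITS on a homophilic two-community network.
import networkx as nx

# Homophilic SBM: 20% minority, dense within groups, sparse across groups.
sizes = [160, 40]                     # majority, minority
probs = [[0.08, 0.005],
         [0.005, 0.08]]
G = nx.stochastic_block_model(sizes, probs, seed=1)
minority = set(range(160, 200))

def minority_share_at_top(ranking, k=20):
    """Fraction of minority nodes among the k highest-ranked nodes."""
    top = sorted(ranking, key=ranking.get, reverse=True)[:k]
    return sum(node in minority for node in top) / k

pr = nx.pagerank(G)
hubs, auth = nx.hits(G)

print("minority fraction overall :", len(minority) / G.number_of_nodes())
print("top-20 share by PageRank  :", minority_share_at_top(pr))
print("top-20 share by HITS auth.:", minority_share_at_top(auth))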