Abstract: Scientific recommender systems, such as Google Scholar and Web of Science, are essential tools for discovery. The search algorithms that power them work through stigmergy, a collective intelligence mechanism that surfaces useful paths through repeated engagement. While generally effective, this ``rich-get-richer'' dynamic results in a small number of high-profile papers dominating visibility. This essay argues that these algorithms' over-reliance on popularity fosters intellectual homogeneity and exacerbates structural inequities, stifling the innovative and diverse perspectives critical for scientific progress. We propose an overhaul of search platforms to incorporate user-specific calibration, allowing researchers to manually adjust the weights of factors like popularity, recency, and relevance. We also advise platform developers on how word embeddings and LLMs could be implemented in ways that increase user autonomy. While our suggestions are particularly pertinent to aligning recommender systems with scientific values, these ideas are broadly applicable to information access systems in general. Designing platforms that increase user autonomy is an important step toward more robust and dynamic information ecosystems.
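To make the calibration proposal concrete, here is a minimal sketch assuming a simple linear combination of normalized ranking signals. The Paper fields, weight names, and sample data are hypothetical illustrations, not any platform's actual API.

    # A minimal sketch of user-specific calibration: the user, not the platform,
    # sets the weights on the ranking signals. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        popularity: float  # e.g., normalized citation count, in [0, 1]
        recency: float     # e.g., normalized by publication year, in [0, 1]
        relevance: float   # e.g., query-document similarity, in [0, 1]

    def calibrated_score(paper: Paper, weights: dict) -> float:
        """Combine ranking signals using the user's chosen weights."""
        return (weights["popularity"] * paper.popularity
                + weights["recency"] * paper.recency
                + weights["relevance"] * paper.relevance)

    papers = [
        Paper("Highly cited classic", popularity=0.9, recency=0.2, relevance=0.5),
        Paper("Recent niche result", popularity=0.1, recency=0.8, relevance=0.9),
    ]

    # A user who wants to escape the rich-get-richer loop can down-weight popularity:
    weights = {"popularity": 0.1, "recency": 0.3, "relevance": 0.6}
    for p in sorted(papers, key=lambda p: calibrated_score(p, weights), reverse=True):
        print(p.title, round(calibrated_score(p, weights), 2))

Under these weights the recent niche result (0.79) outranks the highly cited classic (0.45); a popularity-dominated default would invert that ordering.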
Abstract: Shannon's information entropy measures the uncertainty of an event's outcome. If learning about a system reflects a decrease in uncertainty, then a plausible intuition is that learning should be accompanied by a decrease in the entropy of the organism's actions and/or perceptual states. To test whether this intuition is valid, I examined an artificial organism -- a simple robot -- that learned to navigate in an arena, and analyzed the entropy of the outcome variables action, state, and reward. Entropy did indeed decrease in the initial stages of learning, but two factors complicated the scenario: (1) the introduction of new options discovered during the learning process and (2) the shifting patterns of perceptual and environmental states resulting from changes to the robot's learned movement strategies. These factors led to a subsequent increase in entropy as the agent learned. I end with a discussion of the utility of information-based characterizations of learning.
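For readers unfamiliar with the measure, a minimal sketch of the quantity being tracked: the empirical Shannon entropy $H(X) = -\sum_x p(x)\log_2 p(x)$ of an outcome variable, illustrated here on an invented action log (the action names and counts are hypothetical, not the robot's actual data).

    # Empirical Shannon entropy (in bits) of a sequence of outcomes.
    import math
    from collections import Counter

    def empirical_entropy(samples):
        """H = -sum p(x) log2 p(x), estimated from sample frequencies."""
        counts = Counter(samples)
        n = len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Early in learning, an agent explores uniformly (maximal uncertainty) ...
    early_actions = ["left", "right", "forward", "back"] * 2
    # ... later it concentrates on a learned strategy (lower uncertainty),
    # until newly discovered options push entropy back up.
    late_actions = ["forward", "forward", "forward", "left", "forward", "forward"]

    print(empirical_entropy(early_actions))  # 2.0 bits: uniform over 4 actions
    print(empirical_entropy(late_actions))   # ~0.65 bits: concentrated on one action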