NLP is in a period of disruptive change that is impacting our methodologies, funding sources, and public perception. In this work, we seek to understand how to shape our future by better understanding our past. We study factors that shape NLP as a field, including culture, incentives, and infrastructure, by conducting long-form interviews with 26 NLP researchers of varying seniority, research areas, institutions, and social identities. Our interviewees identify cyclical patterns in the field, as well as new shifts without historical parallel, including changes in benchmark culture and software infrastructure. We complement this discussion with quantitative analysis of citation, authorship, and language use in the ACL Anthology over time. We conclude by discussing shared visions, concerns, and hopes for the future of NLP. We hope that this study of our field's past and present can prompt informed discussion of our community's implicit norms and more deliberate action to consciously shape the future.
Recent criticisms of AI ethics principles and practices have indicated a need for new approaches to AI ethics that can account for and intervene in the design, development, use, and governance of AI systems across multiple actors, contexts, and scales of activity. This paper positions AI value chains as an integrative concept that satisfies those needs, enabling AI ethics researchers, practitioners, and policymakers to take a more comprehensive view of the ethical and practical implications of AI systems. We review and synthesize theoretical perspectives on value chains from the literature on strategic management, service science, and economic geography. We then review perspectives on AI value chains from the academic, industry, and policy literature. We connect an inventory of ethical concerns in AI to the actors and resourcing activities involved in AI value chains to demonstrate that approaching AI ethics issues as value chain issues can enable more comprehensive and integrative research and governance practices. We illustrate this by suggesting five future directions for researchers, practitioners, and policymakers to investigate and intervene in the ethical concerns associated with AI value chains.
Here, I ask what we can learn about how gender affects how people engage with robots. I review 46 empirical studies of social robots, published in 2018 or earlier, that report on the gender of their participants or the perceived or intended gender of the robot, or both, and that perform some analysis with respect to either participant or robot gender. From these studies, I find that robots are by default perceived as male, that robots absorb human gender stereotypes, and that men tend to engage with robots more than women do. I highlight open questions about how such gender effects may differ in younger participants, and whether one should seek to match the gender of the robot to the gender of the participant to ensure positive interaction outcomes. I conclude by suggesting that future research should: include gender-diverse participant pools, include non-binary participants, rely on self-identification for discerning gender rather than researcher perception, control for known covariates of gender, test for different study outcomes with respect to gender, and test whether the robot used was perceived as gendered by participants. I include an appendix with a narrative summary of gender-relevant findings from each of the 46 papers to aid future literature reviews.