Social Foundations of Computation

At Social Foundations of Computation, we build scientific foundations for machine learning and artificial intelligence in the social world. To chart and implement a society’s norms and expectations, we start from concepts and work our way towards applications. Challenging existing problem formulations when necessary, we think through how the use of machine learning distributes societal resources and opportunity. Computational tools to critically evaluate, and possibly contest, algorithmic systems and their impacts are a key component of our work. Our ultimate goal is to promote a positive role of artificial intelligence in society.

Developed as a tool for statistical pattern recognition, machine learning now reconfigures virtually every aspect of the social world. Statistical algorithms support or replace human judgment across state bureaucracies and public institutions. Predictive systems drive online engagement and steer digital markets. Massive machine learning models exhibit social skills that seem to defy the mundane way they are created: by fitting billions of parameters to heaps of data. Yet machine learning as a field is woefully unprepared for the sheer societal impact of its creations. The field proceeds by little more than trial and error; what a machine learning model does is hard to specify and rarely by design. But the problem runs deeper. The field lacks not only the technical tools but also the conceptual repertoire to meet society’s norms and expectations.

Ensuring a positive role of artificial intelligence in society therefore starts at the foundations of the field. New concepts, definitions, and theories must make room for the dynamic character of the social world that defies the old “astronomical conception” of pattern recognition. Predictions in the social world are always actions that change the course of events. This fundamental fact has catalyzed the emerging field of performative prediction, a formulation of machine learning in which the model can influence the data. This departure from statistical tradition is necessary to capture real-world systems that drive behavior through predictions. Performative prediction rewrites the rules of the game: there are now two ways to be good at prediction, learning and steering. Learning is the classical way a platform can discover and target consumer preferences. Steering is unique to performative prediction; it allows a platform to push consumption towards a distribution more favorable to its objectives.
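
To make the distinction concrete, the following is a minimal formal sketch; the notation is illustrative rather than drawn from the description above. Write $D(\theta)$ for the data distribution induced by deploying a model $\theta$ and $\ell(z;\theta)$ for the loss on a data point $z$. The classical risk and the performative risk are then

\[
R(\theta) \;=\; \mathbb{E}_{z \sim D}\,\ell(z;\theta)
\qquad\text{and}\qquad
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim D(\theta)}\,\ell(z;\theta).
\]

Decoupling the two roles of the model, as in $\mathrm{DPR}(\theta, \theta') = \mathbb{E}_{z \sim D(\theta)}\,\ell(z;\theta')$, separates the two routes to low performative risk: learning improves the second argument by fitting whatever distribution the deployment induces, while steering changes the first argument to induce a distribution on which low loss is easier to achieve.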

The notion of steering illuminates important concerns about digital systems. A greater ability to steer indicates a greater degree of power that a platform holds in a digital economy. This observation motivates a new definition of power, called performative power, that addresses challenges in the study of competition in digital markets. Power imbalances are one threat to well-being in digital economies. We therefore develop algorithms that empower individuals to organize and systematically promote their well-being in algorithmic ecosystems. Normative concerns with machine learning arise wherever firms, institutions, and governments use it to allocate opportunity and scarce resources. Although algorithmic fairness provides a useful lens on algorithmic decisions, it gives only a partial view. Increasingly, we broaden the normative criteria we apply to algorithmic systems by taking the perspective of resource allocation.
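
One stylized way to picture performative power, offered here as a sketch with assumed notation rather than a precise definition, is as the largest average change in participant data that a firm can cause by unilaterally changing its own action:

\[
\mathrm{P} \;=\; \sup_{a \in \mathcal{A}} \; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\,\big\| z_i - z_i(a) \big\|,
\]

where $z_i$ denotes the data of participant $i$ under the status quo, $z_i(a)$ the counterfactual data after the firm switches to an action $a$ from its available set $\mathcal{A}$. A firm that cannot move participant behavior at all has zero performative power, whereas a platform that can reshape consumption at will has a great deal; this is what makes such a quantity useful for reasoning about competition in digital markets.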

A challenge that cuts across every aspect of our work is that complex machine learning systems are notoriously difficult to evaluate reliably. Establishing valid empirical facts and measurements about these systems remains an open problem. Evaluation is all the more difficult when the model is part of a larger ecosystem, be it a digital market, a social network, or an online platform. Here the model necessarily operates at scale, making it difficult to extrapolate from experiments that involve only a small fraction of participants.

When it comes to evaluating model capabilities, benchmarks have served machine learning research well for decades. But the paradigm is now reaching its limits. A flurry of new benchmarks struggles to keep up as large models advance across a rapidly expanding range of tasks. Investing in the emerging science of machine learning benchmarks and evaluation is therefore an important part of ensuring a positive role of artificial intelligence in society.