Friday, April 10, 3:00 p.m.
Center for Social Complexity Suite
Research Hall, Third Floor

Growing Firms from the Bottom Up: Data, Theories, and Models

Rob Axtell, Professor/Chair
Department of Computational Social Science
George Mason University

ABSTRACT: We are living through revolutionary times in the social sciences. There are orders of magnitude more DATA available today than even a decade ago. We have built up significant experience using EXPERIMENTS to figure out how to animate the agents in our models. And continued progress in COMPUTER hardware has made large-scale agent models possible. The combination of these independent developments makes it possible to build large-scale, data-driven models of human social phenomena with realistic behavior at the agent level. I shall describe my recent efforts to do this for the population of U.S. firms, about which extensive data have become available over the past decade. Some 120 million people work in the American private sector, in approximately 6 million firms. Some 10,000 firms are publicly traded. One firm has more than a million employees. The universe of U.S. firms possesses very peculiar distributions of size, age, growth rate, productivity, lifetime, employment tenure, turnover, and so on, and when we make each of these conditional on the others (e.g., growth rates conditional on firm size, or turnover conditional on firm age) we end up with a wealth of empirical 'targets' that a satisfactory model must reproduce. I will describe a reasonably simple model in which agents contribute a variable amount of effort to their firm in a team production setting, moving from one firm to another when it is in their self-interest, or starting up a new firm if better opportunities cannot be found. It will be shown that the parameters of this model can be adjusted to reproduce most of what is known about the empirical structure of U.S. firms. The model is realized at 'full scale' with the real population--120 million agents--on a single machine. I will suggest that working with models at full scale is often easier than working with smaller models, because many of the outputs we care about scale in non-trivial ways with the size of the system.
I will also explain why both cloud-type compute architectures and GPUs are ineffective for agent models in which there are dense interactions among the agents, and why large shared-memory, multi-core systems work best. Lastly, the model presented will demonstrate that, with sufficiently rich behavioral specifications, agents can operate far from equilibrium (endogenous dynamics), yet macro steady states can arise and be readily compared to data. This is in contrast to much modern economic theorizing, in which equilibrium is claimed to obtain at the agent level, so that the only source of dynamics is external (exogenous) to the economy.

This material forms the basis of a book manuscript, currently in production.