COMPUTATIONAL SOCIAL SCIENCE

Department of Computational Social Science Seminar - Cotla

Friday, September 27, 3:00 p.m.
Center for Social Complexity Suite
Research Hall, Third Floor

Learning in Linear Public Goods Games: A Comparative Analysis

Chenna Reddy Cotla, CSS PhD Candidate
Department of Computational Social Science
George Mason University

ABSTRACT: This paper examines learning in repeated linear public goods games. Experimental data from previously published studies are used to test several learning models in terms of how accurately they describe individuals' round-by-round choices. In total, 18 datasets are considered; each differs from the others in at least one of the following respects: marginal per capita return, group size, matching protocol, number of rounds, and endowment, which determines the number of stage-game strategies. Both the ex post descriptive power of the learning models and their ex ante predictive power are examined. Descriptive power is assessed by comparing mean quadratic scores computed for each dataset with parameters estimated from all datasets, while predictive power is assessed by comparing mean quadratic scores computed for each dataset with parameters estimated from the remaining datasets. The following models of individual-level adaptive behavior are considered: reinforcement learning, normalized reinforcement learning, stochastic fictitious play, normalized stochastic fictitious play, experience-weighted attraction learning (EWA), self-tuning EWA, individual evolutionary learning, and impulse-matching learning. In addition to these prominent models, the paper introduces a new learning model: experience-weighted attraction learning with inertia and experimentation (EWAIE). The main result is that EWAIE outperforms the other learning models in capturing individuals' round-by-round choices in repeated linear public goods games. Furthermore, while all the learning models outperform a random-choice benchmark, only EWA and EWAIE outperform the empirical choice frequencies in predicting behavior, indicating that these two models adjust their individual-level predictions more accurately over time.
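
For readers unfamiliar with the scoring rule referenced above, a minimal sketch follows. It assumes the common Brier-type normalization of the quadratic scoring rule; the abstract does not specify the exact form used in the paper, and the function and variable names below are illustrative only.

    import numpy as np

    def quadratic_score(predicted_probs, chosen_index):
        # predicted_probs: a learning model's probability distribution over the
        # stage-game strategies (e.g., contribution levels 0..endowment) for one
        # subject in one round.
        # chosen_index: the strategy the subject actually chose in that round.
        p = np.asarray(predicted_probs, dtype=float)
        actual = np.zeros_like(p)
        actual[chosen_index] = 1.0
        # Brier-type quadratic score: 1 - sum_j (p_j - I_j)^2, which lies in [-1, 1];
        # higher values mean the prediction was closer to the observed choice.
        return 1.0 - np.sum((p - actual) ** 2)

A dataset's mean quadratic score would then be the average of this per-round score over all subjects and rounds, computed once with parameters estimated from all datasets (descriptive power) and once with parameters estimated from the remaining datasets (predictive power).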