Summary: Five 2-hour lectures demonstrating how simple models of strategic interaction illuminate important topics in the evolution of animal behavior.
Instructor: Richard McElreath
Location: MPI-EVA Leipzig, main lecture hall
Time: Tuesdays 10am-12pm
Dates: 4 Oct, 11 Oct, 1 Nov, 8 Nov, 15 Nov
Audience: Students and researchers at MPI-EVA and iDiv. If there is space, other Leipzig folks welcome.
Specifics: Lectures will be chalk-on-slate and show the construction and solution of the most basic and famous results of evolutionary game theory. I'll provide extensive notes to accompany the lectures, but you'll still want a notebook to copy the board work during class. There will be algebra. But no calculus or advanced math required.
Credit: There will be a single homework problem each week that will help you solidify the lecture material and practice extending it. Students who complete all of the assignments can earn course credit.
Topical outline:
Lecture 1. The evolution of conflict
Lecture 2. The evolution of cooperation
Lecture 3. The evolution of relationships
Lecture 4. The evolution of families
Lecture 5. The evolution of societies
Students can submit homework to me via email or just give me paper in class. You are welcome to work in groups. Just submit your own individual solution. There is one problem each week, listed below.
It doesn't make sense that Dove's display has no fitness cost. If nothing else, it costs time and energy. Let
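For reference, here is a minimal numerical sketch of the baseline Hawk-Dove expected payoffs that this problem asks you to modify. The symbols are placeholders (V for the contested resource, C for the cost of injury); rename them to match the lecture notes.

```python
# Baseline Hawk-Dove expected payoffs (V = value of the resource, C = cost of
# injury; placeholder symbols, rename to match the lecture notes).
# p is the frequency of Hawk in the population.

def hawk_dove_fitness(p, V=2.0, C=3.0):
    """Expected payoff to Hawk and Dove when Hawk has frequency p."""
    w_hawk = p * (V - C) / 2 + (1 - p) * V  # fight a Hawk, or take everything from a Dove
    w_dove = p * 0 + (1 - p) * V / 2        # retreat from a Hawk, split with a Dove
    return w_hawk, w_dove

# The problem asks you to add a fitness cost for Dove's display: decide which
# of the Dove terms above should pay that cost, subtract it, and redo the analysis.
for p in [i / 10 for i in range(11)]:
    w_h, w_d = hawk_dove_fitness(p)
    print(f"p(Hawk) = {p:.1f}  W_Hawk = {w_h:+.3f}  W_Dove = {w_d:+.3f}")
```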
Analyze the evolutionary dynamics of the coordination game from lecture, using the statistical assortment model. In the coordination game, "Safe" earns b when it meets itself and zero otherwise, and "Risky" earns B when it meets itself and zero otherwise. Let p be the proportion of Risky in the population and let r be the probability of assortment. Determine when each strategy is evolutionarily stable and the location of any unstable equilibria.
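As a quick check on your algebra, here is a minimal numerical sketch of the assortment model. It assumes the usual statistical formulation in which an individual meets its own type with probability r and otherwise meets a random member of the population; if the lecture's version differs, adjust the meeting probabilities accordingly.

```python
# Minimal sketch of the statistical assortment model for the coordination game.
# Assumed formulation: with probability r you meet your own type, otherwise you
# meet a random member of the population. p is the frequency of Risky.

def fitness(p, b=1.0, B=2.0, r=0.1):
    """Expected payoffs of Safe and Risky when Risky has frequency p."""
    w_risky = (r + (1 - r) * p) * B        # Risky earns B only against Risky
    w_safe  = (r + (1 - r) * (1 - p)) * b  # Safe earns b only against Safe
    return w_safe, w_risky

# Scan frequencies to see where selection favors each strategy and where the
# two fitnesses cross (the candidate unstable interior equilibrium).
for p in [i / 10 for i in range(11)]:
    w_s, w_r = fitness(p)
    favored = "Risky" if w_r > w_s else "Safe"
    print(f"p(Risky) = {p:.1f}  W_Safe = {w_s:.3f}  W_Risky = {w_r:.3f}  favored: {favored}")
```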
The iterated prisoner's dilemma is often criticized for presenting an overly pessimistic view of the potential for cooperation, because many real contexts are not prisoner's dilemmas. Reanalyze the Tit-for-Tat strategy from lecture, but use a Stag Hunt payoff structure instead. This means that when both individuals cooperate, they both earn B. If one cooperates and the other does not, the cooperator earns zero (0). Non-cooperation always earns b < B. Consider when TFT is stable against, and when it can invade, ALLC and NO-C. Are there any qualitative differences from the prisoner's dilemma?
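To check cases numerically, here is a minimal sketch of the pairwise expected payoffs under the Stag Hunt stage game. It assumes the iterated setup from lecture with a continuation probability w (w is a placeholder symbol; the expected number of rounds is 1/(1-w)), so swap in whatever notation the lecture notes use.

```python
# Pairwise expected payoffs for TFT, ALLC, and NO-C under the Stag Hunt stage
# game, assuming an iterated interaction with continuation probability w
# (placeholder symbol). Stage game: mutual cooperation pays B, a lone
# cooperator pays 0, and not cooperating pays b < B regardless of the partner.

def pairwise_payoffs(B=3.0, b=1.0, w=0.9):
    """Expected total payoff to the row strategy against the column strategy."""
    rounds = 1 / (1 - w)  # expected number of rounds
    return {
        ("TFT",  "TFT"):  B * rounds,           # cooperate every round
        ("TFT",  "ALLC"): B * rounds,
        ("TFT",  "NO-C"): b * w / (1 - w),      # exploited once, then defect
        ("ALLC", "TFT"):  B * rounds,
        ("ALLC", "ALLC"): B * rounds,
        ("ALLC", "NO-C"): 0.0,                  # exploited every round
        ("NO-C", "TFT"):  b * rounds,           # non-cooperation always earns b
        ("NO-C", "ALLC"): b * rounds,
        ("NO-C", "NO-C"): b * rounds,
    }

V = pairwise_payoffs()
# TFT resists invasion by rare NO-C when V(TFT|TFT) > V(NO-C|TFT):
print("TFT resists NO-C invasion:", V[("TFT", "TFT")] > V[("NO-C", "TFT")])
# TFT can invade a NO-C population when V(TFT|NO-C) > V(NO-C|NO-C):
print("TFT invades NO-C:", V[("TFT", "NO-C")] > V[("NO-C", "NO-C")])
```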