Speaker: Rajiv Sarin (University of Exeter)

Title of the talk: “A Model of Satisficing”

Date, Time and Venue: Thursday, 17 August 2017, 2:00-3:00 pm, room 260-323 [Business School Building, Level 3]

Abstract: “We build a model of satisficing behaviour. We explicitly introduce the payoff the decision maker expects from a strategy, where this expectation is adaptively formed. This valuation of a strategy is differentiated from her satisficing level, which is taken to be the payoff the agent expects from her best outside option. If the agent receives a payoff above her satisficing level, she continues with the current action, updating her valuation of the action. If she receives a payoff below her satisficing level and her valuation of the action falls below her satisficing level, she updates both her satisficing level and what she expects from the strategy. We show that in the long run, all players satisfice. In individual decision problems, satisficing behaviour results in cautious, maximin choice. In games like the Prisoner’s Dilemma and Stag Hunt, satisficing players converge to cooperative outcomes. In other games, such as canonical public good games, they converge to (selfish) Nash equilibria.”
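
The adaptive dynamic described in the abstract can be sketched in a few lines of Python. This is only an illustration: the exponential-smoothing updates, the uniform switching rule, and all names are assumptions of the sketch, not details taken from the paper.

    import random

    def satisfice(actions, payoff, periods=1000, step=0.1):
        """Toy sketch of the satisficing dynamic in the abstract.
        payoff(a) returns a (possibly noisy) payoff for action a."""
        action = random.choice(actions)
        valuation = {a: 0.0 for a in actions}  # adaptively formed payoff expectations
        level = 0.0                            # satisficing level (best outside option)
        for _ in range(periods):
            x = payoff(action)
            # The valuation of the current action is updated adaptively.
            valuation[action] += step * (x - valuation[action])
            if x >= level:
                continue  # satisfied: keep the current action
            if valuation[action] < level:
                # Dissatisfied and disillusioned: revise the satisficing
                # level toward experience and try another action
                # (assumed uniform switch rule).
                level += step * (valuation[action] - level)
                action = random.choice(actions)
        return action, valuation, level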

Bio: Rajiv Sarin is Professor of Economics at the University of Exeter. He is a theorist whose research has appeared in such leading journals as American Economic Review, International Economic Review, Games and Economic Behavior and Journal of Economic Theory.

Everyone welcome!

Speaker: Simona Fabrizi (Department of Economics, University of Auckland)

Title of the talk: “The Good, The Bad, and The Not So Ugly: Unanimity Voting with Ambiguous Information” based on a joint paper with Addison Pan

Date, Time and Venue: Thursday, 3 August 2017, 2:00-3:00 pm, [Room Change!] now room 260-307 [Business School Building, Level 3]

Abstract: “Collective decision-making leads to poorer-quality decisions under the unanimity voting rule than under majority voting, especially as the size of the group grows larger, due to the tendency for strategic decision-makers to vote more often against their private information. In jury trials, for instance, it is well established that strategic voting is responsible for the paradoxical result that the more demanding the hurdle for conviction is, the more likely it is that a jury will convict an innocent defendant. We challenge these findings by exploring collective decision-making under alternative voting rules when decision-makers face an ambiguous information structure. Specifically, we investigate voting behaviour by ambiguity-averse voters, who are MaxMin Expected Utility Maximizers, demonstrating that unanimity voting is compatible with instances of informative voting, outperforming other voting rules, such as majority voting.”
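
For readers unfamiliar with the criterion mentioned in the abstract, MaxMin Expected Utility evaluates an act by its worst-case expected utility over a set of priors. This is the standard Gilboa–Schmeidler (1989) definition, not anything specific to this paper:

    % An ambiguity-averse voter holds a set \mathcal{P} of priors over states S
    % and evaluates an act f by its worst-case expected utility:
    V(f) = \min_{p \in \mathcal{P}} \sum_{s \in S} p(s)\, u\!\left(f(s)\right).
    % Informative voting then requires that following one's private signal
    % yields a weakly higher worst-case expected utility than voting against it.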

Everyone welcome!

Speaker: Thomas Pfeiffer (New Zealand Institute for Advanced Study, Massey University)

Title of the talk: “Decision markets in theory, experiment, and practical applications”

Date, Time and Venue: Wednesday, 31 May 2017, 2:00-3:00 pm, 260-307 [Business School Building, Level 3]

What this talk is going to be about, in Thomas’ words: “Knowledge in society is often dispersed, with different individuals holding different pieces of information. Decision markets are novel mechanisms to harness this knowledge for decision-making. They combine scoring rules to reward individuals for accurate forecasts with decision rules to translate aggregated forecasts into decisions. Because of this combination, decision markets can also be viewed as voting mechanisms that tie votes on actions to forecasts of their consequences.

In preparation for a full Marsden proposal on decision markets, I would like to discuss some promising open questions on this topic:

  • What are the most interesting theoretical aspects of the proposal?
    • What really is the relation between decision markets and voting mechanisms – i.e. what framework in voting theory is best suited to compare decision markets with other voting mechanisms?
    • The decision rules in proper decision markets are stochastic – what is the most relevant literature on stochastic voting systems?
    • The forecasting functionality of decision markets resembles (to some degree) signalling of candidates’ intentions prior to an election. Is there theory on campaign promises and voting?
  • What are the most interesting aspects in terms of human-subjects experiments?
    • Proof-of-concept: we currently don’t really have a solid one – what is the best possible experiment for this (e.g. compare to Plott’s XYZ prediction market experiments)?
    • What would be a good lab setup/scenario to compare voting and decision markets?
  • What are the most relevant implications of such a proposal?
    • Can decision markets help strengthen evidence-based decision-making in a “post-truth” world?

This is really intended as a “sparring” session – I’ll prepare material for about 15 min (max), and would be very grateful for critical feedback, discussion, and pointers to the literature.”
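
For readers new to the mechanism, here is a minimal Python sketch of a proper decision market in the spirit Thomas describes: a stochastic decision rule that selects every action with positive probability, with the forecaster's log score weighted by the inverse of the selection probability so that honest forecasting stays optimal. The epsilon-greedy form and all names are illustrative assumptions, not details from the talk.

    import math
    import random

    def decision_market_round(forecast, observe_outcome, eps=0.1):
        """forecast[a] is the aggregated probability (strictly between 0
        and 1) that action a succeeds; observe_outcome(a) returns True or
        False once action a has been taken."""
        n = len(forecast)
        # Stochastic decision rule: mostly follow the best forecast, but
        # keep every action possible.
        best = max(range(n), key=lambda a: forecast[a])
        probs = [eps / n] * n
        probs[best] += 1.0 - eps
        action = random.choices(range(n), weights=probs)[0]
        # Proper scoring: log score on the action actually taken, weighted
        # by the inverse selection probability to preserve incentives.
        p = forecast[action]
        success = observe_outcome(action)
        reward = math.log(p if success else 1.0 - p) / probs[action]
        return action, reward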

Everyone welcome!

Speaker: Jiamou Liu (University of Auckland & CMSS Member)

Title of the talk: “How to Build Your Network? – A Structural Analysis” joint work with Anastasia Moskvina (AUT)

Date, Time and Venue: Wednesday, 5 April 2017, 2:00-3:00 pm, 260-307 [Business School Building, Level 3]

What this talk is going to be about, in Jiamou’s own words: “Creating new ties in a social network facilitates knowledge exchange and affects positional advantage. We study the process of establishing ties between a single node and an existing network in order to reach certain structural goals. We motivate this problem from two perspectives. The first perspective is socialization: we ask how a newcomer can forge relationships with an existing network to place herself at the center. The second perspective is network expansion: we investigate how a network may preserve or reduce its diameter through linking with a new node, hence ensuring small distance between its members. We then extend our discussion to the problem of network integration, which refers to the process of building links between two networks so that they dissolve into a single unified network.”
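
As a toy illustration of the “socialization” variant, one can brute-force the set of contacts that minimises the newcomer's eccentricity. This exhaustive sketch is only meant to fix ideas; the paper's structural analysis is far more refined, and all names below are invented for the example.

    from collections import deque
    from itertools import combinations

    def distances(adj, src):
        """BFS distances from src in an undirected graph {node: neighbours}."""
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def best_contacts(adj, k):
        """Choose k contacts for a newcomer so her eccentricity (maximum
        distance to any existing node) is minimised. Exhaustive search,
        so suitable for small networks only."""
        nodes = list(adj)
        dist = {u: distances(adj, u) for u in nodes}
        best, best_ecc = None, float("inf")
        for contacts in combinations(nodes, k):
            # Newcomer's distance to v is 1 + the nearest contact's distance.
            ecc = max(1 + min(dist[c].get(v, float("inf")) for c in contacts)
                      for v in nodes)
            if ecc < best_ecc:
                best, best_ecc = contacts, ecc
        return best, best_ecc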

Everyone welcome!

Speaker: Nina Anchugina (PhD Candidate, University of Auckland)

Title of the talk: “A Puzzle of Mixing Discount Functions” joint work with Matthew Ryan (AUT) and Arkadii Slinko (University of Auckland)

Date, Time and Venue: Wednesday, 15 March 2017, 2:00-3:00 pm, 206-202 [Arts 1 Building, Level 2]

What this talk is going to be about, in Nina’s own words: “This talk will introduce the concept of a “discount function” from decision theory, and discuss some results on mixtures of discount functions.  These results suggest a puzzle that we are struggling to resolve.  Your help is sought!

In decision theory, intertemporal preferences are modelled using “discount functions”, which attach weights to different points in time at which costs or benefits might be experienced.  In reliability theory, “survival functions” describe the probability that a component survives beyond any given point in time.  Discount functions and survival functions have similar mathematical properties.  If S(t) is a survival function, its “failure rate” is given by -S'(t)/S(t).  For discount functions, the analogous quantity is known as the “time preference rate”.  For exponential functions, this rate is constant.  For hyperbolic functions, which have become popular for modelling intertemporal preferences, this rate is strictly decreasing – a phenomenon known as strictly decreasing impatience (strictly DI).  It is well known that mixing – that is, forming convex combinations – of exponentials produces a function that exhibits strictly DI.  (The analogous result is also well known in reliability theory.) We study generalisations of this phenomenon, from which a puzzle emerges.  For example, we have not been able to prove (or disprove) that mixing an exponential function with a non-exponential function that exhibits DI will always produce a mixture that exhibits strictly DI.  Are we missing something, or do these mixtures behave very strangely?”
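
To make the quoted result concrete, here is the standard calculation for a mixture of two exponential discount functions. This is a textbook exercise, not a result of the paper:

    % Mix two exponentials with r_1 > r_2 > 0 and 0 < \alpha < 1:
    D(t) = \alpha e^{-r_1 t} + (1-\alpha) e^{-r_2 t}.
    % The time preference rate is a weighted average of r_1 and r_2:
    \rho(t) = -\frac{D'(t)}{D(t)} = w(t)\, r_1 + \bigl(1 - w(t)\bigr) r_2,
    \qquad w(t) = \frac{\alpha e^{-r_1 t}}{\alpha e^{-r_1 t} + (1-\alpha) e^{-r_2 t}}.
    % Since r_1 > r_2, the weight w(t) on the larger rate is strictly
    % decreasing in t, so \rho(t) is strictly decreasing: the mixture
    % exhibits strictly DI.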

Everyone welcome!

Speaker: Patrick Girard, University of Auckland (Department of Philosophy) and CMSS Member

Title of talk #1: “Ceteris Paribus Preferences”

Date, Time and Venue: Wednesday, 22 March 2017, 2:00-3:00 pm, 206-202 [Arts 1 Building, Level 2]

What talk #1 is going to be about, in Patrick’s own words: “I’m writing a book on Ceteris Paribus Logic. I’m trying to get closure with 10+ years on the topic, which had me doing a lot of preference and belief revision logic. I have a chapter on preference logic which at the moment contains no less than 40 definitions of preference! Some are a bit mad, but for the most part they are plausible. As a logician, my goal is to unify them all into a simple preference logic, which is what the book is about, but not what I will bore you with in the talk. Instead, I will get you to realise why one might be so mad as to offer 40 definitions of preferences, and we can discuss if and how it may relate to your own research.”

Title of talk #2: “Inconsistent Logic”

Date, Time and Venue: Wednesday, 29 March 2017, 2:00-3:00 pm, 260-040B [Business School Building, Level 0]

What talk #2 is going to be about, in Patrick’s own words: “Now this is mad! I’m a new-born dialetheist. That’s a philosophical position which says that some contradictions are inevitable. By “inevitable”, we mean that they are true. As nonsensical as it sounds, there’s a lot of research trying to find logics, and mathematics, that can accommodate such madness. Those are called “Paraconsistent Logics” in general. Not all of them need to accept that there are true contradictions, so not all is mad. There are practical motivations for looking at logics that can tolerate inconsistencies. Think about an auto-pilot that needs to save a cabin of free passengers while receiving inconsistent information from its various channels. Or think about inconsistencies that people display in their beliefs and preferences, and how those are always idealised away, because we can’t cope with contradiction. Well, maybe we can, if we let a bit more madness into our logic.”

Everyone welcome!

Speaker: Bettina Klaus

Affiliation: University of Lausanne

Title: Non-Revelation Mechanisms for Many-to-Many Matching: Equilibria versus Stability

Date: Monday, 31 October 2016

Time: 4:00-5:00 pm

Location: 260-307

We study many-to-many matching markets in which agents from a set A are matched to agents from a disjoint set B through a two-stage non-revelation mechanism. In the first stage, A-agents, who are endowed with a quota that describes the maximal number of agents they can be matched to, simultaneously make proposals to the B-agents. In the second stage, B-agents sequentially, and respecting the quota, choose and match to available A-proposers. We study the subgame perfect Nash equilibria of the induced game. We prove that stable matchings are equilibrium outcomes if all A-agents’ preferences are substitutable. We also show that the implementation of the set of stable matchings is closely related to the quotas of the A-agents. In particular, implementation holds when A-agents’ preferences are substitutable and their quotas are non-binding.
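
A bare-bones sketch of the second stage of such a non-revelation mechanism, purely to fix ideas. The data structures and names are assumptions of the sketch, not taken from the paper:

    def second_stage(proposals, quota, choice, b_order):
        """proposals[a] = set of B-agents that A-agent a proposed to (stage 1);
        quota[a] = maximal number of matches for A-agent a;
        choice[b] maps the list of available proposers to the subset that
        B-agent b matches with; B-agents move sequentially in b_order."""
        matched = {a: set() for a in proposals}
        for b in b_order:
            # Available proposers: those who proposed to b and still have
            # room under their quota.
            available = [a for a in proposals
                         if b in proposals[a] and len(matched[a]) < quota[a]]
            for a in choice[b](available):
                matched[a].add(b)
        return matched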

A copy of the paper to be presented is available for download here

Everyone welcome!

Speaker: Gerardo Berbeglia
Affiliation: Melbourne Business School
Title: The Effect of a Finite Time Horizon in the Durable Good Monopoly Problem with Atomic Consumers
Date: Monday, 27 June 2016
Time: 4-5pm
Location: OGGB, Room 6115

Abstract:
A durable good is a long-lasting good that can be consumed repeatedly over time, and a duropolist is a monopolist in the market of a durable good. In 1972, Ronald Coase conjectured that a duropolist who lacks commitment power cannot sell the good above the competitive price if the time between periods approaches zero. Coase’s counterintuitive conjecture was later proven by Gul et al. (1986) under an infinite time horizon model with non-atomic consumers. Remarkably, the situation changes dramatically for atomic consumers and an infinite time horizon. Bagnoli et al. (1989) showed the existence of a subgame-perfect Nash equilibrium where the duropolist extracts all the consumer surplus. Observe that, in these cases, duropoly profits are either arbitrarily smaller or arbitrarily larger than the corresponding static monopoly profits — the profit a monopolist for an equivalent consumable good could generate. In this paper we show that the result of Bagnoli et al. (1989) is in fact driven by the infinite time horizon. Indeed, we prove that for finite time horizons and atomic agents, in any equilibrium satisfying the standard skimming property, duropoly profits are at most an additive factor more than static monopoly profits. In particular, duropoly profits are always at least static monopoly profits but never exceed twice the static monopoly profits. Finally we show that, for atomic consumers, equilibria may exist that do not satisfy the skimming property. For two time periods, we prove that amongst all equilibria that maximise duropoly profits, at least one of them satisfies the skimming property. We conjecture that this is true for any number of time periods.

Speaker: Arkadii Slinko
Affiliation: Department of Mathematics
Title: Growth of dimension in complete simple games
Date: Monday, 16 May 2016
Time: 4:00 pm
Location: Clock Tower 032

Simple games are used to model a wide range of situations, from decision making in committees to the reliability of systems made from unreliable components and McCulloch-Pitts units in threshold logic. Weighted voting games are a natural and practically important class of simple games, in which each agent is assigned a numerical weight, and a coalition is winning if the sum of the weights of the agents in that coalition achieves a certain threshold.

The concept of dimension in simple games was introduced by Taylor and Zwicker in 1993 as a measure of remoteness of a given simple game from a weighted game. They demonstrated that the dimension of a simple game can grow exponentially in the number of players. However, the problem of worst-case growth of the dimension in the important class of complete games was left open. Freixas and Puente (2008) showed that complete games of arbitrary dimension exist and, in particular, their examples demonstrate that the worst-case growth of dimension in complete games is at least linear. In this paper, using a novel technique of Kurz and Napel (2015), we demonstrate that the worst-case growth of dimension in complete games is at least polynomial in the number of players. Whether or not it can be exponential remains an open question.

This is a joint paper with Liam O’Dwyer.
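
To unpack the definitions in the abstract: a weighted game is given by weights and a quota, and a simple game has dimension at most d if it is the intersection of d weighted games. A small Python illustration (the example and all names are ours, not the paper's):

    def winning(weights, quota, coalition):
        """A coalition wins a weighted game iff its total weight meets the quota."""
        return sum(weights[i] for i in coalition) >= quota

    def winning_in_all(games, coalition):
        """In the intersection of several weighted games, a coalition wins
        iff it wins in every one of them."""
        return all(winning(w, q, coalition) for (w, q) in games)

    # A game on players {0, 1, 2, 3} that requires one player from each of
    # the pairs {0, 1} and {2, 3}: the intersection of two weighted games,
    # hence dimension at most 2 (in fact it is not weighted, so exactly 2).
    games = [({0: 1, 1: 1, 2: 0, 3: 0}, 1),
             ({0: 0, 1: 0, 2: 1, 3: 1}, 1)]
    print(winning_in_all(games, {0, 2}))  # True
    print(winning_in_all(games, {0, 1}))  # False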

Everyone welcome!

Speaker: Mark Wilson
Affiliation: Computer Science Department
Title: Average-case analysis of random assignment algorithms
Date: Monday, 2 May 2016
Time: 4:00 pm
Location: Clock Tower 032

I present joint work with summer scholarship student Jacky Lo. The problem of one-sided matching without money (also known as house allocation), namely computing a bijection from a finite set of items to a finite set of agents, each of whom has a strict preference order over the items, has been much studied. Symmetry considerations require the use of randomization, yielding the more general notion of random assignment. The two most commonly studied algorithms (Random Serial Dictatorship (RP) and Probabilistic Serial Rule (PS)) dominate the literature on random assignments. One feature of our work is the inclusion of several new algorithms for the problem. We adopt an average-case viewpoint: although these algorithms do not have the axiomatic properties of PS and RP, they are computationally efficient and perform well on random data, at least in the case of sincere preferences. We perform a thorough comparison of the algorithms, using several standard probability distributions on ordinal preferences and measures of fairness, efficiency and social welfare. We find that there are important differences in performance between the known algorithms. In particular, our lesser-known algorithms yield better overall welfare than PS and RP and better efficiency than RP, with small negative consequences for envy, and are computationally efficient. Thus, provided that worst-case and strategic concerns are relatively unimportant, the new algorithms should be seriously considered for use in applications.
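
For reference, the simpler of the two benchmark algorithms can be sketched in a few lines of Python. This illustrates Random Serial Dictatorship only; it is not the authors' code, and it assumes equal numbers of agents and items:

    import random

    def random_serial_dictatorship(prefs):
        """prefs[agent] = list of items, most preferred first. Draw an
        agent order uniformly at random; each agent in turn takes her
        favourite item among those still available."""
        agents = list(prefs)
        random.shuffle(agents)
        remaining = set().union(*prefs.values())
        assignment = {}
        for a in agents:
            pick = next(x for x in prefs[a] if x in remaining)
            assignment[a] = pick
            remaining.remove(pick)
        return assignment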

Everyone welcome!