Contact

For general inquiries, please contact Yang Liu.


DECISION-AI III

Decision Theory & the Future of Artificial Intelligence

26–27 August 2019
Canberra, Australia

 

Speakers

David Danks (CMU)
Tom Dietterich (Oregon State)
Branden Fitelson (Northeastern)
Jim Joyce (Michigan)
Finnian Lattimore (Gradient Institute)
Anna Mahtani (LSE)
Johanna Thoma (LSE)
Toby Walsh (UNSW)
Annette Zimmermann (Princeton)

Organisers

Seth Lazar (ANU)
Alan Hájek (ANU)
Stephan Hartmann (LMU Munich)
Yang Liu (Cambridge)
Huw Price (Cambridge)

Local contacts

Mario Günther (ANU)
Chad Lee-Stronach (ANU)

Programme

To be announced

Abstracts

Decision Theory & Biased AI
David Danks

Biased AI systems are now recognized to be a significant problem, whether pragmatically, ethically, socially, or politically. Most of the research on fairness and bias in AI systems has focused on measures of bias in prediction or classification algorithms. The typical belief is that we should respond to the possibility of bias by picking one such measure and then working to minimize it over time. In contrast, I argue that we should focus on the outcomes that result from the use of such systems, and minimize the harms (or maximize the benefits) to various people and groups. That is, we should use the tools of decision theory to respond to biased AI. In this talk, I explore the impacts of this shift in approach (away from predictions, towards decisions), and argue that it results in more ethical AI systems.
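To make the contrast concrete, here is a minimal Python sketch of the shift the abstract describes, from acting on a prediction threshold to choosing the action with the lowest expected harm. The harm values and function names are hypothetical illustrations, not anything presented in the talk.

    # Illustrative only: the harm numbers below are hypothetical.

    def threshold_decision(p_positive, threshold=0.5):
        """Prediction-centred approach: act on the classifier output."""
        return "intervene" if p_positive >= threshold else "do_nothing"

    def min_expected_harm_decision(p_positive, harms):
        """Decision-centred approach: choose the action whose expected
        harm over the uncertain outcome is smallest."""
        expected = {
            action: p_positive * harms[(action, "positive")]
            + (1 - p_positive) * harms[(action, "negative")]
            for action in ("intervene", "do_nothing")
        }
        return min(expected, key=expected.get)

    # Hypothetical harms: a missed positive case (10.0) is far worse
    # than an unnecessary intervention (2.0).
    harms = {
        ("intervene", "positive"): 1.0,
        ("intervene", "negative"): 2.0,
        ("do_nothing", "positive"): 10.0,
        ("do_nothing", "negative"): 0.0,
    }

    p = 0.3  # predicted probability of the positive outcome
    print(threshold_decision(p))                 # "do_nothing" (0.3 < 0.5)
    print(min_expected_harm_decision(p, harms))  # "intervene" (1.7 < 3.0)

The same predicted probability licenses different actions once the harms to affected people and groups are made explicit, which is where fairness considerations can enter the decision.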

What High-Reliability Human Organizations can Teach Us about Robust AI
Tom Dietterich 

Organization and management researchers define a high-reliability organization (HRO) as one that can operate for long periods of time with very low error levels. Studies of HROs have identified five attributes that are critical for achieving high reliability: (a) preoccupation with failure, (b) reluctance to simplify interpretations, (c) sensitivity to operations, (d) commitment to resilience, and (e) deference to expertise. As AI systems are deployed as part of human teams in high-risk settings, the AI systems must respect and implement these five attributes to ensure that the combined human plus AI team continues to achieve high reliability. This paper will summarize the current state of AI research aimed at achieving each of these five properties and identify shortcomings that require immediate research attention.

How to Model the Epistemic Probabilities of Conditionals
Branden Fitelson

David Lewis (and others) have famously argued against Adams's Thesis (that the probability of a conditional is the conditional probability of its consequent, given its antecedent) by proving various "triviality results." In this paper, I argue for two theses -- one negative and one positive. The negative thesis is that the "triviality results" do not support the rejection of Adams's Thesis, because Lewisian "triviality based" arguments against Adams's Thesis rest on an implausibly strong understanding of what it takes for some credal constraint to be a rational requirement (an understanding which Lewis himself later abandoned in other contexts). The positive thesis is that there is a simple (and plausible) way of modeling the epistemic probabilities of conditionals, which (a) obeys Adams's Thesis, and (b) avoids all of the existing triviality results.
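For readers unfamiliar with the thesis, its standard compact statement (given here as background, not as part of the talk) is:

    P(A \to C) = P(C \mid A), \qquad \text{provided } P(A) > 0.

Lewis-style triviality results show, roughly, that if this identity is required to hold across a class of probability functions closed under conditionalization, then P(C | A) = P(C) wherever both sides are defined, collapsing the conditional; the talk's negative thesis targets the strength of that closure requirement.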

Awareness Growth and Dispositional Attitudes
Anna Mahtani

Classical Bayesian decision theory leaves no room for awareness growth, but it seems that real-life agents can grow in awareness. For example, I might be alerted to a possible state of affairs that had not occurred to me before; or I might discover that there is a further option available to me that I had not previously considered. Should we then adapt Bayesian decision theory to make room for awareness growth? One popular idea for how this should be done is 'Reverse Bayesianism'. In this paper I raise a problem for this account, and then go on to challenge the very idea of awareness growth.
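For orientation, the core constraint of Reverse Bayesianism (in the spirit of Karni and Vierø's formulation; stated here as background, not as part of the talk) is that awareness growth should preserve the relative odds of previously entertained states:

    \frac{P_{\text{new}}(s_1)}{P_{\text{new}}(s_2)} = \frac{P_{\text{old}}(s_1)}{P_{\text{old}}(s_2)} \qquad \text{for all old states } s_1, s_2,

with the probability assigned to newly conceived states drawn proportionally from the old ones.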

Risk Aversion and Rationality: Dilemmas for the Design of Autonomous Artificial Agents
Johanna Thoma 

The ambition for the design of autonomous artificial agents is that they can make decisions at least as good as, or better than, those humans would make in the relevant decision context. For instance, the hope for autonomous vehicles is that they can drive more safely and efficiently than human-driven vehicles, as well as help prevent traffic jams. The theory of rational choice that designers of artificial agents ideally aim to implement in the context of uncertainty is expected utility theory, the standard theory of rationality in economics and the decision sciences. This talk will present two features of the attitudes to risk that human agents tend to display that defy any simple expected utility analysis: we tend to be differently risk averse with regard to different kinds of goods, and we tend to have inconsistent risk attitudes to small-stakes and large-stakes gambles. Unlike standard ‘anomalies’ of choice, these attitudes are not obviously irrational. In fact, the decisions of an agent who does not display them will often strike us as unintuitive, and even morally problematic when the risks involved are morally relevant (as they often are in the case of autonomous vehicles). I will show that this poses difficult dilemmas for the design of autonomous artificial agents.
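One standard way to see the small-stakes/large-stakes tension is a Rabin-style calibration exercise. The Python sketch below uses a hypothetical constant-absolute-risk-aversion utility calibrated so that the agent declines a modest 50-50 gamble; the very same calibration then forces it to decline an absurdly favourable large one. The numbers are illustrative, not from the talk.

    import math

    def cara_utility(x, a=0.001):
        """Constant absolute risk aversion utility, normalized so u(0) = 0."""
        return 1 - math.exp(-a * x)

    def expected_utility(gamble, a=0.001):
        """gamble: list of (probability, payoff) pairs."""
        return sum(p * cara_utility(x, a) for p, x in gamble)

    small = [(0.5, -100), (0.5, 110)]             # lose $100 or win $110
    large = [(0.5, -1000), (0.5, 1_000_000_000)]  # lose $1,000 or win $1bn

    print(expected_utility(small))  # about -0.0005: declines the small gamble
    print(expected_utility(large))  # about -0.36: declines a shot at $1bn to avoid losing $1,000

Since u(0) = 0, any gamble with negative expected utility is declined; a single concave utility strong enough to explain the small-stakes refusal thereby entails the implausible large-stakes refusal.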

Venue

School of Philosophy, The Australian National University, Canberra, Australia

 
 

Sponsors

Banner photo credit: Seth Lazar