Contact

For general inquiries, please contact Yang Liu or use the contact form on the right.

         


DECISION-AI III

Decision Theory & the Future of Artificial Intelligence

26–27 August 2019
Canberra, Australia


The development of artificial intelligence (AI) comes with some dangers. In particular, decisions made by AI systems raise serious concerns. Since the inception of the workshop series, the goal has been to bring together experts in decision theory and AI to tackle the challenges posed by the development of AI. The third instalment discusses the concerns raised by algorithmic decision-making and how decision theory can help AI systems make moral decisions. We hope that a deeper understanding will allow us to envision a safe and beneficial future with AI.

Speakers

Vincent Conitzer (Duke)
David Danks (CMU)
Tom Dietterich (Oregon State)
Branden Fitelson (Northeastern)
Finnian Lattimore (Gradient Institute)
Anna Mahtani (LSE)
Elija Perrier (University of Sydney)
Jim Joyce (Michigan)
Johanna Thoma (LSE)
Annette Zimmermann (Princeton)

Organisers

Seth Lazar (ANU)
Alan Hájek (ANU)
Yang Liu (Cambridge)
Huw Price (Cambridge)
Stephan Hartmann (LMU Munich)

Local contacts

Mario Günther (ANU)
Chad Lee-Stronach (ANU)

Programme

Monday 26 August 2019

10:00  Opening by Seth Lazar
10:15  David Danks: Decision Theory & Biased AI
11:15  Morning tea
11:45  Tom Dietterich: What High-Reliability Human Organizations can Teach Us about Robust AI
12:45  Lunch
14:15  Branden Fitelson: How to Model the Epistemic Probabilities of Conditionals
15:15  Afternoon tea
15:45  Anna Mahtani: Awareness Growth and Dispositional Attitudes
16:45  Short break
17:00  Elija Perrier: Complexity Constraints on Algorithmic Governance

Tuesday 27 August 2019

09:30  Jim Joyce: Deliberation, Prediction and Freedom in Decision Theory
10:30  Morning tea
11:00  Finnian Lattimore: Learning to Act
12:00  Lunch
13:00  Annette Zimmermann: Compounding Wrongs in Sequential Decisions
14:00  Short break
14:15  Johanna Thoma: Risk Aversion and Rationality
15:15  Afternoon tea
15:45  Vincent Conitzer: Designing Belief Formation and Decision Theories for AI
16:45  Closing by Huw Price

Abstracts

Designing Belief Formation and Decision Theories for AI
Vincent Conitzer

When we design AI systems, we can choose what they will remember and what they will forget. Moreover, an AI system can be spread out across space so that one part of it may not (yet) know what another part has already done. In such cases, the system has imperfect recall, posing challenges for belief formation along the lines of the Sleeping Beauty problem and, consequently, for decision making. The possibility that others will read an AI system's code also has implications for how it should make decisions (cf. Newcomb's problem). In this talk, I will discuss some examples, some technical results, and their implications.
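
A minimal sketch of the belief-formation puzzle mentioned above, assuming the standard Sleeping Beauty setup (one awakening on heads, two on tails); the simulation and its parameters are illustrative and not taken from the talk:

```python
import random

def sleeping_beauty(trials=100_000, seed=0):
    """Contrast per-experiment and per-awakening frequencies of heads.

    A fair coin is tossed once per trial; Beauty is woken once on heads
    and twice on tails, and cannot tell her awakenings apart (imperfect
    recall, as in an AI system spread out across space or time).
    """
    rng = random.Random(seed)
    heads_trials = awakenings = heads_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_trials += 1
            awakenings += 1
            heads_awakenings += 1
        else:
            awakenings += 2  # woken on both Monday and Tuesday
    print(f"heads per experiment: {heads_trials / trials:.3f}")        # ~ 0.5
    print(f"heads per awakening:  {heads_awakenings / awakenings:.3f}")  # ~ 0.33

sleeping_beauty()
```

The two frequencies correspond to the competing "halfer" and "thirder" answers, which is what makes belief formation under imperfect recall non-trivial.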

Decision Theory & Biased AI
David Danks

Biased AI systems are now recognized to be a significant problem, whether viewed pragmatically, ethically, socially, or politically. Most of the research on fairness and bias in AI systems has focused on measures of bias in prediction or classification algorithms. The typical belief is that we should respond to the possibility of bias by picking one such measure, and then working to minimize it over time. In contrast, I argue that we should focus on the outcomes that result from the use of such systems, and minimize the harms (or maximize the benefits) to various people and groups. That is, we should use the tools of decision theory to respond to biased AI. In this talk, I explore the impacts of this shift in approach (away from predictions, towards decisions), and argue that it results in more ethical AI systems.
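
As a rough numerical sketch of the shift from predictions to decisions, the snippet below contrasts a prediction-level bias measure with the expected harm of the resulting decisions; the groups, rates, and harm weights are hypothetical and purely illustrative:

```python
# Hypothetical two-group example: compare a prediction-level bias measure
# (gap in positive-prediction rates) with a decision-level measure
# (expected harm of the decisions actually taken). All numbers are invented.

groups = {
    # population share, P(positive prediction), harm of a mistaken denial
    "A": {"share": 0.6, "p_positive": 0.50, "harm_per_wrong_denial": 1.0},
    "B": {"share": 0.4, "p_positive": 0.35, "harm_per_wrong_denial": 3.0},
}

# Prediction view: demographic-parity-style gap between positive rates.
parity_gap = abs(groups["A"]["p_positive"] - groups["B"]["p_positive"])

# Decision view: expected harm, weighting denial errors by their cost.
denial_error_rate = 0.10  # hypothetical chance that a denial is mistaken
expected_harm = sum(
    g["share"] * (1 - g["p_positive"]) * denial_error_rate * g["harm_per_wrong_denial"]
    for g in groups.values()
)

print(f"positive-rate gap (prediction view): {parity_gap:.2f}")
print(f"expected harm (decision view):       {expected_harm:.3f}")
```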

What High-Reliability Human Organizations can Teach Us about Robust AI
Tom Dietterich 

Organization and management researchers define a high-reliability organization (HRO) as one that can operate for long periods of time with very low error levels. Studies of HROs have identified five attributes that are critical for achieving high reliability: (a) preoccupation with failure, (b) reluctance to simplify interpretations, (c) sensitivity to operations, (d) commitment to resilience, and (e) deference to expertise. As AI systems are deployed as part of human teams in high-risk settings, the AI systems must respect and implement these five attributes to ensure that the combined human plus AI team continues to achieve high reliability. This paper will summarize the current state of AI research aimed at achieving each of these five properties and identify shortcomings that require immediate research attention.

How to Model the Epistemic Probabilities of Conditionals
Branden Fitelson

David Lewis (and others) have famously argued against Adams's Thesis (that the probability of a conditional is the conditional probability of its consequent, given its antecedent) by proving various "triviality results." In this paper, I argue for two theses -- one negative and one positive. The negative thesis is that the "triviality results" do not support the rejection of Adams's Thesis, because Lewisian "triviality based" arguments against Adams's Thesis rest on an implausibly strong understanding of what it takes for some credal constraint to be a rational requirement (an understanding which Lewis himself later abandoned in other contexts). The positive thesis is that there is a simple (and plausible) way of modeling the epistemic probabilities of conditionals, which (a) obeys Adams's Thesis, and (b) avoids all of the existing triviality results.
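
For readers unfamiliar with it, Adams's Thesis can be written as follows; this is the standard formulation, not a statement taken from the paper itself:

```latex
% Adams's Thesis: the probability of an indicative conditional A -> C equals
% the conditional probability of its consequent given its antecedent.
\[
  \Pr(A \rightarrow C) \;=\; \Pr(C \mid A) \;=\; \frac{\Pr(A \wedge C)}{\Pr(A)}
  \qquad \text{whenever } \Pr(A) > 0 .
\]
% Lewis-style triviality results show, roughly, that if this identity is
% required to hold for every rational credence function (with the class of
% such functions closed under conditionalisation), then those credence
% functions are forced to be trivial.
```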

Deliberation, Prediction and Freedom in Decision Theory
Jim Joyce

I will address two broad questions: What does it mean for a decision-maker to see herself as making a free choice? Does seeing oneself as free prevent one from viewing one's decision as an object of prediction (by, e.g., assigning subjective probabilities to one's acts)? I will be defending the coherence of "act probabilities," and explaining how they function in a decision theory that respects both the transparency of beliefs and the thesis that Huw Price has called "Ramsey's Ultimate Contingency": the idea that a decision to act in a certain way screens off all other evidence that one might have about one's likely actions. The talk will engage heavily with a recent paper by Yang Liu and Huw Price, agreeing in many ways with the views propounded there but disagreeing in other ways. It will also engage, somewhat more peripherally, with a recent paper by Al Hájek on the "DARC" thesis, and a response to it by Liu and Price.
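
One schematic way to render the screening-off claim in credence notation, where Cr is the agent's credence function, D_A her decision to perform act A, and E any further evidence about her likely actions (the notation is introduced here only for illustration):

```latex
% Screening off: once the decision D_A to perform act A is fixed, further
% evidence E about the agent's likely actions is irrelevant to A.
\[
  \mathrm{Cr}(A \mid D_A \wedge E) \;=\; \mathrm{Cr}(A \mid D_A)
  \qquad \text{for all (admissible) evidence } E .
\]
```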

Learning to Act: Making Good Decisions with Machine Learning
Finn Lattimore

Predicting the consequences of actions is a central component of decision theory under uncertainty. However, supervised machine learning, which forms the core of most current AI systems, is generally insufficient for this task, as it relies on the assumption that the system for which we wish to make predictions is identical to the one that generated the labeled training data. This assumption is violated when the actions whose outcomes we wish to predict modify the system in question. This talk will outline two major approaches to predicting the outcome of interventions or actions in a system: observational causal inference and reinforcement learning. I will show how these two approaches relate to one another and how they can be combined in some settings to produce better predictions.
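
A minimal simulation of why logged data can mislead an acting system: when a hidden common cause links action and outcome, the observational quantity P(Y | X) differs from the interventional quantity P(Y | do(X)) that matters when choosing the action. The variables and probabilities below are hypothetical:

```python
import random

rng = random.Random(0)
N = 200_000

def sample(intervene_x=None):
    """One draw from a toy system with a hidden confounder U.

    U influences both the action X and the outcome Y, so data logged from
    the running system (where X depends on U) supports different predictions
    than data in which we set X ourselves.
    """
    u = rng.random() < 0.5                      # hidden confounder
    if intervene_x is None:
        x = rng.random() < (0.8 if u else 0.2)  # observational policy
    else:
        x = intervene_x                         # do(X = x)
    p_y = 0.3 + 0.2 * x + 0.4 * u               # outcome depends on X and U
    y = rng.random() < p_y
    return u, x, y

# Observational estimate of P(Y=1 | X=1) from logged data.
obs = [sample() for _ in range(N)]
obs_x1 = [y for _, x, y in obs if x]
p_obs = sum(obs_x1) / len(obs_x1)

# Interventional estimate of P(Y=1 | do(X=1)).
intv = [sample(intervene_x=True) for _ in range(N)]
p_do = sum(y for _, _, y in intv) / N

print(f"P(Y=1 | X=1)     ~ {p_obs:.3f}")  # inflated by the confounder
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")   # what an acting agent needs
```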

Awareness Growth and Dispositional Attitudes
Anna Mahtani

Classical Bayesian decision theory leaves no room for awareness growth, but it seems that real-life agents can grow in awareness. For example, I might be alerted to a possible state of affairs that had not occurred to me before; or I might discover that there is a further option available to me that I had not previously considered. Should we then adapt Bayesian decision theory to make room for awareness growth? One popular idea for how this should be done is 'Reverse Bayesianism'. In this paper I raise a problem for this account, and then go on to challenge the very idea of awareness growth.
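
For context, the "Reverse Bayesian" constraint (in the spirit of Karni and Vierø's proposal) requires that awareness growth preserve the credence ratios over propositions the agent already entertained; schematically:

```latex
% Reverse Bayesianism (schematic): after awareness growth, the new credence
% function Cr' keeps propositions from the old awareness set in the same
% proportions as the old credence function Cr.
\[
  \frac{\mathrm{Cr}'(A)}{\mathrm{Cr}'(B)} \;=\; \frac{\mathrm{Cr}(A)}{\mathrm{Cr}(B)}
  \qquad \text{for all } A, B \text{ in the original awareness set with } \mathrm{Cr}(B) > 0 .
\]
```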

Complexity Constraints on Algorithmic Governance
Elija Perrier

Computational complexity constrains the tractability and feasibility of algorithms and the extent to which outcomes or decisions arising from them are determinative or probabilistic. In this talk, I present ongoing research into the effect of computational complexity constraints upon algorithmic governance and approaches to ethical AI. Along with presenting a general framework for how to approach constraints on attempts to render AI ethical from a computational perspective, I also explore the impact of a number of cross-disciplinary no-go theorems on algorithmic governance.

Risk Aversion and Rationality: Dilemmas for the Design of Autonomous Artificial Agents
Johanna Thoma 

The ambition for the design of autonomous artificial agents is that they can make decisions at least as good as, or better than, those humans would make in the relevant decision context. For instance, the hope for autonomous vehicles is that they can drive more safely and efficiently than human-driven vehicles, as well as help prevent traffic jams. The theory of rational choice that designers of artificial agents ideally aim to implement in the context of uncertainty is expected utility theory, the standard theory of rationality in economics and the decision sciences. This talk will present two features of human agents' attitudes to risk that defy any simple expected utility analysis: we tend to be differently risk averse with regard to different kinds of goods, and we tend to have inconsistent risk attitudes to small-stakes and large-stakes gambles. Unlike standard ‘anomalies’ of choice, these attitudes are not obviously irrational. In fact, the decisions of an agent who does not display them will often strike us as unintuitive, and even morally problematic when the risks involved are morally relevant (as they often are in the case of autonomous vehicles). I will show that this poses difficult dilemmas for the design of autonomous artificial agents.
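
A small numerical sketch of the small-stakes/large-stakes tension: an expected utility agent with a single concave utility function (log utility over wealth is assumed here purely for illustration) is almost risk-neutral over small gambles, so it cannot reproduce substantial small-stakes risk aversion without distorting its large-stakes behaviour:

```python
import math

def gamble_value(wealth, stake):
    """Sure gain/loss that an expected-utility agent with log utility regards
    as exactly as good as a 50-50 gamble of winning or losing `stake`."""
    eu = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
    return math.exp(eu) - wealth  # certainty equivalent minus current wealth

wealth = 100_000  # hypothetical background wealth
for stake in (10, 100, 10_000, 50_000):
    print(f"50-50 gamble of +/-{stake:>6}: valued at {gamble_value(wealth, stake):+10.2f}")
# Small gambles are valued at essentially zero (near risk-neutrality), while
# large gambles carry a sizeable risk discount; both follow from the single
# utility function, so small-stakes and large-stakes attitudes cannot be
# tuned independently within simple expected utility theory.
```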

Compounding Wrongs in Sequential Decisions
Annette Zimmermann

Much of the literature on algorithmic fairness has focused on comparative assessments between multiple agents at a given time t1 (‘synchronic fairness’). By contrast, this paper focuses on assessing algorithmic fairness for a single agent over time t1,…, tn (‘diachronic fairness’). As recent work in computer science on run-away feedback loops in predictive policing and in risk scoring for criminal justice algorithms has shown, severely unequal treatment can occur over time when there is a high level of interdependence in sequential decisions. In particular, an application of a simple Pólya urn model illustrates that negligible levels of bias in early-stage decision outcomes of a given system are consistent with highly biased decision outcomes at later stages. When and how (if at all) are agents affected by such systems morally wronged? This paper introduces the concept of compounding wrongs in sequential decisions, and maps different types of such wrongs.
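
A small simulation of the Pólya urn dynamic the abstract appeals to, with hypothetical starting counts and numbers of draws: reinforcing each outcome lets an initially unbiased process settle into a heavily skewed long-run pattern.

```python
import random

def polya_urn(red=1, blue=1, draws=2_000, reinforcement=1, seed=None):
    """Classic Pólya urn: the drawn ball is replaced together with
    `reinforcement` extra balls of the same colour, so early outcomes
    are reinforced in later draws."""
    rng = random.Random(seed)
    for _ in range(draws):
        if rng.random() < red / (red + blue):
            red += reinforcement
        else:
            blue += reinforcement
    return red / (red + blue)

# Start from an unbiased urn and see where independent runs end up.
shares = [polya_urn(seed=s) for s in range(10)]
print("Final share of 'red' outcomes across runs:",
      ", ".join(f"{s:.2f}" for s in shares))
# For the classic urn started at 1 red / 1 blue, the limiting share is
# uniformly distributed on [0, 1]: a system with negligible initial bias
# can still lock in a heavily skewed pattern of later outcomes.
```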

Venue

Weston Theatre, 1 Lennox Crossing, Canberra, ACT, 2601, Australia.

 
 

Sponsors

banner photo credit: seth lazar