News

  • 08 May 2022: We invite all authors and participants to join our Slack workspace! Check out the program for more details.
  • 08 May 2022: All video presentations have been uploaded to the ALA YouTube channel!
  • 01 May 2022: We are pleased to announce Herke van Hoof as a guest speaker!
  • 11 Apr 2022: We are also excited to announce that Patrick MacAlpine of Sony AI will be giving a talk/demo on Sony AI's recent paper "Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning".
  • 11 Apr 2022: We are excited to announce Bei Peng as a keynote speaker for ALA 2022.
  • 16 Feb 2022: We are excited to announce Natasha Jaques as a keynote speaker for ALA 2022.
  • 01 Feb 2022: ALA-Cogment Challenge goes live!
  • 28 Jan 2022: ALA 2022 submission deadline has been extended to 11 Feb 2022, 23:59 UTC.
  • 06 Dec 2021: ALA 2022 Call for papers can be found here.
  • 25 Nov 2021: ALA 2022 Website goes live!

ALA 2022 - Workshop at AAMAS 2022

Adaptive and Learning Agents (ALA) encompasses diverse fields such as Computer Science, Software Engineering, Biology, as well as Cognitive and Social Sciences. The ALA workshop will focus on agents and multiagent systems which employ learning or adaptation.

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its fourteenth year. Previous editions of this workshop may be found at the following URLs:

The goal of this workshop is to increase awareness of and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science (e.g. agent architectures, reinforcement learning, evolutionary algorithms) but also from different fields studying similar concepts (e.g. game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for the discussion of ongoing or completed work covering both theoretical and practical aspects of adaptive and learning agents and multi-agent systems.

This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include but are not limited to:

  • Novel combinations of reinforcement and supervised learning approaches
  • Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
  • Supervised multi-agent learning
  • Reinforcement learning (single- and multi-agent)
  • Novel deep learning approaches for adaptive single- and multi-agent systems
  • Multi-objective optimisation in single- and multi-agent systems
  • Planning (single- and multi-agent)
  • Reasoning (single- and multi-agent)
  • Distributed learning
  • Adaptation and learning in dynamic environments
  • Evolution of agents in complex environments
  • Co-evolution of agents in a multi-agent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning trust and reputation
  • Communication restrictions and their impact on multi-agent coordination
  • Design of reward structure and fitness measures for coordination
  • Scaling learning techniques to large systems of learning and adaptive agents
  • Emergent behaviour in adaptive multi-agent systems
  • Game theoretical analysis of adaptive multi-agent systems
  • Neuro-control in multi-agent systems
  • Bio-inspired multi-agent systems
  • Applications of adaptive and learning agents and multi-agent systems to real world complex systems

Extended and revised versions of papers presented at the workshop will be eligible for inclusion in a journal special issue (see below).

Important Dates

  • Submission Deadline: 11 February 2022, 23:59 UTC (extended from 30 January 2022)
  • Notification of acceptance: 14 March 2022
  • Camera-ready copies: 15 April 2022
  • Workshop: 9 - 10 May 2022

Submission Details

Papers can be submitted through EasyChair.

We invite submission of original work, up to 8 pages in length (excluding references), in the ACM proceedings format (i.e. following the AAMAS formatting instructions). This includes work that has been accepted as a poster/extended abstract at AAMAS 2022. Additionally, we welcome submission of preliminary results (i.e. work-in-progress) as well as visionary outlook papers that lay out directions for future research in a specific area, both up to 6 pages in length; shorter papers are very much welcome and will not be judged differently. Finally, we also accept recently published journal papers in the form of a 2-page abstract.

Furthermore, for submissions that were rejected or accepted as extended abstracts at AAMAS, we encourage authors to also append the reviews they received. This is simply a recommendation and is entirely optional. Authors may also include a short note or a list of the changes they made to the paper. Appended reviews go at the end of the submission file and do not count towards the page limit.

All submissions will be peer-reviewed (single-blind). Accepted work will be allocated time for poster and possibly oral presentation during the workshop. Extended versions of original papers presented at the workshop will also be eligible for inclusion in a post-proceedings journal special issue.

When preparing your submission for ALA 2022, please be sure to remove the AAMAS copyright block, citation information and running headers. Please replace the AAMAS copyright block in the main.tex file from the AAMAS template with the following:

\setcopyright{none}                 % suppress the ACM copyright statement
\acmConference[ALA '22]{Proc.\@ of the Adaptive and Learning Agents Workshop (ALA 2022)}
{May 9-10, 2022}{Online, \url{https://ala2022.github.io/}}{Cruz, Hayes, da Silva, Santos (eds.)}
\copyrightyear{2022}
\acmYear{2022}
\acmDOI{}                           % workshop papers carry no DOI
\acmPrice{}                         % no price
\acmISBN{}                          % no ISBN
\settopmatter{printacmref=false}    % hide the ACM reference block

ALA-Cogment Challenge

All accepted ALA papers will be eligible to take part in the ALA-Cogment Challenge. The ALA-Cogment Challenge offers a total prize pool of $10,000.

Cogment is an open-source framework for distributed multi-actor training, deployment, and operations. To take part in the competition, you must submit a paper to ALA by 11 February 2022. All accepted ALA participants must then, by 30 April 2022:

  • Sign up for the ALA-Cogment Challenge.
  • Cite Cogment in your paper.
  • Implement an experimental setup using Cogment for your submitted paper and share it with the ALA-Cogment Challenge evaluation team (e.g. in a GitHub repo or in a zip file). We strongly encourage authors to publish their code to meet this requirement.

We will award up to three grand prizes to ALA-Cogment Challenge submissions that make the best use of Cogment for applications or for fundamental research. Grand prizes will be awarded according to criteria that include (but are not limited to):

  • The richness of the submission’s Cogment usage with respect to agents, implementations, environments, benchmarking, and evaluations.
  • The creative involvement of human actors or evaluators during the submission’s Cogment training or validation process.
  • The complexity of the AI problem being addressed with Cogment.

We will announce the grand prize results live during the May 9-10, 2022 workshop. Further details about the competition can be found here: https://ai-r.com/aamas-2022-welcome-to-the-ala-cogment-challenge/

Journal Special Issue

We are delighted to announce that extended versions of all original contributions at ALA 2022 will be eligible for inclusion in a special issue of the Springer journal Neural Computing and Applications (Impact Factor 5.606). The deadline for submitting extended papers will be 15 September 2022.


We will post further details about the submission process and expected publication timeline here after the workshop.

Program

Except for the invited talks, ALA will take place in an asynchronous manner. We invite all authors and participants to join our Slack workspace!

To organise discussions, we ask authors and participants to create channels named paper-#, where # is the unique number assigned to each contribution in the list below.

Monday 9 May

To attend the session on May 9th, please join our Zoom meeting room!

If you cannot access the meeting room, the session will be streamed live on the ALA YouTube channel. Please feel free to pose your questions in the chat.

15:45 - 16:00 UTC Welcome & Opening Remarks
16:00 - 17:15 UTC Invited Demo: Patrick MacAlpine (Sony AI)
Outracing champion Gran Turismo drivers with deep reinforcement learning
17:15 - 18:30 UTC Invited Talk: Natasha Jaques (Google Brain & UC Berkeley)
Social Reinforcement Learning

Tuesday 10 May

To attend the session on May 10th, please join our Zoom meeting room!

If you cannot access the meeting room, the session will be streamed live on the ALA YouTube channel. Please feel free to pose your questions in the chat.

07:00 - 08:15 UTC Invited Talk: Herke van Hoof (University of Amsterdam)
Learning agent policies with RL and structure
08:15 - 09:30 UTC Invited Talk: Bei Peng (University of Liverpool)
Cooperative Multi-Agent Reinforcement Learning
09:30 - 09:45 UTC Awards, closing remarks and ALA 2023

Accepted Papers

Paper # | Authors | Title | Details
4 | Brown Wang | Deducing Decision by Error Propagation | [video]
5 | Benjamin Wexler, Elad Sarafian and Sarit Kraus | Analyzing and Overcoming Degradation in Warm-Start Off-Policy Reinforcement Learning | [video]
6 | Joe Eappen and Suresh Jagannathan | DistSPECTRL: Distributing Specifications in Multi-Agent Reinforcement Learning Systems | [video]
7 | Manel Rodriguez-Soto, Juan Antonio Rodriguez Aguilar and Maite Lopez-Sanchez | Building Multi-Agent Environments with Theoretical Guarantees on the Learning of Ethical Policies | [video]
9 | Borja Gonzalez Leon, Murray Shanahan and Francesco Belardinelli | Systematic Generalisation of Temporal Tasks through Deep Reinforcement Learning | [video]
10 | Sindhu Padakandla | Data Efficient Safe Reinforcement Learning Algorithm | NA
11 | Andrew Butcher, Michael Johanson, Elnaz Davoodi, Dylan Brenneis, Leslie Acker, Adam Parker, Adam White, Joseph Modayil and Patrick Pilarski | Pavlovian Signalling with General Value Functions in Agent-Agent Temporal Decision Making | [video]
12 | Dylan J. A. Brenneis, Adam S. R. Parker, Michael Bradley Johanson, Andrew Butcher, Elnaz Davoodi, Leslie Acker, Matthew M. Botvinick, Joseph Modayil, Adam White and Patrick M. Pilarski | Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study | [video]
13 | Bram Grooten, Jelle Wemmenhove, Maurice Poot and Jim Portegies | Is Vanilla Policy Gradient Overlooked? Analyzing Deep Reinforcement Learning for Hanabi | [video]
14 | Wolfram Barfuss and Richard P. Mann | Non-linear dynamics of multi-agent reinforcement learning in partially observable environments | [video]
15 | Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy and Peter Stone | Dynamic Sparse Training for Deep Reinforcement Learning | [video]
16 | Thomas Cassimon, Reinout Eyckerman, Siegfried Mercelis, Steven Latré and Peter Hellinckx | A Survey on Discrete Multi-Objective Reinforcement Learning Benchmarks | [video]
17 | Simon Vanneste, Astrid Vanneste, Kevin Mets, Tom De Schepper, Ali Anwar, Siegfried Mercelis, Steven Latré and Peter Hellinckx | Learning to Communicate Using Counterfactual Reasoning | [video]
18 | David Radke, Kate Larson and Tim Brecht | The Importance of Credo in Multiagent Learning | [video]
19 | Anirudh Jamkhandi, Masahiro Yasuda and Shusaku Yosa | Developing Sim-to-Real Multi-Task Recommendations via Open-Ended Learning | [video]
21 | Manuel Schneckenreither and Georg Moser | Average Reward Adjusted Discounted Reinforcement Learning | [video]
22 | Inês Terrucha, Elias Fernández Domingos, Francisco C. Santos, Pieter Simoens and Tom Lenaerts | The art of compensation: how hybrid teams solve collective risk dilemmas | [video]
23 | Astrid Vanneste, Simon Vanneste, Kevin Mets, Tom De Schepper, Siegfried Mercelis, Steven Latré and Peter Hellinckx | An Analysis of Discretization Methods for Communication Learning with Multi-Agent Reinforcement Learning | [video]
24 | Felipe Leno Da Silva, Andre Goncalves, Sam Nguyen, Denis Vashchenko, Ruben Glatt, Thomas Desautels, Mikel Landajuela, Brenden Petersen and Daniel Faissol | Leveraging Language Models to Efficiently Learn Symbolic Optimization Solutions | [video]
25 | Willem Röpke, Roxana Radulescu, Ann Nowé and Diederik M. Roijers | Commitment and Cyclic Strategies in Multi-Objective Games | [video]
26 | Yulin Zhang, William Macke, Jiaxun Cui, Daniel Urieli and Peter Stone | Learning a Robust Multiagent Driving Policy for Traffic Congestion Reduction | [video]
27 | Kyrill Schmid, Michael Kölle and Tim Matheis | Learning to Participate through Trading of Reward Shares | [video]
28 | Canmanie Ponnambalam, Danial Kamran, Thiago D. Simão, Frans Oliehoek and Matthijs T. J. Spaan | Back to the Future: Solving Hidden Parameter MDPs with Hindsight | [video]
29 | Rory Buckley, Gregory O'Hare and Rem Collier | How to Pick Strawberries Safely: Objective Reward Shaping with Visual Complexity | [video]
30 | Sai Kiran Narayanaswami, Swarat Chaudhuri, Moshe Vardi and Peter Stone | Automating Mechanism Design with Program Synthesis | [video]
31 | Daniele Foffano, Jinke He and Frans A. Oliehoek | Robust Ensemble Adversarial Model-Based Reinforcement Learning | [video]
32 | Conor F. Hayes, Diederik M. Roijers, Enda Howley and Patrick Mannion | Multi-Objective Distributional Value Iteration | [video]
33 | Ajay Narayanan, Prasant Misra, Ankush Ojha, Vivek Bandhu, Supratim Ghosh and Arunchandar Vasan | A Reinforcement Learning Approach for Electric Vehicle Routing Problem with Vehicle-to-Grid Supply | [video]
34 | Qisong Yang, Thiago D. Simão, Nils Jansen, Simon H. Tindemans and Matthijs T. J. Spaan | Training and Transferring Safe Policies in Reinforcement Learning | [video]
35 | Andries Rosseau, Raphaël Avalos and Ann Nowé | Autocurricula and Emergent Sociality from a Gene Perspective | [video]
36 | Chaitanya Kharyal, Tanmay Sinha and Matthew Taylor | Work-in-Progress: Multi-Teacher Curriculum Design for Sparse Reward Environments | [video]
37 | Henrique Donâncio and Laurent Vercouter | Safety through Intrinsically Motivated Imitation Learning | [video]
38 | Eugenio Bargiacchi, Raphael Avalos, Timothy Verstraeten, Pieter Libin, Ann Nowé and Diederik M. Roijers | Multi-agent RMax for Multi-Agent Multi-Armed Bandits | [video]
39 | Caroline Wang, Ishan Durugkar, Elad Liebman and Peter Stone | Decentralized Multi-Agent Reinforcement Learning via Distribution Matching | [video]
41 | Neale Van Stralen, Seung Hyun Kim, Huy T. Tran and Girish Chowdhary | Feature Specialization and Clustering Improves Hierarchical Subtask Learning | [video]
42 | Aamal Hussain and Francesco Belardinelli | Equilibria and Convergence of Fictitious Play on Network Aggregative Games | [video]
43 | Sahir, Ercüment İlhan, Srijita Das and Matthew Taylor | Methodical Advice Collection and Reuse in Deep Reinforcement Learning | [video]

Invited Talks

Natasha Jaques

Affiliation: Google Brain & UC Berkeley

Website: https://natashajaques.ai

Bio: Natasha Jaques holds a joint position as a Senior Research Scientist at Google Brain and Visiting Postdoctoral Scholar at UC Berkeley. Her research focuses on Social Reinforcement Learning in multi-agent and human-AI interactions. Natasha completed her PhD at MIT, where her thesis received the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing. Her work has also received Best Demo at NeurIPS, an honourable mention for Best Paper at ICML, Best of Collection in the IEEE Transactions on Affective Computing, and Best Paper at the NeurIPS workshops on ML for Healthcare and Cooperative AI. She has interned at DeepMind and Google Brain, and was an OpenAI Scholars mentor. Her work has been featured in Science Magazine, Quartz, IEEE Spectrum, MIT Technology Review, Boston Magazine, and on CBC radio. Natasha earned her Master's degree from the University of British Columbia, and undergraduate degrees in Computer Science and Psychology from the University of Regina. More about Natasha, but more importantly her research, can be found on her website: https://natashajaques.ai

Title: Social Reinforcement Learning

Abstract: Social learning helps humans and animals rapidly adapt to new circumstances, coordinate with others, and drives the emergence of complex learned behaviors. What if it could do the same for AI? This talk describes how Social Reinforcement Learning in multi-agent and human-AI interactions can address fundamental issues in AI such as learning and generalization, while improving social abilities like coordination. I propose a unified method for improving coordination and communication based on causal social influence. I then demonstrate that multi-agent training can be a useful tool for improving learning and generalization. I present PAIRED, in which an adversary learns to construct training environments to maximize regret between a pair of learners, leading to the generation of a complex curriculum of environments. Agents trained with PAIRED generalize more than 20x better to unknown test environments. Finally, I demonstrate the difference between social learning and imitation learning, and present a method for selectively learning who and what to imitate by computing when following other agents’ policies would pay off under the learner’s own preferences. Together, this work argues that Social RL is a valuable approach for developing more general, sophisticated, and cooperative AI.
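
To make the adversary's objective in PAIRED concrete, here is a minimal sketch of the regret signal described in the abstract. The helper name and toy numbers are our own illustration, not code from the PAIRED authors:

import numpy as np

def paired_regret(protagonist_returns, antagonist_returns):
    # PAIRED approximates regret as the antagonist's best episode return
    # (a proxy for the best achievable return in the generated environment)
    # minus the protagonist's mean return. The adversary maximises this,
    # so it proposes environments that are solvable (the antagonist
    # succeeds) yet still challenge the protagonist.
    return np.max(antagonist_returns) - np.mean(protagonist_returns)

# Toy usage: episode returns collected in one adversary-generated environment.
protagonist = [0.1, 0.3, 0.2]
antagonist = [0.6, 0.9, 0.7]
print(paired_regret(protagonist, antagonist))  # adversary reward: ~0.7

Because regret is near zero on environments the protagonist already solves, maximising it steers the adversary towards the frontier of the protagonist's abilities, which produces the curriculum of environments mentioned above.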

Bei Peng

Affiliation: University of Liverpool

Website: https://beipeng.github.io/

Bio: Bei Peng is currently a Lecturer (Assistant Professor) in the Department of Computer Science at the University of Liverpool. Her research focuses mainly on deep reinforcement learning, multi-agent systems, interactive machine learning, and curriculum learning. Prior to Liverpool, Bei was a Postdoctoral Researcher in reinforcement learning at the Whiteson Research Lab at the University of Oxford, and a Non-Stipendiary Lecturer in Computer Science at St Catherine's College. Bei received a B.S. in Computer Science from the Huazhong University of Science and Technology in China in 2012 and a Ph.D. in Computer Science from Washington State University in 2018.

Title: Cooperative Multi-Agent Reinforcement Learning

Abstract: Many real-world learning problems involve multiple agents acting and interacting in the same environment to achieve some common goal, which can be naturally modelled as cooperative multi-agent systems. In this talk, I first give an overview of some of the key challenges we focus on in cooperative multi-agent reinforcement learning. I then discuss two of our recent works addressing some of these challenges. In one, we show how overestimation in deep multi-agent Q-learning can be more severe than previously acknowledged and can lead to divergent learning behaviour in practice. We propose a method that uses a new regularisation-based update scheme and an approximate softmax operator to reduce the potential overestimation bias. In the other, we propose a deep multi-agent actor-critic method that uses a centralised but factored critic and a new centralised policy gradient to enable more efficient and scalable learning in both discrete and continuous cooperative tasks. Finally, I present Multi-Agent MuJoCo, a new comprehensive benchmark suite that we developed, based on the popular single-agent MuJoCo benchmark, to allow the study of decentralised continuous control.
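
As a rough illustration of the softmax-operator idea (a generic sketch, not the authors' actual update scheme), replacing the hard max in the Q-learning target with a softmax-weighted average damps the overestimation that max introduces under noisy value estimates:

import numpy as np

def softmax_backup(q_values, tau=1.0):
    # Soft alternative to max_a Q(s', a): as tau -> 0 this approaches the
    # hard max, while larger tau averages across actions, reducing the
    # upward bias that max produces when Q estimates are noisy.
    z = q_values / tau
    z = z - z.max()                    # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return np.dot(probs, q_values)     # softmax-weighted expected value

q_next = np.array([1.0, 1.2, 0.9])     # hypothetical Q(s', a) estimates
target = 0.5 + 0.99 * softmax_backup(q_next, tau=0.5)  # r + gamma * backup
print(target)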

Patrick MacAlpine

Affiliation: Sony AI

Bio: Patrick MacAlpine is a research scientist at Sony AI, and his research spans the areas of autonomous multiagent systems, robotics, and machine learning with an emphasis on reinforcement learning. He completed a Ph.D. in the computer science department at the University of Texas at Austin, where he was advised by Peter Stone. During his Ph.D., Patrick served as the team leader of the UT Austin Villa RoboCup 3D Simulation League robot soccer team, and much of his dissertation work contributed to the team winning the RoboCup 3D Simulation League world championship in multiple years. Patrick received both bachelor's and master's degrees in electrical engineering from Rice University. Prior to joining Sony AI, Patrick was a postdoctoral researcher in the reinforcement learning group at Microsoft Research.

Title: Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning

Abstract: Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block opponents while operating their vehicles at their traction limits. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions. Here we describe how we trained agents for Gran Turismo that can compete with the world’s best e-sports drivers. We combine state-of-the-art, model-free, deep reinforcement learning algorithms with mixed-scenario training to learn an integrated control policy that combines exceptional speed with impressive tactics. In addition, we construct a reward function that enables the agent to be competitive while adhering to racing’s important, but under-specified, sportsmanship rules. We demonstrate the capabilities of our agent, Gran Turismo Sophy, by winning a head-to-head competition against four of the world’s best Gran Turismo drivers. By describing how we trained championship-level racers, we demonstrate the possibilities and challenges of using these techniques to control complex dynamical systems in domains where agents must respect imprecisely defined human norms.
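
The point about under-specified sportsmanship rules is ultimately one of reward design. A deliberately simplified sketch of the general pattern (our illustration; the actual Gran Turismo Sophy reward function is more elaborate): course progress rewards speed, while penalty terms encode norms such as staying on track and avoiding unsporting contact.

def racing_reward(progress_m, off_track, caused_collision,
                  w_progress=1.0, p_off_track=5.0, p_collision=10.0):
    # Toy shaped reward for one control step of a racing agent.
    # progress_m: metres of track progress gained this step.
    # off_track / caused_collision: flags from a hypothetical simulator API.
    # The penalty weights trade raw speed against sportsmanship constraints.
    reward = w_progress * progress_m
    if off_track:
        reward -= p_off_track
    if caused_collision:
        reward -= p_collision
    return reward

print(racing_reward(3.2, off_track=False, caused_collision=True))  # -6.8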

Herke van Hoof

Affiliation: University of Amsterdam

Bio: Herke van Hoof is currently an assistant professor at the University of Amsterdam in the Netherlands, where he is part of the Amlab. He is interested in reinforcement learning with structured data and prior knowledge. Examples of this line of work include reinforcement learning (RL) for combinatorial optimisation, RL with symbolic prior knowledge, and equivariant RL. Before joining the University of Amsterdam, Herke van Hoof was a postdoc at McGill University in Montreal, Canada, where he worked with Professors Joelle Pineau, Dave Meger, and Gregory Dudek. He obtained his PhD at TU Darmstadt, Germany, under the supervision of Professor Jan Peters, graduating in November 2016. Herke received his bachelor's and master's degrees in Artificial Intelligence from the University of Groningen in the Netherlands.

Title: Learning agent policies with RL and structure

Abstract: Reinforcement learning is a very general framework for learning agent policies. It can be applied in many different settings, but this generality comes at the price of data-inefficiency. To learn better under practical conditions, my team and I aim to impose more structure on models and/or architectures. In this talk, I will discuss two of our recent results in this space. In (multi-agent) MDP homomorphic networks, together with Elise van der Pol and others, we studied how assumptions on symmetries in the environment interaction can be used to improve the data-efficiency of deep reinforcement learning, while still allowing decentralised execution. In another work, with Niklas Höpner and Ilaria Tiddi, we studied how prior knowledge of a concept ontology can improve the data-efficiency and generalisation performance of an agent in text-based common sense games.
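
A cheap stand-in for the symmetry machinery discussed in the talk (our sketch, not the MDP homomorphic network construction itself): average a learned Q-function over a known state-action symmetry, which makes the result equivariant by construction. Here we assume a left-right mirror symmetry in which negating the state mirrors the dynamics and reverses the action ordering, as in CartPole:

import numpy as np

def symmetrized_q(q_fn, state):
    # Average the Q-function over the identity and the mirror transform.
    # The wrapped function satisfies Q(flip(s))[reversed] == Q(s) exactly,
    # so this part of the symmetry never has to be learned from data.
    q = q_fn(state)                  # shape: (n_actions,)
    q_mirror = q_fn(-state)[::-1]    # flip the state, reverse action order
    return 0.5 * (q + q_mirror)

# Toy Q-function (a random linear map) to demonstrate the wrapper.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))          # 2 actions, 4 state dimensions
q_fn = lambda s: W @ s
s = np.array([0.1, -0.2, 0.05, 0.3])
print(symmetrized_q(q_fn, s))
print(symmetrized_q(q_fn, -s)[::-1])  # identical, by construction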

Awards

Best Paper Award

Sponsored by Neural Computing & Applications

15 | Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy and Peter Stone | Dynamic Sparse Training for Deep Reinforcement Learning | [video]

Visionary Paper Award

32 | Conor F. Hayes, Diederik M. Roijers, Enda Howley and Patrick Mannion | Multi-Objective Distributional Value Iteration | [video]

Cogment Challenge Winners

Sponsored by AI Redefined

36 | Chaitanya Kharyal, Tanmay Sinha and Matthew Taylor | Work-in-Progress: Multi-Teacher Curriculum Design for Sparse Reward Environments | [video]
5 | Benjamin Wexler, Elad Sarafian and Sarit Kraus | Analyzing and Overcoming Degradation in Warm-Start Off-Policy Reinforcement Learning | [video]

Program Committee

  • Erman Acar, Leiden University & Vrije Universiteit Amsterdam, NL
  • Adrian Agogino, University of California Santa Cruz, USA
  • Arrasy Rahman, University of Edinburgh, UK
  • Nicolas Anastassacos, University College London, UK
  • Raphael Avalos, Vrije Universiteit Brussel, BE
  • Angel Ayala, Universidade de Pernambuco, BR
  • Wolfram Barfuss, Tuebingen AI Center, University of Tuebingen, DE
  • Pablo Barros, University of Hamburg, DE
  • Daan Bloembergen, City of Amsterdam, NL
  • Rodrigo Bonini, Federal University of ABC, BR
  • Roland Bouffanais, University of Ottawa, CA
  • Mustafa Mert Çelikok, Aalto University, FI
  • Filippos Christianos, University of Edinburgh, UK
  • Raphael Cobe, Sao Paulo State University, BR
  • Vinicius Renan de Carvalho, University of São Paulo, BR
  • Yunshu Du, Sony AI, US
  • Elias Fernández Domingos, Vrije Universiteit Brussel, BE
  • Marek Grzes, University of Kent, UK
  • Brent Harrison, University of Kentucky, US
  • Fredrik Heintz, Linköping University, SE
  • Daniel Hernandez, University of York, UK
  • Johan Källström, Linköping University, SE
  • Thommen Karimpanal George, Deakin University, AU
  • Sammie Katt, Northeastern University, US
  • Mari Kawakatsu, Princeton University, US
  • Matt Knudson, NASA, US
  • Mikel Landajuela, Lawrence Livermore National Laboratory, US
  • Guangliang Li, Ocean University of China, CN
  • Pieter Libin, Vrije Universiteit Brussel, BE
  • Udari Madhushani, Princeton University, US
  • Kleanthis Malialis, University of Cyprus, CY
  • Karl Mason, Georgia Institute of Technology, US
  • Cristian Camilo Millán Arias, Universidade de Pernambuco, BR
  • Nicolás Navarro-Guerrero, Deutsches Forschungszentrum für Künstliche Intelligenz, DE
  • Bei Peng, University of Liverpool, UK
  • Hélène Plisnier, Vrije Universiteit Brussel, BE
  • Canmanie Ponnambalam, Delft University of Technology, NL
  • Roxana Radulescu, Vrije Universiteit Brussel, BE
  • Pablo Hernandez-Leal, Borealis AI, CA
  • Gabriel De O. Ramos, Universidade do Vale do Rio dos Sinos, BR
  • Diederik M. Roijers, Vrije Universiteit Brussel & HU University of Applied Sciences Utrecht, NL
  • Willem Röpke, Vrije Universiteit Brussel, BE
  • Francisco C. Santos, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, PT
  • Craig Sherstan, University of Alberta, CA
  • Alexey Shpilman, JetBrains Research, HSE University, RU
  • Jivko Sinapov, The University of Texas at Austin, US
  • Miguel Solis, Universidad Andrés Bello, CL
  • Miguel Suau, Delft University of Technology, NL
  • Paolo Turrini, University of Warwick, UK
  • Victor Uc-Cetina, Universidad Autónoma de Yucatán, MX
  • Peter Vamplew, Federation University Australia, AU
  • Miguel Vasco, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, PT
  • Vítor V. Vasconcelos, University of Amsterdam, NL
  • Connor Yates, Oregon State University, US

Organization

This year's workshop is organised by Cruz, Hayes, da Silva and Santos (the editors credited in the copyright block above).

Senior Steering Committee Members:
  • Enda Howley (National University of Ireland Galway, IE)
  • Daniel Kudenko (University of York, UK)
  • Patrick Mannion (National University of Ireland Galway, IE)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, US)
  • Peter Stone (University of Texas at Austin, US)
  • Matthew Taylor (Washington State University, US)
  • Kagan Tumer (Oregon State University, US)
  • Karl Tuyls (University of Liverpool, UK)

Trustworthy Adaptive and Learning Agents

Authors and attendees of ALA 2022 who are interested in trustworthiness in agent-based systems are invited to submit their work to a topical collection (TC) on Trustworthy Adaptive and Learning Agents (TALA). This TC solicits original research articles, reviews/surveys, and opinion pieces/commentaries relating to trustworthiness in agent-based systems, including those that employ learning and/or adaptation. The TALA TC has an open call for papers; it is not necessary to submit preliminary work to the ALA workshop in order to have your manuscript considered for publication in this TC.

The TALA topical collection will appear in the Springer journal AI and Ethics.

Contact

If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.2022 AT gmail.com

For more general news, discussion, collaboration and networking opportunities with others interested in Adaptive and Learning Agents, please join our LinkedIn Group.