Session 1 - AI Agents in an LLM world: what and why?
10:00 AM to 11:00 AM
Studio 4
This session explores the nature and purpose of AI and robotic agents. With the rise of large language models, terms such as AI agents, agentic, and multi-agent systems have become buzzwords that appear frequently in the news and on social media. But what exactly are they? How do we define them? This session aims to clarify what these agents are and how they push our understanding of notions such as intelligence and cooperation. With experts from academia, industry, and cities, we will explore the various ways agents are perceived.
Meet the speakers
-

Tom Lenaerts
MODERATOR
FARI Academic Director - Université Libre de Bruxelles (ULB)
Tom Lenaerts is a full professor in the Computer Science Department at the Université Libre de Bruxelles (ULB), where he co-heads the Machine Learning Group (MLG), which focuses on AI and computational biology. He holds a partial affiliation as a research professor with the Artificial Intelligence Lab of the Vrije Universiteit Brussel and is an affiliated researcher at the Center for Human-Compatible AI at UC Berkeley. He was a board member, vice-chair, and finally chair of the Benelux Association for Artificial Intelligence between 2016 and 2024, and Director of the Interuniversity Institute of Bioinformatics in Brussels between 2017 and 2021. He is currently the Academic Director of FARI, the Brussels AI for Common Good institute, an AI expert in the Global Partnership on Artificial Intelligence, and the national contact point for the CAIRNE hub in Brussels. He has published across a range of interdisciplinary domains in AI and machine learning, on topics including optimization, multi-agent systems, collective intelligence, evolutionary game theory, computational biology, and bioinformatics.
-

Jin Zhijing
SPEAKER
Incoming Assistant Professor at University of Toronto
Jin Zhijing is an incoming Assistant Professor at the University of Toronto and currently a research scientist at the Max Planck Institute with Bernhard Schoelkopf, based in Europe. She is also a CIFAR AI Chair, a faculty member at the Vector Institute, an ELLIS advisor, and a faculty affiliate at the Schwartz Reisman Institute.
Her research areas are Large Language Models (LLMs), Causal Inference, and Responsible AI. Specifically, her vertical work focuses on Causal Reasoning with LLMs (Causal AI Scientist, Corr2Cause, CLadder, Quriosity, Survey), Multi-Agent LLMs (GovSim, SanctSim, MoralSim), and Moral Reasoning in LLMs (TrolleyProblems, MoralLens, MoralExceptQA). To support this vertical work, her horizontal work draws on Mechanistic Interpretability (CompMechs, Mem vs Reasoning) and Adversarial Robustness (CRL Defense, TextFooler, AccidentalVulnerability, RouterAttack). Her research contributes to AI Safety and AI for Science.
She is the recipient of three Rising Star awards, two Best Paper Awards at NeurIPS 2024 workshops, and several fellowships from Open Philanthropy and the Future of Life Institute. In the international academic community, she is a co-chair of the ACL Ethics Committee, a co-organizer of the ACL Year-Round Mentorship, and a main supporter of the NLP for Positive Impact Workshop series. Her work has been covered by CHIP Magazine, WIRED, and MIT News. Her research is funded by NSERC, MPI, UofT, Schmidt Sciences, Open Phil, AISF, and the Cooperative AI Foundation.
-

Xabier Barandiaran
SPEAKER
Lecturer in Philosophy at the University of the Basque Country and researcher at the IAS-Research Centre for Life, Mind, and Society
Xabier Barandiaran is a philosopher working at the intersection of social practices, the cognitive sciences, and technopolitics. He is developing an interdisciplinary approach combining cognitive science, artificial life, neuroscience, and social science, while being actively involved in hacktivism, the reproduction of the commons, and participatory democracy. He is a lecturer in philosophy at the University of the Basque Country and a researcher at IAS-Research. He is also a co-founder of several social innovation initiatives, including Wikitoki, FLOK Society, and Decidim. His academic and activist career has taken him to Spain, the UK, France, and Austria; he holds a PhD on the autonomy of cognitive agents and has been deeply involved in the design of democratic digital infrastructures, notably as director of democratic innovation at Barcelona City Council.
-

James Wilson
SPEAKER
AI Ethicist in the AI Futures Lab | Lead AI Architect | Advocate for the Safe AI for Children Alliance (SAIFCA)
James Wilson is an AI Ethicist in the AI Futures Lab at Capgemini, where he helps set policies and principles around the safe and ethical adoption of AI for both his own organisation and its clients, and publishes extensive thought leadership on the topic. Previously he worked at Gartner, where he held similar responsibilities. Outreach and societal projects are important to James: he is an ambassador for the Safe AI for Children Alliance (SAIFCA) and a member of the International Association for Safe and Ethical AI (IASEAI). In 2022 he published the book “Artificial Negligence”, written to raise awareness of the risks related to AI adoption.