The Brussels Responsible AI Network (BRAIN) Forum is a one-day event that invites researchers, practitioners, and policymakers from Belgium and beyond to discuss the latest advances in AI and its societal implications. The forum is hosted at the Brussels Institute for Advanced Studies (BrIAS) and is part of the BrIAS series of public forums.
BRAIN brings together the three Belgian AI research programmes: the AI for the Common Good Institute (FARI) in Brussels; the Trusted AI Labs (TRAIL) of the Walloon and Brussels regions; and the Flanders AI Research programme (FAIR).
The BRAIN Forum is co-organised by the Brussels Institute for Advanced Studies (BrIAS), the Université Libre de Bruxelles (ULB), the Vrije Universiteit Brussel (VUB), the AI for the Common Good Institute (FARI), and the Université de Namur (UNamur), with funding from BrIAS, the Fund for Scientific Research (F.R.S.–FNRS), and the European Union.
The BRAIN Forum is structured around four sessions:
| Start | End | Session |
|---|---|---|
| 08:00 | | Doors Open |
| 08:30 | 09:00 | Welcome and Introduction |
| 09:00 | 10:40 | Session 1 - Research |
| 10:40 | 11:00 | Coffee Break |
| 11:00 | 12:30 | Session 2 - Practice |
| 12:30 | 13:30 | Lunch |
| 13:30 | 15:30 | Session 3 - Policy |
| 15:30 | 16:00 | Coffee Break |
| 16:00 | 17:30 | Session 4 - Networking |
| 17:30 | | Reception |
Session 1 (Research) provides an expert-driven overview of emerging research trends in responsible AI.
| Start | End | Talk |
|---|---|---|
| 09:00 | 09:25 | Jerzy Stefanowski (Poznan University of Technology, Poland), "The Role of Counterfactual Explanations: New Methods and Open Challenges" |
| 09:25 | 09:50 | Kostas Stefanidis (Tampere University, Finland), "Responsible Recommender Systems" |
| 09:50 | 10:15 | Eirini Ntoutsi (Bundeswehr University Munich, Germany), "From Single Focus to Intersectional Perspective: Tackling Multi-dimensional Discrimination in AI" |
| 10:15 | 10:40 | Willem Zuidema (University of Amsterdam, the Netherlands), "The Explanation Conundrum: Why XAI Methods Don’t Work Well Enough Yet for AI That Really Matters" |
Session 2 (Practice) presents practical perspectives on responsible AI from the three Belgian AI research programmes (FARI, FAIR, and TRAIL), academia, and industry.
| Start | End | Talk |
|---|---|---|
| 11:00 | 11:15 | Tom Lenaerts (ULB, Belgium), "The perspective of FARI on Responsible AI" |
| 11:15 | 11:30 | Pierre Geurts (ULiège, Belgium), "The perspective of TRAIL (Trusted AI Labs) on Responsible AI" |
| 11:30 | 11:45 | Sabine Demey (Flanders AI Research program and imec, Belgium), "The perspective of FAIR (Flanders AI Research program) on Responsible AI" |
| 11:45 | 12:05 | Vincent Ginis (VUB, Belgium), "Work, Teach, and Learn with Generative AI" |
| 12:05 | 12:30 | Panagiotis Germanakos (SAP SE, Germany), "Trust in AI: The Human-Computer Interaction Perspective" |
Session 3 (Policy) includes presentations and a panel on the EU’s policy and strategic approach to responsible AI, featuring insights from the European AI Office.
| Start | End | Item |
|---|---|---|
| 13:30 | 14:30 | Opening Statements |
| | | Milena Machała (European AI Office), "The European AI Strategy" |
| | | Yves Moreau (KU Leuven, Belgium), "Dual-Use Research in the Horizon Program: Feeding an AI Arms Race?" |
| | | Sabine Demey (Flanders AI Research program and imec, Belgium), "Responsible AI is crucial for value creation with AI at scale" |
| | | Grégory Lewkowicz (ULB, Belgium), "Will the European digital omnibus run over responsible AI?" |
| | | Benoît Frénay (UNamur, Belgium), "Implementing the AI Act: Mission Impossible?" |
| 14:30 | 15:30 | Panel Discussion, moderated by Rob Heyman (VUB, Belgium) |
Session 4 (Networking) fosters networking among members of Belgium’s three AI research programmes and explores opportunities for collaboration.
| Start | End | Item |
|---|---|---|
| 16:00 | 17:30 | Flash Presentations |
| 17:30 | | Poster Presentations during Reception |
Jerzy Stefanowski is a full professor at the Institute of Computing Science, Poznan University of Technology, from which he received his Ph.D. and Habilitation degrees. In 2021 he was elected a corresponding member of the Polish Academy of Sciences, where he also chairs the Scientific Council of the Institute of Computer Science (IPI PAN) in Warsaw. His research interests include data mining, machine learning, and XAI. His major results concern ensemble classifiers, learning from class-imbalanced data, online learning from evolving data streams, explainable AI, induction of various types of rules, data preprocessing, generalizations of rough set theory, descriptive clustering of texts, and medical applications of data mining. He is the author or co-author of over 170 highly cited research papers and two books. He has also been a visiting professor or researcher at several universities, mainly in France, Italy, Belgium, Spain, and Germany.
In addition to his research activities, he has served in a number of organizational capacities, including positions in bodies of the Polish Academy of Sciences. He has been vice-president of the Polish Artificial Intelligence Society since 2014 and is a co-founder and co-leader of the Polish Special Interest Group on Machine Learning. He has also been Editor-in-Chief of the journal Foundations of Computing and Decision Sciences since 2012 and serves as Action Editor of other journals.
More information can be found at http://www.cs.put.poznan.pl/jstefanowski/.
Kostas Stefanidis is a Professor of Data Science at the Faculty of Information Technology and Communication Sciences of Tampere University in Finland, where he also leads the Data Science Research Centre and the Group on Recommender Systems. He has more than 10 years of experience in different roles at ICS-FORTH in Greece, NTNU in Norway, and CUHK in Hong Kong. He received his PhD in personalized data management from the University of Ioannina in Greece. His research interests lie in the broader area of big data. His work focuses on personalization and recommender systems, entity resolution, data exploration, and data analytics, with a recent special focus on socio-technical aspects of data management such as fairness and transparency; he has published more than 100 papers in top-tier conferences and journals. He has been involved in several international and national research projects, and he actively serves the scientific community. Currently, he is General co-Chair of ADBIS 2025, TPDL 2025, and EDBT/ICDT 2026.
Eirini Ntoutsi is Professor for Open Source Intelligence at the Department of Computer Science of the University of the Bundeswehr Munich. Her research interests lie in the fields of Artificial Intelligence (AI) and Machine Learning (ML). She focuses on designing intelligent algorithms that learn from data continuously, following the cumulative nature of human learning, while ensuring that what has been learned helps drive positive societal impact. Her current research areas include continuous learning over non-stationary data and data streams; responsible AI, in particular fairness-aware machine learning and explainable AI; and generative AI, that is, using machines to generate new plausible data and artifacts.
Prof. Ntoutsi is an active member of the research community, regularly serving as a program committee member for several conferences and workshops. She has, for instance, served multiple times as co-chair or co-organizer at major conferences and workshops such as CIKM, ICDM, ECML PKDD, and AAAI, on topics such as bias and fairness in AI, evaluation and experimental design in data mining and machine learning, and business applications of social network analysis. In 2018 she co-organized the Dagstuhl Perspectives Workshop 18262 "10 Years of Web Science: Closing the Loop", and she is currently guest editor of the special issue on bias and fairness in AI of the Data Mining and Knowledge Discovery journal. Prof. Ntoutsi is a member of the ACM, the IEEE, and the German Informatics Society (GI). Her research is supported by several national (DFG, Volkswagen Foundation, BMWi, BMBF) and EU (ITN, H2020) funding sources.
Willem Zuidema is an Associate Professor in Natural Language Processing, Explainable AI, and Cognitive Modelling at the Institute for Logic, Language and Computation. He leads the Cognition, Language and Computation Lab, supervising several PhD and MSc students. He teaches in the interdisciplinary Master’s programmes in Artificial Intelligence and Brain & Cognitive Sciences, and coordinates the Cognitive Science track within the latter. He also occasionally gives public talks on artificial intelligence and the evolution of language.
More information can be found at https://staff.fnwi.uva.nl/w.zuidema/.
Tom Lenaerts is Professor in the Computer Science Department at the Université Libre de Bruxelles (ULB), where he co-heads the Machine Learning Group (MLG). He holds a partial affiliation as research professor with the Artificial Intelligence Lab of the Vrije Universiteit Brussel and is an affiliated researcher at the Center for Human-Compatible AI at UC Berkeley. Between 2016 and 2024 he was board member, vice-chair, and finally chair of the Benelux Association for Artificial Intelligence. He is currently Academic Director of FARI, the Brussels AI for the Common Good Institute, an AI expert in the Global Partnership on Artificial Intelligence, and national contact point for the CAIRNE hub in Brussels. He has published in a variety of interdisciplinary domains on AI and machine learning, on topics related to optimization, multi-agent systems, collective intelligence, evolutionary game theory, computational biology, and bioinformatics.
More information can be found at https://mlg.ulb.ac.be/wordpress/members/tom-lenaerts/.
Pierre Geurts is a full professor in the EECS Department at the University of Liège (Montefiore Institute). He received his degree in electrical engineering (computer science) in 1998 and his PhD in applied sciences in 2002, both from the University of Liège. From 2005 to 2007 he held a CNRS postdoctoral position at the University of Evry (France), and from 2006 to 2011 he was a research associate of the FNRS (Belgium). His research interests include the design and the empirical and theoretical analysis of machine learning algorithms, with a focus on their scalability, explainability, and usability. He develops real-world applications of these techniques in various fields, including computational and systems biology, computer vision, and digital humanities. Pierre is the acting president of TRAIL for the 2024-2025 academic year.
More information can be found at https://people.montefiore.uliege.be/geurts/.
Sabine Demey is the director of the Flanders AI Research Program (FAIR), bringing together researchers from 11 research partners in Flanders (universities and research centres). Together they tackle challenging AI research problems and apply the new AI methods in healthcare, in Industry 5.0, for the energy transition, and in society. She believes it is important for technological developments such as AI to have a meaningful impact on people, industry, and society. Sabine is a computer scientist and holds a PhD in robotics. Before taking the lead of the Flanders AI Research Program in 2020, she gained more than 20 years of industrial experience in research, product, and business development, working on software for the manufacturing industry and for healthcare.
Vincent Ginis received his B.Sc. degree in engineering, summa cum laude, in 2007 and his M.Sc. degree in Photonics Engineering, summa cum laude, in 2009 from the Vrije Universiteit Brussel (Belgium). In May 2014, he received the degree of doctor in applied sciences, summa cum laude with the felicitations of the examination committee. Vincent is currently an assistant professor at the Vrije Universiteit Brussel and a visiting professor in the group of Prof. Federico Capasso at Harvard University.
To date, Vincent has published around 20 papers in high-impact international journals, including one article in the Proceedings of the National Academy of Sciences and several letters in Physical Review Letters (two of them cover articles), with many publications highlighted as Editor’s Suggestion. He has presented his work in more than 40 international conference proceedings and has been an invited or plenary speaker at 9 international conferences.
Vincent has received many national and international awards, including the Agathon De Potter Award in Physics (2018), the Solvay Award for PhD dissertations (2016), the Vocatio fellowship (2015), the FWO/BCG Best Paper Award (2014), the international SPIE Scholarship in Optical Science and Engineering (2013), the IEEE Photonics Graduate Student Fellowship (2012), the KVIV engineering award (2010), and the FWO/Barco Award (2010). He serves as an editor of the journal Applied Metamaterials and as a reviewer for several leading journals in his field, including Nature Photonics, Physical Review Letters, Nature Communications, and New Journal of Physics. He is also a member of the scientific committees of several international conferences, including SPIE Photonics Europe and META, and he regularly appears in the general media to discuss research breakthroughs. In 2017, he was elected one of the 10 new members of the Young Academy of Belgium and one of the top 50 tech pioneers in Belgium.
More information can be found at https://ai.vub.ac.be/team/vincent-ginis/.
Panagiotis Germanakos is a Principal User Experience Research Scientist and Instructor at SAP SE, leading and supporting user research initiatives of product teams to deliver usable, high-quality, human-centred solutions. As an SAP University Alliances Ambassador, he acts as a liaison between industry and academia, consulting and sharing knowledge to inspire innovation. For many years, he has been exploring human-machine interaction, striving to understand its evolving symbiosis. His research focuses on the coexistence of human, artificial, and quantum intelligence in developing optimal, adaptive, and personalized solutions tailored to individual users. In 2021, he co-founded the PulseX Non-Profit Research Institute, dedicated to promoting responsible scientific research, knowledge, and innovations that enhance human experiences and improve lives. He holds a PhD in Human-Centered Computing from the National & Kapodistrian University of Athens (2008) and has authored over 140 publications in top-tier conferences and journals, including nine books. His work has received multiple awards, and he holds seven patents. He is a co-founder of international scientific events such as the ACM HAAPIE and HUMANIZE workshop series, and he actively serves on editorial boards, program committees, and advisory panels for leading conferences and journals, including ACM UMAP, IUI, INTERACT, and CHI. He is also a member of international research networks and professional organizations such as ACM SIGCHI, AIS, and the Expert Network of HCI-KDD.
More information can be found at http://pgermanakos.com/.
Milena Machała is an antitrust lawyer and a Legal and Policy Officer in the European AI Office.
Yves Moreau is a professor at the University of Leuven, Belgium. His team develops machine learning methods for clinical genetics and drug discovery: (1) privacy-preserving analysis of clinical genetic data, (2) data fusion algorithms for the identification of candidate disease genes and variants in rare genetic disorders, and (3) data fusion for drug discovery and drug design. Methodologically, he focuses on the development of novel artificial intelligence methods (Bayesian matrix factorization and deep learning) for the fusion of heterogeneous, sparsely observed data, and on privacy-preserving implementations of such methods. He aims at demonstrating the clinical and industrial applicability of those methods and proving their effectiveness in human genetics research and drug discovery. He is a tech innovator interested in identifying relevant business models for emerging technologies and in developing projects up to the precompetitive stage and the startup of university spin-offs. He co-founded Data4S, a data mining company specialized in fraud detection and anti-money laundering, now part of BAE Systems, and Cartagenia, specialized in ICT solutions for clinical genetic diagnosis, now part of Agilent Technologies. He is also engaged in a reflection on how information technology and artificial intelligence are transforming our world and on how to make sure this transformation benefits all.
More information can be found at https://ai.kuleuven.be/members/00012794.
A former representative of the Fund for Scientific Research (F.R.S.–FNRS), Grégory Lewkowicz is a professor at the Université libre de Bruxelles, a member of the Perelman Centre, and the director of the SMART Law Hub within the Faculty of Law and Criminology. He is also an academic director of the AI for the Common Good Institute (FARI) in Brussels. He teaches the course "Smart Law: Algorithms, Metrics & Artificial Intelligence" at Sciences Po Law School in Paris, and also lectures at Paris II Panthéon-Assas University and the University of Liège. He is a Koyré Senior Research Fellow in Economic Law and Artificial Intelligence as part of the 3IA Chair at Université Côte d’Azur, and a recurring professor in the executive education programs on digital transformation and law at HEC-Paris.
His research pragmatically examines the interactions between law and digital technologies (SMART Law), global and transnational law, as well as the contemporary transformations of law and legal professions. He leads several research programs on algorithmic law and the application of artificial intelligence techniques to the development, analysis, implementation, and enforcement of legal or related norms. He is also involved in multiple research and development projects with public and private partners. Grégory Lewkowicz frequently advises public authorities and companies on digital strategy and regulation.
Grégory Lewkowicz oversees the “Penser le droit” collection at Bruylant Publishing. He serves on the board of directors of the European Academy of Legal Theory and the Brussels Academic Higher Education Pole. He is a member of the advisory board of AI4Belgium and the European Committee on AI (AI Board). He established the Brussels Bar Observatory and chaired the European Incubator of the Brussels Bar from 2017 to 2022.
Benoît Frénay is an associate professor at the Faculty of Computer Science of the Université de Namur. He completed a degree in computing science engineering (specialising in artificial intelligence) in 2007 at the Université catholique de Louvain. Then, he obtained a PhD in the UCL Machine Learning Group in 2013. The topic of his thesis was Uncertainty and Label Noise in Machine Learning. In parallel, he also completed a master’s in pedagogy in 2010 with a focus on problem-based learning. Additionally, he had the opportunity to undertake research stays at Aalto University, Radboud University Nijmegen, and the CITEC centre of excellence at Bielefeld University. In 2014, he received the Scientific Prize IBM Belgium for Informatics for his PhD thesis.
His main research interests in machine learning include support vector machines, label noise, efficient learning, graphical models, classification, clustering, density estimation, interpretability, visualisation, and feature selection. He enjoys collaboration and is open to new topics, including projects in partnership with enterprises (industry, IT, etc.).
More information can be found at https://bfrenay.wordpress.com.
Rob Heyman is coordinator of the Knowledge Centre Data and Society, which is part of the Flemish strategic plan on AI. He is a senior researcher at imec-SMIT, where he studies participative methods for bringing together different stakeholders (legal, civil society, end-users) in innovation projects, so that societal, legal, and ethical values are integrated during development. He has also given lectures and courses at the ULB and VUB on online marketing, research methods, privacy, and the challenges of ongoing digitalisation.
More information can be found at https://smit.research.vub.be/en/prof-dr-rob-heyman.
Jerzy Stefanowski (Poznan University of Technology, Poland)
Explaining the predictions of modern black-box machine learning systems is essential for the development of trustworthy artificial intelligence. In this talk we will discuss counterfactual explanations (counterfactuals), which provide information about how the description of an example should be changed in order to obtain a more desired prediction from the machine learning model. To generate a good counterfactual, several desirable properties have been formulated, such as the validity of the decision change, its proximity to the input instance, the sparsity of the recommended changes, its actionability, and its plausibility, among others. The talk will cover the author’s recent work on handling plausible counterfactuals and ongoing research on explanations that are robust to model changes. The final part of the talk will address selected open research questions and challenges.
Kostas Stefanidis (Tampere University, Finland)
Due to the significant impact of recommendations on users’ experiences and the often sensitive nature of recommendation tasks, it is crucial to carefully design the processes by which recommendations are generated. This has led to a growing emphasis on developing recommender systems that adhere to key principles of responsibility, such as fairness—ensuring the absence of bias—and transparency—enhancing user understanding of system decisions. In this talk, we will present a toolkit of definitions, models, and methods for promoting fairness in recommender systems, with a special focus on group recommendations. Additionally, since users may struggle to comprehend why a particular suggestion is made, many systems incorporate explanations to improve transparency. We will discuss different types of explanations in recommender systems, including why-not and counterfactual explanations, both for individual users and groups.
Eirini Ntoutsi (Bundeswehr University Munich, Germany)
Despite growing attention to fairness in AI systems, many approaches still focus on single identity attributes (mono-discrimination). However, human identities are inherently multi-dimensional, and discrimination can manifest across various social identities—such as race, gender, disability, class, and sexual orientation (multi-discrimination). In this presentation, we explore the challenges posed by multidimensional discrimination, highlighting how the intersection of social identities complicates both the definition and measurement of fairness. We also discuss the challenges of learning in such settings, particularly in the face of data scarcity that affects both group and class levels. Finally, we present approaches to mitigating multidimensional discrimination, with a particular focus on a multi-objective optimization framework as an effective strategy for balancing multiple, often conflicting, learning goals.
Willem Zuidema (University of Amsterdam, the Netherlands)
The field of explainable AI (XAI) has developed many techniques to explain the inner workings of popular 'black-box' AI models, and thus address 'the black-box problem' that plagues deep learning and generative AI. Unfortunately, however, these explanation methods are themselves often untrustworthy: it is difficult to demonstrate that their explanations are really faithful to the underlying causes in the black-box model, and alternative explainers often produce radically different explanations in practice. In this talk I explain this 'explanation conundrum' and the practical challenges it presents for policymakers, and I sketch a possible way out, based on ongoing technical research in 'mechanistic interpretability' and 'disentanglement'.
Tom Lenaerts (ULB, Belgium)
Pierre Geurts (ULiège, Belgium)
This talk will present TRAIL's perspective on Responsible AI. TRAIL, for Trusted AI Labs, is a research and business community that brings together the expertise of all French-speaking Belgian universities and four research centres active in the field of AI. TRAIL is strongly supported by federal and regional strategic players such as DigitalWallonia4.AI and AI4Belgium, as well as by an international pool of AI experts. Our mission is to develop a trusted approach to artificial intelligence, ensuring that it becomes a transformative force for our societies and contributes to the sustainable well-being of citizens. TRAIL’s research is organized around four key themes (human-AI interaction, trustworthy AI, model-driven AI, and embedded and green AI), which collectively contribute to the development of Responsible AI. Recently, we introduced a fifth, cross-cutting research theme that specifically addresses the societal, legal, and ethical dimensions of AI, with the dual goals of informing researchers and businesses about these dimensions and ensuring that ethical and legal considerations are integrated into the technical innovations emerging from the ecosystem.
Sabine Demey (Flanders AI Research program and imec, Belgium)
Responsible AI is integrated into the Flanders AI Research Program at various levels. All partners are responsible for compliance with existing regulations; we also adhere to ethical principles and reflect on the ethical impacts of our research. We run responsible-AI self-assessments on the use cases in the program and emphasize the responsible use of AI in research for all participants. The choice of use cases and research themes also reflects this attention to Responsible AI, with a focus on use cases with a meaningful impact on people, society, the economy, and the planet, and with central research themes such as trustworthy AI (explainable AI, fairness, causal learning and causal inference) and resource-efficient AI (energy-efficient AI, data-efficient AI).
Vincent Ginis (VUB, Belgium)
Panagiotis Germanakos (SAP SE, Germany)
Artificial Intelligence is revolutionizing Human-Computer Interaction (HCI) and User Experience (UX), redefining design principles and transforming human-AI relationships. Yet trust remains a critical challenge: users hesitate to trust what they don't understand. While transparency in algorithmic processes and decision-making is essential for building confidence, a fundamental question persists: how can we make complex AI systems truly accessible and relatable, particularly for non-experts? This talk embraces an innovative approach through visual storytelling, bridging the gap between AI's technical complexities and human understanding. We present an academic graphic novel that reveals the hidden world of algorithms, networks, and logical decision-making. The narrative follows Agent Black, an AI entity that mimics human behavior, as it navigates decentralized digital landscapes to complete a seemingly simple task: finding the perfect birthday venue. By anthropomorphizing AI's challenges, including ethical dilemmas, computational constraints, and decision-making processes, we transform abstract concepts into tangible human experiences. Combining humor, narrative tension, and expert insights, the graphic novel turns computational logic into an engaging and intuitive journey. Ultimately, this talk explores how HCI and UX can reshape AI's future as a trustworthy, comprehensible partner in our daily lives.
Milena Machała (European AI Office)
Yves Moreau (KU Leuven, Belgium)
Sabine Demey (Flanders AI Research program and imec, Belgium)
Grégory Lewkowicz (ULB, Belgium)
Benoît Frénay (UNamur, Belgium)
The AI Act goes far beyond the classical questions of interpretability and explainability raised earlier by the GDPR. It entails requirements on the ethics, robustness, security, reproducibility, and trustworthiness of AI systems. Verifying the conformity of AI systems in specific domains will require an enormous number of trained experts, which raises the question: how can Europe ensure that it is actually able to enforce the AI Act and related frameworks? We will briefly discuss this question and stress the importance of accelerating the education of a new generation of technical experts who are well versed in legal matters (including the AI Act and related digital regulations).
The event will be held at the BrIAS seminar room, Blvd General Jacques 210, USquare Building AB-0, 1050 Ixelles, Belgium. Participation is free of charge, but registration is required.