Balint Gyevnar
Edinburgh, GB
balint.gyevnar@ed.ac.uk

A UK-based researcher working on trustworthy, explainable autonomous agents in multi-agent systems for safer AI, with applications to autonomous vehicles.

Education


University of Edinburgh
September 2021 — December 2025
PhD in Natural Language Processing with Integrated Studies
University of Edinburgh
September 2016 — May 2021
Master of Informatics
Nanyang Technological University
August 2018 — June 2019
Exchange Year in Computer Science

Experience


Research Assistant
July 2023 — Present
University of Edinburgh

Researching the intersection of AI safety and AI ethics through content analysis methods. Assistant to Dr. Atoosa Kasirzadeh of Carnegie Mellon University.

  • Large-scale quantitative literature analysis with unsupervised natural language processing tools.
  • Curation, topic coding, and qualitative analysis of large corpora of papers.
Teaching Assistant
September 2020 — Present
University of Edinburgh

Supporting the delivery and teaching of university courses.

  • Assistant supervisor for two master's students.
  • Delivered online tutorial sessions to ~12 students for an introductory machine learning course.
  • Coursework and exam marker for courses in the School of Informatics, including Doing Research in NLP, Reinforcement Learning, Computer Systems, and Machine Learning.
Research Intern
May 2020 — October 2020
FiveAI

Development and evaluation of a novel planning and prediction algorithm for autonomous vehicles.

  • Developed and evaluated IGP2, a goal-based interpretable prediction and planning system for autonomous vehicles with intuitive explanations.
  • Publication at the IEEE International Conference on Robotics and Automation (ICRA), 2021.

Projects


Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
September 2021 — Present

I am interested in combining technologies from explainable AI (XAI), causal reasoning, and natural language processing to support the creation of trustworthy and safe AI systems, with a focus on the domain of autonomous driving.

  • CEMA: Combining counterfactual causation with RL-based planning to create causally-grounded explanations in natural language
  • Two large human subjects studies to elicit and evaluate naturally occurring and automatically generated explanations
  • HEADD: The Human Explanations for Autonomous Driving Decisions dataset
  • Integration of LLMs with CEMA in a RAG approach to improve the quality of explanations
Bridging AI Safety and AI Ethics: An Empirical Approach
July 2024 — Present
  • Curation of a corpus of 3K+ papers on AI safety and AI ethics
  • Data analysis and visualization techniques to identify overlapping and divergent topics in the literature
  • Advocating for an epistemically inclusive approach to AI safety that considers long-standing safe ML research and AI ethics
  • Research done with Shannon Vallor and Atoosa Kasirzadeh
Interpretable and Verifiable Goal-Based Prediction and Planning for Autonomous Driving
May 2020 — October 2022

Developed methods to improve the interpretability and verifiability of autonomous vehicle decision-making.

  • IGP2: Implementing rational inverse planning and Monte Carlo Tree Search for interpretable goal-based prediction and planning in autonomous vehicles
  • GRIT: Training and evaluating decision tree-based verifiable goal recognition models for autonomous driving

Volunteer


Executive Committee Member
September 2022 — June 2025
Edinburgh University Volleyball Club

Executive committee member of EUVC responsible for the delivery of the club's volleyball programme to more than 200 members.

  • Responsible for public outreach to and networking with alumni members, and for organizing a two-day event series.
  • Large-scale event organization, public speaking, timetabling, and human resource management of 8 teams, 10 coaches, and 220+ members.
  • Managing a cash flow of approximately £70k, setting the annual budget, and overseeing thousands of transactions.

Publications


Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
May 2024
23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024

Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, and Stefano V. Albrecht

Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review
December 2024
IEEE Transactions on Intelligent Transportation Systems

Anton Kuznietsov*, Balint Gyevnar*, Cheng Wang, Steven Peters, and Stefano V. Albrecht

Towards Trustworthy Autonomous Systems via Conversations and Explanations
March 2024
38th AAAI Conference on Artificial Intelligence, AAAI 2024

Balint Gyevnar

People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior
December 2024
[Under Review @ CHI 2025] arXiv: 2403.08828

Balint Gyevnar, Stephanie Droop, Tadeg Quillien, Shay B. Cohen, Neil R. Bramley, Christopher G. Lucas, and Stefano V. Albrecht

Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
May 2023
26th European Conference on Artificial Intelligence, ECAI 2023

Balint Gyevnar, Nick Ferguson, and Burkhard Schafer

A Human-Centric Method for Generating Causal Explanations in Natural Language for Autonomous Vehicle Motion Planning
May 2022
IJCAI 2022 Workshop on Artificial Intelligence for Autonomous Driving, AI4AD 2022

Balint Gyevnar, Massimiliano Tamborski, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, and Stefano V. Albrecht

GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving
May 2021
IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021

Cillian Brewitt, Balint Gyevnar, Samuel Garcin, and Stefano V. Albrecht

Interpretable Goal-based Prediction and Planning for Autonomous Driving
May 2021
IEEE International Conference on Robotics and Automation, ICRA 2021

Stefano V. Albrecht, Cillian Brewitt, John Wilhelm, Balint Gyevnar, Francisco Eiras, Mihai Dobre, and Subramanian Ramamoorthy

Awards


Colours Award
June 2024
Edinburgh University Sports Union

Colours reward individuals who have given time and effort above and beyond the call of duty to their chosen sport or club. University sport could not operate without these volunteers organising and co-ordinating clubs, and the Colours award recognises this endeavour.

AI100 Early Career Essay Competition
August 2023
AI100 Committee at Stanford University

Researchers from 18 countries answered the call, offering intriguing perspectives on AI and its impacts on society. In addition to the winner, AI100 selected a collection of five essays that thoughtfully consider AI at the intersection of morality, regulation, love, labor, and religion. Balint's essay, titled 'Love, Sex, and AI', was selected as one of the five finalists.

Trustworthy Autonomous Systems Early Career Research Award, Knowledge Transfer Track
July 2022
United Kingdom Research and Innovation (UKRI)

The inaugural TAS Early Career Researcher (ECR) Awards celebrate outstanding contributions made by PhD students and postdoctoral researchers to any area of Trustworthy Autonomous Systems research in the last three years. Balint was awarded £4,000 by the UKRI TAS Hub to pursue his vision for more trustworthy autonomous systems (TAS) through explainability and conversations.

"Shape the Future of ITS" Competition
June 2021
IEEE Intelligent Transportation Systems Society (ITSS)

In 2021, the Intelligent Transportation Systems Society (ITSS) organized a competition on the topic “Shape the Future of ITS”, asking participants to present a futuristic vision of transportation systems, the way they will operate, and the products and services they will provide. Balint Gyevnar's vision, titled 'Cars that Explain: Building Trust in Autonomous Vehicles through Explanations and Conversations', was selected as a 3rd place winner.

Languages


Hungarian:
Native speaker
English:
Fluent
German:
Fluent
Japanese:
Advanced
Chinese:
Beginner
Polish:
Beginner

Skills


Data Analysis:
Human Subjects Studies, Text Analysis, Unsupervised Topic Modelling, Mixed Effects Modelling, Statistical Hypothesis Testing, Data Visualization
Programming:
Python (PyTorch, Transformers, Pandas, Matplotlib, etc.), R (dplyr, ggplot2, rlmer, brms, etc.), C#, C++
Artificial Intelligence:
Natural Language Processing, Reinforcement Learning (MCTS, PPO, DQN, etc.), Explainable AI (SHAP, LIME, etc.), Autonomous Vehicles
Soft Skills:
Leadership, Event Organization, Public Speaking, Presentation Skills

Interests


Explainable AI:
Classical XAI, Human-Centered XAI, Explainable Reinforcement Learning, Human-Computer Interaction
Deep Learning:
Large Language Models, Grounding LLMs, Human-LLM Interaction, In-Context Causal Learning
Robotics:
Reinforcement Learning, Autonomous Vehicles, Multi-Agent Systems
AI Safety:
Epistemic Foundations of AI Safety, AI Safety and Ethics, Agentic Systems