Master's Thesis: Development of an Adaptive AI for Cooperative Problem-Solving in the Context of the Card Game Bridge

Background

This master’s thesis is dedicated to the creation of an advanced artificial intelligence (AI) system tailored to collaborative problem-solving, a critical aspect of Human-Computer Interaction (HCI). Within this framework, the card game Bridge serves as an exemplary use case, providing a complex, strategic environment in which human and AI partners must interact and cooperate.
The objective is to harness the intricacies of Bridge – a game that inherently requires partnership and cooperation – to develop an AI that not only understands the rules and strategies of the game but can also adapt its play style in response to the behavior and preferences of its human partner. Such adaptability goes beyond mere reactive play; it entails an AI that can predict, learn, and synergize with a human’s unique approach to the game.

Tasks

  1. Development of an API for the game of contract bridge to establish an experimental foundation (an illustrative sketch follows this list).
  2. Application and testing of state-of-the-art reinforcement learning methods to develop an AI that collaborates with human players.
  3. The AI should not solely aim for optimal decisions but should, in particular, adapt to the skill level and preferences of its human partner.
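
To make the first task more concrete, below is a minimal sketch of what such an experimental foundation could look like: a small, self-contained environment for the bidding phase of Bridge that follows the common reset/step convention used by reinforcement learning libraries. All class and variable names, the action encoding, and the placeholder reward are illustrative assumptions and not part of the thesis specification.

    import random
    from dataclasses import dataclass, field

    # Illustrative sketch only: the action encoding and the placeholder
    # reward are assumptions, not a prescribed design for the thesis.
    # Actions 0..34 encode the 35 contract bids (1C .. 7NT);
    # 35, 36, 37 encode pass, double, and redouble.
    PASS, DOUBLE, REDOUBLE = 35, 36, 37

    @dataclass
    class BridgeBiddingEnv:
        """Minimal environment skeleton for the bidding phase of Bridge."""
        seed: int = 0
        hands: list = field(default_factory=list)    # four hands of 13 cards
        history: list = field(default_factory=list)  # bids made so far
        current_player: int = 0

        def reset(self):
            # Deal 52 cards into four hands and clear the auction.
            rng = random.Random(self.seed)
            deck = list(range(52))
            rng.shuffle(deck)
            self.hands = [sorted(deck[i * 13:(i + 1) * 13]) for i in range(4)]
            self.history = []
            self.current_player = 0
            return self._observation()

        def step(self, action: int):
            """Apply one bid; return (observation, reward, done, info)."""
            self.history.append(action)
            # The auction ends after three consecutive passes following a bid,
            # or after four opening passes (hand passed out).
            done = (len(self.history) >= 4
                    and all(a == PASS for a in self.history[-3:]))
            reward = 0.0  # placeholder: a real reward would come from play/scoring
            self.current_player = (self.current_player + 1) % 4
            return self._observation(), reward, done, {}

        def _observation(self):
            # Observation = the current player's hand plus the public auction.
            return {"hand": self.hands[self.current_player],
                    "history": list(self.history)}

    if __name__ == "__main__":
        env = BridgeBiddingEnv(seed=42)
        obs = env.reset()
        done = False
        while not done:
            # Trivial policy that always passes, just to exercise the interface.
            obs, reward, done, info = env.step(PASS)
        print("auction:", env.history)

Following the reset/step convention keeps the environment compatible with standard reinforcement learning tooling, so the agents developed in task 2 could be trained against it with little extra glue code.
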

Skillset

  • Good knowledge of machine learning
  • Knowledge of reinforcement learning would be even better
  • Good Python skills with experience in relevant libraries (e.g. TensorFlow/Keras or PyTorch)

Be aware that this is a time-demanding topic. If you would like to take on a challenging but also very interesting topic, or simply have questions about it, please reach out to Alexander Studt (studt@teco.edu) and I will give you a more detailed overview of the thesis topic.
If you have your own ideas along similar lines, you can also pitch them to me and perhaps we will find a promising topic together.