Themes

The RAIR Centre will pursue core research questions that enable the safe and responsible deployment of AI. The Centre focuses on four distinct themes:
Theme 1: Tackling misinformation
This theme focuses on ensuring that the origin of AI-generated content can be traced and its authenticity verified. It aims to develop methods to detect fake or manipulated content created by AI, with the goal of maintaining trust in digital information and combating misinformation.
Theme 2: Safe AI in the real world
This theme explores how AI can understand the physical environment and interact with it safely. It aims to create AI systems capable of deep reasoning for complex navigation and manipulation tasks, with natural interfaces that allow human users to communicate with them. The vision is safe AI systems with enough awareness and reasoning capability to respond to their environment efficiently and safely, and to assist humans in real-world tasks.
Theme 3: AI system evaluation for safety
This theme focuses on developing AI systems and safeguards that can accurately assess a system’s knowledge limitations and reasoning flaws, and reliably express uncertainty when safety matters. It aims to create AI that can recognise when it lacks the information or expertise a safe answer requires, prompting it to seek additional data or human input rather than provide potentially inaccurate responses. The goal is to build trustworthy AI systems that better support decision-making in real-world applications, particularly in the Australian workplace context, by reducing overconfidence, hallucination and the risks that follow from them.
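The deferral behaviour described above can be sketched in a few lines. This is a minimal illustration, not the Centre's method: the classifier probabilities, labels and entropy threshold are all hypothetical, chosen only to show a model answering when confident and deferring to a human when its predictive uncertainty is high.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def answer_or_defer(probs, labels, threshold=0.7):
    """Return the model's top label, or defer when uncertainty is high.

    `probs`/`labels` stand in for a hypothetical model's output; the
    threshold is illustrative, not a recommended value.
    """
    if predictive_entropy(probs) > threshold:
        return "defer to human"
    return labels[max(range(len(probs)), key=probs.__getitem__)]

# A confident prediction is returned; a near-uniform one is deferred.
print(answer_or_defer([0.95, 0.03, 0.02], ["safe", "caution", "stop"]))  # -> safe
print(answer_or_defer([0.40, 0.35, 0.25], ["safe", "caution", "stop"]))  # -> defer to human
```

Real systems would calibrate the uncertainty estimate itself (e.g. via ensembles or conformal methods) rather than trust raw softmax probabilities, but the control flow is the same: measure uncertainty, then answer or escalate.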
Theme 4: Causal AI for a changing world
This research seeks to develop AI that understands cause-and-effect relationships rather than mere correlations, particularly in complex and dynamic environments. The aim is to create AI systems that can adapt to new situations, make more accurate predictions, and model the actual consequences of interventions to help shape future outcomes. Ultimately, the goal is to enable AI to reason about the world in a more human-like way, considering context, causation and consequences.
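The distinction between observing a correlation and modelling an intervention can be made concrete with a toy structural causal model. Everything here is a made-up example (weather, sprinkler and wet-grass variables with invented probabilities), not any model from the Centre's research; it only shows that forcing a variable's value, a do-intervention, cuts the causal edge feeding into it.

```python
import random

def sample(do_sprinkler=None):
    """One draw from a toy structural causal model:
    rain -> sprinkler, rain -> wet_grass, sprinkler -> wet_grass."""
    rain = random.random() < 0.3
    # Observationally, the sprinkler runs mostly on dry days.
    sprinkler = (random.random() < 0.05) if rain else (random.random() < 0.6)
    if do_sprinkler is not None:
        # do-intervention: override the mechanism, severing rain -> sprinkler.
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

random.seed(0)
runs = [sample(do_sprinkler=True) for _ in range(10_000)]
# Under do(sprinkler=True) the grass is always wet, whatever the
# observational correlation between rain and sprinkler use.
print(sum(wet for _, _, wet in runs) / len(runs))  # -> 1.0
```

A purely correlational model fit to observational samples would entangle rain and sprinkler use; the interventional query above answers what happens if we act, which is the kind of reasoning this theme targets.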