AI^2 Forum August 2024
On Wednesday, 28th August, we were privileged to host Dr Colin Paterson, who delivered a thought-provoking talk titled “That Can’t Be Right? Making Decisions in an Uncertain World.” His presentation touched on the complexities of decision-making, particularly in environments where uncertainty is a constant companion.
Dr Paterson explored the concept of uncertainty, which often involves dealing with scenarios that have never been encountered before. In such cases, decisions rest on “best guesses” rather than concrete data. This uncertainty extends to the use of AI, which, while powerful, is not infallible. He raised a provocative question: could AI systems be used as liability sinks? This idea highlights the ongoing struggle to define accountability for AI-driven decisions.
Understanding the Unusual in an Unfamiliar World
How do you know what’s unusual if you don’t know what usual is? In a world where data and scenarios are constantly evolving, a stable baseline for “normal” can be elusive. This ambiguity makes decision-making even more challenging, as it demands not just judgement but also a deep understanding of context.
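To make this concrete, here is a minimal sketch of one common way to operationalise “unusual”: comparing new observations to a baseline learned from past data. The readings and threshold are invented for illustration, and the sketch quietly assumes the history really does represent “usual”, which is precisely the assumption Dr Paterson questioned.

```python
import statistics

# Hypothetical past readings that stand in for "usual".
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
mean = statistics.mean(history)
std = statistics.stdev(history)

def is_unusual(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    return abs(value - mean) / std > threshold

print(is_unusual(12.3))  # False: within the learned notion of "usual"
print(is_unusual(20.0))  # True: far outside the baseline
```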
The Layers of Decision-Making: From Difficulty to Consequence
Decision-making is inherently difficult, but as Dr Paterson pointed out, justifying those decisions is even harder, and dealing with the fallout from poor decisions is the most challenging aspect of all. This is particularly true in fields like AI and safety, where decisions often carry significant consequences.
The Role of Models: Assumptions and Averages
Dr Paterson emphasised that models, which are central to many decision-making processes, come with their own set of assumptions. These models act as averaging technologies, but what exactly are we averaging? Is it meaningful data, or are we merely averaging out the noise? This question is crucial, especially in safety-critical systems where edge cases—those rare but potentially catastrophic events—must be carefully considered.
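A toy illustration (the numbers below are invented) shows how an average can look reassuring while a single edge case remains dangerous:

```python
# Hypothetical sensor-error readings (metres): mostly small, one extreme.
readings = [0.2] * 49 + [9.5]

mean_error = sum(readings) / len(readings)
worst_case = max(readings)

print(f"average error: {mean_error:.2f} m")  # 0.39 m: looks comfortably safe
print(f"worst case:    {worst_case:.2f} m")  # 9.50 m: the edge case the average hides
```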
The Human Element: Domain Experts and Experience
In a world increasingly dominated by algorithms and data, Dr Paterson reminded us of the importance of involving domain experts and drawing on human experience. AI can make suggestions, but that does not mean those suggestions will, or should, be followed. The human element remains indispensable, particularly when it comes to interpreting data and making informed decisions.
Making the Most out of Social Media: A Double-Edged Sword
An interesting point raised during the talk was the potential of social media as a tool for data collection. In many cases, social media platforms can provide information more quickly than emergency services can arrive at a scene. Could this speed be harnessed to optimise response times or gather real-time data? While promising, this idea also brings its own set of challenges, including the reliability and accuracy of the information being shared.
The Regulatory Landscape: A Work in Progress
One of the key takeaways from the talk was the acknowledgment that there is no definitive stamp of approval for AI models, particularly in critical fields like healthcare. Regulatory bodies like the MHRA are grappling with how to effectively regulate AI as medical devices, underscoring the complexity of ensuring safety and efficacy in AI applications.
Safety: A Proactive Approach
Safety is not something that can be added after the fact; it must be an integral part of the development process. Dr Paterson cited TRIPOD-AI, a reporting guideline, as an example of how to ensure AI systems are reproducible and can be validated safely. Constraints will always exist, but the goal should be to build in acceptable levels of safety from the outset.
The Decision-Making Loop: Sense, Understand, Decide, Act
In discussing the intricacies of autonomous systems, Dr Paterson described the continuous loop of sensing, understanding, deciding, and acting. Autonomous cars, for instance, constantly evaluate their environment to anticipate hazards and decide whether to accelerate, slow down, stop, or maintain speed. This process involves balancing uncertainty, time, and trade-offs, all within a dynamic and ever-changing context.
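To show the shape of that loop, here is a minimal Python sketch. The World stub, the noise, and the two-second margin are all invented for illustration; a real driving stack is vastly more complex.

```python
import random

class World:
    """Toy stand-in for the vehicle's environment; entirely hypothetical."""
    def __init__(self):
        self.speed = 20.0      # metres per second
        self.gap_ahead = 60.0  # metres to the vehicle in front

    def step(self, action):
        if action == "brake":
            self.speed = max(self.speed - 2.0, 0.0)
        self.gap_ahead += random.uniform(-3.0, 2.0)  # the context keeps changing

def sense(world):
    # Observations are noisy: the loop never sees the world exactly as it is.
    return {"speed": world.speed, "gap": world.gap_ahead + random.gauss(0.0, 0.5)}

def understand(obs):
    # Turn raw data into an uncertain estimate, here a rough time-to-collision.
    return obs["gap"] / max(obs["speed"], 0.1)  # seconds

def decide(time_to_collision):
    # Trade safety against progress using a hypothetical two-second margin.
    return "brake" if time_to_collision < 2.0 else "maintain"

world = World()
for _ in range(10):
    action = decide(understand(sense(world)))
    world.step(action)  # act: the decision changes what gets sensed next
```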
An example shared was of an autonomous robot designed to assist individuals in getting dressed. This task may seem straightforward, but it requires the consideration of social, legal, ethical, and cultural factors—far beyond just technical efficiency. For instance, if the person asks the robot to open the curtains, should the robot comply? It seems simple, but not if the person is undressed. In this case, the robot must have additional systems in place, such as software to assess the user’s stress level or the appropriateness of the request. This example illustrates that designing AI systems isn’t just about minimising loss functions; it’s about embedding empathy, ethics, and cultural awareness into the technology.
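The curtain example suggests a gate between hearing a request and acting on it. The sketch below is a hypothetical illustration of that idea; the checks, names, and thresholds are invented, and the talk did not prescribe an implementation.

```python
def appropriate_to_comply(request: str, context: dict) -> bool:
    """Check a request against social and contextual constraints, not just feasibility."""
    if request == "open_curtains" and not context.get("user_dressed", False):
        return False  # privacy outweighs the literal command
    if context.get("user_stress", 0.0) > 0.7:  # hypothetical stress estimate in [0, 1]
        return False  # defer to a human when the user appears distressed
    return True

context = {"user_dressed": False, "user_stress": 0.2}
if appropriate_to_comply("open_curtains", context):
    print("opening curtains")
else:
    print("declining politely and explaining why")
```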
Dr Paterson posed a critical question: When do you stop mitigating risks? The answer is when everyone on the board, from lawyers to computer scientists to philosophers, agrees that the system is safe. This process underscores the importance of multidisciplinary collaboration in developing AI systems.
The Distracting Side of AI and Safety
Interestingly, Dr Paterson pointed out that sometimes, the very systems designed to enhance safety can become distractions themselves. He cited Tesla cars as an example, where excessive beeping and frequent screen changes can overwhelm drivers, potentially leading to unsafe situations. Even advanced driver-assist systems can cause confusion, such as unexpectedly slowing a car from 70 to 50 mph on a motorway, which could result in rear-end collisions or make drivers think their vehicle is malfunctioning.
Conclusion: A Call for Thoughtful Integration
Dr Paterson’s talk was a compelling reminder that while AI and advanced technologies offer tremendous potential, their integration into real-world systems requires careful thought, collaboration across disciplines, and a deep commitment to safety. As we continue to navigate the uncertain landscape of technological advancement, it is crucial that we prioritise not just innovation, but also the ethical, legal, and social implications of the decisions we make.
You can view the slides from this talk here!
For the second half of the event, the AI^2 team hosted an AI- and medical-themed Articulate game, which was also highly entertaining!
Blog written by: Zoe Hancox (with help formatting using ChatGPT and image generated by DALL-E 3)