What does it take to trust AI?

The NHS is preparing for another challenging winter, which will strain capacity and resources. Pressing concerns include the needs of an ageing population, high rates of chronic conditions, growing living costs, long waiting times, staff shortages, and outdated infrastructure.

As winter approaches, the need for innovations that significantly improve care will only grow. Instead of reacting to these challenges, there's a call for a proactive approach that relies on data-driven intelligence for informed decision-making – an approach well understood by the DataRobot team.

By utilising the transformative capabilities of artificial intelligence (AI) and top-tier data analytics, DataRobot collaborates closely with NHS organisations to improve resource allocation, streamline workflows, and enhance clinical decision-making.

The ability to interact with, leverage, and optimise data effectively is critical to system resilience. Implementing data-driven solutions, such as the DataRobot AI platform, is an opportunity to analyse care pathways and optimise their components through AI.

As a customer-facing data scientist at DataRobot, I believe the application of AI tools marks the beginning of true innovation in patient flow, enabling advanced, data-driven decisions that make proactive use of known information. AI and predictive modelling can help health services move beyond crisis reaction to pre-emptively address the impact of events across the healthcare flow spectrum.

Partnership and system-wide coordination

I started my data science journey as a researcher at the University of Oxford, where I specialised in applying data science to better understand the dynamics of urban development. A key component of my work was collaborating with local governments in diverse locations, from Greater Manchester to Medellín, Colombia, to identify relevant research questions and integrate research findings into policy-making.

The most important lesson I learnt was the need to align the interests and goals of all stakeholders at each step of the process. A clear route to implementation, together with ownership of each step, is crucial to the successful adoption of data science in the public sector.

Among the main challenges is getting buy-in from non-data-scientists, who will usually have reservations about the robustness of AI models, as well as concerns around privacy and security. Answering these challenges requires a two-pronged approach. Firstly, we need to demystify AI and change the mindset of those who are not familiar with the intricacies of machine learning.

As our Global AI Ethicist Haniyeh Mahmoudian says: “In our trustworthy AI framework, the first component is people. Education on data and AI literacy is the first step to engendering trust and foundation for discussion on risks of AI”.

Secondly, we need to ensure that explainability and governance are baked into every aspect of the AI life cycle. This will allow people to understand what the models are doing and why they are making the predictions they make. Fostering trust in AI systems will lift the barriers to introducing transformative AI technologies into the most impactful and sensitive processes that governments must manage.
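To make this concrete, consider a minimal sketch of model explainability using open-source tools. Everything here is hypothetical, including the dataset, the features, and the model, and it illustrates the general idea rather than how the DataRobot platform itself exposes its prediction explanations:

```python
# A minimal, hypothetical sketch of explainability: which features
# actually drive a model's predictions? (Illustrative only; not the
# DataRobot platform's own explanation tooling.)
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical patient-flow features.
X = pd.DataFrame({
    "age": rng.integers(18, 95, 500),
    "prior_admissions": rng.integers(0, 10, 500),
    "days_waiting": rng.integers(0, 60, 500),
})
# Hypothetical target: readmission within 30 days.
y = (X["prior_admissions"] + rng.normal(0, 2, 500) > 4).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure how much
# the model's accuracy drops. This model-agnostic explanation of what
# the model relies on can be shown to non-specialists.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An explanation like this is only the first layer; governance also means documenting who trained the model, on what data, and under which assumptions.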

Building trustworthy AI

The team at DataRobot sees trust as the core of the platform and seeks to ensure that AI solutions are trustworthy, reliable, and responsible. The Ethical AI team at DataRobot has designed a ‘Trusted AI’ framework that works across three dimensions: trust in the performance of your AI model, trust in the operation of your AI model, and trust in the ethics of your workflow.

It’s worth acknowledging that trust in an AI system varies from person to person, and from use case to use case. A model that predicts sales of carbonated drinks will be looked at differently than one that is used to screen job applications. But the tools required to understand and monitor both systems will be the same, and DataRobot ensures that every step is explainable and documented.
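As an example of such a use-case-agnostic tool, the sketch below computes the population stability index (PSI), a common drift check that applies identically whichever model is being monitored. The data and thresholds are synthetic and illustrative, not DataRobot's own monitoring implementation:

```python
# A generic sketch of drift monitoring with the population stability
# index (PSI); the same check applies to any model's inputs.
# Synthetic data and an illustrative threshold, not DataRobot internals.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training distribution to live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # keep outliers in range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
at_training = rng.normal(45, 12, 5000)    # feature seen during training
in_production = rng.normal(52, 12, 5000)  # same feature, live traffic
psi = population_stability_index(at_training, in_production)
# Rule of thumb: PSI above ~0.2 signals a shift worth investigating.
print(f"PSI = {psi:.3f}" + (" -> review the model" if psi > 0.2 else ""))
```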

Among these tools is one of the most recent additions to the platform: ‘Bias Mitigation’. While it was already possible to conduct bias and fairness monitoring in DataRobot, one can now go beyond measuring bias to actively reducing it.

A case study conducted by BCG and DataRobot looked into the disparity of income between men and women, and used it to illustrate how this new feature allows us not only to monitor the behaviour of the model with respect to protected characteristics, but also to implement techniques that reduce the bias of the predictions.
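For intuition, the sketch below measures a demographic parity gap on a synthetic dataset and then applies one simple mitigation family, per-group thresholding, as a post-processing step. The data and numbers are invented, and this is a generic illustration of the class of technique rather than DataRobot's Bias Mitigation feature itself:

```python
# A hypothetical sketch: measure a fairness gap, then mitigate it.
# Synthetic data; illustrates one mitigation family (post-processing),
# not the DataRobot Bias Mitigation feature itself.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "gender": rng.choice(["men", "women"], 1000),
    "score": rng.uniform(0, 1, 1000),
})
# Simulate a biased model: systematically lower scores for one group.
df.loc[df["gender"] == "women", "score"] *= 0.8

def selection_rates(frame, threshold):
    preds = frame["score"] >= threshold
    return frame.assign(selected=preds).groupby("gender")["selected"].mean()

# Measure: demographic parity compares selection rates across groups.
rates = selection_rates(df, threshold=0.5)
print("Before mitigation:\n", rates)

# Mitigate (post-processing): pick per-group thresholds so that each
# group is selected at roughly the same rate. Other mitigation
# families re-weight the training data or re-train the model.
target_rate = rates.mean()
thresholds = df.groupby("gender")["score"].quantile(1 - target_rate)
df["selected"] = df["score"] >= df["gender"].map(thresholds)
print("After mitigation:\n", df.groupby("gender")["selected"].mean())
```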

But perhaps the most crucial addition to DataRobot’s Trusted AI framework has been Generative AI. Whether for AI assistants or customer-facing chatbots, the biggest challenge we see is the confidence gap. Large language models are difficult to evaluate and monitor, and the possibility of hallucinations and misleading answers is a huge risk.

Just as with predictive AI, DataRobot has developed the tools necessary for government institutions not just to easily develop their own Generative AI solutions, but also to have the necessary guardrails to monitor, govern, audit, and fine-tune them.
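As a toy illustration of what one such guardrail might look like, the sketch below flags answers that are poorly grounded in the retrieved source text. The heuristic and the threshold are invented for illustration; production guardrails, including DataRobot's, use far more robust techniques:

```python
# A toy groundedness guardrail: flag answers whose wording is not
# supported by the retrieved context. Purely illustrative heuristic;
# real guardrails are far more sophisticated.
import re

def groundedness(answer: str, context: str) -> float:
    """Share of answer words that also appear in the source context."""
    tokenise = lambda text: set(re.findall(r"[a-z']+", text.lower()))
    answer_words = tokenise(answer)
    if not answer_words:
        return 0.0
    return len(answer_words & tokenise(context)) / len(answer_words)

context = "A&E waiting times rose last winter as admissions increased."
answer = "Waiting times rose last winter because admissions increased."
hallucination = "Waiting times fell sharply after a new policy in 2019."

for text in (answer, hallucination):
    score = groundedness(text, context)
    flag = "OK" if score >= 0.6 else "REVIEW"  # threshold is illustrative
    print(f"{flag} ({score:.2f}): {text}")
```

A check like this would sit alongside monitoring, audit logs, and human review rather than replace them.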

Ethical AI

While the first two dimensions of trusted AI look at the model’s performance and operation, the last dimension is the one that determines the impact of the model.

DataRobot’s Ethical AI team has played a crucial role in the development of the platform, and is also an active participant in worldwide conversations on the ethics of AI.

Haniyeh Mahmoudian, who leads DataRobot’s Ethical AI team, is also a member of the US National AI Advisory Committee. Haniyeh has served as a witness on AI ethics in congressional hearings, and our Field CTO Ted Kwartler has also presented on AI to Congress. Their influence is reflected in the way ethics is thought of at DataRobot, where the principles of purpose, disclosure, governance, and fairness are visible across the platform, from experimentation to production.

When it all comes together

In March 2020, the world confronted an unprecedented crisis with the onset of the Covid-19 pandemic. Policymakers were met with disparate predictions of potential fatalities, ranging from 60,000 to 2.2 million American deaths.

Faced with a dearth of useful information, DataRobot, renowned for its top-tier data scientists, embarked on the creation of an AI simulation known as the Covid-19 Decision Intelligence Platform to assist the U.S. Department of Health and Human Services.

Through collaborations with health institutions, this initiative yielded innovative approaches such as expediting vaccine trials and addressing challenges related to patient enrolment. The platform proved pivotal in enhancing data accuracy for hospital and test reporting, at-home antigen test distribution, and the management of hospitalisations.

The model delivered 21% higher accuracy than alternative approaches. More importantly, DataRobot was able to surface the bias in the vaccine-trial dataset, where minorities were poorly represented. This finding was incorporated into subsequent vaccine trials, where the participation of these communities jumped from 10% to 44%. This experience showcased the importance of alignment with stakeholders, the value of automation, and the need for elevated standards in trust and fairness when modelling for effective pandemic preparedness and response.

The use of DataRobot during the pandemic to predict and improve the efficiency and effectiveness of healthcare services is only the beginning. By improving trust in AI, and public awareness of how it works and what it can do, DataRobot is poised to unlock the power of machine learning to radically improve public policy.
