Introduction
AI is creating opportunities to extend the reach and impact of support services. It’s no longer a question of whether to start; it’s now about how to use AI meaningfully to enhance delivery, improve experiences for staff, and improve outcomes for end users. But according to the 2024 Charity Digital Skills Report, only 22% of charities feel prepared to respond to AI opportunities. Similarly, from Healthia’s work across the sector, we know that many organisations have yet to start, and those that have are in the very early stages of experimentation.

Right now, there isn’t a manual on how to work with AI, so the best way to learn is from peers through knowledge sharing and experimentation. So we have conducted research to explore:

● How people are getting started
● Where AI has helped organisations do more with less
● Opportunities, risks and ethical considerations

This report shares practical insights and lessons on navigating AI’s potential, whether that’s improving internal processes, tailoring user support, or enabling more accessible services. Our research involved in-depth interviews with people already using AI in service delivery, a short survey of our network, plus desk research into examples and case studies.
We have included contributions from:
Executive summary
People we spoke to recognise that AI is moving fast but are often working with limited resources, so it can be hard to keep up. There is a gap between current activity and what people think could be possible. People talked about the potential for AI to:
● Accelerate digital service transformation on a budget
● Make services more user-centred and accessible
● Boost internal process efficiency
● Allow staff more time to focus on human connections
Through the research, three clear themes emerged:
- Top-down vs bottom-up, where to start with AI? While some organisations are adopting top-down AI strategies, supporting frontline staff to experiment and learn can spark practical solutions and uncover scalable innovation.
- Digital vs. human support, do you need to choose? How do we retain the human touch as the world becomes digital? Technology should be an enabler not a replacement for what people do best.
- Opportunity vs. risk, how to strike the right balance? Concerns about unpredictable and sometimes inaccurate results are being mitigated by starting small and taking an iterative approach.

We include case studies from Macmillan Cancer Support, RNIB, Scope, and Breast Cancer Now, who are exploring AI integration with services. And lastly, based on this research we’ve drafted a set of principles and shared ten ideas around ways to get started with AI in service delivery.
Themes
- Top-down vs. bottom-up: where should we start?
AI is multifaceted and evolving, so it can be hard to know where to start. This might imply a risk-averse approach is best, with a focus on creating a detailed organisational strategy factoring in all possible risks and mitigations before allowing experimentation. However, some organisations have taken a different approach by supporting a process of small-scale experiments within a governance framework. This supports staff to test and evaluate ideas that may warrant further large-scale exploration. It can also lead to AI tools developed for one particular service being adapted and reused elsewhere. Rachael Gilthorpe from Action for Children told us their approach is to “build once and implement twice”.
Creating an environment for innovation:
Some organisations foster innovation by giving staff dedicated research and development time to explore and test AI ideas. Those closest to service delivery are often best-placed to uncover challenges and needs that can be addressed through technology. Giving staff the space to experiment is a great way to get started, but it also requires open-door policies from cross-functional leadership to allow ideas with potential to scale safely.
Starting small still requires a safe, best-practice approach:
‘Just starting’ with AI needs to go hand-in-hand with an understanding of the risks and limitations. Changes need to be made in an open and collaborative way, and it’s important to have ways to assess risks and blockers early in the process.
Example:
Macmillan creates space for innovation by providing dedicated R&D time and an open-door policy with technical leadership, which encourages staff to try out new ideas. Using this time, the digital team developed an AI-powered semantic search that helps website users find the right information, even if the questions they ask are obscure, contain spelling errors, or use slang. The tool was designed to overcome the risks of AI giving false information or exposing sensitive data by training it to only provide answers in the form of links to publicly available content on Macmillan’s website. ‘Smart Search’ is now live and Macmillan gathers real-time data on user satisfaction and effectiveness.
“A visitor to our website might search for ‘Who will look after my goldfish?’. Our standard keyword search would return zero results for that phrase, but our new Smart Search, which is an AI-driven semantic search, will successfully return a link to our ‘Pet Care’ page, even though the word ‘goldfish’ does not appear on the page.” Howard Bayliss – Macmillan
What value does this bring?
● It enables users to ask questions in a way that feels natural to them, resulting in better engagement and support
● It saves Macmillan time on signposting and uses content that is already available
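The “links only” pattern behind tools like Smart Search can be sketched in miniature. To be clear, this is not Macmillan’s implementation: the page titles, URLs and toy concept-based embedding below are illustrative stand-ins (a real system would call a learned embedding model). What the sketch does preserve is the key design choice: the tool scores approved pages against the query and only ever returns links from a curated index, never generated text.

```python
import math

# Curated index: the tool can only ever return links to approved pages.
# Titles and URLs here are hypothetical examples.
PAGES = {
    "Pet care": "https://example.org/pet-care",
    "Financial help": "https://example.org/financial-help",
}

# Toy word-to-concept mapping standing in for a real embedding model:
# it lets "goldfish" match a page that never mentions goldfish.
CONCEPTS = {
    "goldfish": "pets", "dog": "pets", "cat": "pets", "pet": "pets",
    "money": "finance", "benefits": "finance", "grants": "finance",
}

def embed(text: str) -> list[float]:
    """Map text onto two hand-made concept dimensions."""
    vec = {"pets": 0.0, "finance": 0.0}
    for word in text.lower().split():
        concept = CONCEPTS.get(word.strip("?.,!'"))
        if concept:
            vec[concept] += 1.0
    return [vec["pets"], vec["finance"]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Pre-computed vectors for each approved page's content.
PAGE_VECTORS = {
    "Pet care": embed("pet dog cat goldfish"),
    "Financial help": embed("money benefits grants"),
}

def semantic_search(query: str, top_k: int = 3, threshold: float = 0.3):
    """Return links to the approved pages most similar to the query."""
    qv = embed(query)
    scored = sorted(
        ((cosine(qv, pv), title) for title, pv in PAGE_VECTORS.items()),
        reverse=True,
    )
    # Restriction by design: only links into the curated index are ever
    # returned, which limits the risk of hallucinated answers.
    return [(title, PAGES[title]) for score, title in scored[:top_k]
            if score >= threshold]
```

The confidence threshold matters: a query the index cannot answer returns nothing at all, rather than a misleading best guess.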
- Digital vs. human support: do we need to choose?
"The most important thing for us is to stay true to the needs of our users. People want to speak to a 'real person' so we need to find out how to optimise service delive whilst answering that need." Rachael Gilthorpe - Action for Children
Charities are stretched more than ever before. As demand for services increases, budgets are decreasing. But what if AI can support doing more with less, allowing more face-to-face support for those in need without having to compromise?

Face-to-face interventions tend to be prioritised by service providers as they are often the best way to personalise support and help people to feel cared for. This can result in digital being deprioritised. But a user-centred approach—prototyping and testing changes with end users and staff—will identify where AI is fit for purpose and, importantly, where it is not. Jonathan Chevallier at Charity Digital points out: “You should always be user-led in your designs. Too many businesses before were maybe just saying, ’how do we stop these calls coming in and save money?’ And they created massively frustrated users.” (source: Third Sector Podcast). AI can’t fix everything, but by focusing on the needs of staff and end users we can identify the problems worth solving.
What if AI can help you to:
● Extend service provision out-of-hours?
● Identify the most vulnerable people so their needs can be prioritised?
● Spot insights that may go uncovered by humans?
● Limit the exposure of staff to distressing data?
Enhance, don’t replace: AI offers new ways to manage, make sense of, and enhance our use of data to better serve people. AI chatbots and dynamic questionnaires are helping services to enhance their front door, making their services more accessible to people even out-of-hours. A proactive chatbot can also help gather more information about users and their needs, to help spot hidden needs earlier in the journey. This means that when staff get back to users they’re not starting from scratch and already have case notes to work with.
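As one illustration of that intake pattern, a scripted flow can turn chatbot answers into structured case notes and flag cases for priority human follow-up. Everything here is hypothetical: the questions, keywords and field names are invented for the sketch, and a real chatbot would generate follow-up questions dynamically rather than following a fixed script.

```python
# Hypothetical scripted intake questions for an out-of-hours chatbot.
INTAKE_QUESTIONS = [
    ("name", "What should we call you?"),
    ("topic", "What would you like help with today?"),
    ("urgency", "Do you need support right away, or can someone call you back?"),
]

# Illustrative words that suggest a case needs priority human follow-up.
PRIORITY_KEYWORDS = {"crisis", "urgent", "unsafe", "emergency"}

def build_case_notes(answers: dict) -> dict:
    """Turn raw chatbot answers into structured case notes for staff."""
    text = " ".join(answers.values()).lower()
    flagged = sorted(w for w in PRIORITY_KEYWORDS if w in text)
    return {
        "answers": answers,
        "priority": bool(flagged),
        "priority_reasons": flagged,
        "summary": f"{answers.get('name', 'Unknown')} asked about: "
                   f"{answers.get('topic', 'unspecified')}",
    }

def run_intake(ask) -> dict:
    """Ask each scripted question via the `ask` callable, then build notes."""
    answers = {key: ask(prompt) for key, prompt in INTAKE_QUESTIONS}
    return build_case_notes(answers)
```

The point of the sketch is the hand-off: when staff pick the case up the next morning, they start from a summary and a priority flag, not a blank screen.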
Example:
Carefree’s AI chatbot FIN AI allows them to support users when staff aren’t available and respond to the growing demand without needing more staff. This is a powerful example of how AI can extend the reach of smaller organisations.
“We treat FIN AI like a member of the team [...]. Our values are a very important part of our process so the bot needs to talk like us and be warm, caring, proactive and supportive." Charlotte Newman and Miruna Harpa - Carefree
Sometimes technology is the best way to meet needs: In some specific contexts or for particular user groups, digital-first is the answer. For example, during our service design work for the NHS, we spoke to teenagers who were more comfortable opening up about their mental health struggles online instead of attending a clinic drop-in session. Offering multiple front doors into a service increases the chance to ‘meet’ people on their terms when they’re ready to get help.
Content tailoring and accessibility: AI tools can help organisations tailor content more easily and speed up the creation of accessibility features. As technology advances, it can go beyond simple tasks like text-to-speech, adding user-focused features like applying Plain English to online content or automatically creating subtitles for videos and transcripts for podcasts.
Increasing understanding of user needs and responding more quickly: Some organisations are already using AI to increase their knowledge of what people need. For example, Parkinson’s UK uses AI to monitor and analyse users' conversations and trending topics across digital channels, and uses the resulting data to shape content strategies and tailor resources around the biggest needs.
Balancing human and digital support: It’s important to understand the opportunities and limitations of AI-powered technology to establish where it is fit for purpose and where it isn’t. Building context-specific knowledge and solutions based on user needs increases the chances of making a positive impact on people’s lives.
Example:
Scope is exploring how AI can assist user researchers in the manual aspects of processing and sharing user insights. Their content design team is investigating how AI can help reduce time spent on manual tasks. They have been conducting experiments with mock data to see how AI can support their needs and processes. One experiment is focusing on developing an internal AI chat tool to enable other departments to quickly access user insights, which the team currently provides manually. Another experiment is exploring whether there is potential for AI to assist in uploading emotionally challenging qualitative data to their database, which would reduce effort as well as the emotional burden on the team.
“We have to manually input a lot of emotionally challenging, qualitative data into our database...could AI help with this? This could help take some of the emotional load off the team, and allow us to focus on other parts of our work” Nina M. - Scope
What value does this bring?
● Reducing time spent on manual tasks can allow the team to focus on the work where their skills add most value, for example, uncovering research insights and writing user stories.
● Easy access to user insights within the organisation will inform evidence-based decision making.
Example:
The Royal National Institute of Blind People (RNIB) is using AI to make bank statements accessible. They know that blind and partially sighted users struggle to get essential information quickly from transcribed documents. Traditional text-to-audio software reads every word in order without any prioritisation, making it tedious to access details. To address this, RNIB conducted research to understand what information is important to users across different document types. They went on to develop an AI solution using Azure OpenAI to interpret documents and provide a more tailored, helpful audio experience for users.
“We see ourselves as pushing the boundaries of what’s possible. By combining user research and AI solutions, we are able to provide a more tailored, helpful audio experience for blind and partially sighted bank customers.” Aidan Forman - RNIB
What value does this bring?
● This innovation improves experiences for blind and partially sighted people, going beyond simply providing access to information.
● The tool ensures users can access the information they need quickly and easily, reducing cognitive load.
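The prioritisation idea behind RNIB’s approach can be sketched as follows. The field names, priority order and spoken templates below are illustrative assumptions, not RNIB’s actual design, and a real system would use a language model (RNIB used Azure OpenAI) to extract these fields from the raw document. The sketch only shows the final step: reading the most important information first, instead of reading the document top to bottom.

```python
# Hypothetical priority order, imagined as the output of user research
# into what statement information matters most to users.
FIELD_PRIORITY = ["balance", "overdraft_warning", "largest_payment", "statement_period"]

# Spoken-line templates for each extracted field (illustrative only).
TEMPLATES = {
    "balance": "Your current balance is {value}.",
    "overdraft_warning": "Warning: {value}.",
    "largest_payment": "Your largest payment was {value}.",
    "statement_period": "This statement covers {value}.",
}

def audio_script(fields: dict) -> list[str]:
    """Render extracted statement fields as an ordered list of spoken lines,
    most important first, skipping fields the document didn't contain."""
    return [TEMPLATES[name].format(value=fields[name])
            for name in FIELD_PRIORITY if name in fields]
```

Whatever order the fields arrive in, the listener always hears the balance before the statement period, which is the contrast with read-everything-in-order text-to-speech.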
- Opportunity vs. risk: how do we strike a balance?
AI comes with its own set of risks and limitations, including ethical and privacy concerns. In the context of customer-facing tools, the risks of bias or data breaches need to be addressed. A poorly designed AI tool risks alienating users through its use of language or a lack of awareness of condition-specific contexts.

For some organisations and in certain contexts, the risks outweigh the opportunities, resulting in a blanket ban on the use of AI. But this can lead to well-intentioned teams bypassing restrictions and deploying their own solutions. Instead, people told us they have had more success through increasing AI literacy across an organisation while simultaneously educating staff around risks and limitations. Some organisations are already providing AI training for their staff through guidance documentation and ‘lunch and learn’ sessions.

The importance of data governance and AI policy: People we spoke to said that even when encouraging a culture of experimentation, strong data governance and AI policies are essential. Involving stakeholders from across an organisation—IT, data security, digital, marketing, accessibility and customer support—in creating these policies will result in policies that are usable and adhered to. The people closest to the work are often best-placed to spot specific issues. Policies need to be regularly reviewed and updated to keep up with AI development and an evolving understanding of user needs. There are templates and guidance available online that can help organisations get started with creating an AI policy.
Working in the open: By working openly and learning from others, both within and outside their field, organisations can better identify and manage risks, helping them move forward. Although it may seem counterintuitive, involving more risk-averse teams early on can encourage innovation through teamwork, inclusion, and co-creation. Engaging with individuals and teams from across the whole organisation invites people to share thinking, experience and perspectives on the considerations and decisions around AI.

Collaboration can also accelerate the process. For example, Comic Relief has joined an AI collaborative working group alongside other UK charities. They meet regularly to learn from each other, build skills and discuss best practices in a safe, open setting.

Restricting AI by design to minimise risk: AI isn’t always completely accurate; hallucinations are well documented. Some risk can be mitigated through design. It’s possible to design tools in a way that restricts the type of data the AI references, and controls the type of answers shared with users. Constraints can be a good thing. Howard Bayliss at Macmillan told us that necessary governance and restriction had led to innovation in the organisation.
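Restriction by design can be made concrete with a fail-closed answering pattern: the assistant may only respond from a curated set of approved passages, and declines when nothing relevant is retrieved, rather than letting a model improvise. The passages and the naive keyword retrieval below are toy assumptions purely for illustration.

```python
# Hypothetical approved passages: the only content the tool may draw on.
APPROVED_PASSAGES = {
    "opening-hours": "Our helpline is open 9am to 5pm, Monday to Friday.",
    "donations": "You can donate online or by post.",
}

# Common words ignored during matching so they don't cause false hits.
STOPWORDS = {"is", "the", "a", "to", "our", "you", "can", "or", "by", "when", "what"}

def _keywords(text: str) -> set[str]:
    return {w.strip("?.,!") for w in text.lower().split()} - STOPWORDS

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval over the approved passages only."""
    words = _keywords(query)
    return [text for text in APPROVED_PASSAGES.values()
            if words & _keywords(text)]

def answer(query: str) -> str:
    context = retrieve(query)
    if not context:
        # Fail closed: no approved source means no generated answer.
        return "Sorry, I can't help with that. Please contact our team."
    # A real tool would pass `context` to a language model instructed to
    # answer only from it; this sketch returns the matched passage directly.
    return context[0]
```

The constraint is the point: the worst case is an unhelpful “please contact our team”, never a confident fabrication.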
Example:
Breast Cancer Now is focusing on developing AI literacy across staff, leadership and trustees. They want to adopt AI systems that are effective and efficient but that also avoid discrimination or harm. They are doing this by hosting lunchtime learning sessions and writing internal guidance to help increase knowledge around the use of AI. Through open dialogue, they want to engage everyone in the journey, and ensure everyone is aware of the risks and limitations of AI. They are also piloting ‘off the shelf’ tools so that they can learn through experience.
“Charities are thinking 'should we or shouldn't we' with AI. But I think this is the wrong question. The question is ‘how is AI impacting the people we serve and how should we respond to that now and in the future?’” Cath Biddle - Breast Cancer Now
What value does this bring?
● Staff can use AI more confidently and responsibly when they understand its risks and limitations.
● Involving everyone also brings diverse perspectives, helping to uncover important considerations, such as those related to ethics, accessibility, or data biases, that might otherwise be missed.
Getting started
Principles
- Start with people and prioritise the problems worth solving There are so many things AI could do, so how do you know which problems are the right ones to solve? By starting with people’s needs and evaluating concepts against those needs, we reduce the risk of investing in the wrong solution. Starting with staff needs can result in process efficiencies that also improve experiences for end users.
- Understand and mitigate potential risks People who had already started their journey with AI told us that strong data governance and AI policies—with the right voices involved—were the foundation of secure and ethical AI development. Given the pace at which AI is evolving, these will need to be regularly reviewed.
- Create an environment for innovation Innovation doesn’t happen in a vacuum. Strong leadership is necessary to drive change. By educating and encouraging staff to test new ideas, we can expedite learning. Having open-door policies or creating AI multi-disciplinary working groups will foster the right environment for innovation.
- Start small and iterate You don't need to be an AI expert to start experimenting or begin the conversation. It’s OK to start with small scale pilots using off-the-shelf solutions and see where this takes you.
- Don’t go it alone Embrace collective power, collaborate with people inside and outside your organisation. By sharing insight, ideas, challenges and solutions we can accelerate AI development that leads to increased service impact and reach.
Ten example support service AI use cases
- Casework management Simplify tedious casework management to free time for more face-to-face support. HelpFirst uses AI to help staff prioritise the most vulnerable users and summarise case notes.
- Out-of-hours support Keep the service front door open using enhanced chat and questionnaires. Carefree uses AI to support users when staff aren’t available.
- Online customer insight tracking Monitor and analyse users' public conversations and topics and use data to shape content strategies. Parkinson's UK uses AI to better listen to and respond to the needs of their online communities.
- Funding applications AI can help with drafting and refining grant applications, suggesting structure and language enhancements based on previous, successful submissions.
- Accessibility Improve access to information for people with access needs. RNIB uses AI to enhance its accessible document service.
- Volunteer matching Help managers coordinate volunteers by matching their availability, preferences and skills to opportunities. LiveImpact has been exploring ways to leverage AI for effective volunteer management in non-profits.
- Triage first contact AI can facilitate initial contact. Mind uses Limbic’s virtual referral assistant.
- Educational resources in emergencies Support users with quick advice in response to specific questions in an emergency. Tailored answers can draw on data and guidance in the context of a specific situation. Save the Children are developing an AI tool to provide context-specific child protection advice.
- Website navigation and search AI-powered search tools can help users quickly navigate content-rich websites. Macmillan have developed an AI-powered semantic search tool to help users find what they need online.
- Adding accessible content Use AI to automatically generate subtitles for videos and transcripts for podcasts.
Collaborative service design for effective transformation
Healthia is a strategic service design consultancy working across health, care and public services. Find out how we can help you deliver human centred services here.
ServiceShift We run ServiceShift - a concise, structured process to help you identify and prioritise impactful AI opportunities, align stakeholders and define a roadmap with actionable next steps. For more details, please contact: gareth.fryer@healthia.services - 07984 972 234.
Problems Worth Solving is our monthly email and podcast with discussion, articles, events, tools and techniques to help you improve services by putting users first. Find out more and sign up here.
Appendix

Resources for charities
● Guidance on AI and data protection (Information Commissioner’s Office)
● AI policy template for charities (Platypus Digital and William Joseph)
● AI hub (Charity Comms)
● Evaluating generative AI text tools (Neontribe via CAST’s Shared Digital Guides)

References
● Amar, Z. (2024) How is AI changing organisations?
● Amar, Z., Ramsay, N. (2024) Charity Digital Skills Report
● Bayliss, H. (2024) AI Semantic Search
● Charity Digital (2023) How are charities using artificial intelligence in service delivery?
● Good Innovation (2024) AI Charity Collaboration
● Joseph Rowntree Foundation (2024) Grassroots and non-profit perspectives on generative AI
● Latham, P. (2024) Charities and Artificial Intelligence
● Scurr, D. (2024) Charities harness AI for greater impact
● Tanner, J. (2023) Five tips for leading in the AI era
● Third Sector (2023) Third Sector Podcast: Charities and ChatGPT
● Twentyman, J. (2021) Parkinson’s UK turns to AI to head off the ‘charity crunch’

Disclaimer Any tools mentioned in this white paper are not endorsements. The examples and case studies are for illustration only, and readers should conduct their own research before adopting similar approaches or tools. The contributions from the charities we spoke to reflect their professional experiences, but don’t necessarily represent the views of their employers or any related organisations. All information is shared in good faith to encourage thoughtful discussion and to help you deliver better services.

Paper authors and editors Molly Northcote, Cécile Pujol, Gareth Fryer, Claire Reynolds, Sam Menter