We hosted our AI and Data Conference at Bletchley Park this January. The historic backdrop inspired us to look to the future potential of AI and data in social care. Over the course of the day we heard from customer panels, product managers and Nourish users on and off stage. It was an inspiring event that energised all attendees for the year ahead!
The day started once everyone had settled into the historic venue. Our Chief Product Officer Matthew Stewart opened the proceedings with an overview of our position on AI, our ambitions for care intelligence, and the odd seasonal joke here and there. This was followed by our Director of Data & AI Sudha Regmi detailing the Nourish view of Analytics, Insights and Nourish AI.
We had two panels with customers, one on either side of lunch. The first panel, led by our Chief Customer Officer Paul Barnes, focussed on Nourish Analytics. Paul was joined by Emma Lindblom, Head of Quality Improvement at MHA; Gareth Williams, Flexible Workforce Manager at Brandon Trust; and Benjamin Winfield, Product Owner Lead for Quality and Business Systems at Lifeways Group. They built upon Sudha’s point about moving from descriptive to prescriptive analytics, and discussed the journey of embedding the functionality in an organisation.
Mark Gray, Nourish AI Product Lead, hosted our second panel, which focussed on our new AI platform Nourish Confidence and its co-production journey. Our Clinical and Safety Lead Carrie Taylor and Emma Brazier, Business Analyst at Sanctuary Supported Living, joined him on the panel. They discussed our journey to finding the right application for AI, how we can change the way we approach audits to drive continuous improvement for people with support, and how we can bring together different functionality to provide holistic, truly person-centred care and support.
Over lunch attendees were treated to a tour of the venue, taking in the history and groundbreaking work that took place in the birthplace of AI.
The best discussions happened in between all of the sessions. We took the opportunity to display many of the new platforms and features of the Nourish ecosystem. Talking to people from across the wide range of care and support services represented by our attendees is always the highlight of these events. We closed with a discussion of our roadmap, and our Chief Technology Officer Jamie Hibbard made a special announcement about our AI Labs!
We are incredibly grateful to everyone who joined us at Bletchley Park. Some of our attendees even jumped on camera with us to share their experiences and perspectives. You can see these Nourish users in the video, and look out for longer versions of their interviews on our social media!
Ozayr Patel, Development Manager, Lancashire County Council
Steve Bowler, Digital Transformation and Implementation Manager, Greensleeves Care
Sian Smail, Health and Social Care Data Analyst, Care Dorset UK
Elliot Goodwin, Area Director, Consensus Support Services
Alicia Ingham, Operational System Improvement Lead, MHA
Jay Harper, Head of Communications and Projects, Rehability UK
Jane Hayden, Head of Technology, Treloar’s
Ross Watson, Senior Product Manager, HC-One
Gemma Pitman-McGrath, Clinical Development Nurse, Barchester
Steve Daniels, Operational Change Lead, iVolve Care & Support
We are so excited for the future of AI & data in health, care and support technology. Find out why on our Nourish AI page.
Anthropic recently published a new constitution for ‘Claude’, their Gen AI model. The AI is designed for natural conversation, complex reasoning, coding, analysis, and content creation. Anthropic was founded with the aim of creating AI with a focus on safety and ethics, an ambition shared in the health and care sectors, where safety is a paramount concern given the huge responsibility communities entrust to their care and support providers. Throughout their technology’s iterations Anthropic repeatedly look to connect their functionality to human values. This approach resonates with us at Nourish, where we firmly root our AI models in lived experience by always keeping a ‘human in the loop’ throughout the design process. We believe principles are essential for applied AI, as we continue to search for the most impactful and effective applications of AI in health and care technology.
Our Director of Data & AI, Sudha Regmi, saw several parallels between Claude’s new constitution and our own approach to applied AI. Here are her three key takeaways from the recent announcement.
“When you’re raising kids, you can’t just hand them a list of rules and hope it covers every weird eventuality life throws at them,” explained Sudha. “You try to teach values and principles, then trust they’ll generalise when it matters.”
Anthropic is basically saying the same thing. No, not that all GenAI is inherently attention-seeking and impulse-driven. (That’s just a good portion of the user base.) They are saying: don’t just tell the model what to do, teach it why, with context. This gives the model the ability to generalise across novel situations. With ‘rules’ in place, GenAI can become confused by edge cases. With ‘principles’, the model is better equipped to understand context and infer the correct decision.
“This is an understandable but misleading hangover from early AI scepticism. Pretraining is very important for AI models, but it is not the be-all and end-all, especially for applied AI. Post-training is also very important, along with feedback loops, policy layers and product choices. When choosing how to apply AI models to modern challenges, there is no predictable answer. We must, to borrow a technique from social care, review our impact, continuously improve and remain connected to lived experience.”
No two experiences are exactly the same. You can see this in the real world, with “the same base model” behaving differently depending on where you use it. Put the same prompt into different models and compare the output. These differences express themselves across both different models and different iterations. For example, think about the difference in output when you have Copilot (in GPT-5 mode), ChatGPT or the OpenAI GPT models via their APIs doing the same tasks. Or the differences when using Claude Sonnet in Claude Code versus in Cursor.

There are vast differences in models that share similar origin stories. Applied AI will continue to develop in a huge variety of fields. We must be able to more deeply understand the context of the model, before we can effectively teach the model the context of its application.
“The constitution is explicit about priority order: safe → ethical → compliant → helpful. It includes ‘hard constraints’ for especially high-stakes work. It validates that the process of building AI is just as important as the product.”
The process of building applied AI is crucial. Principles are deliberate, and they can’t just live on a slide deck. They must be represented in your approach to AI design, including:
Incentivising the right kind of outputs for the context
Health and care are not vibes-based domains. Decisions carry a huge amount of weight and involve sensitive personal data. Context is essential for understanding both best practice and the reality of the responsibility.
Human-in-the-loop validation with clinical experts
Lived experience remains the alpha and the omega for technology design. Without human input into the design process, AI can never be considered responsibly designed.
Tight feedback loops with our customers to pressure-test what “good” looks like in practice
That’s what breeds confidence in applied AI. Not a reverential ‘this is what the model said to do’. Not black-box decision makers. But results driven by a combination of principles, process and validation.
If you’re building with AI right now or even thinking about adopting AI tools, we’d recommend reading the constitution. Even if you disagree with parts of it. It’s a useful window into how one lab is translating “alignment” into something operational.
If you want to know more about Nourish Care’s approach to applied AI for health, care and support, click here.
At UK Care Week 2025 our Director of Data & AI Sudha Regmi took to the Caring & Sharing stage to address a topic sweeping the social care sector: Artificial Intelligence. Sudha spent time laying out the Nourish approach to AI design and model development, one rooted in responsible AI, co-production and transparent modelling.
Sudha draws on her 15 years of experience developing AI models across a range of industries to lay out the potential applications of AI in social care, starting with data analysis and carrying through to predictive and prescriptive applications, with specific regard to the ‘Triple Aim’ of improving care quality, personal outcomes and care outcomes.
Sudha also explores the guiding aims and design principles that shape AI model development. She details potential risks, explains why responsible, transparent design is crucial, and shows how these tie into the UK government’s responsible AI principles.
Find out more about Nourish Care’s commitment to responsible AI design, and how we are building the future of social care alongside our users.
AI is a discussion taking many forms. In care and support it is vital that these discussions always keep the people using your services at the forefront of the AI design process. This can only be guaranteed through a commitment to co-production and collaboration across health and care providers, suppliers and communities.
If you’d like to learn more about how we work with other suppliers, make sure you check out the Nourish Partnership Programme for a list of compatible technologies we integrate with. If you are an existing Nourish user, you can contact your Account Manager directly to learn more.
If you’d like to learn more about working with Nourish check out our case studies. We cover a range of care and support types including residential, home care, learning development, mental health and more. Read the case studies here.