Gazelle Global
Job Description
Join Gazelle Global as an AI Test Lead within the AI Foundry, where you will stand at the forefront of the next technological frontier. In this high-impact contract role, you are the guardian of trust and reliability, ensuring that cutting-edge AI and Copilot solutions are not only high-performing but also ethically sound and compliant. You will work in a space where traditional quality assurance meets modern AI challenges, helping one of the UK’s most respected sectors navigate the complexities of bias, hallucination, and explainability with precision and foresight.
AI Test Lead (AI Foundry)
As an AI Test Lead, you will take end-to-end ownership of the enterprise testing strategy for AI and Copilot ecosystems. Your role is deeply strategic and technical: you will define the quality gates that prevent unreliable models from reaching production while designing rigorous evaluations for prompt variability, edge-case simulation, and ethical compliance. Beyond initial testing, you will lead the charge on post-deployment monitoring and drift detection, partnering closely with Risk, Legal, and Compliance teams to ensure every AI behaviour aligns with internal policies and global regulatory standards such as ISO 42001.
Functional/Technical (Role Specific)
- Higher education qualification (or equivalent experience) in Ethics, Law, Risk Management, Social Sciences, Data/Computer Science or a relevant field.
- Experience designing and leading testing for complex digital or data‑driven systems, including multi‑component architectures, API‑integrated platforms, event‑driven workflows and systems operating under regulatory or high‑assurance constraints.
- Clear understanding of AI‑specific risks such as hallucinations, bias, drift, explainability gaps, safety breaches and misuse pathways, paired with the ability to design targeted tests that uncover model blind spots and systemic weaknesses.
- Knowledge of model‑evaluation techniques, prompt‑testing strategies and scenario‑based testing approaches, including stress‑testing prompts, adversarial input creation, failure‑mode exploration and behaviour‑driven evaluation.
- Familiarity with governance, audit and regulatory standards for AI, data and digital services, ensuring testing evidence aligns with internal risk frameworks, ISO 42001 controls, Responsible AI policies and external regulatory expectations.
- Experience developing structured QA strategies that integrate traditional and AI‑specific assurance, mapping out test plans, risk‑based prioritisation, acceptance criteria, model‑readiness thresholds and quality gates aligned to lifecycle stages.
- Ability to define and execute test plans across functional, non‑functional, ethical and performance dimensions, validating accuracy, latency, robustness, security, fairness, reliability and user‑journey consistency.
- Strong analytical mindset with the ability to identify root causes of defects or unexpected AI behaviour, performing deep‑dive diagnostics across data pipelines, vector stores, prompt flows, orchestration logic and human‑in‑the‑loop checkpoints.
- Experience with post‑deployment monitoring, drift detection and continuous validation, designing alerts, retraining triggers, performance thresholds and evaluation cadences to maintain long‑term model integrity.
- Comfort learning and adapting to emerging AI technologies and engineering patterns.
- Excellent stakeholder management and communication skills, including senior‑level engagement.
- Commercial awareness and a value‑driven mindset.
- Active use of professional networks and external sources of expertise, with clear evidence of ongoing learning and development to build and maintain skills.
Sector (desirable)
- Understanding of the financial services industry, its markets and competitors.
- Understanding of how financial services organisations operate and the associated regulatory environment, or of other regulated industries.
- Awareness of the Mutual Sector and the needs and interests of Members.
Commercial
- Ability to work with autonomy and make operational decisions.
- Experience of delivering organisational change.
- Understanding of related functions and/or services outside of the role's direct remit.
- Experience of managing a set of internal and external stakeholder relationships.
Interpersonal
- Good interpersonal skills and the ability to build and maintain strong working relationships.
- Ability to work effectively in diverse teams.
- A problem-solving approach, with the curiosity and proactivity to engage with and understand both strategic business goals and our customers' needs.
- Ability to identify areas for improvement and create innovative approaches to delivering better-quality service.
- Experience working in cross-functional teams and agile environments.
- Ability to identify, nurture and realise the potential in others.
- Strong communication, engagement and influencing skills.
- Ability to effectively represent YBS through building collaborative relationships.
By joining this Leeds-based team, you will contribute to a culture of continuous improvement, evaluating the latest tools in LLM-monitoring and synthetic data generation to mature our AI assurance capabilities. You will act as a mentor and champion for responsible AI principles, ensuring transparency and fairness are baked into the development lifecycle. If you are a proactive problem-solver who thrives in agile environments and is ready to manage the unique risks of the AI era, we want to hear from you.
To apply for this job, please visit uk.linkedin.com.