AI Technical Governance Specialist
Helsinki, FI 00500 · Taastrup, DK 2630 · Stockholm, SE 111 46
Job ID: 4043
Welcome to Group Technology, where we pride ourselves on engineering solutions that directly impact Nordea's 2030 goals to modernize data technology and accelerate AI.
We're seeking an AI Technical Governance Specialist to manage technical risks in AI and agentic systems. You'll focus on technical architecture, infrastructure, and operational risks while collaborating with engineering, architecture, and information security teams. Working at the intersection of AI technology and risk management, you'll enable responsible AI adoption and regulatory compliance through innovative governance solutions, guiding safe implementations and advancing our AI governance capabilities.
Nordea is a place where traditions meet tomorrow. We're not just a bank; we're a tech employer on a mission to evolve finance securely and responsibly. Together, we impact millions of people's daily lives by ensuring they can access our solutions anytime, anywhere, while safeguarding their personal data and wealth. Join us in making an impact on the banking industry.
This position is a full-time, permanent role with hybrid working location based in Helsinki, Finland, Copenhagen, Denmark, or Stockholm, Sweden.
About our team
Meet the AI Portfolio Delivery team - a collaborative unit within Group AI responsible for governance, architecture, and management of Nordea-wide AI applications. We transform ambitious AI visions into practical, compliant solutions across the organization.
Our culture emphasizes collaboration, continuous learning, and innovation in an open environment where diverse ideas thrive. We support each other's growth while pushing technology boundaries, prioritizing work-life balance, and open communication.
What you will be doing:
- Support AI initiatives with deep technical expertise, collaborating with colleagues who bring model risk and business risk knowledge
- Collaborate with senior technical stakeholders across the organization to develop pragmatic governance approaches to emerging AI risks
- Assess AI platforms, systems, and architectures from both technical and information security perspectives
- Contribute to the development of risk-based governance frameworks and streamlined approval processes for AI implementations
- Support the creation and refinement of AI governance standards, including risk assessment criteria and operational guidelines
- Provide technical input on vendor risk assessments and contractual considerations for AI solutions
- Work with cross-functional teams to establish governance controls for AI experimentation and production environments
- Drive innovative approaches to governance challenges in the evolving AI landscape
Who you are
You have technical experience engaging with engineering, architecture, IT security, and compliance teams on complex AI risk topics. You take a structured approach to analyzing technology and risk, translating technical realities into practical guidance for responsible AI development. You're collaborative, working across teams to ensure governance discussions reflect real-world technology implementation.
We're looking for someone with most of the following experience and skills:
- BSc or MSc in Business, Engineering, Risk Management, Law, or a related discipline
- Experience with AI platforms, features, systems, and relevant infrastructure, from both a technical implementation and a governance perspective
- An understanding of information security principles as they apply to AI systems, and experience implementing security controls for AI in financial services
- 5+ years of experience in software engineering, AI/ML engineering, platform engineering, or similar
- 5+ years of experience working with AI/ML technologies, including understanding of system architectures and data pipelines
- Hands-on experience working with cloud platforms (e.g. AWS, GCP) in an AI context
- Ability to communicate effectively with technical and non-technical stakeholders at all levels
- Strong analytical and problem-solving skills with attention to detail
It would be ideal if you also have:
- Experience with regulatory frameworks and compliance requirements in financial services or other regulated industries
- An understanding of data governance and privacy considerations for AI
- Familiarity with third-party risk management processes
- Experience with AI model validation, monitoring, and performance assessment in the financial industry
- Experience with risk assessments and technical due diligence for AI features and platforms
- Professional risk management qualifications
- Experience working in risk management or governance functions
We encourage applications from candidates who are passionate about AI governance and meet most of these criteria. We value diverse perspectives and welcome applicants from all backgrounds.
If this sounds like you, get in touch!
Next steps
We kindly ask you to submit your application as soon as possible, but no later than 28/04/2026. Applications or CVs sent by email, direct message, or any channel other than our application form will not be accepted or considered.
At Nordea, we know that an inclusive workplace is a sustainable workplace. We deeply believe that our diverse backgrounds, experiences, characteristics, and traits make us better at serving our customers and communities.
If you have any questions about our recruitment process, please reach out to our tech recruiter and main point of contact anna.dahlstrom@consult.nordea.com.
Only for candidates in Sweden: For union information, please contact finansforbundet@nordea.se or SACONordea@nordea.com.