AI Engineering Program Manager

Acorn

Software Engineering, Operations, Data Science
New South Wales, Australia
Posted on Aug 1, 2025

About Acorn

At Acorn PLMS, we’re pioneering performance-based corporate learning powered by AI. We’re seeking a passionate AI leader to own the architecture, delivery, and scaling of our internal AI systems, built on a serverless AWS stack that incorporates CloudFront, S3, API Gateway, Cognito, Lambda, DynamoDB, and Amazon Bedrock, with RAG components (Aurora/OpenSearch/Kendra) and key data sources such as S3, SharePoint, and Confluence. If designing and managing generative AI and retrieval-augmented systems at enterprise scale excites you, this is the role for you.

We’re looking for someone based in Sydney or Canberra, Australia, to join our in-office team.

Role overview

As our AI Engineering Program Manager, you will lead the development of key AI infrastructure and feature delivery projects, including:

  • Generative AI agents powered by Amazon Bedrock
  • Retrieval‑Augmented Generation (RAG) systems using Aurora, OpenSearch Serverless, and Kendra
  • Cloud-native orchestration using AWS Lambda, API Gateway, Cognito, DynamoDB, and S3

You’ll be responsible for the entire project lifecycle - from requirements gathering and system architecture to execution and deployment - working closely with data engineers, machine learning researchers, product teams, and business stakeholders.

5 key capabilities

The best part: we use our own Acorn software at Acorn. We believe in the power of managers and staff being aligned on the 5 capabilities needed for each role. Here are the 5 capabilities for our AI Engineering Program Manager, to give you an idea of what the role entails:

1. Designing RAG Architectures - Own the architecture of Retrieval-Augmented Generation (RAG) systems, including hybrid pipelines, intent-based routing, embedding-based search, reranking, and contextual generation. You apply best practices in system modularity and scalability to enable reliable, interpretable AI outputs.

2. Semantic Search Strategy & Embedding Optimization - Lead the strategy for vector search and semantic retrieval—selecting embedding models, designing metadata tagging and query expansion, and managing indexing pipelines to ensure high relevance and discoverability in real-world enterprise data.

3. Scalable, Serverless AI Infrastructure - Architect event-driven, serverless cloud infrastructure that supports scalable AI workflows. Leverage services like API Gateway, Lambda, Cognito, DynamoDB, Aurora, OpenSearch/Kendra, and S3 to ensure secure, performant, and cost-efficient deployments.

4. Prompt Engineering & Model Integration in Production - Create prompt strategies and instructions for domain-specific use cases, ensuring effective integration of foundation models (e.g., via Amazon Bedrock) into production systems. You fine-tune inputs for accuracy, reduce hallucination, and enhance task automation across workflows.

5. Enterprise-Grade Integrations & Governance - Design and implement secure, bi-directional integrations with enterprise tools such as SharePoint, Microsoft Teams, and Jira—enabling knowledge retrieval and action automation. Oversee governance, access controls, monitoring, and optimization to uphold performance, compliance, and data quality standards.

Key Responsibilities

  • Lead the development of key AI infrastructure and feature delivery projects, including generative AI agents and RAG systems
  • Own and execute full project lifecycles from planning and architecture to delivery and deployment, working with our functional leads to deliver AI projects within different areas across Acorn
  • Develop seamless integrations across Bedrock, RAG components, and enterprise systems like SharePoint, Confluence, Teams, and Jira
  • Lead prompt authoring, model tuning, embedding pipelines, and retrieval logic design
  • Oversee architecture set-up using Lambda, API Gateway, Cognito, DynamoDB, Aurora/OpenSearch/Kendra, and S3
  • Monitor system health and drive improvements in performance, accuracy, and cost
  • Collect stakeholder feedback to iteratively refine roadmaps for AI optimisation across Acorn

Required skills & experience

  • Direct experience with Amazon Bedrock, OpenSearch Serverless or Kendra, and Aurora or DynamoDB
  • Expertise building RAG systems, embedding pipelines, prompt engineering, and retrieval optimization
  • Proven capability integrating AI workflows with enterprise collaboration tools like SharePoint, Teams, and Jira
  • Strong stakeholder management across technical and business teams
  • Deep knowledge of AI governance, security, scalability, and cost control in large organisations

Why join us?

  • Lead the creation of a world-class internal AI platform powering enterprise learning and performance
  • Work hands-on with cutting-edge AWS services and language models
  • Join a collaborative, fast-moving team dedicated to innovation and real business impact
  • Influence systems that scale across global teams and drive measurable results

How we hire

  • We welcome applications from diverse backgrounds and champion inclusive recruitment
  • Our process typically includes stakeholder interviews, technical/architecture discussions, and a case exercise
  • Please let us know if you require any accommodations at any stage of the recruitment process

About Acorn PLMS

Acorn’s AI-powered Performance Learning Management System transforms how organisations learn and perform. Serving millions of learners globally, we’re on a mission to close the loop between learning, capability, and business performance. We’re scaling fast—and we’d love you to be part of this journey.

If you’re passionate about designing and implementing production-grade AI systems with scalable, secure integrations, we’d love to hear from you!