Success Story
AI Theatre Chatbot — RAG Pipeline with AWS Bedrock & FAISS
AI-powered conversational platform for Ambassador Theatre Group, automating ~80% of show-related inquiries across 80+ UK theatres with a multi-stage NLP pipeline.
Challenge: Ambassador Theatre Group needed to scale customer support across 80+ UK theatres while providing personalized show recommendations and handling show-related inquiries efficiently.
Solution: Built an AI-powered conversational platform with a multi-stage NLP pipeline: PII redaction via AWS Comprehend, question summarization, 3-way classification, top-150 similarity search with FAISS, and response generation via AWS Bedrock Claude v3.5.
Result: ~80% of show-related inquiries automated with fit scores (0-100) for show suggestions, hourly data sync from Bolt API, and GDPR-compliant PII redaction before LLM processing.
Tech Stack
The Story
Ambassador Theatre Group runs 80+ theatres across the UK. Their customer support team was drowning in the same questions over and over: "What shows are on in Manchester this weekend?" "Is there anything good for kids in London?" "What's playing at the Apollo Victoria in March?" These are real questions with real answers sitting in their show catalogue, but somebody had to look them up and respond manually every single time. I built a chatbot that handles roughly 80% of these inquiries automatically.
The core is a RAG pipeline. Step Functions trigger hourly, pulling the full show catalogue from the Bolt API, enriching it with show details, chunking the data into 1,500-character documents, generating embeddings via Bedrock, and building a FAISS vector store that gets uploaded to S3. When a user asks a question, the system loads the FAISS index, runs a top-150 similarity search, then filters results by location and date. If someone asks about shows in Edinburgh next weekend, they get Edinburgh shows for those dates, not a random dump of everything in the catalogue.
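The retrieval step above can be sketched as: chunk, search top-150, then filter. This is a minimal stand-in, not the production code — the brute-force inner-product search here is what FAISS's flat index computes at catalogue scale, and the function and metadata field names (`chunk_catalogue`, `venue_city`, `runs_from`) are illustrative assumptions.

```python
import numpy as np
from datetime import date

CHUNK_SIZE = 1500  # characters per document, matching the hourly pipeline


def chunk_catalogue(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split enriched show text into fixed-size chunks for embedding."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def top_k(query_vec: np.ndarray, index: np.ndarray, k: int = 150) -> np.ndarray:
    """Brute-force top-k by inner product (the equivalent of a FAISS flat-index search)."""
    scores = index @ query_vec
    return np.argsort(scores)[::-1][:k]


def filter_hits(hits, metadata, city=None, on_date=None):
    """Narrow the top-150 candidates by location and date before any prompting."""
    out = []
    for i in hits:
        m = metadata[i]
        if city and m["venue_city"] != city:
            continue
        if on_date and not (m["runs_from"] <= on_date <= m["runs_until"]):
            continue
        out.append(m)
    return out
```

Searching wide (150) and filtering after is the key design choice here: a query like "Edinburgh next weekend" keeps only chunks whose metadata matches, rather than trusting vector similarity alone to respect hard constraints.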
Before anything touches the LLM, AWS Comprehend strips PII from the query. Names, email addresses, phone numbers, all redacted. GDPR compliance is not optional when you are processing user messages at scale. After PII redaction, the question gets summarized to extract intent, location, and date parameters. Then a 3-way classifier routes it: is this a show info request, a recommendation request, or something else entirely. Each path gets different prompt engineering and different context injection.
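The redaction stage looks roughly like this. Comprehend's `detect_pii_entities` call returns typed spans with character offsets; replacing those spans right-to-left is the standard pattern, since editing from the end keeps earlier offsets valid. `redact()` is pure so it runs without AWS credentials; the `detect_pii` wrapper shows the real API call.

```python
def detect_pii(text: str) -> list[dict]:
    """Call AWS Comprehend for PII spans (requires AWS credentials)."""
    import boto3  # deferred import so redact() stays usable offline

    client = boto3.client("comprehend")
    return client.detect_pii_entities(Text=text, LanguageCode="en")["Entities"]


def redact(text: str, entities: list[dict]) -> str:
    """Replace each detected span with its [TYPE] label, working right-to-left
    so earlier offsets remain valid as the string changes length."""
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text
```

Only the redacted string ever reaches summarization, classification, or the LLM downstream.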
Claude v3.5 on Bedrock generates the final response with fit scores from 0 to 100 for each recommendation. The chatbot remembers conversation context via DynamoDB, so follow-up questions work naturally. The whole thing is serverless on Lambda, API Gateway, DynamoDB, S3, and Step Functions, deployed with Serverless Framework and Terraform. TypeScript for the chat handler and orchestration, Python for the embeddings pipeline. 80+ venues, hourly data freshness, and roughly 80% of inquiries handled without a human touching them.
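A sketch of the generation step: the request body follows Bedrock's Anthropic Messages format (sent via `bedrock-runtime` `invoke_model`), and the fit scores come back in the model text and get parsed out. The exact system prompt wording, the `FIT:` marker convention, and the clamping helper are illustrative assumptions, not the production prompt.

```python
import json
import re


def build_request(context_chunks: list[str], history: list[dict], question: str) -> str:
    """Anthropic Messages body for bedrock-runtime invoke_model
    (modelId like 'anthropic.claude-3-5-sonnet-...')."""
    system = (
        "Answer using only the show listings below. For each recommendation, "
        "include a line 'FIT: <0-100>'.\n\n" + "\n---\n".join(context_chunks)
    )
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "system": system,
        "messages": history + [{"role": "user", "content": question}],
    })


def extract_fit_scores(answer: str) -> list[int]:
    """Pull fit scores back out of the model's text, clamped to the 0-100 range."""
    return [min(100, max(0, int(s))) for s in re.findall(r"FIT:\s*(\d+)", answer)]
```

Clamping on the way out matters because the model occasionally ignores range instructions; validating its output is cheaper than re-prompting.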
How We Delivered
Our Delivery Process
See how our senior engineering pod delivered production-ready results
Multi-Stage NLP Pipeline
- PII redaction via AWS Comprehend as the first stage, ensuring GDPR compliance before any data reaches the LLM.
- Question summarization followed by 3-way classification routing queries to the appropriate processing path.
- AWS Bedrock Claude v3.5 for contextual response generation with fit scores (0-100) for show suggestions.
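The routing contract behind the 3-way classification can be sketched as follows. The production classifier is part of the LLM pipeline; this keyword-based stand-in only illustrates how each label selects a different prompt and context-injection path.

```python
from enum import Enum


class Route(Enum):
    SHOW_INFO = "show_info"            # "what's playing at the Apollo in March?"
    RECOMMENDATION = "recommendation"  # "anything good for kids in London?"
    OTHER = "other"                    # refunds, accessibility, small talk


def classify(summary: str) -> Route:
    """Rule-based stand-in for the 3-way classifier: route a summarized,
    PII-redacted question to its processing path."""
    s = summary.lower()
    if any(w in s for w in ("recommend", "suggest", "good for", "anything")):
        return Route.RECOMMENDATION
    if any(w in s for w in ("what's on", "playing", "showing", "times", "when")):
        return Route.SHOW_INFO
    return Route.OTHER
```

Whatever implements `classify`, the payoff is the same: show-info queries get catalogue facts injected, recommendation queries get the fit-scoring prompt, and everything else is handed off rather than hallucinated.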
RAG & Data Sync
- FAISS vector database with top-150 similarity search supporting location and date filtering for relevant results.
- Step Functions orchestrating hourly data sync from Bolt API, regenerating FAISS embeddings stored in S3.
- Chat history persisted in DynamoDB for conversation context and multi-turn interactions.
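The multi-turn context handling can be sketched like this: each turn is written to DynamoDB, and the handler reloads a recent window per session before prompting. The table schema (`session_id` partition key, millisecond `ts` sort key) and the 10-turn window are illustrative assumptions.

```python
import time


def save_turn(table, session_id: str, role: str, message: str) -> None:
    """Persist one turn; 'table' is a boto3 DynamoDB Table resource."""
    table.put_item(Item={
        "session_id": session_id,       # partition key (assumed schema)
        "ts": int(time.time() * 1000),  # sort key: millisecond timestamp
        "role": role,
        "message": message,
    })


def recent_turns(items: list[dict], max_turns: int = 10) -> list[dict]:
    """Trim stored turns (sorted ascending by ts) to a window and map them
    into the role/content shape the Messages API expects."""
    return [
        {"role": it["role"], "content": it["message"]}
        for it in items[-max_turns:]
    ]
```

Keeping the window small bounds both Lambda latency and prompt cost while still making follow-ups like "what about Saturday instead?" resolve naturally.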
Infrastructure & Privacy
- Serverless architecture: Lambda, API Gateway, DynamoDB, S3, Step Functions in a pnpm monorepo.
- Infrastructure managed with Serverless Framework and Terraform for reproducible deployments.
- TypeScript and Python codebase with PII redaction ensuring no personally identifiable information reaches the LLM.
Final Outcomes
Results
Working on something similar?
Book a 15-minute call. We'll tell you honestly if we're the right fit.
Book a 15-min Call