Prototype
Demos
Looks-Like (UI) Prototype Demo
Works-Like (MVP) Prototype Demo
Works-Like (MVP) Prototype
To refine key assumptions, we built a works-like MVP on WhatsApp, enabling rapid iteration and real-world engagement testing. We then ran a two-week pilot with 15 hospitality employees to evaluate the impact of our learning modules and skills profiling. See GitHub Repository


Insight: Higher-skilled employees sent fewer but more detailed messages, suggesting that skill level affects both the depth and the frequency of interaction. However, a longer study is needed to rule out novelty effects and assess sustained learning trends.
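As a minimal sketch of how the WhatsApp MVP could receive and log participant messages (assuming a Flask webhook in front of Twilio's WhatsApp API, the provider listed under Next Steps; the route name and reply copy are illustrative, not our production code):

# Illustrative sketch: Twilio WhatsApp webhook for the works-like MVP.
# Assumes Flask and the twilio package; route name and reply text are placeholders.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/whatsapp", methods=["POST"])
def whatsapp_webhook():
    sender = request.form.get("From", "")  # e.g. "whatsapp:+44..."
    body = request.form.get("Body", "")    # the participant's message

    # Log the interaction for later engagement analysis (length, frequency, timing).
    print(f"{sender}: {len(body)} chars")

    # Reply via TwiML so Twilio delivers the response back over WhatsApp.
    reply = MessagingResponse()
    reply.message("Thanks! Your next learning prompt is on its way.")
    return str(reply)

if __name__ == "__main__":
    app.run(port=5000)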
Learning Modules
Our Learning Modules use LLM-driven personalisation, integrating Google Gemini's API with the Skills Builder framework to adapt content to each user's engagement, skill gaps, and workplace context, validating Assumption 1. Dynamic difficulty adjustment and refined AI responses improve clarity, relevance, and interactivity, boosting engagement and fitting learning into hospitality workers' daily routines.
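As a minimal sketch of the personalisation step (assuming the google-generativeai Python client; the model name, skill labels, and prompt wording are illustrative, not our production prompt):

# Illustrative sketch: difficulty-adjusted module generation with Gemini.
# Model name, skill labels, and prompt text are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

def generate_module(skill: str, level: int, role: str) -> str:
    """Generate a short learning module adapted to skill level and workplace role."""
    prompt = (
        f"You are a workplace learning coach using the Skills Builder framework.\n"
        f"Skill: {skill}. Current level: {level}/15. Learner role: {role}.\n"
        "Write a three-minute, WhatsApp-friendly micro-lesson pitched one step above "
        "the learner's current level, ending with one practice task for their next shift."
    )
    return model.generate_content(prompt).text

print(generate_module("Listening", 7, "hotel receptionist"))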



Iteration 1: Learning Module Development
Iteration 2: Adaptive Learning & Personalisation
Iteration 3: Prompt Engineering for Enhanced Responses
Skills Profiling
Skills Profiling assesses users' soft skills in an engaging way through workplace-specific, scenario-based questions. We adapt the Skills Builder framework to each employee's role, ensuring relevance to their daily challenges. A prompt-engineered scoring system then assigns each skill a level out of 15 based on the user's responses, improving the accuracy of the assessment.
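As a minimal sketch of the scoring step (assuming Gemini is asked for a single skill level on a 0-15 scale that is then parsed and clamped; the rubric wording and helper names are illustrative, not our production scoring prompt):

# Illustrative sketch: prompt-engineered scoring of a scenario response on a 0-15 scale.
# The rubric text and parsing logic are assumptions for illustration.
import re
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

def score_response(skill: str, scenario: str, answer: str) -> int:
    """Ask the model for a 0-15 skill level and parse it defensively."""
    prompt = (
        f"Assess the following answer against the Skills Builder skill '{skill}'.\n"
        f"Scenario: {scenario}\nAnswer: {answer}\n"
        "Reply with a single integer from 0 to 15 and nothing else."
    )
    text = model.generate_content(prompt).text
    match = re.search(r"\d+", text)
    level = int(match.group()) if match else 0
    return max(0, min(15, level))  # clamp to the scale

print(score_response(
    "Problem Solving",
    "A guest complains their room was double-booked.",
    "I would apologise, check alternative rooms, and offer an upgrade if possible.",
))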



Iteration 1: Baseline Skill Assessment
Iteration 2: Prompt Engineering for Improved Accuracy
Iteration 3: Personalised Role and Workplace Related Questions
Chatbot
Our chatbot serves as an adaptive learning assistant, using Google Gemini's API to deliver personalised, context-aware interactions. By leveraging the Skills Builder framework and retaining each user's chat history, it holds natural, engaging conversations. Refined through prompt engineering, it produces timely reminders and intuitive prompts that enhance learning and engagement.
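As a minimal sketch of context-aware chat with retained history (assuming the google-generativeai chat-session API; the model name and framing turns are illustrative):

# Illustrative sketch: a chat session that retains history across turns.
# Model name and the framing messages are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

# start_chat keeps a running history, so later replies can reference earlier turns.
chat = model.start_chat(history=[
    {"role": "user", "parts": ["You are a learning assistant for hospitality staff, "
                               "grounded in the Skills Builder framework. Keep replies short."]},
    {"role": "model", "parts": ["Understood. I will keep replies short and practical."]},
])

print(chat.send_message("I struggled with a rude customer today.").text)
print(chat.send_message("What should I try on my next shift?").text)  # uses prior context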



Iteration 1: A/B Testing Engagement Strategies
Iteration 2: Context Awareness & Conversational Flow
Iteration 3: Prompt Engineering for Personalisation
Tech Stack Implementation


Iteration 1: Local macOS
Goal: Code the system locally and test it with colleagues
Key Learnings: The local machine could run the daily test sessions comfortably, but not continuously, so we ran the system for 30 minutes per day throughout the pilot period.
Iteration 2: Linux Machine
Goal: Set up the system to run autonomously for two weeks on a Linux machine
Key Learnings: The Linux machine lacked the processing capability for extensive, continuous API calls and LLM reasoning, so we ran the system locally for the two-week pilot period.
Next Steps

Virtual Machine (VM)
Goal: Integrate the system into a VM-based tech stack
Develop locally → Test bot on macOS/Linux
Push to GitHub
Deploy to DigitalOcean using SSH & SCP
Run inside a Docker container
Schedule with a cron job to restart on reboot
Send WhatsApp messages via Twilio (sketched below)
Monitor via logs & alerts
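As a minimal sketch of the Twilio WhatsApp step (assuming the twilio Python client; credentials, phone numbers, and message copy are placeholders):

# Illustrative sketch: sending a proactive WhatsApp nudge via Twilio.
# Account credentials, phone numbers, and message text are placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    from_="whatsapp:+14155238886",  # Twilio WhatsApp sender (placeholder)
    to="whatsapp:+447700900000",    # pilot participant (placeholder)
    body="Your five-minute module on Listening is ready. Reply START when you have a moment.",
)
print(message.sid)  # log the message id for monitoring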