Course Details
Topic 1: Introduction to Large Language Models (LLMs) and AI Agents
- Overview of transformer architecture and attention mechanisms in LLMs
- Introduction to AI agents
- NLP applications powered by LLMs and AI agents
- Use cases of LLMs and AI agents
Topic 2: Retrieval-Augmented Generation (RAG)
- Introduction to Retrieval-Augmented Generation (RAG)
- Use cases of RAG
- Overview of tokenization and word embeddings
- Overview of chunking strategies and vector databases
- Build a RAG system
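As a taste of the retrieval step covered above, here is a minimal, self-contained sketch. A toy bag-of-words similarity stands in for the learned embeddings and vector database a real RAG system would use, and the corpus and query are made up for illustration:

```python
# Toy sketch of RAG retrieval: chunk a corpus, "embed" each chunk,
# and return the chunks most similar to the query.
from collections import Counter
import math

def embed(text):
    """Bag-of-words stand-in for a learned embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Pre-chunked documents (the "chunking strategy" here is one sentence per chunk).
chunks = [
    "LoRA adds low-rank matrices to frozen model weights.",
    "Vector databases store embeddings for fast similarity search.",
    "Tokenization splits text into subword units.",
]

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(embed(query), embed(c)), reverse=True)
    return ranked[:k]

# In a full RAG pipeline, the retrieved chunk is prepended to the LLM prompt
# as grounding context before generation.
print(retrieve("how do vector databases work")[0])
```

In the course itself this step is built with real embedding models and a vector database; the sketch only shows the retrieve-then-augment control flow.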
Topic 3: Fundamentals of Fine-Tuning LLMs
- Fundamentals of LLM fine-tuning
- Supervised Fine-Tuning (SFT) for custom LLM tasks
- Parameter-Efficient Fine-Tuning (PEFT)
- Low-Rank Adaptation (LoRA) for fine-tuning LLMs
- Group Relative Policy Optimization (GRPO)
- Reinforcement Learning (RL) for fine-tuning
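To illustrate why PEFT methods such as LoRA are attractive, here is a back-of-envelope sketch. The values d = 768 (a BERT-base-sized hidden dimension) and rank r = 8 are assumed for illustration, not taken from the course materials:

```python
# Why LoRA is parameter-efficient: a full fine-tune of one d x d weight
# matrix updates d*d parameters, while LoRA freezes W and trains two
# low-rank factors B (d x r) and A (r x d), giving the update
# W' = W + B @ A at a cost of only 2*d*r trainable parameters.

d, r = 768, 8
full_params = d * d        # parameters updated by full fine-tuning
lora_params = 2 * d * r    # trainable parameters under LoRA

print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {full_params / lora_params:.0f}x fewer")
```

At rank 8 this single layer trains roughly 48x fewer parameters than a full fine-tune, which is why LoRA adapters fit on modest GPUs.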
Topic 4: Fine-Tuning LLMs - Implementation and Deployment
- Overview of Hugging Face fine-tuning libraries
- Implementing fine-tuning with Hugging Face libraries
- Using Hugging Face datasets and tokenizers for LLM fine-tuning
- Deploying and testing fine-tuned models
Final Assessment
- Written Assessment - Short Answer Questions (WA-SAQ)
- Practical Performance (PP)
Course Info
Promotion Code
Promo or discount codes cannot be applied to WSQ courses.
Minimum Entry Requirement
Knowledge and Skills
- Able to operate basic computer functions
- Minimum 3 GCE ‘O’ Levels Passes including English or WPL Level 5 (Average of Reading, Listening, Speaking & Writing Scores)
Attitude
- Positive Learning Attitude
- Enthusiastic Learner
Experience
- Minimum of 1 year of working experience.
Minimum Software/Hardware Requirement
Software:
You can download and install the following software:
Hardware: Windows and Mac Laptops
About Progressive Wage Model (PWM)
The Progressive Wage Model (PWM) helps to increase wages of workers through upgrading skills and improving productivity.
Employers must ensure that their Singapore citizen and PR workers meet the PWM training requirements of attaining at least 1 Workforce Skills Qualification (WSQ) Statement of Attainment, out of the list of approved WSQ training modules.
For more information on PWM, please visit the MOM website.
Funding Eligibility Criteria
| Individual Sponsored Trainee | Employer Sponsored Trainee |
|---|---|
| SkillsFuture Credit | Absentee Payroll (AP) Funding |
| PSEA | SFEC |
Steps to Apply for SkillsFuture Claim
- The staff will send you an invoice with the fee breakdown.
- Log in to the MySkillsFuture portal, select the course you're enrolling in, and enter the course date and schedule.
- Enter the course fee payable by you (including GST) and the amount of credit to claim.
- Upload your invoice and click 'Submit'.
SkillsFuture Level-Up Programme
The SkillsFuture Level-Up Programme provides greater structural support for mid-career Singaporeans aged 40 years and above to pursue a substantive skills reboot and stay relevant in a changing economy. For more information, visit SkillsFuture Level-Up Programme
Get Additional Course Fee Support Up to $500 under UTAP
The Union Training Assistance Programme (UTAP) is a training benefit for NTUC Union Members that encourages them to upgrade their skills and helps offset training costs. NTUC Union Members can receive 50% funding (capped at $500 per year) under UTAP.
For more information visit NTUC U Portal – Union Training Assistance Program (UTAP)
Steps to Apply UTAP
- Log in to your U Portal account to submit your UTAP application upon completion of the course.
Note
- SSG subsidy is available for Singapore Citizens, Permanent Residents, and Corporates.
- All Singaporeans aged 25 and above can use their SkillsFuture Credit to pay. For more details, visit www.skillsfuture.gov.sg/credit
- Any unfunded course fee can be claimed via SkillsFuture Credit or paid in cash.
- UTAP funding for NTUC Union Members is capped at $250 for 39 years and below and at $500 for 40 years and above.
- The UTAP support amount is paid to the training provider first and claimed by the learner after the end of class.
Appeal Process
- The candidate has the right to disagree with the assessment decision made by the assessor.
- When giving feedback to the candidate, the assessor must check whether the candidate agrees with the assessment outcome.
- If the candidate agrees with the assessment outcome, the assessor and the candidate must sign the Assessment Summary Record.
- If the candidate disagrees with the assessment outcome, he/she should not sign the Assessment Summary Record.
- If the candidate intends to appeal the decision, he/she should first discuss the matter with the assessor/assessment manager.
- If the candidate is still not satisfied with the decision, the candidate must notify the assessor of the decision to appeal. The assessor will reflect the candidate’s intention in the Feedback Section of the Assessment Summary Record.
- The assessor will notify the assessment manager about the candidate's intention to lodge an appeal.
- The candidate must lodge the appeal within 7 days, giving reasons for the appeal.
- The assessor can help the candidate with writing and lodging the appeal.
- The assessment manager will collect information from the candidate and the assessor and give a final decision.
- A record of the appeal and any subsequent actions and findings will be made.
- An Assessment Appeal Panel will be formed to review and give a decision.
- The outcome of the appeal will be made known to the candidate within 2 weeks from the date the appeal was lodged.
- The decision of the Assessment Appeal Panel is final and no further appeal will be entertained.
- Please click the link below to fill in the Candidate's Appeal Form.
Job Roles
- NLP Engineer
- Data Scientist (specializing in text data)
- Machine Learning Engineer (NLP focus)
- Computational Linguist
- AI Research Scientist (language models)
- Chatbot Developer
- Text Mining Specialist
- AI Solutions Architect (with NLP projects)
- Conversational AI Designer
- Search Algorithm Developer
- Recommendation System Engineer (content-based)
- Content Analysis Engineer
- Information Retrieval Specialist
- Machine Translation Developer
- Speech Recognition Engineer
Trainers
Dr. Alfred Ang: Dr. Alfred Ang is a distinguished technology leader, AI researcher, and educator with over 20 years of experience in artificial intelligence, cybersecurity, and cloud computing. As the Chief Instructional Designer and CTO of Tertiary Infotech, he has spearheaded the development of over 500 advanced technology courses and led multiple AI-driven innovation projects across industries. His expertise spans deep learning, natural language processing, and enterprise AI system design, with a strong focus on fine-tuning large language models (LLMs) and optimizing retrieval-augmented generation (RAG) pipelines.
In “Fine-Tuning LLM Models and RAG,” Dr. Ang provides in-depth insights into customizing foundation models for domain-specific applications. He guides learners through advanced prompt engineering, model retraining, and integration with vector databases for RAG systems. His sessions emphasize practical experimentation with open-source LLM frameworks, enabling participants to build optimized, high-performance AI solutions tailored to real-world enterprise needs.
Tan Woei Ming: Tan Woei Ming is a data scientist and AI engineer with over 15 years of experience in machine learning, deep learning, and AI-driven automation. He has led industrial AI projects in the semiconductor and manufacturing sectors, deploying predictive analytics and computer vision systems for process optimization. Holding a Master’s in Intelligent Systems from the National University of Singapore, Woei Ming is deeply experienced in developing and fine-tuning neural networks using TensorFlow, PyTorch, and Hugging Face libraries.
In “Fine-Tuning LLM Models and RAG,” Woei Ming teaches participants how to customize and deploy fine-tuned LLMs for business-critical workflows. His sessions explore parameter-efficient tuning, embedding generation, and integration with retrieval systems to enhance contextual accuracy. Combining strong theoretical foundations with hands-on experimentation, he helps learners gain the technical expertise needed to adapt large language models for specific organizational domains.
Yeo Hwee Theng: Yeo Hwee Theng is a data science and AI strategist with extensive experience leading enterprise AI adoption across healthcare, finance, and government sectors. As a Data & Analytics Product Lead at Amplify Health and a former AI Architect at Huawei, she has designed and implemented large-scale machine learning and analytics systems. Her academic background includes a Master of Technology in Enterprise Business Analytics from NUS, where she specialized in data architecture and applied AI.
In “Fine-Tuning LLM Models and RAG,” Hwee Theng focuses on aligning data strategy with LLM fine-tuning workflows. Her sessions delve into model evaluation, data curation, and governance for retrieval-augmented AI systems. She emphasizes the practical integration of AI pipelines into enterprise infrastructure, empowering learners to operationalize LLMs responsibly and efficiently across real-world use cases.
Teh Siew Yee: Teh Siew Yee is a data analytics and digital transformation leader with over two decades of experience across banking, aviation, and technology sectors. With a Master of IT in Business (AI) from SMU and leadership roles at Standard Chartered, TikTok, and HP, he has developed expertise in AI governance, data management, and analytics strategy. As an ACLP-certified trainer, he is known for delivering clear, industry-relevant instruction that bridges business goals with technical execution.
In “Fine-Tuning LLM Models and RAG,” Siew Yee helps participants understand the lifecycle of LLM customization—from data preparation to deployment. His sessions highlight best practices for prompt optimization, hybrid retrieval techniques, and responsible AI alignment. Through guided labs and real-world examples, he equips learners with the skills to fine-tune and deploy scalable, explainable AI models using modern RAG frameworks.
Truman Ng: Truman Ng is an AI infrastructure and cloud automation specialist with more than 20 years of experience in enterprise networking, cybersecurity, and intelligent systems integration. He holds PMP, ACTA, and Huawei HCIE certifications and has trained global corporate teams in DevOps, AI deployment, and cloud orchestration. His expertise lies in building scalable AI pipelines and integrating model fine-tuning within secure cloud-based environments.
In “Fine-Tuning LLM Models and RAG,” Truman teaches how to operationalize and optimize fine-tuned LLMs within hybrid and distributed infrastructures. His sessions focus on model deployment, GPU optimization, and the secure management of vector databases for RAG workflows. By merging AI engineering with infrastructure best practices, he enables learners to design, deploy, and maintain robust end-to-end AI systems with enterprise-level scalability.
Customer Reviews (8)
All reviews below were posted by Course Participants/Trainees.
- will recommend (Posted on 4/6/2025)
- will recommend (Posted on 4/6/2025)
- will recommend (Posted on 12/18/2024)
- will recommend: "The trainer is very knowledgeable. I hope the source code can tally more with the slides." (Posted on 10/27/2024)
- will recommend (Posted on 1/27/2024)
- Interesting course in this GPT era: "So far so good, the pace matches my learning." (Posted on 12/1/2023)
- Already good: "Already good" (Posted on 12/1/2023)
- will recommend (Posted on 5/25/2023)