Applying Bandit Algorithms To Build Live-Learning Systems
Duration: 4h 30m | .MP4 1280x720, 30 fps(r) | AAC, 44100 Hz, 2ch | 1.68 GB
Genre: eLearning | Language: English
Hands-on tuition on how to build smart live-learning MAB agents to improve the click-through rate of ads on the web.
What you'll learn:
Designing the architecture of live-learning systems that use multi-armed bandit algorithms.
Using Flask to implement MAB agents that optimise the click-through rate of advertisements.
Implementations of Epsilon-Greedy, Softmax Exploration, and UCB in a live-learning system.
Transitioning from simulations of MAB problems into real applications.
General best-practices in Python software development.
General backend development with Flask.
Automation of database migrations and seeding.
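The algorithms named above can sit behind one shared interface, which is what makes them interchangeable inside a live-learning system. The sketch below is illustrative only (class and method names are my own, not the course's actual code) and shows Epsilon-Greedy and UCB exposing the same `select_arm`/`update` pair:

```python
import math
import random


class EpsilonGreedyAgent:
    """With probability epsilon explore a random arm, else exploit the best-known arm."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # how many times each arm was pulled
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean: new_mean = old_mean + (reward - old_mean) / n
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


class UCBAgent(EpsilonGreedyAgent):
    """Same interface; exploration comes from an upper confidence bound instead."""

    def select_arm(self):
        for arm, n in enumerate(self.counts):
            if n == 0:  # pull every arm at least once
                return arm
        total = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(total) / self.counts[a]),
        )
```

Because both agents expose the same two methods, the web layer never needs to know which algorithm is currently running.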
Requirements:
Basic Object Oriented Programming in Python.
Basic mathematics (high school algebra is enough).
Previously enrolled in "Practical Multi-Armed Bandit Algorithms in Python"
Description:
This course is a sequel to my previous course, "Practical Multi-Armed Bandit Algorithms In Python", and its goal is to teach you how to readily apply your knowledge of MAB algorithms to build and deploy smart agents on the web that automatically learn how to improve the click-through rate of advertisements.
Every video in this course is hands-on, and collectively they equip you with expert knowledge on how to build web applications using Flask, and how to integrate MAB agents that adjust their operations to improve the CTR of online ads. By the end of this course, you will know precisely how to integrate live-learning agents into web applications to optimise key business goals.
It is one thing to validate the performance of MAB agents in simulations. Transitioning from simulations to real-world applications, however, requires some key skills that are taught in this course. For example, you'll need to know how to do the following:
- store and retrieve information from a database which will be used by the agent to choose actions.
- translate user interactions (such as clicks) into rewards which the agent can use as evaluative feedback information.
- adjust the agent's knowledge to reflect the true user behaviour observed through interaction.
- implement various MAB algorithms behind a common API that makes it easy to swap one algorithm for another.
- design and implement a good software architecture for online live-learning systems.
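Taken together, the steps above form a simple request/feedback loop: choose an ad, record the impression, and turn a click into a reward of 1. A minimal sketch of that loop, assuming an epsilon-greedy choice over observed CTR (the in-memory dict stands in for the database, and all names are illustrative):

```python
import random

# Stand-in for a database table: per-ad impression and click counts.
ads = {
    "ad_a": {"shown": 0, "clicks": 0},
    "ad_b": {"shown": 0, "clicks": 0},
}


def ctr(ad_id):
    """The agent's current estimate of this ad's click-through rate."""
    stats = ads[ad_id]
    return stats["clicks"] / stats["shown"] if stats["shown"] else 0.0


def choose_ad(epsilon=0.1):
    """Pick which ad to serve: explore at random, or exploit the best observed CTR."""
    if random.random() < epsilon:
        ad_id = random.choice(list(ads))
    else:
        ad_id = max(ads, key=ctr)
    ads[ad_id]["shown"] += 1  # record the impression
    return ad_id


def record_click(ad_id):
    """Translate a user click into a reward of 1 for the served ad."""
    ads[ad_id]["clicks"] += 1
```

In a real deployment, `choose_ad` would back the HTTP endpoint that serves the ad, and `record_click` would be triggered by the click-tracking URL, with both reading and writing the database instead of a dict.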
I highly recommend that you complete my previous course, "Practical Multi-Armed Bandit Algorithms In Python", before taking this one, since it's a follow-up. However, if you already know how to implement various MAB algorithms, you can jump right into this course and succeed without struggling.
This course is intentionally taught in a very simple way. It doesn't require advanced mathematics; all you need to know is OOP in Python and simple high school algebra.
Thanks for taking this course! I can't wait to see what you will build with the knowledge shared in here!
Who this course is for:
People who already know about multi-armed bandit algorithms and want to transition from simulations into building real applications.
Anyone who wants to learn how to design and implement an architecture for live-learning systems.
Engineers who want to learn how reinforcement learning can be used to optimise click-through rates of adverts.
Students of my previous course "Practical Multi-Armed Bandit Algorithms in Python" who want to apply their knowledge to real-life situations.