AWS-MLO
MLOps Engineering on AWS
Description
This course builds on and extends DevOps practices in software development to the training and deployment of Machine Learning (ML) models. The course highlights the importance of data, models, and code for successful ML deployments. It introduces the use of tools, automation, processes, and
teamwork, as well as the challenges associated with hand-offs between data engineers, data scientists, software developers, and operators.
The course also analyzes the tools and processes that play an important role in monitoring and remediation when model performance in production deviates from what was observed during training.
The instructor will encourage participants to create an MLOps action plan for their own organization, using the knowledge acquired during the course and with the help of other participants.
In this course you will learn:
- how deep learning works,
- the key differences between DevOps and MLOps,
- the ML workflow,
- the importance of communication in MLOps,
- the end-to-end possibilities of automating ML workflows,
- Amazon SageMaker's key features for automating MLOps,
- how to create an ML process that builds, trains, tests, and deploys models,
- how to create an automated ML process that retrains the model when the model code changes,
- the elements and important steps of the deployment process,
- the elements that can be included in the model package and their use in training or inference,
- Amazon SageMaker's options for selecting models for deployment, including support for ML frameworks and built-in algorithms,
- how scaling ML differs from scaling other applications,
- when to use different approaches for inference,
- different deployment strategies, their benefits, challenges, and typical use cases,
- the challenges of deploying machine learning models to edge devices,
- Amazon SageMaker features relevant to deployment and inference,
- the importance of monitoring,
- the potential causes of drift in the underlying input data,
- an introduction to detecting bias in ML models,
- how to monitor model resource consumption and latency,
- how human review of model results can be integrated into production.
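The data-drift objective above can be made concrete with a toy statistical check. This is a minimal illustrative sketch, not the actual mechanism of Amazon SageMaker Model Monitor; the function name `detect_mean_drift` and the three-standard-deviation threshold are assumptions chosen for the example.

```python
from statistics import mean, stdev

def detect_mean_drift(baseline, current, threshold=3.0):
    """Flag drift when the current batch mean deviates from the baseline
    mean by more than `threshold` baseline standard deviations.
    A toy stand-in for the statistical checks a monitoring tool runs
    against data captured from a live inference endpoint."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(current) != mu
    z = abs(mean(current) - mu) / sigma
    return z > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]
detect_mean_drift(baseline, [10.2, 9.8, 10.1])   # stable input: no drift
detect_mean_drift(baseline, [55.0, 60.0, 58.0])  # shifted input: drift
```

Production monitors compare full distributions rather than a single mean, but the idea is the same: establish a baseline at training time, then continuously test live traffic against it.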
Suggested For
- DevOps engineers
- ML engineers
- Developers who work with ML models
Outline
Module 0: Welcome
- Course introduction
Module 1: Introduction to MLOps
- Machine learning operations
- Goals of MLOps
- Communication
- From DevOps to MLOps
- ML workflow
- Scope
- MLOps view of ML workflow
- MLOps cases
Module 2: MLOps Development
- Intro to build, train, and evaluate machine learning models
- MLOps security
- Automating
- Apache Airflow
- Kubernetes integration for MLOps
- Amazon SageMaker for MLOps
- Lab: Bring your own algorithm to an MLOps pipeline
- Demonstration: Amazon SageMaker
- Lab: Code and serve your ML model with AWS CodeBuild
- Activity: MLOps Action Plan Workbook
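Module 2's automation theme (and the lab on serving a model with AWS CodeBuild) rests on one idea: a pipeline run should be triggered when the model code changes. A minimal sketch of that trigger logic, assuming a content hash as the change detector; `code_fingerprint` and `should_retrain` are hypothetical names, not part of any AWS SDK.

```python
import hashlib

def code_fingerprint(source: str) -> str:
    """Content hash of the model code; any edit changes the digest."""
    return hashlib.sha256(source.encode()).hexdigest()

def should_retrain(current_source: str, last_trained_digest: str) -> bool:
    """Trigger a pipeline run only when the model code actually changed,
    the same idea a CI trigger (e.g. CodeBuild on a repository push)
    implements at the infrastructure level."""
    return code_fingerprint(current_source) != last_trained_digest

v1 = "def train(data): return fit(data, lr=0.1)"
digest_v1 = code_fingerprint(v1)
should_retrain(v1, digest_v1)                          # unchanged: no retrain
should_retrain(v1.replace("0.1", "0.01"), digest_v1)   # changed: retrain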
Module 3: MLOps Deployment
- Introduction to deployment operations
- Model packaging
- Inference
- Lab: Deploy your model to production
- SageMaker production variants
- Deployment strategies
- Deploying to the edge
- Lab: Conduct A/B testing
- Activity: MLOps Action Plan Workbook
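The A/B testing lab above builds on SageMaker production variants, which split endpoint traffic between model versions by weight. A minimal sketch of that routing idea in plain Python, assuming hash-based deterministic assignment; `pick_variant` and the 90/10 weights are illustrative, not the service's actual implementation.

```python
import hashlib

def pick_variant(request_id: str, variants: dict) -> str:
    """Assign a request to a variant in proportion to its weight, the way
    production variants split endpoint traffic between model versions.
    Hash-based, so the same request id always maps to the same variant."""
    total = sum(variants.values())
    # Map the request id to a stable point in [0, total).
    h = int(hashlib.md5(request_id.encode()).hexdigest(), 16)
    point = (h % 10_000) / 10_000 * total
    cumulative = 0.0
    for name, weight in sorted(variants.items()):
        cumulative += weight
        if point < cumulative:
            return name
    return name  # float-rounding fallback: last variant

variants = {"variant-a": 0.9, "variant-b": 0.1}
counts = {"variant-a": 0, "variant-b": 0}
for i in range(1000):
    counts[pick_variant(f"req-{i}", variants)] += 1
# counts approximates a 90/10 traffic split
```

With the real service, the same effect is achieved declaratively by listing two variants with `InitialVariantWeight` values in the endpoint configuration; the sketch just shows what the weighted split means.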
Module 4: Model Monitoring and Operations
- Lab: Troubleshoot your pipeline
- The importance of monitoring
- Monitoring by design
- Lab: Monitor your ML model
- Human-in-the-loop
- Amazon SageMaker Model Monitor
- Demonstration: Amazon SageMaker Pipelines, Model Monitor, model registry, and Feature Store
- Solving the Problem(s)
- Activity: MLOps Action Plan Workbook
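The "monitoring by design" topic in Module 4 includes watching model resource consumption and latency. A minimal sketch of one such check, a p95 latency alert against an SLO; the function names, the nearest-rank percentile method, and the 200 ms threshold are assumptions for illustration, not a prescribed AWS mechanism.

```python
import math

def p95_latency_ms(latencies):
    """95th-percentile latency via the nearest-rank method."""
    ranked = sorted(latencies)
    idx = math.ceil(0.95 * len(ranked)) - 1
    return ranked[idx]

def latency_alert(latencies, slo_ms=200.0):
    """Fire an alert when p95 latency breaches the SLO; the same check a
    CloudWatch alarm on an endpoint latency metric would express."""
    return p95_latency_ms(latencies) > slo_ms

fast = [50.0] * 95 + [120.0] * 5     # p95 = 50 ms, within SLO
slow = [50.0] * 90 + [400.0] * 10    # p95 = 400 ms, breaches SLO
latency_alert(fast)   # no alert
latency_alert(slow)   # alert
```

Percentiles rather than averages are used because tail latency is what users actually experience; a mean can look healthy while 5% of requests time out.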
Module 5: Wrap-up
- Course review
- Activity: MLOps Action Plan Workbook
- Wrap-up
Prerequisites
Completion of the following courses is required before attending this course:
- AWS Technical Essentials
- DevOps Engineering on AWS
- Practical Data Science with Amazon SageMaker