Learn best practices for feature engineering, model training, and batch/online inference in Python to accelerate your time to value for ML.
Is it taking too long to go from a model in a notebook to a model that is adding value to the business?
MLOps and ML pipelines are quickly becoming the de facto way to architect production machine learning (ML) systems around a Feature Store. Join us to learn how to architect your ML systems as feature pipelines, training pipelines, and inference pipelines (ML pipelines) that are connected via a Feature Store and managed with MLOps best practices. We will present examples of ML pipelines for both batch and online ML systems in the context of the Hopsworks platform.
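To make the three-pipeline architecture concrete, here is a minimal Python sketch. It is purely illustrative and does not use the real Hopsworks API: the `FeatureStore` class, the pipeline functions, and the trivial threshold "model" are all stand-ins showing how feature, training, and inference pipelines can share features through a common store.

```python
# Illustrative sketch (NOT the Hopsworks API): three pipelines
# connected by a shared, in-memory feature store abstraction.
from dataclasses import dataclass, field


@dataclass
class FeatureStore:
    """Stand-in for a real feature store: named feature groups."""
    groups: dict = field(default_factory=dict)

    def write(self, name, rows):
        self.groups[name] = rows

    def read(self, name):
        return self.groups[name]


def feature_pipeline(store, raw_events):
    # Feature engineering: derive a feature from raw events and
    # write it to the feature store for downstream pipelines.
    rows = [{"id": e["id"], "amount_x2": e["amount"] * 2} for e in raw_events]
    store.write("transactions", rows)


def training_pipeline(store):
    # "Train" a trivial model (the mean of the engineered feature)
    # by reading features from the store, not from raw data.
    rows = store.read("transactions")
    mean = sum(r["amount_x2"] for r in rows) / len(rows)
    return {"threshold": mean}


def inference_pipeline(store, model, entity_id):
    # Inference reads the same features the model was trained on,
    # avoiding training/serving skew.
    row = next(r for r in store.read("transactions") if r["id"] == entity_id)
    return row["amount_x2"] > model["threshold"]


store = FeatureStore()
feature_pipeline(store, [{"id": 1, "amount": 10}, {"id": 2, "amount": 30}])
model = training_pipeline(store)
print(inference_pipeline(store, model, 2))  # True: 60 > threshold 40
```

The point of the split is that each pipeline can run on its own schedule (streaming features, nightly retraining, per-request inference) while the feature store keeps them consistent.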
Learn about best practices for feature engineering, model training, and batch/online inference in Python, Spark, and SQL, and learn about how a Feature Store can accelerate your time to value for ML.
* 9:00am~9:30am: Arrival/Registration
* 9:30am~10:40am: Tech Talks
* 10:40am~11:00am: Break & Networking
* 11:00am~12:00pm: Hands-on lab
- Introduction & Principles for putting ML in Production
- Developing and architecting ML Systems for Production from Day 1
- ML pipelines, MLOps Principles, and the Feature Store
Build an end-to-end ML system using ML pipelines and the Feature Store
WeWork, 107 Spring St, Seattle, WA 98104.