Production ML Monitoring: Outliers, Drift, Explainers


Feb 16, 12:00 PM PST
  • Virtual SF Big Analytics
  • 193 RSVPs
Description
This event is hosted by the San Francisco Big Analytics meetup group.

Abstract:
The lifecycle of a machine learning model only begins once it is in production. In this talk we provide a practical deep dive into best practices, principles, patterns and techniques for production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied to deployed machine learning models, as well as more advanced paradigms for monitoring models through concept drift detection, outlier detection and explainability.
We will then dive into a hands-on example, where we train an image classification model from scratch, deploy it as a microservice in Kubernetes, and introduce advanced monitoring components as architectural patterns. These monitoring techniques include AI explainers, outlier detectors, concept drift detectors and adversarial detectors. We will also cover high-level architectural patterns that abstract these complex monitoring techniques into infrastructure components, introducing the standardised interfaces required to enable monitoring at scale across hundreds or thousands of heterogeneous machine learning models.
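To give a flavour of what one such monitoring component looks like in practice, below is a minimal drift-detection sketch. It assumes Seldon's open-source alibi-detect library, which the abstract does not name explicitly; the reference data, feature dimensions and threshold are illustrative placeholders rather than the talk's actual code.

# Minimal sketch of a drift-detection monitoring component,
# assuming Seldon's open-source alibi-detect library
# (pip install alibi-detect). Not the talk's actual code.
import numpy as np
from alibi_detect.cd import KSDrift

# Reference data the model was trained on, e.g. feature vectors
# or embeddings of images (random placeholder here).
x_ref = np.random.randn(1000, 32).astype(np.float32)

# Fit the detector on the reference distribution; p_val is the
# significance threshold for the per-feature Kolmogorov-Smirnov tests.
drift_detector = KSDrift(x_ref, p_val=0.05)

# Incoming production batch, deliberately shifted to simulate drift.
x_prod = np.random.randn(200, 32).astype(np.float32) + 1.0

preds = drift_detector.predict(x_prod)
print("Drift detected:", bool(preds["data"]["is_drift"]))

In the architectural pattern the abstract describes, a detector like this would run as its own infrastructure component alongside the deployed model, receiving copies of inference payloads and raising alerts when the production distribution departs from the reference data.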

Speaker:

Alejandro Saucedo

Alejandro is the Chief Scientist at the Institute for Ethical AI & Machine Learning, where he leads the development of industry standards on machine learning explainability, adversarial robustness and differential privacy. Alejandro is also the Director of Machine Learning Engineering at Seldon Technologies, where he leads large-scale projects implementing open-source and enterprise infrastructure for machine learning orchestration and explainability.
This event has ended.
Watch Recording
*Recordings are hosted on YouTube; clicking the link will open the YouTube page.