Adversarial Robustness 360 Toolbox For ML


Aug 24, 10:00 AM PDT
  • Virtual
Description
Welcome to the "AI Trust, Bias and Explainability" learning series by IBM AI. In collaboration with the IBM team, we host a series of practical introductory sessions on AI trust, bias, and explainability.

This is the 4th session:
Adversarial samples are inputs to machine learning models that an adversary has tampered with in order to cause specific misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend ML models against them. This poses potential threats to the deployment of ML in security-critical applications.

In this webinar I will review the state of the art on adversarial samples and discuss recent progress in developing ML models that are robust against them. Most of the time will be spent on how to use the Adversarial Robustness Toolbox (ART) open source project to evaluate the robustness of ML models under various types of threats.
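
To give a flavor of the hands-on part, below is a minimal sketch of such a robustness evaluation with ART. The dataset, model, and attack strength (eps) are illustrative choices for this announcement, not necessarily what the webinar uses; the sketch assumes ART is installed (pip install adversarial-robustness-toolbox) and a scikit-learn classifier trained on features scaled to [0, 1]:

    # Illustrative example only: dataset, model, and eps are our own choices.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    # Train an ordinary classifier on digit images scaled into [0, 1].
    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Wrap the model for ART and craft adversarial samples with FGSM.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    X_adv = attack.generate(x=X_test)

    # Compare clean vs. adversarial accuracy to quantify robustness.
    print("clean accuracy:", model.score(X_test, y_test))
    print("adversarial accuracy:", model.score(X_adv, y_test))

The drop from clean to adversarial accuracy is a simple robustness measure; ART provides many other attacks and defenses that can be swapped in the same way.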

All sessions of the series:

  • Jul 27th - AI Security: Privacy-Preserving Machine Learning by IBM AI. Session 1
  • Aug 10th - Explainable AI Workflows using Python. Session 2
  • Aug 17th - Understanding and Removing Unfair Bias in ML. Session 3
  • Aug 24th - Adversarial Robustness 360 Toolbox For ML. Session 4
  • Aug 31st - Workshop: Explainable AI Workflows. Session 5
Speaker

Mathieu Sinn

I lead global IBM efforts on developing and proving out robust, secure and privacy-preserving AI. My team and I lead several open source projects in this space and partner with world-class R&D organizations from industry and academia to help advance trustworthy and responsible AI.