This is the 4th session:
Adversarial samples are inputs to Machine Learning models that an adversary has tampered with in order to cause specific misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend ML models against them. This poses a potential threat to the deployment of ML in security-critical applications.
In this webinar I will review the state of the art on adversarial samples and discuss recent progress in developing ML models that are robust against them. Most of the time will be spent on how to use the Adversarial Robustness Toolbox (ART) open source project to evaluate the robustness of ML models under various types of threats.
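As a taste of the kind of evaluation the session covers, here is a minimal sketch of using ART to craft adversarial samples and compare clean versus adversarial accuracy. It assumes ART and scikit-learn are installed; the specific model, attack (Fast Gradient Method), and parameters such as eps=0.2 are illustrative choices, not part of the webinar material, and class paths follow recent ART releases.

```python
# Minimal sketch (assumes a recent ART release and scikit-learn installed;
# the model and attack parameters below are illustrative choices).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model.
x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap it in an ART estimator so attacks can query it.
classifier = SklearnClassifier(model=model)

# Craft adversarial samples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x)

# Compare accuracy on clean vs. perturbed inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```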
All sessions of the series: