Computers are Stupid: Protecting "AI" from Itself
It seems like everywhere you turn, there is another startup or company looking to use "AI" to revolutionize something, or really anything. Is AI hype or substance? In this talk, we'll dive into AI security, looking at the field of adversarial learning. How easy is it to fool an artificial intelligence? What would be needed to create a robust and secure neural network? And how are researchers working to solve the security issues in the way we train AI, to keep it from making errors or being used for unethical tasks?

In that same vein, we'll address how machine learning places data privacy and ethical data use at risk, and explore why new efforts like GDPR and privacy-preserving ML might pave the way for a safer, more ethical machine learning practice.
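To make the first of those questions concrete: one of the best-known results in adversarial learning is that a single gradient step can often change a classifier's prediction. Below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The untrained toy model, the epsilon value, and the random "image" are illustrative assumptions, not material from the talk.

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM), one of the
# simplest adversarial attacks. All specifics here are illustrative.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; a real attack would target a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Perturb input x by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # Against a trained model, one signed-gradient step of size epsilon is
    # often enough to flip the prediction while looking unchanged to a human.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

image = torch.rand(1, 1, 28, 28)  # placeholder "image"
label = torch.tensor([3])         # its (assumed) true class
adversarial = fgsm_attack(image, label)
print(model(image).argmax(), model(adversarial).argmax())
```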
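On the privacy side, privacy-preserving ML often builds on differential privacy. As an illustration only (the dataset, bounds, and epsilon below are made-up assumptions, not from the talk), here is the classic Laplace mechanism applied to a mean:

```python
# Sketch of an epsilon-differentially-private mean via the Laplace mechanism.
import numpy as np

def laplace_mean(values, lower, upper, epsilon):
    """Release the mean of bounded values with epsilon-DP Laplace noise."""
    values = np.clip(values, lower, upper)
    # Changing one record moves the clipped mean by at most this much.
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return values.mean() + noise

# Hypothetical usage: ages bounded to [0, 100], modest privacy budget.
print(laplace_mean(np.array([23, 35, 41, 58]), 0, 100, epsilon=0.5))
```

The design choice to clip inputs first is what bounds the sensitivity; without a known range, the noise scale cannot be calibrated.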