Microsoft, MITRE and 11 other organisations to take on machine learning threats

IT major Microsoft and the federally funded research organisation MITRE today released the ‘Adversarial ML Threat Matrix’, a framework designed to help cybersecurity experts prepare for attacks on machine learning (ML) models.

The framework is available on GitHub and, besides Microsoft Corp. and MITRE, includes contributions from 11 other organisations.

MITRE said on its official blog that it and Microsoft, in collaboration with Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Labs, the University of Toronto, Cardiff University, Software Engineering Institute/Carnegie Mellon University, PricewaterhouseCoopers, and the Berryville Institute of Machine Learning, were releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

Microsoft said in its blog that over the last four years, it had seen a “notable increase” in attacks on commercial ML systems. Market reports were also drawing attention to the problem: Gartner’s Top 10 Strategic Technology Trends for 2020, published in October 2019, predicted that “through 2022, 30 percent of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.”
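To make the “adversarial samples” category concrete, here is a minimal, self-contained sketch of the fast gradient sign method (FGSM), one of the best-known evasion attacks, run against a toy logistic-regression classifier. The model, weights, and input below are hypothetical and are not drawn from Microsoft’s or Gartner’s material.

```python
# Minimal FGSM sketch against a toy logistic-regression model.
# All weights and inputs here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained binary classifier: p(y=1 | x) = sigmoid(w @ x + b)
w = rng.normal(size=20)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# A clean input constructed so the model confidently predicts class 1:
# the logit w @ x + b comes out to about 2.1, i.e. predict(x) is roughly 0.89.
x = (2.0 / (w @ w)) * w
y = 1.0  # true label

# FGSM: nudge every feature in the direction that increases the loss.
# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
eps = 0.25
grad_x = (predict(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"clean score: {predict(x):.2f}, adversarial score: {predict(x_adv):.2f}")
# The small per-feature perturbation (at most eps per feature) is typically
# enough to flip the predicted class.
```

The point the toy example makes is the one the framework addresses: a perturbation too small for a human reviewer to notice can be enough to flip a production model’s decision.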

Despite these compelling reasons to secure ML systems, Microsoft’s survey of 28 businesses found that most industry practitioners had yet to come to terms with adversarial machine learning. Twenty-five of the 28 businesses indicated that they did not have the right tools in place to secure their ML systems.

Microsoft added that it was seeding the framework with a curated set of vulnerabilities and adversary behaviours that Microsoft and MITRE had vetted as effective against production ML systems, so that security analysts could focus on realistic threats to ML systems.
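For readers unfamiliar with ATT&CK-style matrices, the sketch below illustrates how tactic-to-technique entries of this kind might be organised in code. The tactic and technique names are paraphrased examples only, not the vetted contents of the matrix, which are maintained in the GitHub repository.

```python
# Illustrative only: an ATT&CK-style layout pairing attacker tactics
# with example techniques. These entries paraphrase the kinds of items
# the Adversarial ML Threat Matrix catalogues; consult the GitHub repo
# for the authoritative list.
threat_matrix = {
    "Reconnaissance": ["Acquire public ML artifacts"],
    "Initial Access": ["ML supply chain compromise"],
    "Persistence":    ["Poison training data"],
    "Evasion":        ["Craft adversarial samples"],
    "Exfiltration":   ["Steal the ML model via its inference API"],
}

def techniques_for(tactic):
    """Return the example techniques recorded under a given tactic."""
    return threat_matrix.get(tactic, [])

print(techniques_for("Evasion"))  # ['Craft adversarial samples']
```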
