Reliable performance is key
Robustness in AI can be described as the predictive certainty of machine learning systems. A robust machine learning system performs just as it was trained to, even in unfamiliar settings, and minimises vulnerability to adversarial attacks. Put differently: a robust AI can detect when input data differs meaningfully from what it was trained on and mitigate unintended effects. Robustness is therefore a key prerequisite for deploying AI in safety-critical settings.1 To mitigate possible negative effects of AI on society, the European Commission has established a set of principles for secure and trustworthy AI. Core requirements such as the explainability of AI systems and the aforementioned robustness will also feature in future regulation of these technologies, alongside the cybersecurity of digital systems and the protection of data.2
When is a machine learning system robust enough for the real world?
Imagine an image-classification system that has to determine whether pictures show cats or dogs. If slightly altered pixels, shaded spots or a distorted angle in the input picture lead to a completely wrong classification, the modified input is called an adversarial example. An AI model making mistakes it should not make is the exact opposite of robustness. While amusing in the cats-and-dogs example, such failures cannot stand in the real world, where, for example, an autonomously driving vehicle needs to clearly distinguish street signs and obstacles.
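To make this concrete, here is a minimal sketch of how such an adversarial example can be crafted with the well-known Fast Gradient Sign Method (FGSM). This is a generic illustration in PyTorch, not Spiki's tooling; the names `classifier`, `img` and `lbl` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    A perturbation of at most `epsilon` per pixel is added in the direction
    that maximally increases the classification loss.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the loss gradient, then clamp to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` is any cat-vs-dog CNN, `img` an input batch
# of shape (1, 3, H, W) with pixels in [0, 1], `lbl` the true class indices.
# adv = fgsm_example(classifier, img, lbl)
# The prediction on `adv` may flip from "cat" to "dog" even though the two
# images look identical to a human observer.
```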
Robustness implies that even with perturbed inputs, i.e. with any possible alteration or minuscule change to the unperturbed input, the model still classifies them correctly and does its job just like the human brain would. In practice, verification frameworks are used to test whether robustness carries over from training to real-world situations.
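In the verification literature, this local robustness property is commonly written as a formal specification. The following formulation is the generic one (with classifier f, input x and perturbation budget ε), not Spiki-specific notation:

```latex
\forall \delta \in \mathbb{R}^n:\;
\|\delta\|_\infty \le \varepsilon
\;\Longrightarrow\;
\arg\max_i f_i(x + \delta) = \arg\max_i f_i(x)
```

A verification framework either proves this implication for a given x and ε, or returns a counterexample, i.e. a concrete adversarial perturbation.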
The goal is clear: safety and reliability.
Robustness solves the “never enough data” problem
Whenever an AI is trained for robustness, the question arises: how much learning input, i.e. data, is enough to guarantee that it functions correctly? The usual answer is that there is never enough; the number of possible data points or inputs is effectively infinite. That is what makes training an AI costly and time-intensive. Training against clearly defined and specified input regions reduces this infinite set of individual data points to a finite number of specifications, saving both time and money.
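One concrete way to see how a specification can stand in for infinitely many samples is interval bound propagation (IBP), a standard certification technique. We use it here purely as an illustration under that assumption; it is not necessarily Spiki's method. A single interval specification covers every input inside it at once:

```python
import numpy as np

def propagate_interval(W, b, lower, upper):
    """Push an axis-aligned input box [lower, upper] through a linear layer.

    Every one of the (uncountably many) inputs inside the box is guaranteed
    to land inside the returned output box -- one specification stands in
    for an infinite set of individual data points.
    """
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the box
    return new_center - new_radius, new_center + new_radius

# Hypothetical usage: a toy 2-input, 2-output layer and a specification that
# lets each input feature vary by +/- 0.1 around a nominal point.
W = np.array([[1.0, -0.5], [0.3, 0.8]])
b = np.array([0.1, -0.2])
lo, hi = propagate_interval(W, b, np.array([0.4, 0.6]), np.array([0.6, 0.8]))
# If the certified output box never crosses the decision boundary, the model
# is provably robust on the whole region -- no extra samples required.
```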
Spiki makes AI robust and reliable
SPIKI is building brain-inspired AI you can trust. We developed an innovative neural network training method for supervised learning that combines:
- Robust neural networks
- Specification-based training data for reduced data collection effort
- Built-in formal verification for explainable AI
We are striving to build trustworthy AI that can be deployed on a microcontroller, on an FPGA, as a cloud service or, as a future step in our product development, even on an ASIC. You benefit from an easy-to-use toolchain, from training to deployment, ready for third-party hardware.
Excited? Contact us!
References
1 Tim G. J. Rudner and Helen Toner, Key Concepts in AI Safety: Robustness and Adversarial Examples, Center for Security and Emerging Technology, March 2021.
2 Ronan Hamon, Henrik Junklewitz, and Ignacio Sanchez, Robustness and Explainability of Artificial Intelligence – From Technical to Policy Solutions, Publications Office of the European Union, Luxembourg, 2020.