This week, the UK National Cyber Security Centre (NCSC) published guidelines to promote the secure and responsible development of AI systems.
With four areas of focus – secure design, secure development, secure deployment, and secure operation and maintenance – the NCSC’s aim is to “help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.”
The guidelines have been developed in cooperation with 21 partner agencies from 17 countries, as well as a range of partners from academia and industry.
NCC Group’s Chief Scientist, Chris Anley, shares his thoughts on the publication:
“This is a helpful and timely document addressing an increasingly important subject. AI presents many opportunities, but these opportunities come with significant risks.
The guidance is welcome for several reasons:
- It will help raise awareness of AI security issues among executives and engineers.
- It defines specific actions that can be taken to reduce the security risks associated with AI systems.
- It provides useful examples of the categories of security issues that we are finding in the wild, including:
  - Leakage of sensitive training data
  - Data poisoning
  - Remote code execution
  - Adversarial attacks, manipulation of decision-making processes, and bypassing of guardrails
Secure design, development, and deployment of AI systems is essential, and the fact that these guidelines have been developed with such a broad range of partners is encouraging.”
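To make one of the issue categories above concrete, here is a minimal, hypothetical sketch of data poisoning, assuming a simple scikit-learn classifier on synthetic data; it is illustrative only and not drawn from the NCSC guidance. The point is that an attacker who can tamper with training labels degrades the model without ever touching the deployed system.

```python
# Minimal, hypothetical sketch of data poisoning: an attacker who can relabel
# part of the training set silently skews the resulting model. The dataset
# and classifier here are illustrative assumptions, not from the guidelines.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Targeted poisoning: flip 40% of class-1 training labels to class 0,
# as an attacker with write access to the training pipeline might.
idx_class1 = np.where(y_train == 1)[0]
flipped = rng.choice(idx_class1, size=int(0.4 * len(idx_class1)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Scoring both models against the same held-out test set makes the degradation visible, which is why integrity and provenance controls on training data feature in the secure development and secure operation guidance.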
For further information:
Visit our research blog under the Machine Learning category, which contains fully worked examples, code, and detailed discussion of this rapidly changing area.
For even more context, check out our recap of the inaugural Global AI Safety Summit, where we decode the biggest announcements to highlight what's important for your business.