The use of generative AI technology such as ChatGPT is growing; however, this creates significant exposures for businesses. Here’s how risk managers can mitigate the threats
The mass availability of generative AI technology such as ChatGPT and Google Bard has become a top concern for enterprise risk managers, according to a new study.
Generative AI was the second most frequently named risk in Gartner’s second-quarter survey, appearing in the top 10 for the first time.
Ran Xu, director of research in the Gartner risk & audit practice, said: “This reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases, and therefore potential risks, that these tools engender.”
The need for speed
As generative artificial intelligence innovation continues at a breakneck pace, concerns around security and risk have become increasingly prominent.
Some lawmakers have requested new rules and regulations for AI tools, while tech and business leaders have suggested a pause on the training of AI systems to assess their safety.
However, many experts believe that the genie is out of the bottle, and risk managers must therefore focus on managing exposures rather than hoping for a slowdown in the adoption of the technology.
“Organisations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM).”
Avivah Litan, VP analyst at Gartner, said: “The reality is that generative AI development is not stopping. Organisations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM).
“There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and companies who host generative AI foundation models.”
There are currently no off-the-shelf tools on the market that give users systematic privacy assurances or effective content filtering of their engagements with these models, for example, filtering out factual errors, hallucinations, copyrighted materials or confidential information.
Litan added that AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management.
Four key risks to manage
In terms of managing enterprise risk, there are four key themes that must be addressed: intellectual property, data privacy, cyber security, and “hallucinations”, fabrications and deepfakes.
It’s important to educate corporate leadership on the need for caution and transparency when using generative AI tools, so that intellectual property risks can be properly mitigated in terms of both input and output.
Xu explained: “Information entered into a generative AI tool can become part of its training set, meaning that sensitive or confidential information could end up in outputs for other users.
“Moreover, using outputs from these tools could well end up inadvertently infringing the intellectual property rights of others who have used it.”
Generative AI tools may share user information with third parties, such as vendors or service providers, without prior notice.
This has the potential to violate privacy laws in many jurisdictions.
For example, regulation has already been implemented in China and the EU, with proposed regulations emerging in the USA, Canada, India and the UK among others.
Hackers are always testing new technologies for ways to subvert them for their own ends, and generative AI is no different.
Xu said: “We’ve seen examples of malware and ransomware code that generative AI has been tricked into producing, as well as ‘prompt injection’ attacks that can trick these tools into giving away information they should not.
“This is leading to the industrialisation of advanced phishing attacks.”
“Hallucinations”, fabrications and deepfakes
“Hallucinations” and fabrications including factual errors are already emerging issues with generative AI chatbot solutions.
Training data can lead to biased, off-base or wrong responses, but these can be difficult to spot, particularly as solutions are increasingly believable and relied upon.
Deepfakes, in which generative AI is used to create content with malicious intent, are a significant risk.
These fake images, videos and voice recordings have been used to attack celebrities and politicians, to create and spread misleading information, and even to create fake accounts or take over and break into existing legitimate accounts.
Litan said: “In a recent example, an AI-generated image of Pope Francis wearing a fashionable white puffer jacket went viral on social media.
“While this example was seemingly innocuous, it provided a glimpse into a future where deepfakes create significant reputational, counterfeit, fraud and political risks for individuals, organisations and governments.”
How risk professionals can manage generative AI risks
There are two general approaches to leveraging ChatGPT and similar applications.
The first is out-of-the-box model usage, which leverages these services as-is, with no direct customisation.
The second approach is prompt engineering, which uses tools to create, tune and evaluate prompt inputs and outputs.
“Establish a governance and compliance framework for enterprise use of these solutions”
Litan said: “For out-of-the-box usage, organisations must implement manual reviews of all model output to detect incorrect, misinformed or biased results.
“Establish a governance and compliance framework for enterprise use of these solutions, including clear policies that prohibit employees from asking questions that expose sensitive organisational or personal data.”
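Policies like these can be partly enforced with automated screening of prompts before they leave the organisation. The sketch below is illustrative only: the pattern names and regular expressions are assumptions standing in for an organisation’s own data-classification rules, not a real tool.

```python
import re

# Hypothetical sensitivity rules; a real deployment would use the
# organisation's own data-classification patterns.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def screen_prompt(prompt: str):
    """Redact sensitive spans before a prompt is sent to an external model.

    Returns the redacted prompt and the list of policy rules it triggered,
    so violations can be logged against the compliance framework.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits
```

In practice, such a screen would sit in the approval workflow (or a gateway, as described below) rather than rely on employees running it themselves.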
Organisations should also monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations.
“Steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure”
For example, firewalls can block enterprise user access to these tools, security information and event management (SIEM) systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.
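The log-monitoring step can be sketched very simply: scan gateway or SIEM event records for requests to known generative AI endpoints. The log format and host list below are illustrative assumptions, not any particular product’s schema.

```python
# Hosts that policy disallows; an assumed, illustrative blocklist.
DISALLOWED_HOSTS = {"api.openai.com", "chat.openai.com", "bard.google.com"}

def flag_violations(log_lines):
    """Yield (user, host) pairs for requests to disallowed AI endpoints.

    Assumes a simple whitespace-separated 'user host path' log format
    purely for illustration.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in DISALLOWED_HOSTS:
            yield parts[0], parts[1]
```

Flagged entries would feed the dashboards mentioned above, turning unsanctioned usage into auditable policy-violation events rather than invisible traffic.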
Litan added: “For prompt engineering usage, all of these risk mitigation measures apply.
“Additionally, steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure. Create and store engineered prompts as immutable assets.
“These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.”
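One way to make engineered prompts immutable, as suggested above, is a content-addressed store: each prompt is keyed by a hash of its content, so any alteration produces a new key rather than silently changing a vetted asset. This is a minimal sketch under that assumption; the class name and API are illustrative, and a production system would add persistence and vetting metadata.

```python
import hashlib

class PromptStore:
    """Minimal content-addressed store for engineered prompt assets."""

    def __init__(self):
        self._assets = {}

    def add(self, prompt: str) -> str:
        """Store a prompt under the SHA-256 of its content and return the key."""
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        # Re-adding identical content is a no-op; existing keys are never overwritten.
        self._assets.setdefault(key, prompt)
        return key

    def get(self, key: str) -> str:
        """Retrieve the exact prompt that was vetted under this key."""
        return self._assets[key]
```

Because the key is derived from the content, a shared or sold prompt can be verified byte-for-byte by anyone holding its hash.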