Don’t stop a speeding train: How to adopt a guardrail-based GenAI security strategy

GenAI tool adoption has left the station. How can enterprise security leaders keep it safely on the tracks?
By Julia Kraut, Head of Marketing
April 16, 2024
5 min read

A few months into 2024, we can safely say that the GenAI adoption train has left the station. Unfortunately for CISOs of large enterprises, this train is no Amtrak commuter rail – it’s the hyperloop. Imagine you were responsible for the safety of a major city’s railway. Would you ensure safety on the rails by setting up heavy roadblocks on the tracks to force trains to slow down, or by installing warning lights and alerts to make sure they’re speeding in the right direction?

CISOs, like train conductors, understand the challenges and hazards of trying to stop a speeding train. For enterprise CISOs, blocking GenAI tools is an unsustainable strategy: employees will inevitably find ways to adopt public tools like ChatGPT to streamline productivity and boost performance. Pressure also comes from the top down, as business leadership often pushes for rapid GenAI adoption, recognizing its potential revenue-boosting benefits. Between bottom-up adoption by employees and top-down pressure from executive leadership, the CISO’s role as “GenAI enabler” has been cemented. Caught between a rock and a hard place, how can CISOs successfully and safely drive GenAI adoption for the business while ensuring the organization’s assets stay secure? This is where we see the advent of GenAI security guardrails.

For security practitioners, the idea of security guardrails is not new. It gained prominence in the AppSec world as a way for security teams to put controls around development without slowing developers down. After struggling to keep up with the pace of development, AppSec teams learned that when security guardrails are baked into developers’ existing workflows and tooling, developers are less likely to try to bypass them. Guardrails in the AppSec world gave security teams a level of comfort that bad code wouldn’t make it to production, while development continued unobstructed.
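To make the AppSec analogy concrete, here is a minimal sketch of what such a baked-in guardrail can look like: a git pre-commit hook that scans staged changes for likely secrets and aborts the commit before bad code ever leaves the developer’s machine. The patterns and messages are illustrative, not drawn from any particular scanner.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit guardrail: block commits that contain likely secrets."""
import re
import subprocess
import sys

# Simplified example patterns; real secret scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def staged_diff() -> str:
    """Return the diff of the changes staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    diff = staged_diff()
    findings = [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(diff)]
    if findings:
        print("Commit blocked by security guardrail:")
        for name in findings:
            print(f"  - possible {name} in staged changes")
        return 1  # a non-zero exit code makes git abort the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the check runs automatically inside the workflow developers already use, it guides them toward the safe path instead of asking them to slow down and file a ticket.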

How can we adapt these same principles to the new GenAI frontier? The most common use case we see today is contextual, or prompt, guardrails. Large Language Models (LLMs) can generate text that is harmful, illegal, or in violation of internal company policies (or all three!). These risks appear on both sides of the model. On the input side, an employee of a FinServ organization might paste a customer’s bank account information into an internal company copilot to pull account details. On the output side, a paralegal might ask public ChatGPT about relevant legislation, receive inaccurate legal information, and send it on to the client. To protect against these threats, CISOs should set up content-based guardrails that define, and then alert on, prompts that are risky, malicious, or in violation of compliance standards. At Aim Security, customers can define their own unique parameters for safe prompts, and alert on and prevent prompts that fall outside these guardrails.
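What might a content-based prompt guardrail look like in practice? The sketch below is a simplified illustration, not Aim Security’s implementation; the rule set, the `check_prompt` function, and the `GuardrailVerdict` type are all hypothetical. It evaluates an outbound prompt against customer-defined patterns and returns a verdict the security team can alert on or enforce.

```python
import re
from dataclasses import dataclass, field

@dataclass
class GuardrailVerdict:
    allowed: bool
    reasons: list[str] = field(default_factory=list)

# Illustrative rules; a real deployment would let each customer define its own.
RISKY_PATTERNS = {
    "possible bank account number": re.compile(r"\b\d{8,17}\b"),
    "possible US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
RESTRICTED_TOPICS = {"legal advice", "customer account details"}

def check_prompt(prompt: str) -> GuardrailVerdict:
    """Evaluate a prompt against content-based guardrails before it reaches an LLM."""
    reasons = [label for label, pattern in RISKY_PATTERNS.items()
               if pattern.search(prompt)]
    reasons += [f"restricted topic: {topic}" for topic in RESTRICTED_TOPICS
                if topic in prompt.lower()]
    return GuardrailVerdict(allowed=not reasons, reasons=reasons)

verdict = check_prompt("Pull the balance for account 12345678, please.")
if not verdict.allowed:
    # In production this would alert the security team and block or rewrite the prompt.
    print("Prompt blocked:", "; ".join(verdict.reasons))
```

The key design choice is that the guardrail sits in the request path and returns a structured verdict, so the same check can power a silent alert, a warning to the user, or a hard block, depending on the organization’s risk appetite.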

Content-based guardrails are only one part of the equation. With GenAI, one size does not fit all: highly regulated industries like healthcare, finance, and legal services require compliance guardrails to make sure sensitive customer data is protected. Aim Security, for example, redacts keywords, phrases, or entire categories of data (PII, PHI, bank account numbers, etc.) before data exits the organization while maintaining a smooth user experience, audits and monitors model outputs for compliance purposes (bias, incorrect outputs), and ensures compliance with regulatory standards (HIPAA, GDPR). For organizations using enterprise GenAI models, authorization guardrails can place data authorization boundaries when connecting GenAI tools to enterprise knowledge bases, ensuring that employee use of LLMs is limited to data they are allowed to access.
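As a rough illustration of the redaction idea (again a sketch, not Aim Security’s actual engine), the snippet below replaces detected categories of sensitive data with placeholder tokens before a prompt leaves the organization, so the LLM still receives a usable request while the raw values stay inside the perimeter. The detectors shown are deliberately simple examples.

```python
import re

# Illustrative detectors; production systems combine many more patterns
# with NLP-based entity recognition for PII and PHI.
REDACTION_RULES = [
    ("EMAIL", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),
    ("US_SSN", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("ACCOUNT_NUMBER", re.compile(r"\b\d{8,17}\b")),
]

def redact(prompt: str) -> str:
    """Replace sensitive values with category placeholders before the prompt exits the org."""
    for label, pattern in REDACTION_RULES:
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com about account 12345678901."))
# -> Email [REDACTED:EMAIL] about account [REDACTED:ACCOUNT_NUMBER].
```

An authorization guardrail would sit one layer deeper: rather than rewriting the prompt, it filters which documents a knowledge-base-connected copilot may retrieve based on the requesting employee’s existing entitlements.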

Generational companies are built on superior creativity. Enterprise business leaders understand that GenAI technologies have unlocked unparalleled gains in productivity and creativity, and guardrails for GenAI allow security leaders to keep fostering a culture of flexibility and innovation. By providing guidance rather than restrictions, these guardrails let organizations explore new technologies and strategies while upholding compliance and security standards. The challenge of securing the enterprise without slowing down the business is not new for CISOs, but the adoption of GenAI tools has supercharged it. To avoid a collision, CISOs should implement baseline security guardrails that keep the speeding train on the tracks.