Glossary

All terms in this glossary are provided for informational purposes.

Rule-based Policy

A Policy that is composed of one or more specific instances of a Rule.

Guardrail

A set of policies designed to ensure the safe, secure, and compliant functioning of AI systems and user activity. Guardrails serve to prevent AI systems from operating outside of acceptable bounds, mitigating risks associated with unacceptable user input and AI behaviors that could lead to unintended consequences or harm.

Natural Language-based Policy

A Policy that is expressed in common human language, as opposed to formal technical or domain-specific language.

Policy

An actioned Rule or set of Rules intended to align User and AI Model behaviors with the governance objectives of an organization. A Policy specifies the action to be taken in response to User input and AI model output. A Policy constrains the risk and harm associated with generative AI models and applications arising from the inputs to and/or outputs of a generative AI model. A Policy is typically written from the perspective of the organization seeking to control risk.

A Policy can be natural language-based, or consist of one or more Rules.

Rule

Criteria associated with data or information input by a User or output from a generative AI model (e.g., PII, Hacking Instructions, prompt injection). One or more Rules whose criteria are met trigger a Policy.
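The Rule and Policy definitions above describe a simple structure: Rules express criteria over input or output text, and a rule-based Policy triggers its action when one or more of its Rules' criteria are met. The following is a minimal, hypothetical sketch of that relationship; the class names, fields, and the toy PII criterion are illustrative assumptions, not a prescribed implementation.

```python
import re
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical sketch: a Rule is a named criterion over text.
@dataclass
class Rule:
    name: str                          # e.g., "PII", "Hacking Instructions"
    criterion: Callable[[str], bool]   # returns True when the criterion is met

# A rule-based Policy specifies the action to take when it triggers.
@dataclass
class Policy:
    action: str                        # e.g., "block", "redact", "warn"
    rules: List[Rule] = field(default_factory=list)

    def evaluate(self, text: str) -> Optional[str]:
        # The Policy triggers if one or more Rules' criteria are met.
        if any(rule.criterion(text) for rule in self.rules):
            return self.action
        return None

# Usage: a toy Rule that flags an email-like pattern as PII (illustrative only).
pii_rule = Rule("PII", lambda t: bool(re.search(r"\b\S+@\S+\.\S+\b", t)))
policy = Policy(action="block", rules=[pii_rule])
```

With this sketch, `policy.evaluate("contact me at a@b.com")` returns the Policy's action, while text matching no Rule returns `None`. A natural language-based Policy would instead be expressed as a prose instruction rather than as explicit criteria.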
