Guardrails
Programmatic checks around an LLM that prevent unsafe, off-topic, or non-compliant outputs.
Guardrails are the safety net around a model. They include input validation (blocking PII, profanity, and prompt-injection patterns), output validation (verifying schema, fact-checking, redacting secrets), and policy enforcement (refusing disallowed topics), as in the sketch below.
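A minimal sketch of the wrap-and-check pattern in plain Python, with no framework. The pattern lists, the `call_llm` parameter, and the expected JSON schema are illustrative assumptions, not any library's API:

```python
import json
import re

# Hypothetical patterns; real deployments use far more thorough detectors.
INJECTION_PATTERNS = [r"ignore (all )?previous instructions"]
PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. a US-SSN-shaped number


def validate_input(prompt: str) -> None:
    """Input validation: reject prompts containing PII or injection attempts."""
    for pattern in INJECTION_PATTERNS + PII_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError(f"Input blocked by guardrail: matched {pattern!r}")


def validate_output(raw: str) -> dict:
    """Output validation: require valid JSON with the expected field."""
    data = json.loads(raw)  # raises ValueError if the model broke the schema
    if "answer" not in data:
        raise ValueError("Output blocked by guardrail: missing 'answer' field")
    return data


def guarded_completion(prompt: str, call_llm) -> dict:
    """Wrap a fallible LLM call with input and output checks."""
    validate_input(prompt)
    raw = call_llm(prompt)  # call_llm is any function that returns model text
    return validate_output(raw)


if __name__ == "__main__":
    fake_llm = lambda p: json.dumps({"answer": "Paris"})
    print(guarded_completion("What is the capital of France?", fake_llm))
```

The same shape applies whatever the checks are: validation runs in ordinary code on both sides of the model call, and a failed check stops the response before it reaches the user.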
Frameworks like NeMo Guardrails and Guardrails AI, along with built-in features in the Vercel AI SDK and LangChain, make this systematic. The core idea: the LLM is fallible, so the surrounding code must catch its failures.
In regulated industries (finance, healthcare, legal), guardrails are not optional. Even in consumer products, weak guardrails are a frequent source of embarrassing AI incidents.