Real-time protection
Evaluate
Here are more details about the Evaluate API.
The Evaluate API safeguards generative AI applications by evaluating the input and output of the LLM against a set of policies and guardrails.
Parameters
- Application: The application to be evaluated; it is the same as the application ID in the application setup.
- Messages: The messages to be evaluated. They can be `user`, `assistant`, or both; optionally, you can also pass the `system` message.
- Policy IDs: The policy IDs to be evaluated, as a list in the same order as the policies in the application. Your application should have these policies set up (see policies setup) or passed via the `policies` parameter.
- Policies: The list of policies to be evaluated; see policies setup for more details.
- Correction: If a policy is violated and `correction_enabled` is set to `true`, the LLM output is corrected by an automatic correction, or by a manual override response defined in the policy.
- Fail Fast: If `fail_fast` is set to `true`, the evaluation stops as soon as any policy is violated.
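The following is a minimal, illustrative sketch of an evaluation request built from these parameters. The endpoint URL, authentication header, exact payload field names, and policy IDs shown here are assumptions for illustration; check your own application setup for the real values.

```python
import requests

# Hypothetical endpoint and API key; replace with the values for your account.
API_URL = "https://api.example.com/v1/evaluate"
API_KEY = "YOUR_API_KEY"

payload = {
    # Same application ID as in the application setup.
    "application": "app_123",
    # Messages to evaluate: user, assistant, or both; system is optional.
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is your refund policy?"},
        {"role": "assistant", "content": "You can request a refund within 30 days."},
    ],
    # Policies already set up for the application, highest priority first.
    "policy_ids": ["policy_pii", "policy_toxicity"],
    # Stop evaluating as soon as any policy is violated.
    "fail_fast": True,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```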
Evaluation Flow
Once the policy set is enabled, or passed via `policy_ids` in the API request, it is checked on every evaluation request against the provided messages (`user`, `assistant`, or both). The policies have a priority order: the first in the list has the highest priority. If `fail_fast` is set to `true`, the evaluation stops as soon as any policy is violated.
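Conceptually, the priority order and fail-fast behavior can be pictured with the sketch below. This is not the service implementation; the `check_policy` callables are hypothetical stand-ins for individual policy checks, and the violation-message return type is only for illustration.

```python
from typing import Callable, Dict, List, Optional

Message = Dict[str, str]
PolicyCheck = Callable[[List[Message]], Optional[str]]


def evaluate(messages: List[Message], policies: List[PolicyCheck], fail_fast: bool = True) -> List[str]:
    """Illustrative only: run policy checks in priority order (first = highest).

    Each policy callable returns a violation message, or None when the messages pass.
    With fail_fast=True the loop stops at the first violation; otherwise every policy
    is evaluated and all violations are collected.
    """
    violations: List[str] = []
    for check_policy in policies:       # priority order: first in the list wins
        violation = check_policy(messages)
        if violation is not None:
            violations.append(violation)
            if fail_fast:               # stop at the first violated policy
                break
    return violations
```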
Here is the policy flow:

(Policy flow diagram, starting from the input request.)
To create your policies, visit the Policies Setup Guide.