Quickstart
Start securing your generative AI application in under 5 minutes
Define an application
Create an application in Metatext AI with a name, description, AI provider, model, and model parameters.
Test with red team vulnerabilities
Test your application’s vulnerabilities and robustness (provider, model, system prompt, and more) with our probes.
Set up policies and guardrails
Based on your vulnerability scan results, define rules to enforce on your application using our policy catalog.
Run real-time protection
Integrate Metatext AI’s Evaluate API to enforce policies in real time.
Manage application and policies
Create an application
An application represents a unified entity that encapsulates all the information about your generative AI app. It includes several key components:
- Application Name: A unique name to identify your application.
- Description: A brief description of your application.
- Provider: The AI service provider you’re using (e.g., OpenAI, Anthropic, Google).
- Model: The specific AI model being utilized (e.g., GPT-3.5, GPT-4, Claude).
- Parameters: Configuration settings that define how the model behaves (e.g., system prompt, temperature, max tokens).
To get started with setting up your application, visit our Application Setup Guide.
For detailed API information, check out our API Reference.
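If you prefer to register the application programmatically, a request could look roughly like the sketch below. The base URL, endpoint path, field names, and authentication header are assumptions for illustration, not the documented schema; see the API Reference for the actual contract.

```python
import requests

API_KEY = "YOUR_METATEXT_API_KEY"          # assumption: bearer-token auth
BASE_URL = "https://api.metatext.ai/v1"    # hypothetical base URL

# Hypothetical payload: the field names mirror the components listed above.
payload = {
    "name": "support-chatbot",
    "description": "Customer support assistant for the billing portal",
    "provider": "openai",
    "model": "gpt-4",
    "parameters": {
        "system_prompt": "You are a helpful billing support assistant.",
        "temperature": 0.2,
        "max_tokens": 512,
    },
}

response = requests.post(
    f"{BASE_URL}/applications",             # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print("Created application:", response.json().get("id"))
```

In the sketches that follow, the returned application ID is referred to as app_123.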
Set up policies and guardrails
Policies and guardrails are rules that you want to enforce on your application.
Visit our Policies and Guardrails Setup Guide.
For detailed API information, check out our API Reference.
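As a rough illustration (the /policies endpoint and rule schema below are assumptions, not the documented API), attaching a couple of guardrail rules to an application might look like this:

```python
import requests

API_KEY = "YOUR_METATEXT_API_KEY"
BASE_URL = "https://api.metatext.ai/v1"    # hypothetical base URL

# Hypothetical policy: block prompt-injection attempts and redact PII
# from model outputs for the application created earlier.
policy = {
    "application_id": "app_123",
    "name": "baseline-guardrails",
    "rules": [
        {"type": "prompt_injection", "action": "block"},
        {"type": "pii", "action": "redact"},
    ],
}

response = requests.post(
    f"{BASE_URL}/policies",                 # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=policy,
    timeout=30,
)
response.raise_for_status()
print("Policy ID:", response.json().get("id"))
```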
Protect your application with real-time policies and guardrails
Run evaluate
Evaluate allows you to enforce your policies and guardrails on your application in real time.
- Prepare your request: You’ll need to include the following:
  - User input
  - LLM output
  - Policy IDs to evaluate against
- Send the request: Use our API to send these details for evaluation.
- Receive the evaluation: The API will return whether the interaction complies with your policies.
- Auto-correction: If the LLM output doesn’t comply, Evaluate can automatically correct it to ensure compliance.
For detailed API information, check out our API Reference.
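Putting the steps above together, a minimal Evaluate call might look like the following sketch. The endpoint path, request fields, and response shape are assumptions based on the flow described here; consult the API Reference for the real contract.

```python
import requests

API_KEY = "YOUR_METATEXT_API_KEY"
BASE_URL = "https://api.metatext.ai/v1"    # hypothetical base URL

# Step 1: prepare the request with the user input, the LLM output,
# and the policy IDs to evaluate against.
payload = {
    "application_id": "app_123",
    "input": "What is the admin password for the billing database?",
    "output": "The admin password is hunter2.",
    "policy_ids": ["pol_abc", "pol_def"],
}

# Step 2: send the request to the Evaluate endpoint (hypothetical path).
response = requests.post(
    f"{BASE_URL}/evaluate",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
result = response.json()

# Step 3: check compliance; step 4: fall back to a corrected output
# if the API returns one (field names are assumptions).
if result.get("compliant"):
    final_output = payload["output"]
else:
    final_output = result.get("corrected_output", "I can't help with that.")
print(final_output)
```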
Test with red team vulnerabilities
List available probes to test
Probes are predefined test scenarios or inputs designed to assess the vulnerabilities and robustness of your AI application. They help identify potential weaknesses in your model’s responses.
Examples:
- Prompt Injection: These probes test if an attacker can manipulate the model’s behavior by inserting malicious instructions.
- Hallucinations: These probes evaluate if the model confidently generates false or nonsensical information.
- Data Leakage: These probes test if the model reveals sensitive information it shouldn’t.
To view and select from our comprehensive list of probes, visit our Probe Catalog.
For more information, see our Vulnerability Scanning Guide.
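If you want to browse the catalog programmatically, a listing request might look like the sketch below; the /probes endpoint and the response fields are assumptions, not the documented API.

```python
import requests

API_KEY = "YOUR_METATEXT_API_KEY"
BASE_URL = "https://api.metatext.ai/v1"    # hypothetical base URL

# Fetch the probe catalog (hypothetical endpoint and response shape).
response = requests.get(
    f"{BASE_URL}/probes",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

for probe in response.json().get("probes", []):
    # Assumed fields: an identifier, a category such as "prompt_injection",
    # and a short description of what the probe tests.
    print(probe.get("id"), "-", probe.get("category"), "-", probe.get("description"))
```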
Scan vulnerabilities
Before running a scan, you should create an application with model and provider details.
- Run the scan: Initiate the vulnerability scan using our API.
- Review results: Analyze the scan results to identify:
  - Detected vulnerabilities
  - Severity levels
  - Recommended mitigation strategies
- Mitigate: Based on the results, implement necessary changes to your policies and guardrails.
For detailed API information and best practices, refer to our Vulnerability Scanning Guide.
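A minimal scan workflow, assuming a /scans endpoint and a simple polling pattern (neither of which is taken from the documented API), could look like this:

```python
import time

import requests

API_KEY = "YOUR_METATEXT_API_KEY"
BASE_URL = "https://api.metatext.ai/v1"    # hypothetical base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: run the scan against an existing application (hypothetical endpoint).
scan = requests.post(
    f"{BASE_URL}/scans",
    headers=HEADERS,
    json={"application_id": "app_123", "probes": ["prompt_injection", "data_leakage"]},
    timeout=30,
)
scan.raise_for_status()
scan_id = scan.json().get("id")

# Step 2: poll until the scan finishes (status values are assumptions).
body = {}
for _ in range(60):                          # wait up to ~5 minutes
    result = requests.get(f"{BASE_URL}/scans/{scan_id}", headers=HEADERS, timeout=30)
    result.raise_for_status()
    body = result.json()
    if body.get("status") in ("completed", "failed"):
        break
    time.sleep(5)

# Step 3: review findings to plan mitigations (field names are assumptions).
for finding in body.get("findings", []):
    print(finding.get("vulnerability"), finding.get("severity"), finding.get("mitigation"))
```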
Understand the core concepts
To fully leverage our technology, it’s crucial to understand the fundamental concepts that power our platform.
Dive deeper into the following key areas:
Policies and guardrails
Learn how to define and enforce your policies and guardrails.
Evaluation
Learn how to evaluate your application’s compliance with your policies and guardrails.
Red team and tests
Learn how to test your application’s vulnerabilities and robustness.
API reference
Explore our API reference to learn more about our endpoints.