Input API

Refer to the API Reference page for an explanation of the API process.

Description

The Input API submits a potentially Risky Prompt to WitnessAI. The Risky Prompt is not forwarded to the Destination AI App or LLM (collectively, “Apps”).
WitnessAI processes the Risky Prompt in the same way it does for any App we support.
WitnessAI returns a Safe Prompt, along with the intention classification, risk scores, redaction tokens, messages, and the output of every Policy and Guardrail that matches the source, prompt data, and destination of the input request.
Your Custom App processes the API response and can submit the Safe Prompt returned by the Input API to the Destination App.
After receiving the Safe Prompt, the Destination App sends back a potentially Risky Response.
WitnessAI receives the Risky Response, processes it through your Policies and Guardrails, and sends the Safe Response back to your Custom App.
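For illustration, the minimal Python sketch below walks through the first half of this round trip: it submits a prompt to the Input API endpoint shown in the Usage section and, if the flow is not blocked, hands the Safe Prompt to the Destination App. The send_to_destination_app call is a hypothetical placeholder for your own client.

```python
import requests

# Endpoint and headers taken from the Usage example below.
WITNESS_URL = "https://api.demo2.witness.ai/v1/guardrail/input"
HEADERS = {
    "accept": "application/json",
    "authorization": "bearer your-authentication-key",  # your API key
    "content-type": "application/json",
}

def protect_prompt(text: str) -> dict:
    """Submit a potentially Risky Prompt to WitnessAI and return the parsed response."""
    payload = {
        "text": text,
        "provider_name": "openai",
        "model_name": "gpt-4o",
        "user": {"email": "admin@yourco.com"},
        "caller": {"application_name": "Custom App", "application_version": "v1.0.1"},
    }
    resp = requests.post(WITNESS_URL, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

response = protect_prompt("American Express: 3714-496353-98431")
if not response["block_flow"]:
    safe_prompt = response["text"]  # redacted, e.g. "[TEMPLATE_CREDIT_CARD_1]"
    # send_to_destination_app(safe_prompt)  # hypothetical: your Destination App client
```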
 

Field Descriptions

Input Fields

| Field | Type | Example | Description |
| --- | --- | --- | --- |
| text | string | "Visa 2134 9394 2049 1100" | The prompt text to be protected. |
| provider_name | string | "openai" | Name of the App Provider. |
| model_name | string | "gpt-4o" | Name of the model. |
| user.email | string | admin@yourco.com | Email address of the requesting user. |
| caller.application_name | string | "My Custom App" | Application name as displayed in the console. |
| caller.application_version | string | "1.0.1" | Application version. |
| detailed_output | boolean | true \| false | Instructs output to include detailed logs and scorecards. |
| session_provider.conversation_id | string | "da82c065-0700-0000-0000-0000cd7d372e" | UUID of the Conversation Session. If not provided in the request, a new conversation session and conversation_id are created and passed back; include this ID in subsequent prompt calls in the conversation. |
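As a sketch of multi-turn usage, the first call below omits session_provider and the follow-up passes back the conversation_id the API returned. It reuses the WITNESS_URL and HEADERS constants from the sketch in the Description section, which are assumptions drawn from the Usage example.

```python
import requests

def protect_in_session(text: str, conversation_id: str | None = None) -> dict:
    payload = {
        "text": text,
        "provider_name": "openai",
        "model_name": "gpt-4o",
        "user": {"email": "admin@yourco.com"},
        "caller": {"application_name": "Custom App", "application_version": "v1.0.1"},
    }
    if conversation_id:
        # Reuse the existing Conversation Session for follow-up prompts.
        payload["session_provider"] = {"conversation_id": conversation_id}
    resp = requests.post(WITNESS_URL, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

first = protect_in_session("My card number is 3714-496353-98431")
conversation_id = first["conversation_id"]  # created by WitnessAI on the first call
follow_up = protect_in_session("Use the same card as before.", conversation_id)
```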
 

Response Fields

| Field | Type | Example | Description |
| --- | --- | --- | --- |
| request_id | string | "f0078d68-0700-0000-0000-000023cf7b99" | UUID for this prompt. |
| conversation_id | string | "f0078d68-0700-0000-0000-000023cf7b99" | UUID for this conversation. |
| text | string | "Visa [TEMPLATE_CREDIT_CARD_1]." | Protected User Prompt that WitnessAI would submit to the Destination App. |
| report | object | see Example 200 Response | Contains the score_cards array of Scorecards (see Scorecard Fields below). |
| warning_count | integer | 6 | Number of warnings across all of the scorecards. |
| error_count | integer | 0 | Number of guardrails that could not execute (result is neither pass nor fail). |
| incomplete_count | integer | 0 | Not currently used. |
| input_score | integer | 0 | Not currently used. |
| output_score | integer | 0 | Not currently used. |
| combined_score | integer | 0 | Input + output scores (not currently used). |
| risk_score | integer | 0 | Range 0-3; 0 is no risk, 3 is high risk. |
| result | string | pass \| fail | Overall pass/fail result. |
| block_flow | boolean | true \| false | Whether the flow should be blocked, based on policy interpretation. |
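One way a Custom App might act on these fields is sketched below; it follows the nesting shown in the Example 200 Response, where the counts, scores, and result appear under report.

```python
def handle_input_response(response: dict) -> str | None:
    """Return the Safe Prompt to forward, or None if the flow must be blocked."""
    if response["block_flow"]:
        return None  # policy interpretation says block; forward nothing
    report = response.get("report", {})
    if report.get("result") == "fail" or report.get("risk_score", 0) >= 3:
        # Not blocked, but worth logging before forwarding.
        print(f"high-risk prompt: risk_score={report.get('risk_score')}")
    return response["text"]  # the Protected User Prompt
```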

Scorecard Fields

💡 The API Response includes a “report” field, which consists of an array of “Scorecards”.

| Field | Type | Example | Description |
| --- | --- | --- | --- |
| id | string | "f0078d68-0700-0000-0000-000023cf7b99" | UUID for the scorecard. |
| prompt_id | string | "f0078d68-0700-0000-0000-000023cf7b99" | UUID for the prompt. |
| report_type | string | | Not currently used. |
| started_at | datetime | | Start time of the run. |
| completed_at | datetime | | End time of the run. |
| data_modified | boolean | true \| false | Specifies whether input data was modified or required redaction. |
| risk_type | string | "Data Leakage" | Readable string describing the risk that was found. |
| risk_metric | string | "none", "low", "medium", "high" | Severity of the risk. |
| message | string | | Guardrail-specific message. For the intention classifier, it is the User Prompt intent. |
| result | string | pass \| fail | |
| filter_identifier | string | "fl-pre-intention-classifier" | Filter name. |
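A short sketch of scanning the scorecards, using the score_cards nesting and field names from the Example 200 Response:

```python
def summarize_scorecards(response: dict) -> None:
    """Print redactions and failed guardrails from the report's scorecards."""
    for card in response.get("report", {}).get("score_cards", []):
        name = card["filter_identifier"]
        if card.get("data_modified"):
            print(f"{name}: input was modified/redacted ({card.get('message')})")
        if card.get("result") == "fail":
            print(f"{name}: failed ({card.get('risk_type')}, {card.get('risk_metric')})")
```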

Usage

Example Request

```bash
curl --request POST \
  --url https://api.demo2.witness.ai/v1/guardrail/input \
  --header 'accept: application/json' \
  --header 'authorization: bearer your-authentication-key' \
  --header 'content-type: application/json' \
  --data '
{
  "text": "American Express: 3714-496353-98431 or 371449635398431",
  "provider_name": "openai",
  "model_name": "gpt-4o",
  "user": {
    "email": "admin@yourco.com"
  },
  "caller": {
    "application_name": "Custom App",
    "application_version": "v1.0.1"
  }
}
'
```

Example ‘200’ Response

{ "request_id": "f0078d68-0100-0000-0000-00007845779e", "conversation_id": "f0078d68-0700-0000-0000-000023cf7b99", "text": "American Express: [TEMPLATE_CREDIT_CARD_2] or [TEMPLATE_CREDIT_CARD_1]", "report": { "score_cards": [ { "id": "f0078d68-0c00-0000-0000-00003bd96865", "prompt_id": "f0078d68-0100-0000-0000-00007845779e", "report_type": "", "started_at": "2025-08-01T18:31:12.358245Z", "completed_at": "2025-08-01T18:31:12.366365Z", "data_modified": true, "confidence": 100, "risk_score": 3, "risk_type": "Data Leakage", "risk_metric": "High", "message": "input required anonymization", "result": "pass", "filter_identifier": "fl-pre-anonymizer", "rule_result": "warn", "rule_message": "Data Protection set to Warn Action ." }, { "id": "f0078d68-0c00-0000-0000-000018423116", "prompt_id": "f0078d68-0100-0000-0000-00007845779e", "report_type": "", "started_at": "2025-08-01T18:31:12.349592Z", "completed_at": "2025-08-01T18:31:12.94504Z", "data_modified": false, "risk_type": "None", "risk_metric": "None", "message": "Process financial payment", "result": "pass", "filter_identifier": "fl-pre-topic-categorizer" }, { "id": "f0078d68-0c00-0000-0000-0000009fbefa", "prompt_id": "f0078d68-0100-0000-0000-00007845779e", "report_type": "", "started_at": "2025-08-01T18:31:12.350063Z", "completed_at": "2025-08-01T18:31:12.410792Z", "data_modified": false, "risk_type": "None", "risk_metric": "None", "message": "prompt injection was not detected", "result": "pass", "filter_identifier": "fl-pre-risk-analysis" }, { "id": "f6078d68-0c00-0000-0000-0000a1fdc565", "prompt_id": "f0078d68-0100-0000-0000-00007845779e", "report_type": "", "started_at": "2025-08-01T18:31:18.854834Z", "completed_at": "2025-08-01T18:31:18.918675Z", "data_modified": false, "risk_type": "None", "risk_metric": "None", "message": "[]", "result": "pass", "filter_identifier": "fl-pre-intention-classifier" } ], "warning_count": 0, "error_count": 0, "incomplete_count": 0, "input_score": 0, "output_score": 0, "combined_score": 0, "risk_score": 3, "result": "pass" }, "block_flow": false, "route": false, "text_modified": true, "prompt_instructions": "Below is a request that may contain predefined placeholders, such as [TEMPLATE_PERSON_1], [TEMPLATE_CREDIT_CARD_2], etc.,In your response to the request, if your response needs to include a placeholder for context, you must preserve the format by keeping it in uppercase.Do not modify these placeholders in any manner. Do not add any new placeholders.", "messages": [ "Data Protection set to Warn Action ." ] }

HTTP Status Codes

200: Successful
400: Bad Request
401: Unauthorized
500: Internal Server Error
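A minimal sketch of handling these status codes, reusing the WITNESS_URL, HEADERS, and payload names from the earlier sketches:

```python
resp = requests.post(WITNESS_URL, json=payload, headers=HEADERS, timeout=30)
if resp.status_code == 200:
    result = resp.json()  # Successful: process the response fields described above
elif resp.status_code == 400:
    print("Bad Request: check the request body against the Input Fields table")
elif resp.status_code == 401:
    print("Unauthorized: check the bearer key in the authorization header")
elif resp.status_code == 500:
    print("Internal Server Error: consider retrying with backoff")
```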