DeepEval Integration Guide for TRACE
This guide walks you through configuring DeepEval, running evaluation tests, and submitting the results to the TRACE Metrics API for automated Responsible AI scoring and compliance mapping.
Step 1: Configure DeepEval Metrics
Use DeepEval’s built-in metrics and test cases to assess your LLM outputs.
from deepeval.metrics import (
    AnswerRelevancyMetric, HallucinationMetric, BiasMetric,
    RoleAdherenceMetric, ToolCorrectnessMetric
)
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What is the capital of Germany?",
    actual_output="Berlin is the capital of Germany.",
    expected_output="Berlin",
    retrieval_context=["Germany is a country in Europe. Berlin is its capital."],
    # HallucinationMetric evaluates against `context` rather than `retrieval_context`
    context=["Germany is a country in Europe. Berlin is its capital."]
)

# Instantiate each metric before calling measure().
# RoleAdherenceMetric and ToolCorrectnessMetric are imported above but need a
# conversational test case or tool-call data, so they are not measured against
# this single-turn test case.
metrics = [
    AnswerRelevancyMetric(),
    HallucinationMetric(),
    BiasMetric()
]

metric_results = {}
for m in metrics:
    m.measure(test_case)
    metric_results[m.__class__.__name__] = m.score
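DeepEval metrics report scores as floats between 0 and 1. A minimal sketch for inspecting the collected results before building the TRACE payload (the rounding shown is purely illustrative):

# Print the collected DeepEval scores (floats in the 0-1 range).
for name, score in metric_results.items():
    print(f"{name}: {score:.2f}")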
Step 2: Prepare TRACE Payload
Use the following format to create your submission JSON:
{
  "metric_metadata": {
    "application_name": "chat-application",
    "version": "1.0.0",
    "provider": "deepeval",
    "use_case": "transportation"
  },
  "metric_data": {
    "deepeval": {
      "AnswerRelevancyMetric": 85,
      "ContextualPrecisionMetric": 92,
      "ContextualRecallMetric": 78,
      "ContextualRelevancyMetric": 88,
      "ConversationCompletenessMetric": 95,
      "ConversationRelevancyMetric": 82
    }
  }
}
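If you are generating the payload programmatically, a minimal sketch like the one below can assemble it from the metric_results dictionary built in Step 1 and write it to trace_payload.json, the file referenced by the cURL example in Step 4. The 0–100 scaling assumes TRACE expects percentage-style scores as in the example above; adjust the metadata values to match your own application.

import json

# Build the TRACE payload from the DeepEval scores collected in Step 1.
# Scaling to 0-100 mirrors the example payload above (an assumption, since
# DeepEval itself reports scores between 0 and 1).
payload = {
    "metric_metadata": {
        "application_name": "chat-application",
        "version": "1.0.0",
        "provider": "deepeval",
        "use_case": "transportation"
    },
    "metric_data": {
        "deepeval": {name: round(score * 100) for name, score in metric_results.items()}
    }
}

# Save to disk so the cURL example in Step 4 can reference it.
with open("trace_payload.json", "w") as f:
    json.dump(payload, f, indent=2)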
Step 3: Get Your API Key
To submit to TRACE, you need a subscription key.
Read: How to Get and Use Your TRACE API Key
Once you’ve generated it, include it in the Ocp-Apim-Subscription-Key header of every API request.
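It is good practice to keep the key out of source code, for example in an environment variable. A minimal sketch, where the variable name TRACE_API_KEY is only an illustration:

import os

# Read the subscription key from an environment variable.
# TRACE_API_KEY is a hypothetical name; use whatever your setup defines.
api_key = os.environ["TRACE_API_KEY"]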
Step 4: Submit to TRACE API
cURL Example:
curl -X POST https://api.cognitiveview.com/metrics \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: Bearer YOUR_API_KEY" \
-d @trace_payload.json
Python Example:
import requests

url = "https://api.cognitiveview.com/metrics"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
}
payload = { ... }  # Your JSON from above

response = requests.post(url, headers=headers, json=payload)

print("Status Code:", response.status_code)
print("Response:", response.json())
What TRACE Returns
On success, TRACE responds with a JSON body containing a report_id, for example:
{
  "message": "Evaluation in progress.",
  "report_id": "gW3eQpV63sRTrQey9uJPNp"
}
This ID is used to retrieve your full evaluation.
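Continuing the Python example from Step 4, you can capture the ID directly from the submission response and keep it for later retrieval:

# Capture the report_id from the submission response for later retrieval.
report_id = response.json().get("report_id")
print("Report ID:", report_id)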