References or sources cited by the AI model in its response.
Example["Citation 1","Citation 2"]
Raw JSON
{
"key": "ai.citations",
"brief": "References or sources cited by the AI model in its response.",
"type": "string[]",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": [
"Citation 1",
"Citation 2"
]
}
Documents or content chunks used as context for the AI model.
Example["document1.txt","document2.pdf"]
Raw JSON
{
"key": "ai.documents",
"brief": "Documents or content chunks used as context for the AI model.",
"type": "string[]",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": [
"document1.txt",
"document2.pdf"
]
}
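Since ai.citations and ai.documents typically travel together in retrieval-augmented calls, here is a minimal sketch of attaching both to a span. It assumes the Sentry Python SDK's start_span/set_data span API; the op name, span name, and stand-in values are illustrative, not prescribed by these attributes.

import sentry_sdk

sentry_sdk.init()  # supply your DSN in real code

# Stand-in values; in practice these come from your retrieval step
# and from the model's response.
documents = ["document1.txt", "document2.pdf"]
citations = ["Citation 1", "Citation 2"]

with sentry_sdk.start_span(op="ai.chat_completions", name="rag-answer") as span:
    # Both attributes are string arrays and both are flagged pii=true,
    # so scrub or truncate sensitive content before attaching it.
    span.set_data("ai.documents", documents)
    span.set_data("ai.citations", citations)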
For an AI model call, the preamble parameter. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.
Example: You are now a clown.
Raw JSON
{
"key": "ai.preamble",
"brief": "For an AI model call, the preamble parameter. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style.",
"type": "string",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": "You are now a clown."
}
When enabled, the user’s prompt will be sent to the model without any pre-processing.
Example: true
Raw JSON
{
"key": "ai.raw_prompting",
"brief": "When enabled, the user’s prompt will be sent to the model without any pre-processing.",
"type": "boolean",
"pii": {
"key": "false"
},
"is_in_otel": false,
"example": true
}
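ai.preamble and ai.raw_prompting both describe how the prompt was shaped before the call, so one hedged sketch covers them. As above, this assumes a Sentry-style set_data span API, and the values mirror the examples in these entries.

import sentry_sdk

sentry_sdk.init()  # supply your DSN in real code

with sentry_sdk.start_span(op="ai.chat_completions", name="styled-chat") as span:
    # The preamble steers the model's overall behavior and style (pii=true).
    span.set_data("ai.preamble", "You are now a clown.")
    # Record whether the prompt was sent without any pre-processing.
    span.set_data("ai.raw_prompting", False)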
For an AI model call, the format of the response.
Example: json_object
Raw JSON
{
"key": "ai.response_format",
"brief": "For an AI model call, the format of the response",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "json_object"
}
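As a sketch of where the value comes from: OpenAI-style chat completion clients accept a response_format parameter, and the attribute records just the format name. The model name and span details below are illustrative.

import sentry_sdk
from openai import OpenAI

sentry_sdk.init()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

with sentry_sdk.start_span(op="ai.chat_completions", name="json-answer") as span:
    response_format = {"type": "json_object"}
    client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        response_format=response_format,
        messages=[{"role": "user", "content": "Reply in JSON: list three primes."}],
    )
    # The attribute is a plain string, so record only the format name.
    span.set_data("ai.response_format", response_format["type"])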
Example["Hello, how are you?","What is the capital of France?"]
Raw JSON
{
"key": "ai.texts",
"brief": "Raw text inputs provided to the model.",
"type": "string[]",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": [
"Hello, how are you?",
"What is the capital of France?"
]
}
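Because ai.texts carries raw user input and is flagged pii=true, an instrumentation will typically trim or scrub entries before attaching them. A minimal sketch, again assuming a Sentry-style span API:

import sentry_sdk

sentry_sdk.init()  # supply your DSN in real code

texts = ["Hello, how are you?", "What is the capital of France?"]

with sentry_sdk.start_span(op="ai.chat_completions", name="chat-inputs") as span:
    # Truncate long entries; raw user text may contain personal data.
    span.set_data("ai.texts", [t[:200] for t in texts])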
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger the penalty applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Example: 0.5
Aliases: gen_ai.request.frequency_penalty
Use gen_ai.request.frequency_penalty instead.
Raw JSON
{
"key": "ai.frequency_penalty",
"brief": "Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 0.5,
"deprecation": {
"replacement": "gen_ai.request.frequency_penalty",
"_status": null
},
"alias": [
"gen_ai.request.frequency_penalty"
]
}
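To make the "proportional to how many times they have already appeared" part concrete, here is a toy sketch of the usual frequency-penalty formulation. Real providers differ in the details, so treat this as illustrative rather than any vendor's exact formula.

import numpy as np

def apply_frequency_penalty(logits, generated_ids, penalty=0.5):
    # Subtract penalty * count(token) from each already-seen token's logit,
    # so the deduction grows with how often the token has appeared.
    ids, counts = np.unique(generated_ids, return_counts=True)
    logits = logits.copy()
    logits[ids] -= penalty * counts
    return logits

logits = np.zeros(8)
print(apply_frequency_penalty(logits, [3, 3, 5], penalty=0.5))
# token 3 appeared twice -> -1.0; token 5 appeared once -> -0.5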
For an AI model call, the function that was called. This is deprecated for OpenAI and has been replaced by tool_calls.
Example: function_name
Aliases: gen_ai.tool.name
Use gen_ai.tool.name instead.
Raw JSON
{
"key": "ai.function_call",
"brief": "For an AI model call, the function that was called. This is deprecated for OpenAI, and replaced by tool_calls",
"type": "string",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": "function_name",
"deprecation": {
"replacement": "gen_ai.tool.name",
"_status": null
},
"alias": [
"gen_ai.tool.name"
]
}
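Since the attribute is deprecated in favor of gen_ai.tool.name, new instrumentation should write the replacement key. A minimal sketch, assuming a Sentry-style span API:

import sentry_sdk

sentry_sdk.init()  # supply your DSN in real code

with sentry_sdk.start_span(op="ai.chat_completions", name="tool-call") as span:
    # span.set_data("ai.function_call", "function_name")  # deprecated spelling
    span.set_data("gen_ai.tool.name", "function_name")    # preferred replacement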
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
Example: 0.5
Aliases: gen_ai.request.presence_penalty
Use gen_ai.request.presence_penalty instead.
Raw JSON
{
"key": "ai.presence_penalty",
"brief": "Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 0.5,
"deprecation": {
"replacement": "gen_ai.request.presence_penalty",
"_status": null
},
"alias": [
"gen_ai.request.presence_penalty"
]
}
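The contrast with the frequency penalty is easiest to see in code: presence is a 0/1 signal, so every token that has appeared at all gets the same flat deduction. As before, this is a toy sketch, not any provider's exact formula.

import numpy as np

def apply_presence_penalty(logits, generated_ids, penalty=0.5):
    # A flat deduction for every token seen at least once,
    # regardless of how many times it appeared.
    seen = np.unique(generated_ids)
    logits = logits.copy()
    logits[seen] -= penalty
    return logits

logits = np.zeros(8)
print(apply_presence_penalty(logits, [3, 3, 5], penalty=0.5))
# tokens 3 and 5 both get -0.5, whether seen once or twice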
The seed; ideally, models given the same seed and the same other parameters will produce exactly the same output.
Example: 1234567890
Aliases: gen_ai.request.seed
Use gen_ai.request.seed instead.
Raw JSON
{
"key": "ai.seed",
"brief": "The seed, ideally models given the same seed and same other parameters will produce the exact same output.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "1234567890",
"deprecation": {
"replacement": "gen_ai.request.seed",
"_status": null
},
"alias": [
"gen_ai.request.seed"
]
}
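A toy sampler shows why a fixed seed makes output reproducible when every other parameter is held constant; real model backends only approximate this (hence "ideally"), since hardware nondeterminism can still leak in.

import numpy as np

def sample_token(seed, logits):
    # Same seed + same logits -> same draw, every time.
    rng = np.random.default_rng(seed)
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([0.1, 2.0, 0.3])
assert sample_token(1234567890, logits) == sample_token(1234567890, logits)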
For an AI model call, the temperature parameter. Temperature controls how random the output will be.
Example: 0.1
Aliases: gen_ai.request.temperature
Use gen_ai.request.temperature instead.
Raw JSON
{
"key": "ai.temperature",
"brief": "For an AI model call, the temperature parameter. Temperature essentially means how random the output will be.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 0.1,
"deprecation": {
"replacement": "gen_ai.request.temperature",
"_status": null
},
"alias": [
"gen_ai.request.temperature"
]
}
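Concretely, temperature divides the logits before the softmax: low values sharpen the distribution toward the most likely token, high values flatten it toward uniform. A minimal sketch:

import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()            # for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.1))  # nearly one-hot: almost deterministic
print(softmax_with_temperature(logits, 2.0))  # much flatter: more random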
Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
Example: 35
Aliases: gen_ai.request.top_k
Use gen_ai.request.top_k instead.
Raw JSON
{
"key": "ai.top_k",
"brief": "Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 35,
"deprecation": {
"replacement": "gen_ai.request.top_k",
"_status": null
},
"alias": [
"gen_ai.request.top_k"
]
}
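The top_k=20 example in the brief translates to a simple filter over the logits. A toy sketch (ties at the cutoff are kept here, which some implementations handle differently):

import numpy as np

def top_k_filter(logits, k):
    # Mask everything below the k-th largest logit so it can never
    # be sampled.
    logits = np.asarray(logits, dtype=float)
    cutoff = np.sort(logits)[-k]
    return np.where(logits >= cutoff, logits, -np.inf)

print(top_k_filter([2.0, 1.0, 0.5, -1.0], k=2))
# [  2.   1. -inf -inf]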
Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).
Example: 0.7
Aliases: gen_ai.request.top_p
Use gen_ai.request.top_p instead.
Raw JSON
{
"key": "ai.top_p",
"brief": "Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 0.7,
"deprecation": {
"replacement": "gen_ai.request.top_p",
"_status": null
},
"alias": [
"gen_ai.request.top_p"
]
}
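Nucleus (top-p) sampling keeps the smallest set of tokens whose probabilities sum to at least p; for top_p=0.7, the tokens covering 70% of the probability mass survive. A toy sketch:

import numpy as np

def top_p_filter(logits, p):
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1]                  # most likely first
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, p) + 1]
    filtered = np.full_like(logits, -np.inf)
    filtered[keep] = logits[keep]
    return filtered

print(top_p_filter([2.0, 1.0, 0.5, -1.0], p=0.7))
# the two most likely tokens cover ~83% of the mass, so they survive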