{
"key": "gen_ai.conversation.id",
"brief": "The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "conv_5j66UpCpwteGg4YSxUnt7lPY",
"changelog": [
{
"version": "0.4.0",
"prs": [
250
]
}
]
}
The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
{
"key": "gen_ai.input.messages",
"brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"content\": \"Weather in Paris?\"}]}, {\"role\": \"assistant\", \"parts\": [{\"type\": \"tool_call\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"name\": \"get_weather\", \"arguments\": {\"location\": \"Paris\"}}]}, {\"role\": \"tool\", \"parts\": [{\"type\": \"tool_call_response\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"result\": \"rainy, 57°F\"}]}]",
"alias": [
"ai.texts"
],
"changelog": [
{
"version": "next",
"prs": [
264
]
},
{
"version": "0.4.0",
"prs": [
221
]
}
]
}
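As a sketch of how an SDK might populate this attribute (the message shapes below are illustrative, not a prescribed API), the array is built in memory and JSON-encoded into a single string:

```python
import json

# Illustrative message list following the format described above:
# role is one of the four allowed values; content is a string or a
# list of {"type": "text", "text": "..."} objects.
messages = [
    {"role": "system", "content": "You are a weather assistant."},
    {"role": "user", "content": [{"type": "text", "text": "Weather in Paris?"}]},
]

VALID_ROLES = {"user", "assistant", "tool", "system"}
assert all(m["role"] in VALID_ROLES for m in messages)

# The attribute value is the stringified (JSON-encoded) array.
gen_ai_input_messages = json.dumps(messages)
```

With an OpenTelemetry span this would typically be attached via `span.set_attribute("gen_ai.input.messages", gen_ai_input_messages)`.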
The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.
{
"key": "gen_ai.operation.name",
"brief": "The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "chat",
"changelog": [
{
"version": "0.4.0",
"prs": [
225
]
},
{
"version": "0.1.0",
"prs": [
62,
127
]
}
]
}
The type of AI operation. Must be one of 'agent' (invoke_agent and create_agent spans), 'ai_client' (any LLM call), 'tool' (execute_tool spans), 'handoff' (handoff spans), or 'other' (input and output processors, skill loading, guardrails, etc.). Added during ingestion based on span.op and gen_ai.operation.type. Used to filter and aggregate data in the UI.
{
"key": "gen_ai.operation.type",
"brief": "The type of AI operation. Must be one of 'agent' (invoke_agent and create_agent spans), 'ai_client' (any LLM call), 'tool' (execute_tool spans), 'handoff' (handoff spans), 'other' (input and output processors, skill loading, guardrails etc.) . Added during ingestion based on span.op and gen_ai.operation.type. Used to filter and aggregate data in the UI",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "tool",
"changelog": [
{
"version": "0.4.0",
"prs": [
257
]
},
{
"version": "0.1.0",
"prs": [
113,
127
]
}
]
}
The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.
Example[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]
{
"key": "gen_ai.output.messages",
"brief": "The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"role\": \"assistant\", \"parts\": [{\"type\": \"text\", \"content\": \"The weather in Paris is currently rainy with a temperature of 57°F.\"}], \"finish_reason\": \"stop\"}]",
"changelog": [
{
"version": "0.4.0",
"prs": [
221
]
}
]
}
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
{
"key": "gen_ai.request.frequency_penalty",
"brief": "Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.5,
"alias": [
"ai.frequency_penalty"
],
"changelog": [
{
"version": "0.4.0",
"prs": [
228
]
},
{
"version": "0.1.0",
"prs": [
57
]
}
]
}
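The penalty mechanics can be sketched on raw logits. The exact formula is provider-specific; the version below (logit minus the penalty times the token's prior count, plus a flat presence term) follows the formulation OpenAI documents, and the token names and numbers are made up for illustration:

```python
from collections import Counter

def penalized_logits(logits, generated_tokens, frequency_penalty=0.0, presence_penalty=0.0):
    """Apply frequency/presence penalties to per-token logits.

    Each token's logit is reduced by frequency_penalty times the number
    of times it has already appeared, plus presence_penalty once if it
    has appeared at all.
    """
    counts = Counter(generated_tokens)
    return {
        token: logit
        - frequency_penalty * counts[token]
        - presence_penalty * (1 if counts[token] > 0 else 0)
        for token, logit in logits.items()
    }

# "the" has already appeared twice, so it is penalized the most.
logits = {"the": 2.0, "a": 1.2, "rain": 1.0}
adjusted = penalized_logits(logits, ["the", "the", "a"], frequency_penalty=0.5)
```

Here `"the"` drops from 2.0 to about 1.0, `"a"` to about 0.7, and the unseen `"rain"` is untouched.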
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
The seed; ideally, models given the same seed and the same other parameters will produce the exact same output.
{
"key": "gen_ai.request.seed",
"brief": "The seed, ideally models given the same seed and same other parameters will produce the exact same output.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "1234567890",
"alias": [
"ai.seed"
],
"changelog": [
{
"version": "0.1.0",
"prs": [
57,
127
]
}
]
}
Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
{
"key": "gen_ai.request.top_k",
"brief": "Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 35,
"alias": [
"ai.top_k"
],
"changelog": [
{
"version": "0.4.0",
"prs": [
228
]
},
{
"version": "0.1.0",
"prs": [
57
]
}
]
}
Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).
{
"key": "gen_ai.request.top_p",
"brief": "Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.7,
"alias": [
"ai.top_p"
],
"changelog": [
{
"version": "0.4.0",
"prs": [
228
]
},
{
"version": "0.1.0",
"prs": [
57
]
}
]
}
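The difference between the two sampling filters can be shown on a toy distribution (the probabilities below are invented; real implementations operate on tensors of logits rather than dicts):

```python
def top_k_filter(probs, k):
    """Keep only the k highest-probability tokens."""
    kept = sorted(probs, key=probs.get, reverse=True)[:k]
    return {token: probs[token] for token in kept}

def top_p_filter(probs, p):
    """Keep the smallest high-probability set whose cumulative mass reaches p."""
    kept, total = {}, 0.0
    for token in sorted(probs, key=probs.get, reverse=True):
        kept[token] = probs[token]
        total += probs[token]
        if total >= p:
            break
    return kept

probs = {"rainy": 0.5, "sunny": 0.3, "snowy": 0.15, "foggy": 0.05}
```

With `k=2`, only `rainy` and `sunny` survive; with `p=0.7`, the same two tokens survive because their cumulative mass (0.8) is the first to reach 0.7.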
{
"key": "gen_ai.tool.call.arguments",
"brief": "The arguments of the tool call. It has to be a stringified version of the arguments to the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "{\"location\": \"Paris\"}",
"alias": [
"gen_ai.tool.input"
],
"changelog": [
{
"version": "next",
"prs": [
265
]
},
{
"version": "0.4.0",
"prs": [
221
]
}
]
}
{
"key": "gen_ai.tool.call.result",
"brief": "The result of the tool call. It has to be a stringified version of the result of the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "rainy, 57°F",
"alias": [
"gen_ai.tool.output",
"gen_ai.tool.message"
],
"changelog": [
{
"version": "next",
"prs": [
265
]
},
{
"version": "0.4.0",
"prs": [
221
]
}
]
}
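A minimal sketch of how an instrumentation might derive both tool-call attributes from one invocation (the `stringify` helper is hypothetical; the spec only requires that non-string values end up stringified):

```python
import json

def stringify(value):
    """Stringify a tool-call value: strings pass through, everything else is JSON-encoded."""
    return value if isinstance(value, str) else json.dumps(value)

# Hypothetical tool invocation.
arguments = {"location": "Paris"}
result = "rainy, 57°F"

gen_ai_tool_call_arguments = stringify(arguments)  # '{"location": "Paris"}'
gen_ai_tool_call_result = stringify(result)        # already a string, passed through
```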
The list of source system tool definitions available to the GenAI agent or model.
Example[{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]
{
"key": "gen_ai.tool.definitions",
"brief": "The list of source system tool definitions available to the GenAI agent or model.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"type\": \"function\", \"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]}}, \"required\": [\"location\", \"unit\"]}}]",
"changelog": [
{
"version": "0.4.0",
"prs": [
221
]
}
]
}
{
"key": "gen_ai.tool.description",
"brief": "The description of the tool being used.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "Searches the web for current information about a topic",
"changelog": [
{
"version": "0.1.0",
"prs": [
62,
127
]
}
]
}
{
"key": "gen_ai.usage.input_tokens.cache_write",
"brief": "The number of tokens written to the cache when processing the AI input (prompt).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 100,
"changelog": [
{
"version": "0.4.0",
"prs": [
217,
228
]
}
]
}
The available tools for the model. It has to be a stringified version of an array of objects.
Example[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]
{
"key": "gen_ai.request.available_tools",
"brief": "The available tools for the model. It has to be a stringified version of an array of objects.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[{\"name\": \"get_weather\", \"description\": \"Get the weather for a given location\"}, {\"name\": \"get_news\", \"description\": \"Get the news for a given topic\"}]",
"deprecation": {
"replacement": "gen_ai.tool.definitions",
"_status": null
},
"alias": [],
"changelog": [
{
"version": "0.4.0",
"prs": [
221
]
},
{
"version": "0.1.0",
"prs": [
63,
127
]
}
]
}
The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
Example[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]
{
"key": "gen_ai.request.messages",
"brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[{\"role\": \"system\", \"content\": \"Generate a random number.\"}, {\"role\": \"user\", \"content\": [{\"text\": \"Generate a random number between 0 and 10.\", \"type\": \"text\"}]}, {\"role\": \"tool\", \"content\": {\"toolCallId\": \"1\", \"toolName\": \"Weather\", \"output\": \"rainy\"}}]",
"deprecation": {
"replacement": "gen_ai.input.messages",
"_status": null
},
"alias": [
"ai.input_messages"
],
"changelog": [
{
"version": "0.4.0",
"prs": [
221
]
},
{
"version": "0.1.0",
"prs": [
63,
74,
108,
119,
122
]
}
]
}
The model's response text messages. It has to be a stringified version of an array of response text messages.
Example["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]
{
"key": "gen_ai.response.text",
"brief": "The model's response text messages. It has to be a stringified version of an array of response text messages.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[\"The weather in Paris is rainy and overcast, with temperatures around 57°F\", \"The weather in London is sunny and warm, with temperatures around 65°F\"]",
"deprecation": {
"replacement": "gen_ai.output.messages",
"_status": null
},
"alias": [],
"changelog": [
{
"version": "0.4.0",
"prs": [
221
]
},
{
"version": "0.1.0",
"prs": [
63,
74
]
}
]
}
{
"key": "gen_ai.tool.input",
"brief": "The input of the tool being used. It has to be a stringified version of the input to the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "{\"location\": \"Paris\"}",
"deprecation": {
"replacement": "gen_ai.tool.call.arguments",
"_status": null
},
"alias": [
"gen_ai.tool.call.arguments"
],
"changelog": [
{
"version": "next",
"prs": [
265
]
},
{
"version": "0.1.0",
"prs": [
63,
74
]
}
]
}
{
"key": "gen_ai.tool.output",
"brief": "The output of the tool being used. It has to be a stringified version of the output of the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "rainy, 57°F",
"deprecation": {
"replacement": "gen_ai.tool.call.result",
"_status": null
},
"alias": [
"gen_ai.tool.call.result",
"gen_ai.tool.message"
],
"changelog": [
{
"version": "next",
"prs": [
265
]
},
{
"version": "0.1.0",
"prs": [
63,
74
]
}
]
}
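Since several deprecated keys above name a replacement, a consumer might normalize incoming attributes as follows. The mapping mirrors the deprecation entries in this section; the `normalize` helper is illustrative, not part of any SDK:

```python
# Deprecated attribute key -> replacement key, per the deprecation entries.
DEPRECATED = {
    "gen_ai.tool.input": "gen_ai.tool.call.arguments",
    "gen_ai.tool.output": "gen_ai.tool.call.result",
    "gen_ai.request.messages": "gen_ai.input.messages",
    "gen_ai.response.text": "gen_ai.output.messages",
    "gen_ai.request.available_tools": "gen_ai.tool.definitions",
}

def normalize(attributes):
    """Rewrite deprecated attribute keys to their replacements.

    A value already present under the replacement key takes precedence
    over one carried under the deprecated key.
    """
    out = {k: v for k, v in attributes.items() if k not in DEPRECATED}
    for key, value in attributes.items():
        if key in DEPRECATED:
            out.setdefault(DEPRECATED[key], value)
    return out
```

For example, `{"gen_ai.tool.output": "rainy, 57°F"}` normalizes to `{"gen_ai.tool.call.result": "rainy, 57°F"}`.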