The name of the agent being used.
Example: ResearchAssistant
Raw JSON
{
"key": "gen_ai.agent.name",
"brief": "The name of the agent being used.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "ResearchAssistant"
}
The model's response message.
Example: The weather in Paris is rainy and overcast, with temperatures around 57°F
Raw JSON
{
"key": "gen_ai.choice",
"brief": "The model's response message.",
"type": "string",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": "The weather in Paris is rainy and overcast, with temperatures around 57°F"
}
The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.
Example: conv_5j66UpCpwteGg4YSxUnt7lPY
Raw JSON
{
"key": "gen_ai.conversation.id",
"brief": "The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "conv_5j66UpCpwteGg4YSxUnt7lPY"
}
The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).
Example: 123.45
Raw JSON
{
"key": "gen_ai.cost.input_tokens",
"brief": "The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 123.45
}
The cost of tokens used for creating the AI output in USD (without reasoning tokens).
Example: 123.45
Raw JSON
{
"key": "gen_ai.cost.output_tokens",
"brief": "The cost of tokens used for creating the AI output in USD (without reasoning tokens).",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 123.45
}
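Both cost attributes are plain USD doubles, so the only arithmetic involved is tokens times price. A minimal sketch in Python, using hypothetical per-token prices (real pricing depends on the provider and model):

# Hypothetical per-token prices in USD; real values depend on the provider and model.
PRICE_PER_INPUT_TOKEN = 5e-06     # applied to non-cached input (prompt) tokens
PRICE_PER_OUTPUT_TOKEN = 1.5e-05  # applied to output tokens, excluding reasoning tokens

def estimate_costs(input_tokens: int, output_tokens: int) -> dict:
    """Return values suitable for gen_ai.cost.input_tokens / gen_ai.cost.output_tokens."""
    return {
        "gen_ai.cost.input_tokens": input_tokens * PRICE_PER_INPUT_TOKEN,
        "gen_ai.cost.output_tokens": output_tokens * PRICE_PER_OUTPUT_TOKEN,
    }

# 2,000 non-cached input tokens and 500 output tokens -> roughly 0.01 and 0.0075 USD
print(estimate_costs(2000, 500))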
The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
{
"key": "gen_ai.input.messages",
"brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"content\": \"Weather in Paris?\"}]}, {\"role\": \"assistant\", \"parts\": [{\"type\": \"tool_call\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"name\": \"get_weather\", \"arguments\": {\"location\": \"Paris\"}}]}, {\"role\": \"tool\", \"parts\": [{\"type\": \"tool_call_response\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"result\": \"rainy, 57°F\"}]}]"
}
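A minimal sketch of producing this stringified value, assuming an OpenTelemetry tracer is already configured; the tracer name and span name are illustrative, and the message structure mirrors the example above:

import json
from opentelemetry import trace  # assumes an OpenTelemetry SDK is set up elsewhere

tracer = trace.get_tracer("genai-demo")  # illustrative instrumentation name

messages = [
    {"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]},
    {"role": "assistant", "parts": [{"type": "tool_call", "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
                                     "name": "get_weather", "arguments": {"location": "Paris"}}]},
    {"role": "tool", "parts": [{"type": "tool_call_response", "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
                                "result": "rainy, 57°F"}]},
]

with tracer.start_as_current_span("chat gpt-4-turbo-preview") as span:
    # The attribute must carry the stringified array, not the Python list itself.
    span.set_attribute("gen_ai.input.messages", json.dumps(messages))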
The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.
Example: chat
Raw JSON
{
"key": "gen_ai.operation.name",
"brief": "The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "chat"
}
The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier.
Example: tool
Raw JSON
{
"key": "gen_ai.operation.type",
"brief": "The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "tool"
}
The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.
Example[{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]
Raw JSON
{
"key": "gen_ai.output.messages",
"brief": "The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"role\": \"assistant\", \"parts\": [{\"type\": \"text\", \"content\": \"The weather in Paris is currently rainy with a temperature of 57°F.\"}], \"finish_reason\": \"stop\"}]"
}
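As with the input messages, the value is a JSON string rather than a structured object. A small helper sketch, assuming an OpenTelemetry-compatible span; the helper name and signature are made up for illustration:

import json
from opentelemetry.trace import Span  # type hint only; any span with set_attribute works

def record_output_message(span: Span, text: str, finish_reason: str = "stop") -> None:
    """Serialize a single assistant text response into the stringified format above."""
    output_messages = [{
        "role": "assistant",
        "parts": [{"type": "text", "content": text}],
        "finish_reason": finish_reason,
    }]
    span.set_attribute("gen_ai.output.messages", json.dumps(output_messages))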
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
Example: 0.5
Aliases: ai.frequency_penalty
Raw JSON
{
"key": "gen_ai.request.frequency_penalty",
"brief": "Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.5,
"alias": [
"ai.frequency_penalty"
]
}
The maximum number of tokens to generate in the response.
Example: 2048
Raw JSON
{
"key": "gen_ai.request.max_tokens",
"brief": "The maximum number of tokens to generate in the response.",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 2048
}
The model identifier being used for the request.
Example: gpt-4-turbo-preview
Raw JSON
{
"key": "gen_ai.request.model",
"brief": "The model identifier being used for the request.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "gpt-4-turbo-preview"
}
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
Example: 0.5
Aliases: ai.presence_penalty
Raw JSON
{
"key": "gen_ai.request.presence_penalty",
"brief": "Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.5,
"alias": [
"ai.presence_penalty"
]
}
The seed; ideally, models given the same seed and the same other parameters will produce exactly the same output.
Example: 1234567890
Aliases: ai.seed
Raw JSON
{
"key": "gen_ai.request.seed",
"brief": "The seed, ideally models given the same seed and same other parameters will produce the exact same output.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "1234567890",
"alias": [
"ai.seed"
]
}
For an AI model call, the temperature parameter. Temperature controls how random the output will be.
Example: 0.1
Aliases: ai.temperature
Raw JSON
{
"key": "gen_ai.request.temperature",
"brief": "For an AI model call, the temperature parameter. Temperature essentially means how random the output will be.",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.1,
"alias": [
"ai.temperature"
]
}
Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
Example: 35
Aliases: ai.top_k
Raw JSON
{
"key": "gen_ai.request.top_k",
"brief": "Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 35,
"alias": [
"ai.top_k"
]
}
Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).
Example: 0.7
Aliases: ai.top_p
Raw JSON
{
"key": "gen_ai.request.top_p",
"brief": "Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).",
"type": "double",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 0.7,
"alias": [
"ai.top_p"
]
}
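The request parameters above (model, max_tokens, the penalties, seed, temperature, top_k, top_p) are typically recorded together on the chat span. A hedged sketch, again assuming an OpenTelemetry tracer and reusing the example values from this page:

from opentelemetry import trace  # assumes an OpenTelemetry SDK is set up elsewhere

tracer = trace.get_tracer("genai-demo")  # illustrative instrumentation name

# Values mirror the examples above; note that gen_ai.request.seed is typed as a string.
request_attributes = {
    "gen_ai.request.model": "gpt-4-turbo-preview",
    "gen_ai.request.max_tokens": 2048,
    "gen_ai.request.frequency_penalty": 0.5,
    "gen_ai.request.presence_penalty": 0.5,
    "gen_ai.request.seed": "1234567890",
    "gen_ai.request.temperature": 0.1,
    "gen_ai.request.top_k": 35,
    "gen_ai.request.top_p": 0.7,
}

with tracer.start_as_current_span("chat gpt-4-turbo-preview") as span:
    for key, value in request_attributes.items():
        span.set_attribute(key, value)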
Whether or not the AI model call's response was streamed back asynchronously.
Example: true
Aliases: ai.streaming
Raw JSON
{
"key": "gen_ai.response.streaming",
"brief": "Whether or not the AI model call's response was streamed back asynchronously",
"type": "boolean",
"pii": {
"key": "false"
},
"is_in_otel": false,
"example": true,
"alias": [
"ai.streaming"
]
}
The system instructions passed to the model.
Example: You are a helpful assistant
Raw JSON
{
"key": "gen_ai.system_instructions",
"brief": "The system instructions passed to the model.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "You are a helpful assistant"
}
The arguments of the tool call. It has to be a stringified version of the arguments to the tool.
Example{"location": "Paris"}
Raw JSON
{
"key": "gen_ai.tool.call.arguments",
"brief": "The arguments of the tool call. It has to be a stringified version of the arguments to the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "{\"location\": \"Paris\"}"
}
The result of the tool call. It has to be a stringified version of the result of the tool.
Example: rainy, 57°F
Raw JSON
{
"key": "gen_ai.tool.call.result",
"brief": "The result of the tool call. It has to be a stringified version of the result of the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "rainy, 57°F"
}
The list of source system tool definitions available to the GenAI agent or model.
Example[{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]
Raw JSON
{
"key": "gen_ai.tool.definitions",
"brief": "The list of source system tool definitions available to the GenAI agent or model.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"type\": \"function\", \"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]}}, \"required\": [\"location\", \"unit\"]}}]"
}
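To show how the tool-call attributes relate, here is a rough sketch of an execute_tool span, assuming an OpenTelemetry tracer; the get_weather function is a stand-in, and the choice of which attributes to set is illustrative rather than prescriptive:

import json
from opentelemetry import trace  # assumes an OpenTelemetry SDK is set up elsewhere

tracer = trace.get_tracer("genai-demo")

def get_weather(location: str) -> str:
    """Stand-in tool implementation for this sketch."""
    return "rainy, 57°F"

tool_definitions = [{"type": "function", "name": "get_weather",
                     "description": "Get the current weather in a given location",
                     "parameters": {"type": "object",
                                    "properties": {"location": {"type": "string"}},
                                    "required": ["location"]}}]
arguments = {"location": "Paris"}

with tracer.start_as_current_span("execute_tool get_weather") as span:
    span.set_attribute("gen_ai.operation.name", "execute_tool")
    span.set_attribute("gen_ai.operation.type", "tool")
    span.set_attribute("gen_ai.tool.definitions", json.dumps(tool_definitions))
    span.set_attribute("gen_ai.tool.call.arguments", json.dumps(arguments))
    result = get_weather(**arguments)
    span.set_attribute("gen_ai.tool.call.result", result)  # already a string here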
The description of the tool being used.
Example: Searches the web for current information about a topic
Raw JSON
{
"key": "gen_ai.tool.description",
"brief": "The description of the tool being used.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "Searches the web for current information about a topic"
}
The input of the tool being used. It has to be a stringified version of the input to the tool.
Example{"location": "Paris"}
Raw JSON
{
"key": "gen_ai.tool.input",
"brief": "The input of the tool being used. It has to be a stringified version of the input to the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "{\"location\": \"Paris\"}",
"alias": []
}
The response from a tool or function call passed to the model.
Example: rainy, 57°F
Raw JSON
{
"key": "gen_ai.tool.message",
"brief": "The response from a tool or function call passed to the model.",
"type": "string",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": "rainy, 57°F"
}
The output of the tool being used. It has to be a stringified version of the output of the tool.
Example: rainy, 57°F
Raw JSON
{
"key": "gen_ai.tool.output",
"brief": "The output of the tool being used. It has to be a stringified version of the output of the tool.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "rainy, 57°F",
"alias": []
}
The number of tokens used to process the AI input (prompt) without cached input tokens.
Example: 10
Aliases: ai.prompt_tokens.used, gen_ai.usage.prompt_tokens
Raw JSON
{
"key": "gen_ai.usage.input_tokens",
"brief": "The number of tokens used to process the AI input (prompt) without cached input tokens.",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 10,
"alias": [
"ai.prompt_tokens.used",
"gen_ai.usage.prompt_tokens"
]
}
The number of tokens written to the cache when processing the AI input (prompt).
Example: 100
Raw JSON
{
"key": "gen_ai.usage.input_tokens.cache_write",
"brief": "The number of tokens written to the cache when processing the AI input (prompt).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 100
}
The number of cached tokens used to process the AI input (prompt).
Example: 50
Raw JSON
{
"key": "gen_ai.usage.input_tokens.cached",
"brief": "The number of cached tokens used to process the AI input (prompt).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 50
}
The number of tokens used for creating the AI output (without reasoning tokens).
Example: 10
Aliases: ai.completion_tokens.used, gen_ai.usage.completion_tokens
Raw JSON
{
"key": "gen_ai.usage.output_tokens",
"brief": "The number of tokens used for creating the AI output (without reasoning tokens).",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": 10,
"alias": [
"ai.completion_tokens.used",
"gen_ai.usage.completion_tokens"
]
}
The number of tokens used for reasoning to create the AI output.
Example: 75
Raw JSON
{
"key": "gen_ai.usage.output_tokens.reasoning",
"brief": "The number of tokens used for reasoning to create the AI output.",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 75
}
The total number of tokens used to process the prompt (input tokens plus output tokens).
Example: 20
Aliases: ai.total_tokens.used
Raw JSON
{
"key": "gen_ai.usage.total_tokens",
"brief": "The total number of tokens used to process the prompt. (input tokens plus output todkens)",
"type": "integer",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": 20,
"alias": [
"ai.total_tokens.used"
]
}
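The usage attributes are plain integer counters. Per the briefs above, the total is input tokens plus output tokens, while cached input tokens and reasoning tokens are reported separately; whether a given provider also folds cached or reasoning tokens into its own totals is provider-specific. A small sketch with the example values:

# Values mirror the examples above.
usage = {
    "gen_ai.usage.input_tokens": 10,              # prompt tokens, excluding cached ones
    "gen_ai.usage.input_tokens.cached": 50,       # cached prompt tokens, reported separately
    "gen_ai.usage.input_tokens.cache_write": 100, # tokens written to the cache
    "gen_ai.usage.output_tokens": 10,             # output tokens, excluding reasoning tokens
    "gen_ai.usage.output_tokens.reasoning": 75,   # reasoning tokens, reported separately
}

# Per the attribute brief, the total is input tokens plus output tokens: 10 + 10 = 20.
usage["gen_ai.usage.total_tokens"] = (
    usage["gen_ai.usage.input_tokens"] + usage["gen_ai.usage.output_tokens"]
)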
The user message passed to the model.
Example: What's the weather in Paris?
Raw JSON
{
"key": "gen_ai.user.message",
"brief": "The user message passed to the model.",
"type": "string",
"pii": {
"key": "true"
},
"is_in_otel": false,
"example": "What's the weather in Paris?"
}
Deprecated Attributes
These attributes are deprecated and should not be used in new code.
See each attribute for migration guidance.
The input messages sent to the model.
Example: [{"role": "user", "message": "hello"}]
Deprecated from OTEL; use gen_ai.input.messages with the new format instead.
Raw JSON
{
"key": "gen_ai.prompt",
"brief": "The input messages sent to the model",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": true,
"example": "[{\"role\": \"user\", \"message\": \"hello\"}]",
"deprecation": {
"reason": "Deprecated from OTEL, use gen_ai.input.messages with the new format instead.",
"_status": null
}
}
The available tools for the model. It has to be a stringified version of an array of objects.
Example[{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]
Use gen_ai.tool.definitions instead.
Raw JSON
{
"key": "gen_ai.request.available_tools",
"brief": "The available tools for the model. It has to be a stringified version of an array of objects.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[{\"name\": \"get_weather\", \"description\": \"Get the weather for a given location\"}, {\"name\": \"get_news\", \"description\": \"Get the news for a given topic\"}]",
"deprecation": {
"replacement": "gen_ai.tool.definitions",
"_status": null
},
"alias": []
}
The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.
Example[{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]
Aliasesai.input_messages
Use gen_ai.input.messages instead.
Raw JSON
{
"key": "gen_ai.request.messages",
"brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[{\"role\": \"system\", \"content\": \"Generate a random number.\"}, {\"role\": \"user\", \"content\": [{\"text\": \"Generate a random number between 0 and 10.\", \"type\": \"text\"}]}, {\"role\": \"tool\", \"content\": {\"toolCallId\": \"1\", \"toolName\": \"Weather\", \"output\": \"rainy\"}}]",
"deprecation": {
"replacement": "gen_ai.input.messages",
"_status": null
},
"alias": [
"ai.input_messages"
]
}
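Since gen_ai.request.messages is replaced by gen_ai.input.messages, a migration has to convert the old content-based objects into the new parts-based format. The sketch below is one possible conversion based on the examples on this page; the mapping of tool content (toolCallId/output) onto tool_call_response parts is an assumption, not a specification:

import json

def migrate_request_messages(old_value: str) -> str:
    """Convert a deprecated gen_ai.request.messages string into the
    gen_ai.input.messages format sketched earlier on this page."""
    new_messages = []
    for message in json.loads(old_value):
        role, content = message["role"], message.get("content")
        if role == "tool" and isinstance(content, dict):
            parts = [{"type": "tool_call_response",
                      "id": content.get("toolCallId"),
                      "result": content.get("output")}]
        elif isinstance(content, str):
            parts = [{"type": "text", "content": content}]
        else:  # a list of {"type": "text", "text": "..."} objects
            parts = [{"type": "text", "content": item.get("text", "")} for item in content or []]
        new_messages.append({"role": role, "parts": parts})
    return json.dumps(new_messages)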
The model's response text messages. It has to be a stringified version of an array of response text messages.
Example["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]
Use gen_ai.output.messages instead.
Raw JSON
{
"key": "gen_ai.response.text",
"brief": "The model's response text messages. It has to be a stringified version of an array of response text messages.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[\"The weather in Paris is rainy and overcast, with temperatures around 57°F\", \"The weather in London is sunny and warm, with temperatures around 65°F\"]",
"deprecation": {
"replacement": "gen_ai.output.messages",
"_status": null
},
"alias": []
}
The tool calls in the model's response. It has to be a stringified version of an array of objects.
Example: [{"name": "get_weather", "arguments": {"location": "Paris"}}]
Use gen_ai.output.messages instead.
Raw JSON
{
"key": "gen_ai.response.tool_calls",
"brief": "The tool calls in the model's response. It has to be a stringified version of an array of objects.",
"type": "string",
"pii": {
"key": "maybe"
},
"is_in_otel": false,
"example": "[{\"name\": \"get_weather\", \"arguments\": {\"location\": \"Paris\"}}]",
"deprecation": {
"replacement": "gen_ai.output.messages",
"_status": null
},
"alias": []
}