gen_ai Attributes

53 attributes in this category. 45 stable · 8 deprecated

Stable Attributes

gen_ai.agent.name

string PII: Maybe OTel: True

The name of the agent being used.

Example ResearchAssistant
Raw JSON
{
  "key": "gen_ai.agent.name",
  "brief": "The name of the agent being used.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "ResearchAssistant"
}

gen_ai.assistant.message

string PII: True OTel: False

The assistant message passed to the model.

Example get_weather tool call
Raw JSON
{
  "key": "gen_ai.assistant.message",
  "brief": "The assistant message passed to the model.",
  "type": "string",
  "pii": {
    "key": "true"
  },
  "is_in_otel": false,
  "example": "get_weather tool call"
}

gen_ai.choice

string PII: True OTel: False

The model's response message.

Example The weather in Paris is rainy and overcast, with temperatures around 57°F
Raw JSON
{
  "key": "gen_ai.choice",
  "brief": "The model's response message.",
  "type": "string",
  "pii": {
    "key": "true"
  },
  "is_in_otel": false,
  "example": "The weather in Paris is rainy and overcast, with temperatures around 57°F"
}

gen_ai.conversation.id

string PII: Maybe OTel: True

The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.

Example conv_5j66UpCpwteGg4YSxUnt7lPY
Raw JSON
{
  "key": "gen_ai.conversation.id",
  "brief": "The unique identifier for a conversation (session, thread), used to store and correlate messages within this conversation.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "conv_5j66UpCpwteGg4YSxUnt7lPY"
}

gen_ai.cost.input_tokens

double PII: Maybe OTel: False

The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).

Example 123.45
Raw JSON
{
  "key": "gen_ai.cost.input_tokens",
  "brief": "The cost of tokens used to process the AI input (prompt) in USD (without cached input tokens).",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 123.45
}

gen_ai.cost.output_tokens

double PII: Maybe OTel: False

The cost of tokens used for creating the AI output in USD (without reasoning tokens).

Example 123.45
Raw JSON
{
  "key": "gen_ai.cost.output_tokens",
  "brief": "The cost of tokens used for creating the AI output in USD (without reasoning tokens).",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 123.45
}

gen_ai.cost.total_tokens

double PII: Maybe OTel: False

The total cost for the tokens used.

Example 12.34
Raw JSON
{
  "key": "gen_ai.cost.total_tokens",
  "brief": "The total cost for the tokens used.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 12.34
}
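The three cost attributes are all reported in USD and relate by simple arithmetic. A minimal sketch deriving them from token counts; the per-token prices here are hypothetical placeholders, not any provider's actual rates:

```python
# Sketch: deriving the gen_ai.cost.* attribute values from token counts.
# The per-token prices are hypothetical; substitute your provider's rates.
PRICE_PER_INPUT_TOKEN = 0.00001   # USD per input token (hypothetical)
PRICE_PER_OUTPUT_TOKEN = 0.00003  # USD per output token (hypothetical)

def token_costs(input_tokens: int, output_tokens: int) -> dict:
    """Return the cost attributes, in USD, for a single model call."""
    input_cost = input_tokens * PRICE_PER_INPUT_TOKEN
    output_cost = output_tokens * PRICE_PER_OUTPUT_TOKEN
    return {
        "gen_ai.cost.input_tokens": input_cost,
        "gen_ai.cost.output_tokens": output_cost,
        "gen_ai.cost.total_tokens": input_cost + output_cost,
    }

costs = token_costs(1000, 500)
```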

gen_ai.embeddings.input

string PII: Maybe OTel: False

The input to the embeddings model.

Example What's the weather in Paris?
Raw JSON
{
  "key": "gen_ai.embeddings.input",
  "brief": "The input to the embeddings model.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "What's the weather in Paris?"
}

gen_ai.input.messages

string PII: Maybe OTel: True

The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.

Example [{"role": "user", "parts": [{"type": "text", "content": "Weather in Paris?"}]}, {"role": "assistant", "parts": [{"type": "tool_call", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "name": "get_weather", "arguments": {"location": "Paris"}}]}, {"role": "tool", "parts": [{"type": "tool_call_response", "id": "call_VSPygqKTWdrhaFErNvMV18Yl", "result": "rainy, 57°F"}]}]
Raw JSON
{
  "key": "gen_ai.input.messages",
  "brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "[{\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"content\": \"Weather in Paris?\"}]}, {\"role\": \"assistant\", \"parts\": [{\"type\": \"tool_call\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"name\": \"get_weather\", \"arguments\": {\"location\": \"Paris\"}}]}, {\"role\": \"tool\", \"parts\": [{\"type\": \"tool_call_response\", \"id\": \"call_VSPygqKTWdrhaFErNvMV18Yl\", \"result\": \"rainy, 57°F\"}]}]"
}
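The required shape is easiest to see built up in code. A minimal Python sketch of the documented format, serializing the array before using it as the attribute value; the tool-call id `call_1` is a made-up placeholder:

```python
import json

# The attribute value must be a JSON string (a stringified array of
# message objects), not the array itself.
messages = [
    {"role": "user",
     "parts": [{"type": "text", "content": "Weather in Paris?"}]},
    {"role": "assistant",
     "parts": [{"type": "tool_call", "id": "call_1",
                "name": "get_weather", "arguments": {"location": "Paris"}}]},
    {"role": "tool",
     "parts": [{"type": "tool_call_response", "id": "call_1",
                "result": "rainy, 57°F"}]},
]

# Every role must be one of the four allowed values.
ALLOWED_ROLES = {"user", "assistant", "tool", "system"}
assert all(m["role"] in ALLOWED_ROLES for m in messages)

# This string is the gen_ai.input.messages attribute value.
payload = json.dumps(messages, ensure_ascii=False)
parsed_back = json.loads(payload)  # round-trips to the same structure
```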

gen_ai.operation.name

string PII: Maybe OTel: True

The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.

Example chat
Raw JSON
{
  "key": "gen_ai.operation.name",
  "brief": "The name of the operation being performed. It has the following list of well-known values: 'chat', 'create_agent', 'embeddings', 'execute_tool', 'generate_content', 'invoke_agent', 'text_completion'. If one of them applies, then that value MUST be used. Otherwise a custom value MAY be used.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "chat"
}

gen_ai.operation.type

string PII: Maybe OTel: False

The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier.

Example tool
Raw JSON
{
  "key": "gen_ai.operation.type",
  "brief": "The type of AI operation. Must be one of 'agent', 'ai_client', 'tool', 'handoff', 'guardrail'. Makes querying for spans in the UI easier.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "tool"
}

gen_ai.output.messages

string PII: Maybe OTel: True

The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.

Example [{"role": "assistant", "parts": [{"type": "text", "content": "The weather in Paris is currently rainy with a temperature of 57°F."}], "finish_reason": "stop"}]
Raw JSON
{
  "key": "gen_ai.output.messages",
  "brief": "The model's response messages. It has to be a stringified version of an array of message objects, which can include text responses and tool calls.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "[{\"role\": \"assistant\", \"parts\": [{\"type\": \"text\", \"content\": \"The weather in Paris is currently rainy with a temperature of 57°F.\"}], \"finish_reason\": \"stop\"}]"
}

gen_ai.pipeline.name

string PII: Maybe OTel: False

Name of the AI pipeline or chain being executed.

Example Autofix Pipeline
Aliases ai.pipeline.name
Raw JSON
{
  "key": "gen_ai.pipeline.name",
  "brief": "Name of the AI pipeline or chain being executed.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "Autofix Pipeline",
  "alias": [
    "ai.pipeline.name"
  ]
}

gen_ai.request.frequency_penalty

double PII: Maybe OTel: True

Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.

Example 0.5
Aliases ai.frequency_penalty
Raw JSON
{
  "key": "gen_ai.request.frequency_penalty",
  "brief": "Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 0.5,
  "alias": [
    "ai.frequency_penalty"
  ]
}

gen_ai.request.max_tokens

integer PII: Maybe OTel: True

The maximum number of tokens to generate in the response.

Example 2048
Raw JSON
{
  "key": "gen_ai.request.max_tokens",
  "brief": "The maximum number of tokens to generate in the response.",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 2048
}

gen_ai.request.model

string PII: Maybe OTel: True

The model identifier being used for the request.

Example gpt-4-turbo-preview
Raw JSON
{
  "key": "gen_ai.request.model",
  "brief": "The model identifier being used for the request.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "gpt-4-turbo-preview"
}

gen_ai.request.presence_penalty

double PII: Maybe OTel: True

Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.

Example 0.5
Aliases ai.presence_penalty
Raw JSON
{
  "key": "gen_ai.request.presence_penalty",
  "brief": "Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 0.5,
  "alias": [
    "ai.presence_penalty"
  ]
}

gen_ai.request.seed

string PII: Maybe OTel: True

The seed. Ideally, models given the same seed and the same other parameters will produce the exact same output.

Example 1234567890
Aliases ai.seed
Raw JSON
{
  "key": "gen_ai.request.seed",
  "brief": "The seed. Ideally, models given the same seed and the same other parameters will produce the exact same output.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "1234567890",
  "alias": [
    "ai.seed"
  ]
}

gen_ai.request.temperature

double PII: Maybe OTel: True

For an AI model call, the temperature parameter, which controls how random the output is.

Example 0.1
Aliases ai.temperature
Raw JSON
{
  "key": "gen_ai.request.temperature",
  "brief": "For an AI model call, the temperature parameter, which controls how random the output is.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 0.1,
  "alias": [
    "ai.temperature"
  ]
}

gen_ai.request.top_k

integer PII: Maybe OTel: True

Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).

Example 35
Aliases ai.top_k
Raw JSON
{
  "key": "gen_ai.request.top_k",
  "brief": "Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 35,
  "alias": [
    "ai.top_k"
  ]
}

gen_ai.request.top_p

double PII: Maybe OTel: True

Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).

Example 0.7
Aliases ai.top_p
Raw JSON
{
  "key": "gen_ai.request.top_p",
  "brief": "Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 0.7,
  "alias": [
    "ai.top_p"
  ]
}
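The two sampling limits above can be illustrated with a toy distribution. A sketch over plain probabilities; real samplers operate on logits inside the model, and the token probabilities here are invented for illustration:

```python
# Toy next-token distribution (invented numbers for illustration).
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}

def top_k_filter(probs: dict, k: int) -> dict:
    """Keep only the k most likely tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs: dict, p: float) -> dict:
    """Keep the smallest set of most likely tokens whose cumulative
    probability mass reaches p (nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

kept_k = top_k_filter(probs, 2)    # the two highest-probability tokens
kept_p = top_p_filter(probs, 0.7)  # smallest set covering >= 70% of the mass
```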

gen_ai.response.finish_reasons

string PII: Maybe OTel: True

The reason why the model stopped generating.

Example COMPLETE
Aliases ai.finish_reason
Raw JSON
{
  "key": "gen_ai.response.finish_reasons",
  "brief": "The reason why the model stopped generating.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "COMPLETE",
  "alias": [
    "ai.finish_reason"
  ]
}

gen_ai.response.id

string PII: Maybe OTel: True

Unique identifier for the completion.

Example gen_123abc
Aliases ai.generation_id
Raw JSON
{
  "key": "gen_ai.response.id",
  "brief": "Unique identifier for the completion.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "gen_123abc",
  "alias": [
    "ai.generation_id"
  ]
}

gen_ai.response.model

string PII: Maybe OTel: True

The vendor-specific ID of the model used.

Example gpt-4
Aliases ai.model_id
Raw JSON
{
  "key": "gen_ai.response.model",
  "brief": "The vendor-specific ID of the model used.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "gpt-4",
  "alias": [
    "ai.model_id"
  ]
}

gen_ai.response.streaming

boolean PII: False OTel: False

Whether or not the AI model call's response was streamed back asynchronously.

Example true
Aliases ai.streaming
Raw JSON
{
  "key": "gen_ai.response.streaming",
  "brief": "Whether or not the AI model call's response was streamed back asynchronously.",
  "type": "boolean",
  "pii": {
    "key": "false"
  },
  "is_in_otel": false,
  "example": true,
  "alias": [
    "ai.streaming"
  ]
}

gen_ai.response.time_to_first_token

double PII: Maybe OTel: False

Time in seconds when the first response content chunk arrived in streaming responses.

Example 0.6853435
Raw JSON
{
  "key": "gen_ai.response.time_to_first_token",
  "brief": "Time in seconds when the first response content chunk arrived in streaming responses.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 0.6853435
}

gen_ai.response.tokens_per_second

double PII: Maybe OTel: False

The throughput of the response, in output tokens per second.

Example 12345.67
Raw JSON
{
  "key": "gen_ai.response.tokens_per_second",
  "brief": "The throughput of the response, in output tokens per second.",
  "type": "double",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 12345.67
}
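Both streaming metrics can be computed with two timestamps and a token counter around the stream loop. A sketch, where `stream` is any iterable of (chunk, token_count) pairs and the token counts stand in for real tokenizer output:

```python
import time

def stream_metrics(stream):
    """Compute time-to-first-token (seconds) and output-token throughput
    (tokens per second) while consuming a streaming response."""
    start = time.monotonic()
    first_token_at = None
    total_tokens = 0
    for _chunk, token_count in stream:
        if first_token_at is None:
            first_token_at = time.monotonic()
        total_tokens += token_count
    elapsed = time.monotonic() - start
    return {
        "gen_ai.response.time_to_first_token":
            None if first_token_at is None else first_token_at - start,
        "gen_ai.response.tokens_per_second":
            total_tokens / elapsed if elapsed > 0 else 0.0,
    }

metrics = stream_metrics([("The weather", 2), (" in Paris", 3)])
```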

gen_ai.system

string PII: Maybe OTel: True

The provider of the model.

Example openai
Aliases ai.model.provider
Raw JSON
{
  "key": "gen_ai.system",
  "brief": "The provider of the model.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "openai",
  "alias": [
    "ai.model.provider"
  ]
}

gen_ai.system_instructions

string PII: Maybe OTel: True

The system instructions passed to the model.

Example You are a helpful assistant
Raw JSON
{
  "key": "gen_ai.system_instructions",
  "brief": "The system instructions passed to the model.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "You are a helpful assistant"
}

gen_ai.tool.call.arguments

string PII: Maybe OTel: True

The arguments of the tool call. It has to be a stringified version of the arguments to the tool.

Example {"location": "Paris"}
Raw JSON
{
  "key": "gen_ai.tool.call.arguments",
  "brief": "The arguments of the tool call. It has to be a stringified version of the arguments to the tool.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "{\"location\": \"Paris\"}"
}

gen_ai.tool.call.result

string PII: Maybe OTel: True

The result of the tool call. It has to be a stringified version of the result of the tool.

Example rainy, 57°F
Raw JSON
{
  "key": "gen_ai.tool.call.result",
  "brief": "The result of the tool call. It has to be a stringified version of the result of the tool.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "rainy, 57°F"
}

gen_ai.tool.definitions

string PII: Maybe OTel: True

The list of source system tool definitions available to the GenAI agent or model.

Example [{"type": "function", "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "unit"]}}]
Raw JSON
{
  "key": "gen_ai.tool.definitions",
  "brief": "The list of source system tool definitions available to the GenAI agent or model.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "[{\"type\": \"function\", \"name\": \"get_current_weather\", \"description\": \"Get the current weather in a given location\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\"}, \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]}}, \"required\": [\"location\", \"unit\"]}}]"
}
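As with the message attributes, the value is a JSON string rather than a raw array. A sketch building one function-style definition whose schema fields mirror the example above:

```python
import json

def make_function_tool(name: str, description: str, parameters: dict) -> dict:
    """Build one function-style tool definition object."""
    return {"type": "function", "name": name,
            "description": description, "parameters": parameters}

tools = [make_function_tool(
    "get_current_weather",
    "Get the current weather in a given location",
    {"type": "object",
     "properties": {"location": {"type": "string"},
                    "unit": {"type": "string",
                             "enum": ["celsius", "fahrenheit"]}},
     "required": ["location", "unit"]},
)]

# This string is the gen_ai.tool.definitions attribute value.
tool_definitions = json.dumps(tools)
parsed_tools = json.loads(tool_definitions)
```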

gen_ai.tool.description

string PII: Maybe OTel: True

The description of the tool being used.

Example Searches the web for current information about a topic
Raw JSON
{
  "key": "gen_ai.tool.description",
  "brief": "The description of the tool being used.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "Searches the web for current information about a topic"
}

gen_ai.tool.input

string PII: Maybe OTel: False

The input of the tool being used. It has to be a stringified version of the input to the tool.

Example {"location": "Paris"}
Raw JSON
{
  "key": "gen_ai.tool.input",
  "brief": "The input of the tool being used. It has to be a stringified version of the input to the tool.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "{\"location\": \"Paris\"}",
  "alias": []
}

gen_ai.tool.message

string PII: True OTel: False

The response from a tool or function call passed to the model.

Example rainy, 57°F
Raw JSON
{
  "key": "gen_ai.tool.message",
  "brief": "The response from a tool or function call passed to the model.",
  "type": "string",
  "pii": {
    "key": "true"
  },
  "is_in_otel": false,
  "example": "rainy, 57°F"
}

gen_ai.tool.name

string PII: Maybe OTel: True

Name of the tool utilized by the agent.

Example Flights
Aliases ai.function_call
Raw JSON
{
  "key": "gen_ai.tool.name",
  "brief": "Name of the tool utilized by the agent.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "Flights",
  "alias": [
    "ai.function_call"
  ]
}

gen_ai.tool.output

string PII: Maybe OTel: False

The output of the tool being used. It has to be a stringified version of the output of the tool.

Example rainy, 57°F
Raw JSON
{
  "key": "gen_ai.tool.output",
  "brief": "The output of the tool being used. It has to be a stringified version of the output of the tool.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "rainy, 57°F",
  "alias": []
}

gen_ai.tool.type

string PII: Maybe OTel: True

The type of tool being used.

Example function
Raw JSON
{
  "key": "gen_ai.tool.type",
  "brief": "The type of tool being used.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "function"
}

gen_ai.usage.input_tokens

integer PII: Maybe OTel: True

The number of tokens used to process the AI input (prompt) without cached input tokens.

Example 10
Aliases ai.prompt_tokens.used, gen_ai.usage.prompt_tokens
Raw JSON
{
  "key": "gen_ai.usage.input_tokens",
  "brief": "The number of tokens used to process the AI input (prompt) without cached input tokens.",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 10,
  "alias": [
    "ai.prompt_tokens.used",
    "gen_ai.usage.prompt_tokens"
  ]
}

gen_ai.usage.input_tokens.cache_write

integer PII: Maybe OTel: False

The number of tokens written to the cache when processing the AI input (prompt).

Example 100
Raw JSON
{
  "key": "gen_ai.usage.input_tokens.cache_write",
  "brief": "The number of tokens written to the cache when processing the AI input (prompt).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 100
}

gen_ai.usage.input_tokens.cached

integer PII: Maybe OTel: False

The number of cached tokens used to process the AI input (prompt).

Example 50
Raw JSON
{
  "key": "gen_ai.usage.input_tokens.cached",
  "brief": "The number of cached tokens used to process the AI input (prompt).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 50
}

gen_ai.usage.output_tokens

integer PII: Maybe OTel: True

The number of tokens used for creating the AI output (without reasoning tokens).

Example 10
Aliases ai.completion_tokens.used, gen_ai.usage.completion_tokens
Raw JSON
{
  "key": "gen_ai.usage.output_tokens",
  "brief": "The number of tokens used for creating the AI output (without reasoning tokens).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 10,
  "alias": [
    "ai.completion_tokens.used",
    "gen_ai.usage.completion_tokens"
  ]
}

gen_ai.usage.output_tokens.reasoning

integer PII: Maybe OTel: False

The number of tokens used for reasoning to create the AI output.

Example 75
Raw JSON
{
  "key": "gen_ai.usage.output_tokens.reasoning",
  "brief": "The number of tokens used for reasoning to create the AI output.",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 75
}

gen_ai.usage.total_tokens

integer PII: Maybe OTel: False

The total number of tokens used to process the prompt (input tokens plus output tokens).

Example 20
Aliases ai.total_tokens.used
Raw JSON
{
  "key": "gen_ai.usage.total_tokens",
  "brief": "The total number of tokens used to process the prompt (input tokens plus output tokens).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": 20,
  "alias": [
    "ai.total_tokens.used"
  ]
}
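Taken together, the usage attributes relate by simple arithmetic: total_tokens is input_tokens plus output_tokens, with cached-input and reasoning counts reported separately. A sketch of that relationship; whether a given provider folds cached or reasoning tokens into its totals can vary, so treat this as the simplest reading of the briefs above:

```python
def usage_attributes(input_tokens: int, cached_tokens: int,
                     output_tokens: int, reasoning_tokens: int) -> dict:
    """Assemble the gen_ai.usage.* attributes for one model call.
    total_tokens = input_tokens + output_tokens, per the attribute briefs."""
    return {
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.input_tokens.cached": cached_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
        "gen_ai.usage.output_tokens.reasoning": reasoning_tokens,
        "gen_ai.usage.total_tokens": input_tokens + output_tokens,
    }

usage = usage_attributes(10, 5, 10, 3)
```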

gen_ai.user.message

string PII: True OTel: False

The user message passed to the model.

Example What's the weather in Paris?
Raw JSON
{
  "key": "gen_ai.user.message",
  "brief": "The user message passed to the model.",
  "type": "string",
  "pii": {
    "key": "true"
  },
  "is_in_otel": false,
  "example": "What's the weather in Paris?"
}

Deprecated Attributes

These attributes are deprecated and should not be used in new code. See each attribute for migration guidance.

gen_ai.prompt Deprecated

string PII: Maybe OTel: True

The input messages sent to the model

Example [{"role": "user", "message": "hello"}]

Deprecated from OTEL, use gen_ai.input.messages with the new format instead.

Raw JSON
{
  "key": "gen_ai.prompt",
  "brief": "The input messages sent to the model",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": "[{\"role\": \"user\", \"message\": \"hello\"}]",
  "deprecation": {
    "reason": "Deprecated from OTEL, use gen_ai.input.messages with the new format instead.",
    "_status": null
  }
}

gen_ai.request.available_tools Deprecated

string PII: Maybe OTel: False

The available tools for the model. It has to be a stringified version of an array of objects.

Example [{"name": "get_weather", "description": "Get the weather for a given location"}, {"name": "get_news", "description": "Get the news for a given topic"}]

Use gen_ai.tool.definitions instead.

Raw JSON
{
  "key": "gen_ai.request.available_tools",
  "brief": "The available tools for the model. It has to be a stringified version of an array of objects.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "[{\"name\": \"get_weather\", \"description\": \"Get the weather for a given location\"}, {\"name\": \"get_news\", \"description\": \"Get the news for a given topic\"}]",
  "deprecation": {
    "replacement": "gen_ai.tool.definitions",
    "_status": null
  },
  "alias": []
}

gen_ai.request.messages Deprecated

string PII: Maybe OTel: False

The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For messages of the role `"tool"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: "text", text:"..."}`.

Example [{"role": "system", "content": "Generate a random number."}, {"role": "user", "content": [{"text": "Generate a random number between 0 and 10.", "type": "text"}]}, {"role": "tool", "content": {"toolCallId": "1", "toolName": "Weather", "output": "rainy"}}]
Aliases ai.input_messages

Use gen_ai.input.messages instead.

Raw JSON
{
  "key": "gen_ai.request.messages",
  "brief": "The messages passed to the model. It has to be a stringified version of an array of objects. The `role` attribute of each object must be `\"user\"`, `\"assistant\"`, `\"tool\"`, or `\"system\"`. For messages of the role `\"tool\"`, the `content` can be a string or an arbitrary object with information about the tool call. For other messages the `content` can be either a string or a list of objects in the format `{type: \"text\", text:\"...\"}`.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "[{\"role\": \"system\", \"content\": \"Generate a random number.\"}, {\"role\": \"user\", \"content\": [{\"text\": \"Generate a random number between 0 and 10.\", \"type\": \"text\"}]}, {\"role\": \"tool\", \"content\": {\"toolCallId\": \"1\", \"toolName\": \"Weather\", \"output\": \"rainy\"}}]",
  "deprecation": {
    "replacement": "gen_ai.input.messages",
    "_status": null
  },
  "alias": [
    "ai.input_messages"
  ]
}
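Migrating stored old-format messages to gen_ai.input.messages mostly means wrapping content into parts objects. A hedged sketch that handles only the plain-text cases from the example above; tool messages and richer content need provider-specific mapping:

```python
import json

def migrate_message(old: dict) -> dict:
    """Convert one deprecated-format message to the new 'parts' format."""
    content = old.get("content")
    if isinstance(content, str):
        parts = [{"type": "text", "content": content}]
    elif isinstance(content, list):
        # Old format: [{"text": "...", "type": "text"}, ...]
        parts = [{"type": "text", "content": c.get("text", "")} for c in content]
    else:
        # Fallback for unhandled shapes (e.g. tool payloads): keep as JSON text.
        parts = [{"type": "text", "content": json.dumps(content)}]
    return {"role": old["role"], "parts": parts}

old_messages = [
    {"role": "system", "content": "Generate a random number."},
    {"role": "user",
     "content": [{"text": "Generate a random number between 0 and 10.",
                  "type": "text"}]},
]
new_value = json.dumps([migrate_message(m) for m in old_messages])
new_messages = json.loads(new_value)
```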

gen_ai.response.text Deprecated

string PII: Maybe OTel: False

The model's response text messages. It has to be a stringified version of an array of response text messages.

Example ["The weather in Paris is rainy and overcast, with temperatures around 57°F", "The weather in London is sunny and warm, with temperatures around 65°F"]

Use gen_ai.output.messages instead.

Raw JSON
{
  "key": "gen_ai.response.text",
  "brief": "The model's response text messages. It has to be a stringified version of an array of response text messages.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "[\"The weather in Paris is rainy and overcast, with temperatures around 57°F\", \"The weather in London is sunny and warm, with temperatures around 65°F\"]",
  "deprecation": {
    "replacement": "gen_ai.output.messages",
    "_status": null
  },
  "alias": []
}

gen_ai.response.tool_calls Deprecated

string PII: Maybe OTel: False

The tool calls in the model's response. It has to be a stringified version of an array of objects.

Example [{"name": "get_weather", "arguments": {"location": "Paris"}}]

Use gen_ai.output.messages instead.

Raw JSON
{
  "key": "gen_ai.response.tool_calls",
  "brief": "The tool calls in the model's response. It has to be a stringified version of an array of objects.",
  "type": "string",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": false,
  "example": "[{\"name\": \"get_weather\", \"arguments\": {\"location\": \"Paris\"}}]",
  "deprecation": {
    "replacement": "gen_ai.output.messages",
    "_status": null
  },
  "alias": []
}

gen_ai.system.message Deprecated

string PII: True OTel: False

The system instructions passed to the model.

Example You are a helpful assistant

Use gen_ai.system_instructions instead.

Raw JSON
{
  "key": "gen_ai.system.message",
  "brief": "The system instructions passed to the model.",
  "type": "string",
  "pii": {
    "key": "true"
  },
  "is_in_otel": false,
  "example": "You are a helpful assistant",
  "deprecation": {
    "replacement": "gen_ai.system_instructions",
    "_status": null
  }
}

gen_ai.usage.completion_tokens Deprecated

integer PII: Maybe OTel: True

The number of tokens used in the GenAI response (completion).

Example 10
Aliases ai.completion_tokens.used, gen_ai.usage.output_tokens

Use gen_ai.usage.output_tokens instead.

Raw JSON
{
  "key": "gen_ai.usage.completion_tokens",
  "brief": "The number of tokens used in the GenAI response (completion).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 10,
  "deprecation": {
    "replacement": "gen_ai.usage.output_tokens",
    "_status": null
  },
  "alias": [
    "ai.completion_tokens.used",
    "gen_ai.usage.output_tokens"
  ]
}

gen_ai.usage.prompt_tokens Deprecated

integer PII: Maybe OTel: True

The number of tokens used in the GenAI input (prompt).

Example 20
Aliases ai.prompt_tokens.used, gen_ai.usage.input_tokens

Use gen_ai.usage.input_tokens instead.

Raw JSON
{
  "key": "gen_ai.usage.prompt_tokens",
  "brief": "The number of tokens used in the GenAI input (prompt).",
  "type": "integer",
  "pii": {
    "key": "maybe"
  },
  "is_in_otel": true,
  "example": 20,
  "deprecation": {
    "replacement": "gen_ai.usage.input_tokens",
    "_status": null
  },
  "alias": [
    "ai.prompt_tokens.used",
    "gen_ai.usage.input_tokens"
  ]
}