References or sources cited by the AI model in its response.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["Citation 1","Citation 2"] |
Documents or content chunks used as context for the AI model.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["document1.txt","document2.pdf"] |
Boolean indicating whether the model needs to perform a search.
| Property | Value |
|---|---|
| Type | boolean |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | false |
Extra metadata passed to an AI pipeline step.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | {"user_id": 123, "session_id": "abc123"} |
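Since the attribute's type is string, structured metadata like the example above is typically JSON-encoded before being attached. A minimal sketch (the metadata keys here are just illustrative):

```python
import json

# Hypothetical pipeline metadata; because the attribute value is a
# string, the dict is JSON-encoded before being set on the span.
metadata = {"user_id": 123, "session_id": "abc123"}
encoded = json.dumps(metadata)
print(encoded)  # {"user_id": 123, "session_id": "abc123"}
```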
For an AI model call, the preamble parameter. Preambles are a part of the prompt used to adjust the model’s overall behavior and conversation style.
| Property | Value |
|---|---|
| Type | string |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | You are now a clown. |
When enabled, the user’s prompt will be sent to the model without any pre-processing.
| Property | Value |
|---|---|
| Type | boolean |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | true |
For an AI model call, the format of the response.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | json_object |
Queries used to search for relevant context or documents.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["climate change effects","renewable energy"] |
Results returned from search queries for context.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["search_result_1","search_result_2"] |
Tags that describe an AI pipeline step.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | {"executed_function": "add_integers"} |
Raw text inputs provided to the model.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["Hello, how are you?","What is the capital of France?"] |
The total cost for the tokens used.
| Property | Value |
|---|---|
| Type | double |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 12.34 |
Warning messages generated during model execution.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["Token limit exceeded"] |
These attributes are deprecated and will be removed in a future version. Please use the recommended replacements.
The number of tokens used to respond to the message.
| Property | Value |
|---|---|
| Type | integer |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 10 |
| Deprecated | Yes, use gen_ai.usage.output_tokens instead |
| Aliases | gen_ai.usage.output_tokens, gen_ai.usage.completion_tokens |
The reason why the model stopped generating.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | COMPLETE |
| Deprecated | Yes, use gen_ai.response.finish_reason instead |
| Aliases | gen_ai.response.finish_reasons |
Used to reduce repetitiveness of generated tokens. The higher the value, the stronger the penalty applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
| Property | Value |
|---|---|
| Type | double |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 0.5 |
| Deprecated | Yes, use gen_ai.request.frequency_penalty instead |
| Aliases | gen_ai.request.frequency_penalty |
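The proportional behavior described above can be sketched with toy numbers (made-up logits and counts, not any vendor's implementation): each token's logit is reduced by the penalty times how often that token has already appeared.

```python
# Hypothetical logits for a 4-token vocabulary, and counts of how
# often each token has already appeared in the prompt or generation.
logits = [2.0, 1.0, 0.5, 0.0]
counts = [3, 1, 0, 0]
frequency_penalty = 0.5

# Each logit drops by penalty * count, so frequently repeated tokens
# are penalized proportionally harder.
penalized = [l - frequency_penalty * c for l, c in zip(logits, counts)]
print(penalized)  # [0.5, 0.5, 0.5, 0.0]
```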
For an AI model call, the function that was called. OpenAI has deprecated this in favor of tool_calls.
| Property | Value |
|---|---|
| Type | string |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | function_name |
| Deprecated | Yes, use gen_ai.tool.name instead |
| Aliases | gen_ai.tool.name |
Unique identifier for the completion.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | gen_123abc |
| Deprecated | Yes, use gen_ai.response.id instead |
| Aliases | gen_ai.response.id |
The input messages sent to the model.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | [{"role": "user", "message": "hello"}] |
| Deprecated | Yes, use gen_ai.request.messages instead |
| Aliases | gen_ai.request.messages |
The vendor-specific ID of the model used.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | gpt-4 |
| Deprecated | Yes, use gen_ai.response.model instead |
| Aliases | gen_ai.response.model |
The provider of the model.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | openai |
| Deprecated | Yes, use gen_ai.system instead |
| Aliases | gen_ai.system |
The name of the AI pipeline.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | Autofix Pipeline |
| Deprecated | Yes, use gen_ai.pipeline.name instead |
| Aliases | gen_ai.pipeline.name |
Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
| Property | Value |
|---|---|
| Type | double |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 0.5 |
| Deprecated | Yes, use gen_ai.request.presence_penalty instead |
| Aliases | gen_ai.request.presence_penalty |
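A toy sketch of the distinction (made-up logits and counts, not any vendor's implementation): where a frequency penalty scales with repetition count, a presence penalty subtracts the same flat amount from every token that has appeared at least once.

```python
logits = [2.0, 1.0, 0.5, 0.0]
counts = [3, 1, 0, 0]
presence_penalty = 0.5

# The flat penalty applies once per token that has appeared,
# regardless of how many times it appeared.
penalized = [l - presence_penalty * (c > 0) for l, c in zip(logits, counts)]
print(penalized)  # [1.5, 0.5, 0.5, 0.0]
```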
The number of tokens used to process just the prompt.
| Property | Value |
|---|---|
| Type | integer |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 20 |
| Deprecated | Yes, use gen_ai.usage.input_tokens instead |
| Aliases | gen_ai.usage.prompt_tokens, gen_ai.usage.input_tokens |
The response messages sent back by the AI model.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | ["hello","world"] |
| Deprecated | Yes, use gen_ai.response.text instead |
The random seed. Ideally, models given the same seed and the same other parameters will produce exactly the same output.
| Property | Value |
|---|---|
| Type | string |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | 1234567890 |
| Deprecated | Yes, use gen_ai.request.seed instead |
| Aliases | gen_ai.request.seed |
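The determinism a seed is meant to provide can be illustrated with a stand-in for a model call (a toy sketch; real model determinism also depends on hardware and backend details):

```python
import random

def generate(seed):
    # Stand-in for a model call: with the same seed and the same
    # parameters, the pseudo-random choices repeat exactly.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

print(generate(1234567890) == generate(1234567890))  # True
```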
Whether the response was streamed back.
| Property | Value |
|---|---|
| Type | boolean |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | true |
| Deprecated | Yes, use gen_ai.response.streaming instead |
| Aliases | gen_ai.response.streaming |
For an AI model call, the temperature parameter. Temperature controls how random the output is: lower values make the output more deterministic, higher values more varied.
| Property | Value |
|---|---|
| Type | double |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 0.1 |
| Deprecated | Yes, use gen_ai.request.temperature instead |
| Aliases | gen_ai.request.temperature |
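The usual mechanism (a toy sketch with made-up logits, not any vendor's implementation) is to divide the logits by the temperature before the softmax, so low temperatures sharpen the distribution and high temperatures flatten it:

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top_prob(temperature, logits=(2.0, 1.0, 0.0)):
    # Scale logits by 1/temperature before softmax: low temperature
    # concentrates probability on the most likely token.
    return softmax([l / temperature for l in logits])[0]

# The most likely token's probability shrinks as temperature rises.
print(top_prob(0.1), top_prob(1.0), top_prob(2.0))
```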
For an AI model call, the tool calls that were made.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | true |
| Exists in OpenTelemetry | No |
| Example | ["tool_call_1","tool_call_2"] |
| Deprecated | Yes, use gen_ai.response.tool_calls instead |
For an AI model call, the functions that are available.
| Property | Value |
|---|---|
| Type | string[] |
| Has PII | maybe |
| Exists in OpenTelemetry | No |
| Example | ["function_1","function_2"] |
| Deprecated | Yes, use gen_ai.request.available_tools instead |
Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).
| Property | Value |
|---|---|
| Type | integer |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 35 |
| Deprecated | Yes, use gen_ai.request.top_k instead |
| Aliases | gen_ai.request.top_k |
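Top-k filtering can be sketched with a toy vocabulary (made-up probabilities, not any vendor's implementation): only the K highest-probability tokens survive, and everything else is excluded before sampling.

```python
# Hypothetical next-token probabilities for a small vocabulary.
probs = {"the": 0.4, "a": 0.3, "cat": 0.2, "dog": 0.1}
top_k = 2

# Keep only the K most likely tokens.
kept = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
print(kept)  # {'the': 0.4, 'a': 0.3}
```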
Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).
| Property | Value |
|---|---|
| Type | double |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 0.7 |
| Deprecated | Yes, use gen_ai.request.top_p instead |
| Aliases | gen_ai.request.top_p |
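Nucleus (top-p) filtering can be sketched the same way (made-up probabilities, not any vendor's implementation): tokens are taken in descending probability order until their cumulative mass reaches p.

```python
# Hypothetical next-token probabilities for a small vocabulary.
probs = {"the": 0.4, "a": 0.3, "cat": 0.2, "dog": 0.1}
top_p = 0.7

# Keep the smallest set of tokens whose cumulative probability
# reaches top_p; here "the" and "a" together cover 70% of the mass.
kept, cumulative = {}, 0.0
for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
    if cumulative >= top_p:
        break
    kept[token] = p
    cumulative += p
print(kept)  # {'the': 0.4, 'a': 0.3}
```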
The total number of tokens used (prompt plus completion).
| Property | Value |
|---|---|
| Type | integer |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | 30 |
| Deprecated | Yes, use gen_ai.usage.total_tokens instead |
| Aliases | gen_ai.usage.total_tokens |
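The example values in these token attributes are consistent with simple addition: the total is the sum of the prompt and completion counts.

```python
# Example values from the prompt-token and completion-token entries.
prompt_tokens = 20
completion_tokens = 10
total_tokens = prompt_tokens + completion_tokens
print(total_tokens)  # 30
```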