The number of tokens used to respond to the message.
Property | Value |
---|---|
Type | integer |
Has PII | false |
Exists in OpenTelemetry | No |
Example | 10 |
Aliases | gen_ai.usage.output_tokens, gen_ai.usage.completion_tokens |
The input messages sent to the model.
Property | Value |
---|---|
Type | string |
Has PII | maybe |
Exists in OpenTelemetry | No |
Example | [{"role": "user", "message": "hello"}] |
Aliases | gen_ai.prompt |
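Because this attribute is typed as a string rather than a structured list, instrumentation typically JSON-serializes the message list before attaching it to a span. A minimal sketch, assuming OpenTelemetry's Python tracing API; the span name and message contents are illustrative:

```python
import json

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Illustrative chat history; the attribute expects a string, so the
# list is JSON-serialized before being set on the span.
messages = [{"role": "user", "message": "hello"}]

with tracer.start_as_current_span("ai.chat") as span:  # span name is illustrative
    span.set_attribute("gen_ai.prompt", json.dumps(messages))
```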
The vendor-specific ID of the model used.
Property | Value |
---|---|
Type | string |
Has PII | false |
Exists in OpenTelemetry | No |
Example | gpt-4 |
Aliases | gen_ai.response.model |
The number of tokens used to process just the prompt.
Property | Value |
---|---|
Type | integer |
Has PII | false |
Exists in OpenTelemetry | No |
Example | 20 |
Aliases | gen_ai.usage.prompt_tokens, gen_ai.usage.input_tokens |
The response messages sent back by the AI model.
Property | Value |
---|---|
Type | string[] |
Has PII | false |
Exists in OpenTelemetry | No |
Example | ["hello","world"] |
Whether the response was streamed back.
Property | Value |
---|---|
Type | boolean |
Has PII | false |
Exists in OpenTelemetry | No |
Example | true |
The total number of tokens used, covering both the prompt and the response.
Property | Value |
---|---|
Type | integer |
Has PII | false |
Exists in OpenTelemetry | No |
Example | 30 |
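The usage counts relate by simple addition: with the example values above, 20 prompt tokens plus 10 output tokens give a total of 30. A minimal sketch, assuming OpenTelemetry's Python tracing API; the input and output keys come from the alias lists above, while `gen_ai.usage.total_tokens` and the span name are assumptions:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

prompt_tokens = 20   # tokens used to process the prompt
output_tokens = 10   # tokens used to generate the response
total_tokens = prompt_tokens + output_tokens  # 30

with tracer.start_as_current_span("ai.chat") as span:  # span name is illustrative
    span.set_attribute("gen_ai.usage.input_tokens", prompt_tokens)
    span.set_attribute("gen_ai.usage.output_tokens", output_tokens)
    # "gen_ai.usage.total_tokens" is an assumed key; this entry lists no alias.
    span.set_attribute("gen_ai.usage.total_tokens", total_tokens)
```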