objectstore_client package

class objectstore_client.Client(base_url: str, metrics_backend: MetricsBackend | None = None, propagate_traces: bool = False, retries: int | None = None, timeout_ms: float | None = None, connection_kwargs: Mapping[str, Any] | None = None)[source]

Bases: object

A client for Objectstore. Constructing it initializes a connection pool.

session(usecase: Usecase, **scopes: str | int | bool) Session[source]

Create a [Session] with the Objectstore server, tied to a specific [Usecase] and Scope.

A Scope is a (possibly nested) namespace within a Usecase, given as a sequence of key-value pairs passed as kwargs. IMPORTANT: the order of the kwargs matters!

The permitted characters for keys and values are: A-Za-z0-9_-()$!+*'.

Users are free to choose the scope structure that best suits their Usecase. The combination of Usecase and Scope will determine the physical key/path of the blob in the underlying storage backend.

For most usecases, it is recommended to use the organization and project IDs as the first components of the scope, as follows: `client.session(usecase, org=organization_id, project=project_id, ...)`
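A minimal sketch of constructing a client and opening a session; the endpoint and scope values shown here are illustrative, not prescribed by the library:

```python
from objectstore_client import Client, Usecase

# Hypothetical endpoint and scope values, for illustration only.
client = Client(base_url="http://objectstore.local:8888")
usecase = Usecase(name="attachments")

# Scope kwargs are ordered: here, org comes before project.
session = client.session(usecase, org=42, project=1337)
```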

class objectstore_client.GetResult(metadata, payload)[source]

Bases: NamedTuple

metadata: Metadata

Alias for field number 0

payload: IO[bytes]

Alias for field number 1
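As a NamedTuple, a GetResult can be unpacked positionally or accessed by field name. A brief sketch, assuming `session` is the session from the earlier example and `blob_id` is a placeholder for an existing blob id:

```python
# blob_id is a placeholder for an id previously returned by Session.put.
metadata, payload = session.get(blob_id)   # unpack as a tuple
data = payload.read()

# Or access the named fields directly:
result = session.get(blob_id)
print(result.metadata.content_type)
```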

class objectstore_client.Metadata(content_type: 'str | None', compression: 'Compression | None', expiration_policy: 'ExpirationPolicy | None', custom: 'dict[str, str]')[source]

Bases: object

compression: Literal['zstd'] | Literal['none'] | None
content_type: str | None
custom: dict[str, str]
expiration_policy: TimeToIdle | TimeToLive | None
classmethod from_headers(headers: Mapping[str, str]) Metadata[source]
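Metadata is typically obtained from a GetResult rather than constructed by hand. A sketch of inspecting its documented fields after a fetch (`session` and `blob_id` are placeholders from the earlier sketches):

```python
result = session.get(blob_id)   # blob_id: placeholder id
meta = result.metadata

print(meta.content_type)        # e.g. "application/json", or None
print(meta.compression)         # "zstd", "none", or None
print(meta.expiration_policy)   # TimeToIdle, TimeToLive, or None
print(meta.custom)              # dict of user-supplied metadata
```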
class objectstore_client.MetricsBackend(*args, **kwargs)[source]

Bases: Protocol

An abstract class that defines the interface for metrics backends.

abstractmethod distribution(name: str, value: int | float, tags: Mapping[str, str] | None = None, unit: str | None = None) None[source]

Records a distribution metric.

abstractmethod gauge(name: str, value: int | float, tags: Mapping[str, str] | None = None) None[source]

Sets a gauge metric to the given value.

abstractmethod increment(name: str, value: int | float = 1, tags: Mapping[str, str] | None = None) None[source]

Increments a counter metric by a given value.
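Because MetricsBackend is a Protocol, any class with matching methods satisfies it; no subclassing is required. A minimal sketch of a custom backend that simply logs every metric (the logger name and class name are illustrative):

```python
import logging
from collections.abc import Mapping

from objectstore_client import Client

logger = logging.getLogger("objectstore.metrics")


class LoggingMetricsBackend:
    """Illustrative MetricsBackend implementation that logs every metric."""

    def distribution(self, name: str, value: int | float,
                     tags: Mapping[str, str] | None = None,
                     unit: str | None = None) -> None:
        logger.info("distribution %s=%s unit=%s tags=%s", name, value, unit, tags)

    def gauge(self, name: str, value: int | float,
              tags: Mapping[str, str] | None = None) -> None:
        logger.info("gauge %s=%s tags=%s", name, value, tags)

    def increment(self, name: str, value: int | float = 1,
                  tags: Mapping[str, str] | None = None) -> None:
        logger.info("increment %s+=%s tags=%s", name, value, tags)


client = Client(base_url="http://objectstore.local:8888",
                metrics_backend=LoggingMetricsBackend())
```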

class objectstore_client.NoOpMetricsBackend(*args, **kwargs)[source]

Bases: MetricsBackend

Default metrics backend that does not record anything.

distribution(name: str, value: int | float, tags: Mapping[str, str] | None = None, unit: str | None = None) None[source]

Records a distribution metric.

gauge(name: str, value: int | float, tags: Mapping[str, str] | None = None) None[source]

Sets a gauge metric to the given value.

increment(name: str, value: int | float = 1, tags: Mapping[str, str] | None = None) None[source]

Increments a counter metric by a given value.

exception objectstore_client.RequestError(message: str, status: int, response: str)[source]

Bases: Exception

Exception raised if an API call to Objectstore fails.
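A sketch of handling a failed call; `session` and `blob_id` are placeholders from the earlier sketches, and the logging setup is illustrative:

```python
import logging

from objectstore_client import RequestError

try:
    session.delete(blob_id)   # blob_id: placeholder id
except RequestError as exc:
    # The exception carries the message, HTTP status, and response body
    # passed to its constructor; here we just log and re-raise.
    logging.getLogger(__name__).warning("Objectstore call failed: %s", exc)
    raise
```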

class objectstore_client.Session(pool: HTTPConnectionPool, metrics_backend: MetricsBackend, propagate_traces: bool, usecase: Usecase, scope: str)[source]

Bases: object

A session with the Objectstore server, scoped to a specific [Usecase] and Scope.

This should never be constructed directly, use [Client.session].

delete(id: str) None[source]

Deletes the blob with the given id.

get(id: str, decompress: bool = True) GetResult[source]

Fetches the blob with the given id, returning an IO stream that can be read.

By default, content that was uploaded compressed is automatically decompressed, unless decompress=False is passed.

object_url(id: str) str[source]

Generates a GET URL for the object with the given id.

This URL can then be used by downstream services to fetch the object. Note, however, that the service does not strictly follow HTTP semantics, in particular with regard to Accept-Encoding.

put(contents: bytes | IO[bytes], id: str | None = None, compression: Literal['zstd'] | Literal['none'] | None = None, content_type: str | None = None, metadata: dict[str, str] | None = None, expiration_policy: TimeToIdle | TimeToLive | None = None) str[source]

Uploads the given contents to blob storage.

If no id is provided, one will be automatically generated and returned from this function.

If no compression is given explicitly, the client falls back to the default compression configured on the Usecase. Passing "none" instructs the client to apply no compression to this upload, which is useful for incompressible formats.
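Putting the Session methods together, a sketch of a put/get/delete round trip using the `session` from the earlier example; the content type, custom metadata, and expiration value are illustrative:

```python
from datetime import timedelta

from objectstore_client import TimeToLive

# Upload; the returned id is generated automatically when none is passed.
blob_id = session.put(
    b'{"hello": "world"}',
    content_type="application/json",
    metadata={"source": "docs-example"},               # custom metadata
    expiration_policy=TimeToLive(timedelta(days=7)),   # illustrative policy
)

# Download and read the (transparently decompressed) payload.
result = session.get(blob_id)
body = result.payload.read()

# Hand the URL to a downstream service, then clean up.
url = session.object_url(blob_id)
session.delete(blob_id)
```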

class objectstore_client.TimeToIdle(delta: 'timedelta')[source]

Bases: object

delta: timedelta
class objectstore_client.TimeToLive(delta: 'timedelta')[source]

Bases: object

delta: timedelta
class objectstore_client.Usecase(name: str, compression: Literal['zstd', 'none'] = 'zstd', expiration_policy: TimeToIdle | TimeToLive | None = None)[source]

Bases: object

An identifier for a workload in Objectstore, along with defaults to use for all operations within that Usecase.

Usecases need to be statically defined in Objectstore’s configuration server-side. Objectstore can make decisions based on the Usecase, for example choosing the most suitable storage backend.

name: str
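A sketch of defining a Usecase with its per-usecase defaults; the name must match a usecase configured server-side, and the values below (as well as the `client` from the earlier sketch) are illustrative:

```python
from datetime import timedelta

from objectstore_client import TimeToIdle, Usecase

# "profiles" is a placeholder; the name must exist in Objectstore's
# server-side configuration.
profiles = Usecase(
    name="profiles",
    compression="zstd",                                 # default for uploads
    expiration_policy=TimeToIdle(timedelta(days=30)),   # default expiration
)

session = client.session(profiles, org=42, project=1337)
```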

Submodules

objectstore_client.client module

class objectstore_client.client.Client(base_url: str, metrics_backend: MetricsBackend | None = None, propagate_traces: bool = False, retries: int | None = None, timeout_ms: float | None = None, connection_kwargs: Mapping[str, Any] | None = None)[source]

Bases: object

A client for Objectstore. Constructing it initializes a connection pool.

session(usecase: Usecase, **scopes: str | int | bool) Session[source]

Create a [Session] with the Objectstore server, tied to a specific [Usecase] and Scope.

A Scope is a (possibly nested) namespace within a Usecase, given as a sequence of key-value pairs passed as kwargs. IMPORTANT: the order of the kwargs matters!

The permitted characters for keys and values are: A-Za-z0-9_-()$!+*'.

Users are free to choose the scope structure that best suits their Usecase. The combination of Usecase and Scope will determine the physical key/path of the blob in the underlying storage backend.

For most usecases, it is recommended to use the organization and project IDs as the first components of the scope, as follows: `client.session(usecase, org=organization_id, project=project_id, ...)`

class objectstore_client.client.GetResult(metadata, payload)[source]

Bases: NamedTuple

metadata: Metadata

Alias for field number 0

payload: IO[bytes]

Alias for field number 1

exception objectstore_client.client.RequestError(message: str, status: int, response: str)[source]

Bases: Exception

Exception raised if an API call to Objectstore fails.

class objectstore_client.client.Session(pool: HTTPConnectionPool, metrics_backend: MetricsBackend, propagate_traces: bool, usecase: Usecase, scope: str)[source]

Bases: object

A session with the Objectstore server, scoped to a specific [Usecase] and Scope.

This should never be constructed directly, use [Client.session].

delete(id: str) None[source]

Deletes the blob with the given id.

get(id: str, decompress: bool = True) GetResult[source]

Fetches the blob with the given id, returning an IO stream that can be read.

By default, content that was uploaded compressed is automatically decompressed, unless decompress=False is passed.

object_url(id: str) str[source]

Generates a GET URL for the object with the given id.

This URL can then be used by downstream services to fetch the object. Note, however, that the service does not strictly follow HTTP semantics, in particular with regard to Accept-Encoding.

put(contents: bytes | IO[bytes], id: str | None = None, compression: Literal['zstd'] | Literal['none'] | None = None, content_type: str | None = None, metadata: dict[str, str] | None = None, expiration_policy: TimeToIdle | TimeToLive | None = None) str[source]

Uploads the given contents to blob storage.

If no id is provided, one will be automatically generated and returned from this function.

If no compression is given explicitly, the client falls back to the default compression configured on the Usecase. Passing "none" instructs the client to apply no compression to this upload, which is useful for incompressible formats.

class objectstore_client.client.Usecase(name: str, compression: Literal['zstd', 'none'] = 'zstd', expiration_policy: TimeToIdle | TimeToLive | None = None)[source]

Bases: object

An identifier for a workload in Objectstore, along with defaults to use for all operations within that Usecase.

Usecases need to be statically defined in Objectstore’s configuration server-side. Objectstore can make decisions based on the Usecase, for example choosing the most suitable storage backend.

name: str
objectstore_client.client.raise_for_status(response: BaseHTTPResponse) None[source]

objectstore_client.metadata module

class objectstore_client.metadata.Metadata(content_type: 'str | None', compression: 'Compression | None', expiration_policy: 'ExpirationPolicy | None', custom: 'dict[str, str]')[source]

Bases: object

compression: Literal['zstd'] | Literal['none'] | None
content_type: str | None
custom: dict[str, str]
expiration_policy: TimeToIdle | TimeToLive | None
classmethod from_headers(headers: Mapping[str, str]) Metadata[source]
class objectstore_client.metadata.TimeToIdle(delta: 'timedelta')[source]

Bases: object

delta: timedelta
class objectstore_client.metadata.TimeToLive(delta: 'timedelta')[source]

Bases: object

delta: timedelta
objectstore_client.metadata.format_expiration(expiration_policy: TimeToIdle | TimeToLive) str[source]
objectstore_client.metadata.format_timedelta(delta: timedelta) str[source]
objectstore_client.metadata.itertools_batched(iterable: Iterable[T], n: int, strict: bool = False) Iterator[tuple[T, ...]][source]

Vendored version of itertools.batched, which is not available in Python 3.11. Batches data from the iterable into tuples of length n; the last batch may be shorter than n. If strict is true, raises a ValueError if the final batch is shorter than n. Loops over the input iterable and accumulates data into tuples up to size n. The input is consumed lazily, just enough to fill a batch, and each batch is yielded as soon as it is full or the input iterable is exhausted.
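For example, mirroring the behaviour of the standard-library itertools.batched:

```python
from objectstore_client.metadata import itertools_batched

list(itertools_batched("ABCDEFG", 3))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G',)]

list(itertools_batched("ABCDEFG", 3, strict=True))
# raises ValueError, because the final batch is shorter than n
```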

objectstore_client.metadata.parse_expiration(value: str) TimeToIdle | TimeToLive | None[source]
objectstore_client.metadata.parse_timedelta(delta: str) timedelta[source]

objectstore_client.metrics module

class objectstore_client.metrics.MetricsBackend(*args, **kwargs)[source]

Bases: Protocol

An abstract class that defines the interface for metrics backends.

abstractmethod distribution(name: str, value: int | float, tags: Mapping[str, str] | None = None, unit: str | None = None) None[source]

Records a distribution metric.

abstractmethod gauge(name: str, value: int | float, tags: Mapping[str, str] | None = None) None[source]

Sets a gauge metric to the given value.

abstractmethod increment(name: str, value: int | float = 1, tags: Mapping[str, str] | None = None) None[source]

Increments a counter metric by a given value.

class objectstore_client.metrics.NoOpMetricsBackend(*args, **kwargs)[source]

Bases: MetricsBackend

Default metrics backend that does not record anything.

distribution(name: str, value: int | float, tags: Mapping[str, str] | None = None, unit: str | None = None) None[source]

Records a distribution metric.

gauge(name: str, value: int | float, tags: Mapping[str, str] | None = None) None[source]

Sets a gauge metric to the given value.

increment(name: str, value: int | float = 1, tags: Mapping[str, str] | None = None) None[source]

Increments a counter metric by a given value.

class objectstore_client.metrics.StorageMetricEmitter(backend: MetricsBackend, operation: str, usecase: str)[source]

Bases: object

maybe_record_compression_ratio() None[source]
maybe_record_throughputs() None[source]
record_compressed_size(value: int, compression: str = 'unknown') None[source]
record_latency(elapsed: float) None[source]
record_uncompressed_size(value: int) None[source]
objectstore_client.metrics.measure_storage_operation(backend: MetricsBackend, operation: str, usecase: str, uncompressed_size: int | None = None, compressed_size: int | None = None, compression: str = 'unknown') Generator[StorageMetricEmitter][source]

Context manager which records the latency of the enclosed storage operation. Can also record the compressed or uncompressed size of an object, the compression ratio, the throughput, and the inverse throughput.

Yields a StorageMetricEmitter because for some operations (GET) the size is not known until inside the enclosed block.
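A sketch of how this context manager is meant to be used; the operation and usecase labels are illustrative, and `session` and `blob_id` are placeholders from the earlier Session sketches:

```python
from objectstore_client.metrics import NoOpMetricsBackend, measure_storage_operation

backend = NoOpMetricsBackend()

with measure_storage_operation(backend, operation="get", usecase="profiles") as emitter:
    result = session.get(blob_id)
    data = result.payload.read()
    # For GET, the size is only known inside the block, so record it here.
    emitter.record_uncompressed_size(len(data))
```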