pub struct StorageService { /* private fields */ }
Asynchronous storage service with a two-tier backend system.
StorageService is the main entry point for storing and retrieving objects.
It routes objects to a high-volume or long-term backend based on size (see
the crate-level documentation for details) and maintains redirect
tombstones so that reads never need to probe both backends.
§Lifecycle
After construction, call start to launch the
service’s background processes.
§Redirect Tombstones
Because the ObjectId is backend-independent, reads must be able to find
an object without knowing which backend stores it. A naive approach would
check the long-term backend on every read miss in the high-volume backend —
but that is slow and expensive.
Instead, when an object is stored in the long-term backend, the service
writes a redirect tombstone in the high-volume backend. A redirect
tombstone is an empty object with
is_redirect_tombstone: true
in its metadata. It acts as a signpost: “the real data lives in the other
backend.”
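The read path described above can be sketched with in-memory maps standing in for the two backends. The `Entry` type and `get` function below are illustrative, not the crate’s API; only the lookup logic mirrors the documented behavior.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a stored object's metadata and payload.
#[derive(Clone)]
struct Entry {
    is_redirect_tombstone: bool,
    payload: Vec<u8>,
}

fn get(
    high_volume: &HashMap<String, Entry>,
    long_term: &HashMap<String, Entry>,
    key: &str,
) -> Option<Vec<u8>> {
    match high_volume.get(key) {
        // Tombstone found: the real data lives in the long-term backend.
        Some(e) if e.is_redirect_tombstone => long_term.get(key).map(|e| e.payload.clone()),
        // Regular object stored directly in the high-volume backend.
        Some(e) => Some(e.payload.clone()),
        // A miss in the high-volume backend is definitive: the long-term
        // backend is never probed without a tombstone pointing at it.
        None => None,
    }
}

fn main() {
    let mut hv = HashMap::new();
    let mut lt = HashMap::new();
    hv.insert("small".to_string(), Entry { is_redirect_tombstone: false, payload: b"hv".to_vec() });
    hv.insert("big".to_string(), Entry { is_redirect_tombstone: true, payload: Vec::new() });
    lt.insert("big".to_string(), Entry { is_redirect_tombstone: false, payload: b"lt".to_vec() });

    assert_eq!(get(&hv, &lt, "small"), Some(b"hv".to_vec()));
    assert_eq!(get(&hv, &lt, "big"), Some(b"lt".to_vec()));
    assert_eq!(get(&hv, &lt, "missing"), None);
    println!("ok");
}
```

This is why reads never need to probe both backends: a high-volume miss means the object does not exist anywhere.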
§Consistency Without Locks
The tombstone system maintains consistency through operation ordering rather than distributed locks. The invariant is: a redirect tombstone is always the last thing written and the last thing removed.
- On write, the real object is persisted before the tombstone. If the tombstone write fails, the real object is rolled back.
- On delete, the real object is removed before the tombstone. If the long-term delete fails, the tombstone remains and the data stays reachable.
This ensures that at every intermediate step, either the data is fully reachable (tombstone points to data) or fully absent — never an orphan in either direction.
See the individual methods for per-operation tombstone behavior.
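The ordering invariant can be modeled with a small in-memory sketch. `Backends`, `write_long_term`, and `delete_long_term` are hypothetical names for illustration; the injected `*_ok` flags simulate backend failures so the invariant can be checked at each intermediate step.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical in-memory model of the two backends.
#[derive(Default)]
struct Backends {
    tombstones: HashSet<String>,          // redirect tombstones in the high-volume backend
    long_term: HashMap<String, Vec<u8>>,  // real objects in the long-term backend
}

impl Backends {
    /// Long-term write: persist the real object first, then the tombstone.
    fn write_long_term(&mut self, key: &str, payload: Vec<u8>, tombstone_write_ok: bool) -> Result<(), &'static str> {
        self.long_term.insert(key.to_string(), payload);
        if tombstone_write_ok {
            self.tombstones.insert(key.to_string());
            Ok(())
        } else {
            // Tombstone write failed: roll back the real object so no
            // unreachable orphan is left in the long-term backend.
            self.long_term.remove(key);
            Err("tombstone write failed; object rolled back")
        }
    }

    /// Delete: remove the real object first, then the tombstone.
    fn delete_long_term(&mut self, key: &str, long_term_delete_ok: bool) -> Result<(), &'static str> {
        if !long_term_delete_ok {
            // Long-term delete failed: keep the tombstone so the data
            // stays reachable through the redirect.
            return Err("long-term delete failed; tombstone kept");
        }
        self.long_term.remove(key);
        self.tombstones.remove(key);
        Ok(())
    }
}

fn main() {
    let mut b = Backends::default();

    // Failed tombstone write leaves no orphan in either backend.
    assert!(b.write_long_term("a", b"data".to_vec(), false).is_err());
    assert!(!b.long_term.contains_key("a") && !b.tombstones.contains("a"));

    // Successful write: data is reachable through the tombstone.
    b.write_long_term("a", b"data".to_vec(), true).unwrap();
    assert!(b.long_term.contains_key("a") && b.tombstones.contains("a"));

    // Failed long-term delete keeps the tombstone; data stays reachable.
    assert!(b.delete_long_term("a", false).is_err());
    assert!(b.tombstones.contains("a"));

    // Successful delete removes both, tombstone last.
    b.delete_long_term("a", true).unwrap();
    assert!(!b.long_term.contains_key("a") && !b.tombstones.contains("a"));
    println!("ok");
}
```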
§Run-to-Completion and Panic Isolation
Each operation runs to completion even if the caller is cancelled (e.g., on
client disconnect). This ensures that multi-step operations such as writing
redirect tombstones are never left partially applied. Operations are also
isolated from panics in backend code — a failure in one operation does not
bring down other in-flight work. See Error::Panic.
§Concurrency Limit
A semaphore caps the number of in-flight backend operations. The limit is
configured via with_concurrency_limit;
without an explicit value the default is DEFAULT_CONCURRENCY_LIMIT.
Operations that exceed the limit are rejected immediately with
Error::AtCapacity.
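A minimal sketch of the reject-at-capacity admission check, assuming only that the service counts in-flight operations against a fixed limit. The real service uses an async semaphore; `Limiter` and its methods here are hypothetical names built on a plain atomic counter to show the fail-fast behavior.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical admission gate: callers past the limit fail immediately
// instead of queueing, mirroring Error::AtCapacity.
struct Limiter {
    in_use: AtomicUsize,
    limit: usize,
}

impl Limiter {
    fn try_acquire(&self) -> Result<(), &'static str> {
        let mut cur = self.in_use.load(Ordering::Relaxed);
        loop {
            if cur >= self.limit {
                return Err("AtCapacity"); // no waiting: rejected immediately
            }
            // Claim a slot atomically; retry if another caller raced us.
            match self.in_use.compare_exchange(cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed) {
                Ok(_) => return Ok(()),
                Err(actual) => cur = actual,
            }
        }
    }

    fn release(&self) {
        self.in_use.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let l = Limiter { in_use: AtomicUsize::new(0), limit: 2 };
    assert!(l.try_acquire().is_ok());
    assert!(l.try_acquire().is_ok());
    assert_eq!(l.try_acquire(), Err("AtCapacity")); // third op rejected
    l.release();
    assert!(l.try_acquire().is_ok()); // slot freed, admission succeeds again
    println!("ok");
}
```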
Implementations§
impl StorageService
pub async fn new(
    high_volume_config: StorageConfig<'_>,
    long_term_config: StorageConfig<'_>,
) -> Result<Self>
Creates a new StorageService with the specified configuration.
pub fn with_concurrency_limit(self, max: usize) -> Self
Sets the maximum number of concurrent backend operations.
Must be called before start. Operations beyond this
limit are rejected with Error::AtCapacity.
pub fn tasks_available(&self) -> usize
Returns the number of backend task slots currently available.
pub fn tasks_running(&self) -> usize
Returns the number of backend tasks currently running.
pub fn tasks_limit(&self) -> usize
Returns the configured limit for concurrent backend tasks.
pub fn stream(&self) -> Result<StreamExecutor>
Prepares to stream multiple operations concurrently against this service.
Operations are executed concurrently up to a window derived from the
service’s current capacity. The permits for that window are reserved
upfront — if the service is at capacity, this returns
Error::AtCapacity immediately before any operations are read.
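The upfront reservation can be sketched as a single atomic claim: derive the window from the currently available capacity and claim all of it, or fail before reading any operations. `reserve_window` and its parameters are hypothetical names for illustration.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical upfront reservation: claim a whole window of permits
// atomically, or fail with AtCapacity before any operation is read.
fn reserve_window(in_use: &AtomicUsize, limit: usize, max_window: usize) -> Result<usize, &'static str> {
    let mut cur = in_use.load(Ordering::Relaxed);
    loop {
        let available = limit.saturating_sub(cur);
        if available == 0 {
            return Err("AtCapacity"); // nothing free: fail before doing any work
        }
        // The window shrinks to whatever capacity is actually available.
        let window = available.min(max_window);
        match in_use.compare_exchange(cur, cur + window, Ordering::AcqRel, Ordering::Relaxed) {
            Ok(_) => return Ok(window),
            Err(actual) => cur = actual,
        }
    }
}

fn main() {
    let in_use = AtomicUsize::new(3);
    // limit 8, 5 slots free, ask for up to 4: full window reserved.
    assert_eq!(reserve_window(&in_use, 8, 4), Ok(4));
    // one slot left: a smaller window is granted.
    assert_eq!(reserve_window(&in_use, 8, 4), Ok(1));
    // at capacity: rejected immediately.
    assert_eq!(reserve_window(&in_use, 8, 4), Err("AtCapacity"));
    println!("ok");
}
```

Reserving the permits upfront means a stream never stalls mid-flight waiting for capacity it cannot get.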
pub fn start(&self)
Starts background processes for the storage service.
Currently spawns a task that emits the service.concurrency.in_use
and service.concurrency.limit gauges once per second.
pub async fn insert_object(
    &self,
    context: ObjectContext,
    key: Option<String>,
    metadata: Metadata,
    stream: PayloadStream,
) -> Result<InsertResponse>
Creates or overwrites an object.
The object is identified by the components of an ObjectId. The
context is required, while the key can be assigned automatically if
set to None.
§Run-to-completion
Once called, the operation runs to completion even if the returned future is dropped (e.g., on client disconnect). This guarantees that partially written objects are never left without their redirect tombstone.
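The run-to-completion guarantee can be modeled with a detached worker: once the multi-step work is handed off, no handle held by the caller is needed for it to finish. The real service spawns an async task; this sketch uses a thread and a channel, and `start_insert` is a hypothetical name.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical model of run-to-completion: the multi-step work runs on a
// detached thread, so it completes even if the caller stops waiting.
fn start_insert(done_tx: mpsc::Sender<&'static str>) {
    thread::spawn(move || {
        // Step 1: persist the real object.
        thread::sleep(Duration::from_millis(10));
        // Step 2: write the redirect tombstone. Both steps always run;
        // the operation is never left between them.
        let _ = done_tx.send("object+tombstone written");
    });
    // The caller returns without joining; dropping every handle to the
    // operation (e.g., on client disconnect) does not cancel it.
}

fn main() {
    let (tx, rx) = mpsc::channel();
    start_insert(tx);
    // Simulate the caller disconnecting: no JoinHandle is held, yet the
    // work still completes and reports through the channel.
    assert_eq!(rx.recv().unwrap(), "object+tombstone written");
    println!("ok");
}
```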
§Tombstone handling
If the object has a caller-provided key and a redirect tombstone already exists at that key, the new write is routed to the long-term backend (preserving the existing tombstone as a redirect to the new data).
For long-term writes, the real object is persisted first, then the tombstone. If the tombstone write fails, the real object is rolled back to avoid orphans.
pub async fn get_metadata(&self, id: ObjectId) -> Result<MetadataResponse>
Retrieves only the metadata for an object, without the payload.
§Tombstone handling
Looks up the object in the high-volume backend first. If the result is a redirect tombstone, follows the redirect and fetches metadata from the long-term backend instead.
pub async fn get_object(&self, id: ObjectId) -> Result<GetResponse>
Streams the contents of an object.
§Tombstone handling
Looks up the object in the high-volume backend first. If the result is a redirect tombstone, follows the redirect and fetches the object from the long-term backend instead.
pub async fn delete_object(&self, id: ObjectId) -> Result<DeleteResponse>
Deletes an object, if it exists.
§Run-to-completion
Once called, the operation runs to completion even if the returned future is dropped. This guarantees that the tombstone is only removed after the long-term object has been successfully deleted.
§Tombstone handling
Attempts to delete from the high-volume backend, but skips deletion if the entry is a redirect tombstone. When a tombstone is found, the long-term object is deleted first, then the tombstone. This ordering ensures that if the long-term delete fails, the tombstone remains and the data is still reachable.
Trait Implementations§
impl Clone for StorageService
fn clone(&self) -> StorageService
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
Auto Trait Implementations§
impl Freeze for StorageService
impl !RefUnwindSafe for StorageService
impl Send for StorageService
impl Sync for StorageService
impl Unpin for StorageService
impl !UnwindSafe for StorageService
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<L> LayerExt<L> for L
fn named_layer<S>(&self, service: S) -> Layered<<L as Layer<S>>::Service, S>
where
    L: Layer<S>,
Applies the layer to a service and wraps it in Layered.