ManagedCode.Storage

Cross-provider blob storage toolkit for .NET and ASP.NET streaming scenarios.

Documentation

Quickstart

1) Install a provider package

dotnet add package ManagedCode.Storage.FileSystem

2) Register as default IStorage

using ManagedCode.Storage.Core;
using ManagedCode.Storage.FileSystem.Extensions;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddFileSystemStorageAsDefault(options =>
{
    options.BaseFolder = Path.Combine(builder.Environment.ContentRootPath, "storage");
});

3) Use IStorage

using ManagedCode.Storage.Core;

public sealed class MyService(IStorage storage)
{
    public Task UploadAsync(CancellationToken ct) =>
        storage.UploadAsync("hello", options => options.FileName = "hello.txt", ct);
}

4) (Optional) Expose HTTP + SignalR endpoints

using ManagedCode.Storage.Server.Extensions.DependencyInjection;
using ManagedCode.Storage.Server.Extensions;

builder.Services.AddControllers();
builder.Services.AddStorageServer();
builder.Services.AddStorageSignalR(); // optional

var app = builder.Build();
app.MapControllers(); // /api/storage/*
app.MapStorageHub();  // /hubs/storage

ManagedCode.Storage wraps vendor SDKs behind a single IStorage abstraction so uploads, downloads, metadata, streaming, and retention behave the same regardless of provider. Swap between Azure Blob Storage, Azure Data Lake, Amazon S3, Google Cloud Storage, OneDrive, Google Drive, Dropbox, CloudKit (iCloud app data), SFTP, and a local file system without rewriting application code — and optionally use the Virtual File System (VFS) overlay for a file/directory API on top of any configured IStorage. Pair it with our ASP.NET controllers and SignalR client to deliver chunked uploads, ranged downloads, and progress notifications end to end.

Motivation

Cloud storage vendors expose distinct SDKs, option models, and authentication patterns. That makes it painful to change providers, run multi-region replication, or stand up hermetic tests. ManagedCode.Storage gives you a universal surface, consistent Result<T> handling, and DI-aware registration helpers so you can plug in any provider, test locally, and keep the same code paths in production.
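The consistent Result<T> handling mentioned above means callers branch on success/failure instead of catching provider-specific exceptions. A minimal sketch of that consumption pattern, modeled here with a tuple (the library's actual Result<T> type is richer; `Upload` is a hypothetical stand-in):

```csharp
using System;

// Hypothetical operation returning a Result-like tuple: success flag, value, error.
static (bool IsSuccess, string? Value, string? Error) Upload(string name) =>
    name.Length > 0 ? (true, name, null) : (false, null, "empty file name");

// Callers branch on IsSuccess rather than catching vendor-specific exceptions.
var result = Upload("my_report.txt");
Console.WriteLine(result.IsSuccess ? $"Uploaded {result.Value}" : $"Failed: {result.Error}");
```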

Features

Packages

Core & Utilities

Package Description
ManagedCode.Storage.Core Core abstractions, option models, CRC32/MIME helpers, and DI extensions.
ManagedCode.Storage.VirtualFileSystem Virtual file system overlay on top of IStorage (file/directory API + caching; not a provider).
ManagedCode.Storage.TestFakes Provider doubles for unit/integration tests without hitting cloud services.

Providers

Package Description
ManagedCode.Storage.Azure Azure Blob Storage implementation with metadata, streaming, and legal hold support.
ManagedCode.Storage.Azure.DataLake Azure Data Lake Gen2 provider on top of the unified abstraction.
ManagedCode.Storage.Aws Amazon S3 provider with Object Lock and legal hold operations.
ManagedCode.Storage.Gcp Google Cloud Storage integration built on official SDKs.
ManagedCode.Storage.FileSystem Local file system implementation for hybrid or on-premises workloads.
ManagedCode.Storage.Sftp SFTP provider powered by SSH.NET for legacy and air-gapped environments.
ManagedCode.Storage.OneDrive OneDrive provider built on Microsoft Graph.
ManagedCode.Storage.GoogleDrive Google Drive provider built on the Google Drive API.
ManagedCode.Storage.Dropbox Dropbox provider built on the Dropbox API.
ManagedCode.Storage.CloudKit CloudKit (iCloud app data) provider built on CloudKit Web Services.

Configuring OneDrive, Google Drive, Dropbox, and CloudKit

iCloud Drive does not expose a public server-side file API. ManagedCode.Storage.CloudKit targets CloudKit Web Services (iCloud app data), not iCloud Drive.

Credential guide: docs/Development/credentials.md.

These providers follow the same DI patterns as the other backends: use Add*StorageAsDefault(...) to bind IStorage, or Add*Storage(...) to inject the provider interface (IOneDriveStorage, IGoogleDriveStorage, IDropboxStorage, ICloudKitStorage).

Most cloud-drive providers expect you to create the official SDK client (Graph/Drive/Dropbox) with your preferred auth flow and pass it into the storage options. ManagedCode.Storage does not run OAuth flows automatically.

Keyed registrations are available as well (useful for multi-tenant apps):

using ManagedCode.Storage.Core;
using ManagedCode.Storage.Dropbox.Extensions;

builder.Services.AddDropboxStorageAsDefault("tenant-a", options =>
{
    options.AccessToken = configuration["Dropbox:AccessToken"]; // obtained via OAuth (see Dropbox section below)
    options.RootPath = "/apps/my-app";
});

var tenantStorage = app.Services.GetRequiredKeyedService<IStorage>("tenant-a");

OneDrive / Microsoft Graph

  1. Install the provider package and import DI extensions:

    dotnet add package ManagedCode.Storage.OneDrive
    dotnet add package Azure.Identity
    
    using ManagedCode.Storage.OneDrive.Extensions;
    

    Docs: Register an app, Microsoft Graph auth.

  2. Create an app registration in Azure Active Directory (Entra ID) and record the Application (client) ID, Directory (tenant) ID, and a client secret.
  3. In API permissions, add Microsoft Graph permissions:
    • For server-to-server apps: the application permission Files.ReadWrite.All (or Sites.ReadWrite.All for SharePoint drives), then Grant admin consent.
    • For user flows: Delegated permissions are also possible, but you must supply a Graph client that authenticates as the user.
  4. Create the Graph client (example uses client credentials):

    using Azure.Identity;
    using Microsoft.Graph;
    
    var tenantId = configuration["OneDrive:TenantId"]!;
    var clientId = configuration["OneDrive:ClientId"]!;
    var clientSecret = configuration["OneDrive:ClientSecret"]!;
    
    var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
    var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });
    
  5. Register OneDrive storage with the Graph client and the drive/root you want to scope to:

    builder.Services.AddOneDriveStorageAsDefault(options =>
    {
        options.GraphClient = graphClient;
        options.DriveId = "me";                   // or a specific drive ID
        options.RootPath = "app-data";            // folder will be created when CreateContainerIfNotExists is true
        options.CreateContainerIfNotExists = true;
    });
    
  6. If you need a concrete drive id, fetch it via Graph (example):

    var drive = await graphClient.Me.Drive.GetAsync();
    var driveId = drive?.Id;
    

Google Drive

  1. Install the provider package and import DI extensions:

    dotnet add package ManagedCode.Storage.GoogleDrive
    
    using ManagedCode.Storage.GoogleDrive.Extensions;
    

    Docs: Drive API overview, OAuth 2.0.

  2. In Google Cloud Console, create a project and enable the Google Drive API.
  3. Create credentials:
    • Service account (recommended for server apps): create a service account and download a JSON key.
    • OAuth client (interactive user auth): configure OAuth consent screen and create an OAuth client id/secret.
  4. Create a DriveService.

    Service account example:

    using Google.Apis.Auth.OAuth2;
    using Google.Apis.Drive.v3;
    using Google.Apis.Services;
    
    var credential = GoogleCredential
        .FromFile("service-account.json")
        .CreateScoped(DriveService.Scope.Drive);
    
    var driveService = new DriveService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,
        ApplicationName = "MyApp"
    });
    

    If you use a service account, share the target folder/drive with the service account email (or use a Shared Drive) so it can see your files.

  5. Register the Google Drive provider with the configured DriveService and a root folder id:

    builder.Services.AddGoogleDriveStorageAsDefault(options =>
    {
        options.DriveService = driveService;
        options.RootFolderId = "root"; // or a specific folder id you control
        options.CreateContainerIfNotExists = true;
    });
    
  6. Store tokens in user secrets or environment variables; never commit them to source control.

Dropbox

  1. Install the provider package and import DI extensions:

    dotnet add package ManagedCode.Storage.Dropbox
    
    using ManagedCode.Storage.Dropbox.Extensions;
    

    Docs: Dropbox App Console, OAuth guide.

  2. Create an app in the Dropbox App Console and choose Scoped access with the Full Dropbox or App folder type.
  3. Record the App key and App secret (Settings tab).
  4. Under Permissions, enable files.content.write, files.content.read, files.metadata.read, and files.metadata.write (plus any additional scopes you need) and save changes.
  5. Obtain an access token:
    • For quick local testing, you can generate a token in the app console.
    • For production, use OAuth code flow (example):
    using Dropbox.Api;
    
    var appKey = configuration["Dropbox:AppKey"]!;
    var appSecret = configuration["Dropbox:AppSecret"]!;
    var redirectUri = configuration["Dropbox:RedirectUri"]!; // must be registered in Dropbox app console
    
    // 1) Redirect user to:
    // var authorizeUri = DropboxOAuth2Helper.GetAuthorizeUri(OAuthResponseType.Code, appKey, redirectUri, tokenAccessType: TokenAccessType.Offline);
    //
    // 2) Receive the 'code' on your redirect endpoint, then exchange it:
    var auth = await DropboxOAuth2Helper.ProcessCodeFlowAsync(code, appKey, appSecret, redirectUri);
    var accessToken = auth.AccessToken;
    var refreshToken = auth.RefreshToken; // store securely if you requested offline access
    
  6. Register Dropbox storage with a root path (use / for full access apps or /Apps/<your-app> for app folders). You can let the provider create the SDK client from credentials:

    builder.Services.AddDropboxStorageAsDefault(options =>
    {
        var accessToken = configuration["Dropbox:AccessToken"]!;
        options.AccessToken = accessToken;
        options.RootPath = "/apps/my-app";
        options.CreateContainerIfNotExists = true;
    });
    

    Or, for production, prefer refresh tokens (offline access):

    builder.Services.AddDropboxStorageAsDefault(options =>
    {
        options.RefreshToken = configuration["Dropbox:RefreshToken"]!;
        options.AppKey = configuration["Dropbox:AppKey"]!;
        options.AppSecret = configuration["Dropbox:AppSecret"]; // optional when using PKCE
        options.RootPath = "/apps/my-app";
    });
    
  7. Store tokens in user secrets or environment variables; never commit them to source control.

CloudKit (iCloud app data)

  1. Install the provider package and import DI extensions:

    dotnet add package ManagedCode.Storage.CloudKit
    
    using ManagedCode.Storage.CloudKit.Extensions;
    using ManagedCode.Storage.CloudKit.Options;
    

    Docs: CloudKit Web Services Reference.

  2. In Apple Developer / CloudKit Dashboard, configure the container you want to use and note its container id (example: iCloud.com.company.app).
  3. Ensure the file record type exists (default MCStorageFile).
  4. Add these fields to the record type:
    • path (String) — must be queryable/indexed for prefix listing.
    • contentType (String) — optional but recommended.
    • file (Asset) — stores the binary content.
  5. Configure authentication:
    • API token (ckAPIToken): create an API token for your container in CloudKit Dashboard and store it as a secret.
    • Server-to-server key (public DB only): create a CloudKit key in Apple Developer (download the .p8 private key, keep the key id).
  6. Register CloudKit storage:

    builder.Services.AddCloudKitStorageAsDefault(options =>
    {
        options.ContainerId = "iCloud.com.company.app";
        options.Environment = CloudKitEnvironment.Production;
        options.Database = CloudKitDatabase.Public;
        options.RootPath = "app-data";
    
        // Choose ONE auth mode:
        options.ApiToken = configuration["CloudKit:ApiToken"];
        // OR:
        // options.ServerToServerKeyId = configuration["CloudKit:KeyId"];
        // options.ServerToServerPrivateKeyPem = configuration["CloudKit:PrivateKeyPem"]; // paste PEM (.p8) contents
    });
    
  7. CloudKit Web Services impose size limits; keep files reasonably small and validate against your current CloudKit quotas.
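One way to honor those size limits is a pre-flight check before handing a stream to the provider. A hedged sketch, where `MaxAssetBytes` is an application-level cap chosen here for illustration, not an official CloudKit constant:

```csharp
using System;
using System.IO;

// Illustrative app-level cap; validate against your container's actual CloudKit quotas.
const long MaxAssetBytes = 15L * 1024 * 1024;

// A seekable stream is required so Length can be read without consuming the stream.
static bool FitsLimit(Stream content, long maxBytes) =>
    content.CanSeek && content.Length <= maxBytes;

using var small = new MemoryStream(new byte[4 * 1024]);
Console.WriteLine(FitsLimit(small, MaxAssetBytes)); // True
```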

ASP.NET & Clients

Package Description
ManagedCode.Storage.Server ASP.NET controllers, chunk orchestration services, and the SignalR storage hub.
ManagedCode.Storage.Client .NET client SDK for uploads, downloads, metadata, and SignalR negotiations.
ManagedCode.Storage.Client.SignalR SignalR streaming client for browsers and native applications.

Architecture

Storage Topology

The topology below shows how applications talk to the shared IStorage surface, optional Virtual File System, and keyed provider factories before landing on the concrete backends.

flowchart LR
    subgraph Applications
        API["ASP.NET Controllers"]
        SignalRClient["SignalR Client"]
        Workers["Background Services"]
    end

    subgraph Abstraction
        Core["IStorage Abstractions"]
        VFS["Virtual File System"]
        Factories["Keyed Provider Factories"]
    end

    subgraph Providers
        Azure["Azure Blob"]
        AzureDL["Azure Data Lake"]
        Aws["Amazon S3"]
        Gcp["Google Cloud Storage"]
        OneDrive["OneDrive (Graph)"]
        GoogleDrive["Google Drive"]
        Dropbox["Dropbox"]
        CloudKit["CloudKit (iCloud app data)"]
        Fs["File System"]
        Sftp["SFTP"]
    end

    Applications --> Core
    Core --> VFS
    Core --> Factories
    Factories --> Azure
    Factories --> AzureDL
    Factories --> Aws
    Factories --> Gcp
    Factories --> OneDrive
    Factories --> GoogleDrive
    Factories --> Dropbox
    Factories --> CloudKit
    Factories --> Fs
    Factories --> Sftp

Keyed provider registrations let you resolve multiple named instances from dependency injection while reusing the same abstraction across Azure, AWS, Google Cloud Storage, Google Drive, OneDrive, Dropbox, CloudKit, SFTP, and local file system storage.

ASP.NET Streaming Controllers

Controllers in ManagedCode.Storage.Server expose minimal routes that stream directly between HTTP clients and blob providers. Uploads arrive as multipart forms or raw streams, flow through the unified IStorage abstraction, and land in whichever provider is registered. Downloads return FileStreamResult responses so browsers, SDKs, or background jobs can read blobs without buffering the whole payload in memory.

sequenceDiagram
    participant Client as Client App
    participant Controller as StorageController
    participant Storage as IStorage
    participant Provider as IStorage Provider

    Client->>Controller: POST /storage/upload (stream)
    Controller->>Storage: UploadAsync(stream, UploadOptions)
    Storage->>Provider: Push stream to backend
    Provider-->>Storage: Result<BlobMetadata>
    Storage-->>Controller: Upload response
    Controller-->>Client: 200 OK + metadata

    Client->>Controller: GET /storage/download?file=video.mp4
    Controller->>Storage: DownloadAsync(file)
    Storage->>Provider: Open download stream
    Provider-->>Storage: Result<Stream>
    Storage-->>Controller: Stream payload
    Controller-->>Client: Chunked response

Controllers remain thin: consumers can inherit and override actions to add custom routing, authorization, or telemetry while leaving the streaming plumbing intact.

Virtual File System (VFS)

Want a file/directory API on top of any configured IStorage (with optional metadata caching)? The ManagedCode.Storage.VirtualFileSystem package provides IVirtualFileSystem, which routes all operations through your registered storage provider.

using ManagedCode.Storage.FileSystem.Extensions;
using ManagedCode.Storage.VirtualFileSystem.Core;
using ManagedCode.Storage.VirtualFileSystem.Extensions;

// 1) Register any IStorage provider (example: FileSystem)
builder.Services.AddFileSystemStorageAsDefault(options =>
{
    options.BaseFolder = Path.Combine(builder.Environment.ContentRootPath, "storage");
});

// 2) Add VFS overlay
builder.Services.AddVirtualFileSystem(options =>
{
    options.DefaultContainer = "vfs";
    options.EnableCache = true;
});

// 3) Use IVirtualFileSystem
public sealed class MyVfsService(IVirtualFileSystem vfs)
{
    public async Task WriteAsync(CancellationToken ct)
    {
        var file = await vfs.GetFileAsync("avatars/user-1.png", ct);
        await file.WriteAllTextAsync("hello", cancellationToken: ct);
    }
}

VFS is an overlay: it does not replace your provider. In tests, pair VFS with ManagedCode.Storage.TestFakes or the FileSystem provider pointed at a temp folder to avoid real cloud accounts.
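The temp-folder approach suggested above can be sketched as follows; the commented registration line reuses the `AddFileSystemStorageAsDefault` call from the Quickstart:

```csharp
using System;
using System.IO;

// Create a unique temp folder per test run so tests never touch a real cloud account.
var baseFolder = Path.Combine(Path.GetTempPath(), "storage-tests", Guid.NewGuid().ToString("N"));
Directory.CreateDirectory(baseFolder);
Console.WriteLine(Directory.Exists(baseFolder)); // True

// In the test host:
// builder.Services.AddFileSystemStorageAsDefault(o => o.BaseFolder = baseFolder);

// Clean up after the test run.
Directory.Delete(baseFolder, recursive: true);
```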

Dependency Injection & Keyed Registrations

Every provider ships with default and provider-specific registrations, but you can also assign multiple named instances using .NET’s keyed services. This makes it easy to route traffic to different containers/buckets (e.g. azure-primary vs. azure-dr) or to fan out a file to several backends:

using Amazon;
using Amazon.S3;
using ManagedCode.MimeTypes;
using Microsoft.Extensions.DependencyInjection;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

builder.Services
    .AddAzureStorage("azure-primary", options =>
    {
        options.ConnectionString = configuration["Storage:Azure:Primary:ConnectionString"]!;
        options.Container = "assets";
    })
    .AddAzureStorage("azure-dr", options =>
    {
        options.ConnectionString = configuration["Storage:Azure:Dr:ConnectionString"]!;
        options.Container = "assets-dr";
    })
    .AddAWSStorage("aws-backup", options =>
    {
        options.PublicKey = configuration["Storage:Aws:AccessKey"]!;
        options.SecretKey = configuration["Storage:Aws:SecretKey"]!;
        options.Bucket = "assets-backup";
        options.OriginalOptions = new AmazonS3Config
        {
            RegionEndpoint = RegionEndpoint.USEast1
        };
    });

public sealed class AssetReplicator
{
    private readonly IAzureStorage _primary;
    private readonly IAzureStorage _disasterRecovery;
    private readonly IAWSStorage _backup;

    public AssetReplicator(
        [FromKeyedServices("azure-primary")] IAzureStorage primary,
        [FromKeyedServices("azure-dr")] IAzureStorage secondary,
        [FromKeyedServices("aws-backup")] IAWSStorage backup)
    {
        _primary = primary;
        _disasterRecovery = secondary;
        _backup = backup;
    }

    public async Task MirrorAsync(Stream content, string fileName, CancellationToken cancellationToken = default)
    {
        await using var buffer = new MemoryStream();
        await content.CopyToAsync(buffer, cancellationToken);

        buffer.Position = 0;
        var uploadOptions = new UploadOptions(fileName, mimeType: MimeHelper.GetMimeType(fileName));

        await _primary.UploadAsync(buffer, uploadOptions, cancellationToken);

        buffer.Position = 0;
        await _disasterRecovery.UploadAsync(buffer, uploadOptions, cancellationToken);

        buffer.Position = 0;
        await _backup.UploadAsync(buffer, uploadOptions, cancellationToken);
    }
}

Keyed services can also be resolved via IServiceProvider.GetRequiredKeyedService<T>("key") when manual dispatching is required.

Want to double-check data fidelity after copying? Pair uploads with Crc32Helper:

var download = await _backup.DownloadAsync(fileName, cancellationToken);
download.IsSuccess.ShouldBeTrue();

await using var local = download.Value;
var crc = Crc32Helper.CalculateFileCrc(local.FilePath);
logger.LogInformation("Backup CRC for {File} is {Crc}", fileName, crc);

The test suite includes end-to-end scenarios that mirror payloads between Azure, AWS, the local file system, and virtual file systems; multi-gigabyte flows execute by default across every provider using 4 MB units per “GB” to keep runs fast while still exercising streaming paths.
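The scaling trick described above can be sketched in a few lines (an assumed scheme matching the "4 MB units per GB" description, not the test suite's actual helper):

```csharp
using System;

// Each simulated "GB" is represented by a 4 MB unit, so a "3 GB" scenario
// streams only ~12 MB while still exercising chunking and streaming paths.
const long UnitBytes = 4L * 1024 * 1024;

static long SimulatedPayloadBytes(int simulatedGigabytes) => simulatedGigabytes * UnitBytes;

Console.WriteLine(SimulatedPayloadBytes(3)); // 12582912
```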

ASP.NET Controllers & Streaming

The ManagedCode.Storage.Server package surfaces upload/download controllers that pipe HTTP streams straight into the storage abstraction. Files can be sent as multipart forms or raw streams, while downloads return FileStreamResult so large assets flow back to the caller without buffering in memory.

// Program.cs / Startup.cs
builder.Services.AddStorageServer(options =>
{
    options.EnableRangeProcessing = true;              // support range/seek operations
    options.InMemoryUploadThresholdBytes = 512 * 1024;  // spill to disk after 512 KB
});

app.MapControllers(); // exposes /api/storage/* endpoints by default

When you need custom routes, validation, or policies, inherit from the base controller and reuse the same streaming helpers:

[Route("api/files")]
public sealed class FilesController : StorageControllerBase<IMyCustomStorage>
{
    public FilesController(
        IMyCustomStorage storage,
        ChunkUploadService chunks,
        StorageServerOptions options)
        : base(storage, chunks, options)
    {
    }

    // Upload a form file directly into storage
    public Task<IActionResult> Upload(IFormFile file, CancellationToken ct) =>
        UploadFormFileAsync(file, ct);

    // Stream a blob to the client in real time
    public Task<IActionResult> Download(string fileName, CancellationToken ct) =>
        DownloadAsStreamAsync(fileName, ct);
}

Need resumable uploads or live progress UI? Call AddStorageSignalR() to enable the optional hub and connect with the ManagedCode.Storage.Client.SignalR package; otherwise, the controllers alone cover straight HTTP streaming scenarios.

Connection modes

Each provider supports two DI patterns: a default registration (Add*StorageAsDefault) that binds the shared IStorage interface, and a provider-specific registration (Add*Storage) that exposes the provider interface (for example IAzureStorage). The sections below show both.

Cloud-drive providers (OneDrive, Google Drive, Dropbox) and CloudKit are configured in Configuring OneDrive, Google Drive, Dropbox, and CloudKit; the same default/provider-specific rules apply.

Azure

Default mode connection:

// Startup.cs
services.AddAzureStorageAsDefault(new AzureStorageOptions
{
    Container = "{YOUR_CONTAINER_NAME}",
    ConnectionString = "{YOUR_CONNECTION_STRING}",
});

Using in default mode:

// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}

Provider-specific mode connection:

// Startup.cs
services.AddAzureStorage(new AzureStorageOptions
{
    Container = "{YOUR_CONTAINER_NAME}",
    ConnectionString = "{YOUR_CONNECTION_STRING}",
});

Using in provider-specific mode:

// MyService.cs
public class MyService
{
    private readonly IAzureStorage _azureStorage;

    public MyService(IAzureStorage azureStorage)
    {
        _azureStorage = azureStorage;
    }
}

Need multiple Azure accounts or containers? Call services.AddAzureStorage("azure-primary", ...) and decorate constructor parameters with [FromKeyedServices("azure-primary")].

Google Cloud

Default mode connection:

// Startup.cs
services.AddGCPStorageAsDefault(opt =>
{
    opt.GoogleCredential = GoogleCredential.FromFile("{PATH_TO_YOUR_CREDENTIALS_FILE}.json");
    opt.BucketOptions = new BucketOptions()
    {
        ProjectId = "{YOUR_API_PROJECT_ID}",
        Bucket = "{YOUR_BUCKET_NAME}",
    };
});

Using in default mode:

// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}

Provider-specific mode connection:

// Startup.cs
services.AddGCPStorage(new GCPStorageOptions
{
    BucketOptions = new BucketOptions()
    {
        ProjectId = "{YOUR_API_PROJECT_ID}",
        Bucket = "{YOUR_BUCKET_NAME}",
    }
});

Using in provider-specific mode:

// MyService.cs
public class MyService
{
    private readonly IGCPStorage _gcpStorage;

    public MyService(IGCPStorage gcpStorage)
    {
        _gcpStorage = gcpStorage;
    }
}

Need parallel GCS buckets? Register them with AddGCPStorage("gcp-secondary", ...) and inject via [FromKeyedServices("gcp-secondary")].
Amazon

Default mode connection:

// Startup.cs
// Tip for LocalStack: configure the client and set ServiceURL to the emulator endpoint.
var awsConfig = new AmazonS3Config
{
    RegionEndpoint = RegionEndpoint.EUWest1,
    ForcePathStyle = true,
    UseHttp = true,
    ServiceURL = "http://localhost:4566" // LocalStack default endpoint
};

services.AddAWSStorageAsDefault(opt =>
{
    opt.PublicKey = "{YOUR_PUBLIC_KEY}";
    opt.SecretKey = "{YOUR_SECRET_KEY}";
    opt.Bucket = "{YOUR_BUCKET_NAME}";
    opt.OriginalOptions = awsConfig;
});

Using in default mode:

// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}

Provider-specific mode connection:

// Startup.cs
services.AddAWSStorage(new AWSStorageOptions
{
    PublicKey = "{YOUR_PUBLIC_KEY}",
    SecretKey = "{YOUR_SECRET_KEY}",
    Bucket = "{YOUR_BUCKET_NAME}",
    OriginalOptions = awsConfig
});

Using in provider-specific mode:

// MyService.cs
public class MyService
{
    private readonly IAWSStorage _storage;

    public MyService(IAWSStorage storage)
    {
        _storage = storage;
    }
}

Need parallel S3 buckets? Register them with AddAWSStorage("aws-backup", ...) and inject via [FromKeyedServices("aws-backup")].
FileSystem

Default mode connection:

// Startup.cs
services.AddFileSystemStorageAsDefault(opt =>
{
    opt.BaseFolder = Path.Combine(Environment.CurrentDirectory, "{YOUR_BUCKET_NAME}");
});

Using in default mode:

// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}

Provider-specific mode connection:

// Startup.cs
services.AddFileSystemStorage(new FileSystemStorageOptions
{
    BaseFolder = Path.Combine(Environment.CurrentDirectory, "{YOUR_BUCKET_NAME}"),
});

Using in provider-specific mode:

// MyService.cs
public class MyService
{
    private readonly IFileSystemStorage _fileSystemStorage;

    public MyService(IFileSystemStorage fileSystemStorage)
    {
        _fileSystemStorage = fileSystemStorage;
    }
}

Mirror to multiple folders? Use AddFileSystemStorage("archive", options => options.BaseFolder = ...) and resolve instances via [FromKeyedServices("archive")].

How to use

The snippets below assume a service class with an injected IStorage:

public class MyService
{
    private readonly IStorage _storage;
    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}

Upload

await _storage.UploadAsync(stream); // any readable Stream instance
await _storage.UploadAsync("some string content");
await _storage.UploadAsync(new FileInfo("D:\\my_report.txt"));

Delete

await _storage.DeleteAsync("my_report.txt");

Download

var localFile = await _storage.DownloadAsync("my_report.txt");

Get metadata

await _storage.GetBlobMetadataAsync("my_report.txt");

Native client

If you need more flexibility, you can access the underlying native client exposed by any IStorage<T>:

_storage.StorageClient

Conclusion

In summary, the Storage library provides a universal interface for accessing and manipulating data across cloud blob storage providers, plus ready-to-host ASP.NET controllers, SignalR streaming endpoints, keyed dependency injection, and a VFS overlay. It makes it easy to switch between providers, or to use several simultaneously, without learning multiple vendor APIs, while staying in full control of routing, thresholds, and mirroring. We hope you find it useful in your own projects!
