End-to-End Storage Workflow

This tutorial walks through the end-to-end process of creating a bucket, uploading a file, and retrieving it, step by step.

Prerequisites

  • Node.js v22+ installed
  • A TypeScript project

    Need a starter project?

    If you don't have an existing project, follow these steps to create a TypeScript project you can use to follow the guides in this section:

    1. Create a new project folder by executing the following command in the terminal:

      mkdir datahaven-project && cd datahaven-project
      
    2. Initialize a package.json file using the correct command for your package manager:

      pnpm init
      
      yarn init
      
      npm init -y
      
    3. Add the TypeScript and Node type definitions to your project using the correct command for your package manager:

      pnpm add -D typescript ts-node @types/node
      
      yarn add -D typescript ts-node @types/node
      
      npm install -D typescript ts-node @types/node
      
    4. Create a tsconfig.json file in the root of your project and paste the following configuration:

      tsconfig.json
      {
          "compilerOptions": {
              "target": "ES2022",
              "module": "nodenext",
              "moduleResolution": "NodeNext",
              "esModuleInterop": true,
              "strict": true,
              "skipLibCheck": true,
              "outDir": "dist",
              "declaration": true,
              "sourceMap": true
          },
          "include": ["src/**/*.ts"]
      }
      
    5. Initialize the src directory:

      mkdir src && touch src/index.ts
      
  • Dependencies installed

  • Clients initialized

  • A file to upload to DataHaven (any file type is accepted; the current testnet file size limit is 5 MB).

Project Structure

This project organizes scripts, client setup, and different types of operations for easy development and deployment.

The following sections build on the helper methods already established in the services folder, so it's important to start with properly configured clients (as mentioned in the Prerequisites section).

datahaven-project/
├── package.json
├── tsconfig.json
└── src/
    ├── files/
    │   └── helloworld.txt
    ├── operations/
    │   ├── fileOperations.ts
    │   └── bucketOperations.ts
    ├── services/
    │   ├── clientService.ts
    │   └── mspService.ts
    └── index.ts

Initialize the Script Entry Point

First, create an index.ts file if you haven't already. Its run method will orchestrate all the logic in this guide, and you’ll replace the labelled placeholders with real code step by step. By now, your services folder (including the MSP and client helper services) should already be created. If not, see the Get Started guide.

The index.ts snippet below also imports bucketOperations.ts and fileOperations.ts, which are not in your project yet; that's expected, as you'll create them later in this guide. All their imports are included up front, so feel free to comment out any imports you don't need until you reach the step that implements that logic.

Add the following code to your index.ts file:

src/index.ts
import '@storagehub/api-augment';
import { initWasm } from '@storagehub-sdk/core';
import { polkadotApi } from './services/clientService.js';
import {
  downloadFile,
  uploadFile,
  verifyDownload,
  waitForBackendFileReady,
  waitForMSPConfirmOnChain,
} from './operations/fileOperations.js';
import { HealthStatus } from '@storagehub-sdk/msp-client';
import { mspClient } from './services/mspService.js';
import {
  createBucket,
  verifyBucketCreation,
  waitForBackendBucketReady,
} from './operations/bucketOperations.js';

async function run() {
  // For anything from @storagehub-sdk/core to work, initWasm() must be
  // called at the top of the file, before any other SDK usage
  await initWasm();

  // --- End-to-end storage flow ---
  // **PLACEHOLDER FOR STEP 1: CHECK MSP HEALTH**
  // **PLACEHOLDER FOR STEP 2: CREATE BUCKET**
  // **PLACEHOLDER FOR STEP 3: VERIFY BUCKET**
  // **PLACEHOLDER FOR STEP 4: WAIT FOR BACKEND TO HAVE BUCKET**
  // **PLACEHOLDER FOR STEP 5: UPLOAD FILE**
  // **PLACEHOLDER FOR STEP 6: WAIT FOR BACKEND TO HAVE FILE**
  // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE**
  // **PLACEHOLDER FOR STEP 8: VERIFY FILE**

  // Disconnect the Polkadot API at the very end
  await polkadotApi.disconnect();
}

await run();

Check MSP Health

Since you are already connected to the MSP client, check its health status before creating a bucket.

  1. Replace the placeholder // **PLACEHOLDER FOR STEP 1: CHECK MSP HEALTH** with the following code:

    src/index.ts // **PLACEHOLDER FOR STEP 1: CHECK MSP HEALTH**
    // 1. Check MSP Health
    const mspHealth: HealthStatus = await mspClient.info.getHealth();
    console.log('MSP Health Status:', mspHealth);
    
  2. Check the health status by running the script:

    ts-node src/index.ts
    

    The response should return a healthy status, like this:

    ts-node src/index.ts
    MSP Health Status: {
        status: 'healthy',
        version: '0.1.0',
        service: 'backend-title',
        components: {
            storage: { status: 'healthy' },
            postgres: { status: 'healthy' },
            rpc: { status: 'healthy' }
        }
    }
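If you want the script to fail fast when the MSP is degraded, you could add a small guard right after the health check. The following is a minimal sketch that assumes the response shape shown above; `assertHealthy` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical guard: throw when the MSP or any of its components
// reports a non-healthy status, so later steps fail fast.
type ComponentStatus = { status: string };
type Health = { status: string; components: Record<string, ComponentStatus> };

export function assertHealthy(health: Health): void {
  // Collect the names of all components that are not reporting healthy
  const degraded = Object.entries(health.components)
    .filter(([, component]) => component.status !== 'healthy')
    .map(([name]) => name);
  if (health.status !== 'healthy' || degraded.length > 0) {
    throw new Error(`MSP unhealthy: ${degraded.join(', ') || health.status}`);
  }
}
```

You could call `assertHealthy(mspHealth)` immediately after the `getHealth()` call so the bucket-creation steps never run against a degraded backend.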

Create a Bucket

Buckets group your files under a specific Main Storage Provider (MSP) and a value proposition, which defines the storage fees you will pay under that MSP.

In the following code, you will pull the MSP's details and value proposition to prepare for bucket creation. Then you will derive the bucket ID, confirm it doesn't already exist, submit a createBucket transaction, wait for confirmation, and finally query the chain to verify that the new bucket's MSP ID and owner match the MSP and account address you are using.

To do all this, you are going to:

  1. Create a getValueProps helper method within mspService.ts.
  2. Create a createBucket helper method within bucketOperations.ts.
  3. Update the index.ts file to trigger the logic you've implemented.

Go through the in-depth instructions as follows:

  1. Add the following helper method to your mspService.ts file to fetch valueProps from the MSP Client:

    src/services/mspService.ts
    // Retrieve MSP value propositions and select one for bucket creation
    const getValueProps = async (): Promise<`0x${string}`> => {
      const valueProps: ValueProp[] = await mspClient.info.getValuePropositions();
      if (!Array.isArray(valueProps) || valueProps.length === 0) {
        throw new Error('No value propositions available from MSP');
      }
      // For simplicity, select the first value proposition and return its ID
      const valuePropId = valueProps[0].id as `0x${string}`;
      console.log(`Chose Value Prop ID: ${valuePropId}`);
      return valuePropId;
    };
    
  2. Add the getValueProps method to the export statement at the bottom of the mspService.ts file.

    mspService.ts
    // Export initialized client and helper functions for use in other modules
    export { mspClient, getMspInfo, getMspHealth, authenticateUser, getValueProps };
    
    View complete mspService.ts file
    src/services/mspService.ts
    import {
      HealthStatus,
      InfoResponse,
      MspClient,
      UserInfo,
      ValueProp,
    } from '@storagehub-sdk/msp-client';
    import { HttpClientConfig } from '@storagehub-sdk/core';
    import { address, walletClient } from './clientService.js';
    
    const NETWORKS = {
      devnet: {
        id: 181222,
        name: 'DataHaven Local Devnet',
        rpcUrl: 'http://127.0.0.1:9666',
        wsUrl: 'ws://127.0.0.1:9666',
        mspUrl: 'http://127.0.0.1:8080/',
        nativeCurrency: { name: 'StorageHub', symbol: 'SH', decimals: 18 },
      },
      testnet: {
        id: 55931,
        name: 'DataHaven Testnet',
        rpcUrl: 'https://services.datahaven-testnet.network/testnet',
        wsUrl: 'wss://services.datahaven-testnet.network/testnet',
        mspUrl: 'https://deo-dh-backend.testnet.datahaven-infra.network/',
        nativeCurrency: { name: 'Mock', symbol: 'MOCK', decimals: 18 },
      },
    };
    
    // Configure the HTTP client to point to the MSP backend
    const httpCfg: HttpClientConfig = { baseUrl: NETWORKS.testnet.mspUrl };
    
    // Initialize a session token for authenticated requests (updated after authentication through SIWE)
    let sessionToken: string | undefined = undefined;
    
    // Provide session information to the MSP client whenever available
    // Returns a token and user address if authenticated, otherwise undefined
    const sessionProvider = async () =>
      sessionToken
        ? ({ token: sessionToken, user: { address: address } } as const)
        : undefined;
    
    // Establish a connection to the Main Storage Provider (MSP) backend
    const mspClient = await MspClient.connect(httpCfg, sessionProvider);
    
    // Retrieve MSP metadata, including its unique ID and version, and log it to the console
    const getMspInfo = async (): Promise<InfoResponse> => {
      const mspInfo = await mspClient.info.getInfo();
      console.log(`MSP ID: ${mspInfo.mspId}`);
      return mspInfo;
    };
    
    // Retrieve and log the MSP’s current health status
    const getMspHealth = async (): Promise<HealthStatus> => {
      const mspHealth = await mspClient.info.getHealth();
      console.log('MSP Health:', mspHealth);
      return mspHealth;
    };
    
    // Authenticate the user via SIWE (Sign-In With Ethereum) using the connected wallet
    // Once authenticated, store the returned session token and retrieve the user’s profile
    const authenticateUser = async (): Promise<UserInfo> => {
      console.log('Authenticating user with MSP via SIWE...');
    
      // In development domain and uri can be arbitrary placeholders,
      // but in production they must match your actual frontend origin.
      const domain = 'localhost';
      const uri = 'http://localhost';
    
      const siweSession = await mspClient.auth.SIWE(walletClient, domain, uri);
      console.log('SIWE Session:', siweSession);
      sessionToken = (siweSession as { token: string }).token;
    
      const profile: UserInfo = await mspClient.auth.getProfile();
      return profile;
    };
    
    // Retrieve MSP value propositions and select one for bucket creation
    const getValueProps = async (): Promise<`0x${string}`> => {
      const valueProps: ValueProp[] = await mspClient.info.getValuePropositions();
      if (!Array.isArray(valueProps) || valueProps.length === 0) {
        throw new Error('No value propositions available from MSP');
      }
      // For simplicity, select the first value proposition and return its ID
      const valuePropId = valueProps[0].id as `0x${string}`;
      console.log(`Chose Value Prop ID: ${valuePropId}`);
      return valuePropId;
    };
    
    // Export initialized client and helper functions for use in other modules
    export { mspClient, getMspInfo, getMspHealth, authenticateUser, getValueProps };
    
  3. Next, make sure to create a new folder called operations within the src folder (at the same level as the services folder) like so:

    mkdir src/operations
    
  4. Then, create a new file within the operations folder called bucketOperations.ts.

  5. Add the following code:

    src/operations/bucketOperations.ts
    import {
      storageHubClient,
      address,
      publicClient,
      polkadotApi,
    } from '../services/clientService.js';
    import {
      getMspInfo,
      getValueProps,
      mspClient,
    } from '../services/mspService.js';
    
    export async function createBucket(bucketName: string) {
      // Get basic MSP information from the MSP including its ID
      const { mspId } = await getMspInfo();
    
      // Choose one of the value props retrieved from the MSP through the helper function
      const valuePropId = await getValueProps();
      console.log(`Value Prop ID: ${valuePropId}`);
    
      // Derive bucket ID
      const bucketId = (await storageHubClient.deriveBucketId(
        address,
        bucketName
      )) as string;
      console.log(`Derived bucket ID: ${bucketId}`);
    
      // Check that the bucket doesn't exist yet
      const bucketBeforeCreation = await polkadotApi.query.providers.buckets(
        bucketId
      );
      console.log('Bucket before creation is empty', bucketBeforeCreation.isEmpty);
      if (!bucketBeforeCreation.isEmpty) {
        throw new Error(`Bucket already exists: ${bucketId}`);
      }
    
      const isPrivate = false;
    
      // Create bucket on chain
      const txHash: `0x${string}` | undefined = await storageHubClient.createBucket(
        mspId as `0x${string}`,
        bucketName,
        isPrivate,
        valuePropId
      );
    
      console.log('createBucket() txHash:', txHash);
      if (!txHash) {
        throw new Error('createBucket() did not return a transaction hash');
      }
    
      // Wait for transaction receipt
      const txReceipt = await publicClient.waitForTransactionReceipt({
        hash: txHash,
      });
      if (txReceipt.status !== 'success') {
        throw new Error(`Bucket creation failed: ${txHash}`);
      }
    
      return { bucketId, txReceipt };
    }
    

    The createBucket helper handles the full lifecycle of a bucket-creation transaction:

    • It fetches the MSP ID and selects a value prop (required to create a bucket).
    • It derives a deterministic bucket ID from your wallet address and chosen bucket name.
    • Before sending any on-chain transaction, it checks whether the bucket already exists to prevent accidental overwrites.

    Once the check passes, the createBucket extrinsic is called via the StorageHub client, returning the bucketId and txReceipt.

  6. Now that you've extracted all the bucket-creation logic into its own method, update the index.ts file.

    Replace the placeholder // **PLACEHOLDER FOR STEP 2: CREATE BUCKET** with the following code:

    src/index.ts // **PLACEHOLDER FOR STEP 2: CREATE BUCKET**
    // 2. Create Bucket
    const bucketName = 'init-bucket';
    const { bucketId, txReceipt } = await createBucket(bucketName);
    console.log(`Created Bucket ID: ${bucketId}`);
    console.log('createBucket() txReceipt:', txReceipt);
    

    Note

    You can also get a list of all your created buckets within a certain MSP using the mspClient.buckets.listBuckets() function. Make sure you are authenticated before triggering this function.

    If you run the script multiple times, use a new bucketName to avoid a revert, or modify the logic to use your existing bucket in later steps.
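One way to avoid the revert on repeated runs is to derive a fresh bucket name on each run, for example by suffixing a timestamp. A sketch; `uniqueBucketName` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: derive a unique bucket name per run to avoid
// "Bucket already exists" reverts when re-running the script.
export function uniqueBucketName(base: string, now: Date = new Date()): string {
  // Turn an ISO timestamp into a compact YYYYMMDDHHmmss suffix
  const stamp = now.toISOString().replace(/[-:.TZ]/g, '').slice(0, 14);
  return `${base}-${stamp}`;
}
```

You could then call `createBucket(uniqueBucketName('init-bucket'))` instead of hard-coding the name; note that the derived bucket ID changes with every name, so keep track of the ID if you want to reuse the bucket later.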

Check if Bucket is On-Chain

Next, verify that the bucket was created successfully on-chain and confirm its stored data. Just like with the createBucket method, you can extract all the bucket verification logic into its own verifyBucketCreation method.

  1. Add the following code in your bucketOperations.ts file:

    bucketOperations.ts
    // Verify bucket creation on chain and return bucket data
    export async function verifyBucketCreation(bucketId: string) {
      const { mspId } = await getMspInfo();
    
      const bucket = await polkadotApi.query.providers.buckets(bucketId);
      if (bucket.isEmpty) {
        throw new Error('Bucket not found on chain after creation');
      }
    
      const bucketData = bucket.unwrap().toHuman() as any;
      console.log(
        'Bucket userId matches initial bucket owner address',
        bucketData.userId === address
      );
      console.log(
        `Bucket MSPId matches initial MSPId: ${bucketData.mspId === mspId}`
      );
      return bucketData;
    }
    
  2. Update the index.ts file to trigger the helper method you just implemented:

    index.ts // **PLACEHOLDER FOR STEP 3: VERIFY BUCKET**
    // 3. Verify bucket exists on chain
    const bucketData = await verifyBucketCreation(bucketId);
    console.log('Bucket data:', bucketData);
    

    The response should look something like this:

    ts-node src/index.ts
    Bucket userId matches initial bucket owner address: true
    Bucket MSPId matches initial MSPId: true
    Bucket data: {
      root: '0x03170a2e7597b7b7e3d84c05391d139a62b157e78786d8c082f29dcf4c111314',
      userId: '0x00FA35D84a43db75467D2B2c1ed8974aCA57223e',
      mspId: '0x0000000000000000000000000000000000000000000000000000000000000001',
      private: false,
      readAccessGroupId: null,
      size_: '0',
      valuePropId: '0x628a23c7aa64902e13f63ffdd0725e07723745f84cabda048d901020d200da1e'
    }
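Note that EVM addresses encode a checksum in their letter casing (as in the `userId` above), so two representations of the same address can differ only by case, and a strict `===` comparison can report a false mismatch. If you hit this, a case-insensitive comparison avoids the problem; `sameAddress` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: compare two hex addresses ignoring checksum casing,
// since casing carries no identity information in an EVM address.
export function sameAddress(a: string, b: string): boolean {
  return a.toLowerCase() === b.toLowerCase();
}
```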

View complete index.ts file up until now
src/index.ts
import '@storagehub/api-augment';
import { initWasm } from '@storagehub-sdk/core';
import { polkadotApi } from './services/clientService.js';
import {
  downloadFile,
  uploadFile,
  verifyDownload,
  waitForBackendFileReady,
  waitForMSPConfirmOnChain,
} from './operations/fileOperations.js';
import { HealthStatus } from '@storagehub-sdk/msp-client';
import { mspClient } from './services/mspService.js';
import {
  createBucket,
  verifyBucketCreation,
  waitForBackendBucketReady,
} from './operations/bucketOperations.js';

async function run() {
  // For anything from @storagehub-sdk/core to work, initWasm() must be
  // called at the top of the file, before any other SDK usage
  await initWasm();

  // --- End-to-end storage flow ---
  // 1. Check MSP Health
  const mspHealth: HealthStatus = await mspClient.info.getHealth();
  console.log('MSP Health Status:', mspHealth);
  // 2. Create Bucket
  const bucketName = 'init-bucket';
  const { bucketId, txReceipt } = await createBucket(bucketName);
  console.log(`Created Bucket ID: ${bucketId}`);
  console.log('createBucket() txReceipt:', txReceipt);
  // 3. Verify bucket exists on chain
  const bucketData = await verifyBucketCreation(bucketId);
  console.log('Bucket data:', bucketData);

  // **PLACEHOLDER FOR STEP 4: WAIT FOR BACKEND TO HAVE BUCKET**
  // **PLACEHOLDER FOR STEP 5: UPLOAD FILE**
  // **PLACEHOLDER FOR STEP 6: WAIT FOR BACKEND TO HAVE FILE**
  // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE**
  // **PLACEHOLDER FOR STEP 8: VERIFY FILE**

  // Disconnect the Polkadot API at the very end
  await polkadotApi.disconnect();
}

await run();

You’ve successfully created a bucket and verified it on-chain.

Wait for Backend to Have Bucket

Right after a bucket is created, your script will immediately try to upload a file. At this point, the bucket exists on-chain, but DataHaven’s indexer may not have processed the block yet. Until the indexer catches up, the MSP backend can’t resolve the new bucket ID, so any upload attempt will fail. To avoid that race condition, you’ll add a small polling helper that waits for the indexer to acknowledge the bucket before continuing.

  1. Add the following code in your bucketOperations.ts file:

    bucketOperations.ts
    export async function waitForBackendBucketReady(bucketId: string) {
      const maxAttempts = 10; // Number of polling attempts
      const delayMs = 2000; // Delay between attempts in milliseconds
    
      for (let i = 0; i < maxAttempts; i++) {
        console.log(
          `Checking for bucket in MSP backend, attempt ${
            i + 1
          } of ${maxAttempts}...`
        );
        try {
          // Query the MSP backend for the bucket metadata.
          // If the backend has synced the bucket, this call resolves successfully.
          const bucket = await mspClient.buckets.getBucket(bucketId);
    
          if (bucket) {
            // Bucket is now available and the script can safely continue
            console.log('Bucket found in MSP backend:', bucket);
            return;
          }
        } catch (error: any) {
          // Backend hasn’t indexed the bucket yet
          if (error.status === 404 || error.body?.error === 'Not found: Record') {
            console.log(`Bucket not found in MSP backend yet (404).`);
          } else {
            // Any other error is unexpected and should fail the entire workflow
            console.log('Unexpected error while fetching bucket from MSP:', error);
            throw error;
          }
        }
        // Wait before polling again
        await new Promise((r) => setTimeout(r, delayMs));
      }
      // All attempts exhausted
      throw new Error(`Bucket ${bucketId} not found in MSP backend after waiting`);
    }
    
  2. Update the index.ts file to trigger the helper method you just implemented:

    index.ts // **PLACEHOLDER FOR STEP 4: WAIT FOR BACKEND TO HAVE BUCKET**
    // 4. Wait until indexer/backend knows about the bucket
    await waitForBackendBucketReady(bucketId);
    

    The response should look something like this:

    ts-node src/index.ts
    Checking for bucket in MSP backend, attempt 1 of 10...
    Bucket not found in MSP backend yet (404).
    Checking for bucket in MSP backend, attempt 2 of 10...
    Bucket not found in MSP backend yet (404).
    Checking for bucket in MSP backend, attempt 3 of 10...
    Bucket not found in MSP backend yet (404).
    Checking for bucket in MSP backend, attempt 4 of 10...
    Bucket not found in MSP backend yet (404).
    Checking for bucket in MSP backend, attempt 5 of 10...
    Bucket not found in MSP backend yet (404).
    Checking for bucket in MSP backend, attempt 6 of 10...
    Bucket found in MSP backend: {
      bucketId: '0x750337cba34ddcfdec3101cf8cc5ae09042a921b5571971533af2aab372604b9',
      name: 'init-bucket',
      root: '0x0000000000000000000000000000000000000000000000000000000000000000',
      isPublic: true,
      sizeBytes: 0,
      valuePropId: '0x628a23c7aa64902e13f63ffdd0725e07723745f84cabda048d901020d200da1e',
      fileCount: 0
    }
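The polling pattern in waitForBackendBucketReady can be factored into a small generic helper if you end up needing it for other resources (such as waiting on files later in this flow). A sketch with hypothetical names; `pollUntil` is not part of the SDK:

```typescript
// Hypothetical generic polling helper: retry an async probe until it
// returns a value, treating `null` as "not ready yet".
export async function pollUntil<T>(
  probe: () => Promise<T | null>,
  { attempts = 10, delayMs = 2000 }: { attempts?: number; delayMs?: number } = {}
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    const result = await probe();
    if (result !== null) return result;
    // Wait before polling again
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Resource not ready after ${attempts} attempts`);
}
```

With a helper like this, the bucket wait reduces to a probe that returns the bucket metadata once the backend responds, and `null` while it still returns 404.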

View complete bucketOperations.ts
src/operations/bucketOperations.ts
import {
  storageHubClient,
  address,
  publicClient,
  polkadotApi,
} from '../services/clientService.js';
import {
  getMspInfo,
  getValueProps,
  mspClient,
} from '../services/mspService.js';

export async function createBucket(bucketName: string) {
  // Get basic MSP information from the MSP including its ID
  const { mspId } = await getMspInfo();

  // Choose one of the value props retrieved from the MSP through the helper function
  const valuePropId = await getValueProps();
  console.log(`Value Prop ID: ${valuePropId}`);

  // Derive bucket ID
  const bucketId = (await storageHubClient.deriveBucketId(
    address,
    bucketName
  )) as string;
  console.log(`Derived bucket ID: ${bucketId}`);

  // Check that the bucket doesn't exist yet
  const bucketBeforeCreation = await polkadotApi.query.providers.buckets(
    bucketId
  );
  console.log('Bucket before creation is empty', bucketBeforeCreation.isEmpty);
  if (!bucketBeforeCreation.isEmpty) {
    throw new Error(`Bucket already exists: ${bucketId}`);
  }

  const isPrivate = false;

  // Create bucket on chain
  const txHash: `0x${string}` | undefined = await storageHubClient.createBucket(
    mspId as `0x${string}`,
    bucketName,
    isPrivate,
    valuePropId
  );

  console.log('createBucket() txHash:', txHash);
  if (!txHash) {
    throw new Error('createBucket() did not return a transaction hash');
  }

  // Wait for transaction receipt
  const txReceipt = await publicClient.waitForTransactionReceipt({
    hash: txHash,
  });
  if (txReceipt.status !== 'success') {
    throw new Error(`Bucket creation failed: ${txHash}`);
  }

  return { bucketId, txReceipt };
}

// Verify bucket creation on chain and return bucket data
export async function verifyBucketCreation(bucketId: string) {
  const { mspId } = await getMspInfo();

  const bucket = await polkadotApi.query.providers.buckets(bucketId);
  if (bucket.isEmpty) {
    throw new Error('Bucket not found on chain after creation');
  }

  const bucketData = bucket.unwrap().toHuman() as any;
  console.log(
    'Bucket userId matches initial bucket owner address',
    bucketData.userId === address
  );
  console.log(
    `Bucket MSPId matches initial MSPId: ${bucketData.mspId === mspId}`
  );
  return bucketData;
}

export async function waitForBackendBucketReady(bucketId: string) {
  const maxAttempts = 10; // Number of polling attempts
  const delayMs = 2000; // Delay between attempts in milliseconds

  for (let i = 0; i < maxAttempts; i++) {
    console.log(
      `Checking for bucket in MSP backend, attempt ${
        i + 1
      } of ${maxAttempts}...`
    );
    try {
      // Query the MSP backend for the bucket metadata.
      // If the backend has synced the bucket, this call resolves successfully.
      const bucket = await mspClient.buckets.getBucket(bucketId);

      if (bucket) {
        // Bucket is now available and the script can safely continue
        console.log('Bucket found in MSP backend:', bucket);
        return;
      }
    } catch (error: any) {
      // Backend hasn’t indexed the bucket yet
      if (error.status === 404 || error.body?.error === 'Not found: Record') {
        console.log(`Bucket not found in MSP backend yet (404).`);
      } else {
        // Any other error is unexpected and should fail the entire workflow
        console.log('Unexpected error while fetching bucket from MSP:', error);
        throw error;
      }
    }
    // Wait before polling again
    await new Promise((r) => setTimeout(r, delayMs));
  }
  // All attempts exhausted
  throw new Error(`Bucket ${bucketId} not found in MSP backend after waiting`);
}

Upload a File

Ensure your file is ready to upload. This demonstration uses a .txt file named helloworld.txt stored in the src/files folder.

In this section you will learn how to upload a file to DataHaven by following a three-step flow:

  1. Issue a Storage Request: Register your intent to store a file in your bucket and set its replication policy. Initialize FileManager, compute the file’s fingerprint, fetch MSP info (and extract peer IDs), choose a replication level and replica count, then call issueStorageRequest.
  2. Verify If Storage Request Is On-Chain: Derive the deterministic file key, query on-chain state, and confirm the request exists and matches your local fingerprint and bucket.
  3. Upload a File: Send the file bytes to the MSP, linked to your storage request. Confirm that the upload receipt indicates a successful upload.

All three of these steps will be handled within the uploadFile helper method as part of the fileOperations.ts file. After that, you will update the index.ts file accordingly to trigger this new logic.
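Since the testnet currently caps files at 5 MB (see Prerequisites), you may want to check the size up front and fail fast before issuing a storage request. A minimal sketch, assuming that 5 MB limit; `assertWithinSizeLimit` is a hypothetical helper, not part of the SDK:

```typescript
// Assumed 5 MB testnet limit, per the prerequisites of this guide
const MAX_FILE_BYTES = 5 * 1024 * 1024;

// Hypothetical pre-flight check: reject oversized files locally instead
// of letting the storage request or upload fail later.
export function assertWithinSizeLimit(sizeBytes: number): void {
  if (sizeBytes > MAX_FILE_BYTES) {
    throw new Error(
      `File is ${sizeBytes} bytes; testnet limit is ${MAX_FILE_BYTES} bytes`
    );
  }
}
```

In uploadFile, you could call `assertWithinSizeLimit(fileSize)` right after reading the file size with `statSync`.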

Add Method to Upload File

Create a new file within the operations folder called fileOperations.ts and add the following code:

operations/fileOperations.ts
import { createReadStream, statSync, createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { FileManager, ReplicationLevel } from '@storagehub-sdk/core';
import { TypeRegistry } from '@polkadot/types';
import { AccountId20, H256 } from '@polkadot/types/interfaces';
import {
  storageHubClient,
  address,
  publicClient,
  polkadotApi,
  account,
} from '../services/clientService.js';
import {
  mspClient,
  getMspInfo,
  authenticateUser,
} from '../services/mspService.js';
import { DownloadResult } from '@storagehub-sdk/msp-client';
import { PalletFileSystemStorageRequestMetadata } from '@polkadot/types/lookup';

// Add helper methods here

To implement the uploadFile helper method, add the following code to the fileOperations.ts file:

operations/fileOperations.ts // Add helper methods here
export async function uploadFile(
  bucketId: string,
  filePath: string,
  fileName: string
) {
  //   ISSUE STORAGE REQUEST

  // Set up FileManager
  const fileSize = statSync(filePath).size;
  const fileManager = new FileManager({
    size: fileSize,
    stream: () =>
      Readable.toWeb(createReadStream(filePath)) as ReadableStream<Uint8Array>,
  });

  // Get file details

  const fingerprint = await fileManager.getFingerprint();
  console.log(`Fingerprint: ${fingerprint.toHex()}`);

  const fileSizeBigInt = BigInt(fileManager.getFileSize());
  console.log(`File size: ${fileSize} bytes`);

  // Get MSP details

  // Fetch MSP details from the backend (includes its on-chain ID and libp2p addresses)
  const { mspId, multiaddresses } = await getMspInfo();
  // Ensure the MSP exposes at least one multiaddress (required to reach it over libp2p)
  if (!multiaddresses?.length) {
    throw new Error('MSP multiaddresses are missing');
  }
  // Extract the MSP’s libp2p peer IDs from the multiaddresses
  // Each address should contain a `/p2p/<peerId>` segment
  const peerIds: string[] = extractPeerIDs(multiaddresses);
  // Validate that at least one valid peer ID was found
  if (peerIds.length === 0) {
    throw new Error('MSP multiaddresses had no /p2p/<peerId> segment');
  }

  // Extracts libp2p peer IDs from a list of multiaddresses.
  // A multiaddress commonly ends with `/p2p/<peerId>`, so this function
  // splits on that delimiter and returns the trailing segment when present.
  function extractPeerIDs(multiaddresses: string[]): string[] {
    return (multiaddresses ?? [])
      .map((addr) => addr.split('/p2p/').pop())
      .filter((id): id is string => !!id);
  }

  // Set the redundancy policy for this request.
  // Custom replication allows the client to specify an exact replica count.
  const replicationLevel = ReplicationLevel.Custom;
  const replicas = 1;

  // Issue storage request
  const txHash: `0x${string}` | undefined =
    await storageHubClient.issueStorageRequest(
      bucketId as `0x${string}`,
      fileName,
      fingerprint.toHex() as `0x${string}`,
      fileSizeBigInt,
      mspId as `0x${string}`,
      peerIds,
      replicationLevel,
      replicas
    );
  console.log('issueStorageRequest() txHash:', txHash);
  if (!txHash) {
    throw new Error('issueStorageRequest() did not return a transaction hash');
  }

  // Wait for storage request transaction
  const receipt = await publicClient.waitForTransactionReceipt({
    hash: txHash,
  });
  if (receipt.status !== 'success') {
    throw new Error(`Storage request failed: ${txHash}`);
  }
  console.log('issueStorageRequest() txReceipt:', receipt);

  //   VERIFY STORAGE REQUEST ON CHAIN

  // Compute file key
  const registry = new TypeRegistry();
  const owner = registry.createType(
    'AccountId20',
    account.address
  ) as AccountId20;
  const bucketIdH256 = registry.createType('H256', bucketId) as H256;
  const fileKey = await fileManager.computeFileKey(
    owner,
    bucketIdH256,
    fileName
  );

  // Verify storage request on chain
  const storageRequest = await polkadotApi.query.fileSystem.storageRequests(
    fileKey
  );
  if (!storageRequest.isSome) {
    throw new Error('Storage request not found on chain');
  }

  // Read the storage request data
  const storageRequestData = storageRequest.unwrap().toHuman() as any;
  console.log('Storage request data:', storageRequestData);
  console.log(
    'Storage request bucketId matches initial bucketId:',
    storageRequestData.bucketId === bucketId
  );
  console.log(
    'Storage request fingerprint matches initial fingerprint',
    storageRequestData.fingerprint === fingerprint.toString()
  );

  //   UPLOAD FILE TO MSP

  // Authenticate bucket owner address with MSP prior to uploading file
  const authProfile = await authenticateUser();
  console.log('Authenticated user profile:', authProfile);

  // Upload file to MSP
  const uploadReceipt = await mspClient.files.uploadFile(
    bucketId,
    fileKey.toHex(),
    await fileManager.getFileBlob(),
    address,
    fileName
  );
  console.log('File upload receipt:', uploadReceipt);

  if (uploadReceipt.status !== 'upload_successful') {
    throw new Error('File upload to MSP failed');
  }

  return { fileKey, uploadReceipt };
}

Call the Upload File Helper Method

Replace the placeholder // **PLACEHOLDER FOR STEP 5: UPLOAD FILE** with the following code:

src/index.ts // **PLACEHOLDER FOR STEP 5: UPLOAD FILE**
  // 5. Upload file
  const fileName = 'helloworld.txt';
  const filePath = new URL(`./files/${fileName}`, import.meta.url).pathname;

  const { fileKey, uploadReceipt } = await uploadFile(
    bucketId,
    filePath,
    fileName
  );
  console.log(`File uploaded: ${fileKey}`);
  console.log(`Status: ${uploadReceipt.status}`);
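A note on path handling: `new URL(...).pathname` works on POSIX systems but leaves percent-encoding in place and is not a valid filesystem path on Windows. If portability matters, Node's `fileURLToPath` converts a file URL into a proper OS path. A minimal sketch (the `/tmp` path is just an illustration):

```typescript
import { fileURLToPath, pathToFileURL } from 'node:url';

// pathname keeps percent-encoding (spaces become %20);
// fileURLToPath decodes it back into a real OS path.
const url = pathToFileURL('/tmp/my files/helloworld.txt');
console.log(url.pathname);       // "/tmp/my%20files/helloworld.txt"
console.log(fileURLToPath(url)); // "/tmp/my files/helloworld.txt"
```

In the snippet above, this would amount to `fileURLToPath(new URL('./files/' + fileName, import.meta.url))`.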

After a successful file upload, the logs should look similar to the following:

ts-node index.ts
issueStorageRequest() txHash: 0x1cb9446510d9f204c93f1c348e0a13422adef91f1740ea0fdb1534e3ccb232ef
issueStorageRequest() txReceipt: {
  transactionHash: '0x1cb9446510d9f204c93f1c348e0a13422adef91f1740ea0fdb1534e3ccb232ef',
  transactionIndex: 0,
  blockHash: '0x0cd98b5d6050b926e6876a5b09124d1840e2c94d95faffdd6668a659e3c5c6a7',
  from: '0x00fa35d84a43db75467d2b2c1ed8974aca57223e',
  to: '0x0000000000000000000000000000000000000404',
  blockNumber: 98684n,
  cumulativeGasUsed: 239712n,
  gasUsed: 239712n,
  contractAddress: null,
  logs: [
    {
      address: '0x0000000000000000000000000000000000000404',
      topics: [Array],
      data: '0x',
      blockHash: '0x0cd98b5d6050b926e6876a5b09124d1840e2c94d95faffdd6668a659e3c5c6a7',
      blockNumber: 98684n,
      transactionHash: '0xfb344dc05359ee4d13189e65fc3230a1998a1802d3a0cf929ffb80a0670d7ce0',
      transactionIndex: 0,
      logIndex: 0,
      transactionLogIndex: '0x0',
      removed: false
    }
  ],
  logsBloom: '0x00000000000000040000000000000000000000000000000000000000000000040000000000000000000000000001000000000000000000000000080000000000000000040000000000000000000000000000000000000140000000000000000000000000000000000000000000000400000000100000000000000000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800800000000000000000000000000200000000000000000000000010000000000000000000000000000080000',
  status: 'success',
  effectiveGasPrice: 1000000000n,
  type: 'legacy'
}
Storage request data: {
  requestedAt: '387,185',
  expiresAt: '387,295',
  owner: '0x00FA35D84a43db75467D2B2c1ed8974aCA57223e',
  bucketId: '0x8009cc4028ab4c8e333b13d38b840107f8467e27be11e9624e3b0d505314a5da',
  location: 'helloworld.txt',
  fingerprint: '0x1bc3a71173c16c1eee04f7e7cf2591678b0b6cdf08eb81c638ae60a38b706aad',
  size_: '18',
  msp: [
    '0x0000000000000000000000000000000000000000000000000000000000000001',
    false
  ],
  userPeerIds: [
    '12D3KooWNEor6iiEAbZhCXqJbXibdjethDY8oeDoieVVxpZhQcW1',
    '12D3KooWNEor6iiEAbZhCXqJbXibdjethDY8oeDoieVVxpZhQcW1',
    '12D3KooWNEor6iiEAbZhCXqJbXibdjethDY8oeDoieVVxpZhQcW1'
  ],
  bspsRequired: '1',
  bspsConfirmed: '0',
  bspsVolunteered: '0',
  depositPaid: '1,000,010,114,925,524,930'
}
File upload receipt: {
  status: 'upload_successful',
  fileKey: '0x8345bdd406fd9df119757b77c84e16a2e304276372dc21cb37a69a471ee093a6',
  bucketId: '0xdd2148ff63c15826ab42953a9d214770e6c2a73b22b83d28819a1777ab9d1322',
  fingerprint: '0x1bc3a71173c16c1eee04f7e7cf2591678b0b6cdf08eb81c638ae60a38b706aad',
  location: 'helloworld.txt'
}
File uploaded: 0x8345bdd406fd9df119757b77c84e16a2e304276372dc21cb37a69a471ee093a6
Status: upload_successful
View complete index.ts up until this point
src/index.ts
import '@storagehub/api-augment';
import { initWasm } from '@storagehub-sdk/core';
import { polkadotApi } from './services/clientService.js';
import {
  downloadFile,
  uploadFile,
  verifyDownload,
  waitForBackendFileReady,
  waitForMSPConfirmOnChain,
} from './operations/fileOperations.js';
import { HealthStatus } from '@storagehub-sdk/msp-client';
import { mspClient } from './services/mspService.js';
import {
  createBucket,
  verifyBucketCreation,
  waitForBackendBucketReady,
} from './operations/bucketOperations.js';

async function run() {
  // For anything from @storagehub-sdk/core to work, initWasm() is required
  // on top of the file
  await initWasm();

  // --- End-to-end storage flow ---
  // 1. Check MSP Health
  const mspHealth: HealthStatus = await mspClient.info.getHealth();
  console.log('MSP Health Status:', mspHealth);
  // 2. Create Bucket
  const bucketName = 'init-bucket';
  const { bucketId, txReceipt } = await createBucket(bucketName);
  console.log(`Created Bucket ID: ${bucketId}`);
  console.log(`createBucket() txReceipt: ${txReceipt}`);
  // 3. Verify bucket exists on chain
  const bucketData = await verifyBucketCreation(bucketId);
  console.log('Bucket data:', bucketData);
  // 4. Wait until indexer/backend knows about the bucket
  await waitForBackendBucketReady(bucketId);
  // 5. Upload file
  const fileName = 'helloworld.txt';
  const filePath = new URL(`./files/${fileName}`, import.meta.url).pathname;

  const { fileKey, uploadReceipt } = await uploadFile(
    bucketId,
    filePath,
    fileName
  );
  console.log(`File uploaded: ${fileKey}`);
  console.log(`Status: ${uploadReceipt.status}`);

  // **PLACEHOLDER FOR STEP 6: WAIT FOR BACKEND TO HAVE FILE**
  // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE**
  // **PLACEHOLDER FOR STEP 8: VERIFY FILE**

  // Disconnect the Polkadot API at the very end
  await polkadotApi.disconnect();
}

await run();

Wait for Backend to Have File

In this step, you wire in two small helper methods:

  1. waitForMSPConfirmOnChain: Polls the DataHaven runtime until the MSP has confirmed the storage request on-chain.
  2. waitForBackendFileReady: Polls the MSP backend using mspClient.files.getFileInfo(bucketId, fileKey) until the file metadata becomes available. Even if the file is confirmed on-chain, the backend may not yet be aware of it.

Once both checks pass, the file is committed on-chain and the MSP backend is ready to serve it, so the subsequent download call won't intermittently fail with a 404 while the system is still syncing.
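Both wait helpers follow the same poll-with-timeout shape: try a check, sleep, give up after a fixed number of attempts. As a generic sketch of that pattern (`pollUntil` is a hypothetical helper, not part of the SDK):

```typescript
// Polls `check` until it returns a value, sleeping `delayMs` between
// attempts and throwing once `maxAttempts` is exhausted.
async function pollUntil<T>(
  check: () => Promise<T | undefined>,
  { maxAttempts = 10, delayMs = 2000 } = {}
): Promise<T> {
  for (let i = 0; i < maxAttempts; i++) {
    const result = await check(); // undefined means "not ready yet"
    if (result !== undefined) return result;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error(`Condition not met after ${maxAttempts * delayMs} ms`);
}
```

The two helpers in this step keep their loops explicit so that each failure mode can be logged individually, but both could be expressed in terms of a helper like this.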

  1. Add the following code in your fileOperations.ts file:

    fileOperations.ts
    export async function waitForMSPConfirmOnChain(fileKey: string) {
      const maxAttempts = 10; // Number of polling attempts
      const delayMs = 2000; // Delay between attempts in milliseconds
    
      for (let i = 0; i < maxAttempts; i++) {
        console.log(
          `Check storage request has been confirmed by the MSP on-chain, attempt ${
            i + 1
          } of ${maxAttempts}...`
        );
    
        // Query the runtime for the StorageRequest entry associated with this fileKey
        const req = await polkadotApi.query.fileSystem.storageRequests(fileKey);
    
        // StorageRequest removed from state before confirmation is an error
        if (req.isNone) {
          throw new Error(
            `StorageRequest for ${fileKey} no longer exists on-chain.`
          );
        }
    
        // Decode the on-chain metadata struct
        const data: PalletFileSystemStorageRequestMetadata = req.unwrap();
    
        // Extract the MSP confirmation tuple (mspId, bool)
        const mspTuple = data.msp.isSome ? data.msp.unwrap() : null;
    
        // The second value in the tuple is a SCALE Bool (codec), so convert using .isTrue
        const mspConfirmed = mspTuple ? (mspTuple[1] as any).isTrue : false;
    
        // If MSP has confirmed the storage request, we’re good to proceed
        if (mspConfirmed) {
          console.log('Storage request confirmed by MSP on-chain');
          return;
        }
    
        // Wait before polling again
        await new Promise((r) => setTimeout(r, delayMs));
      }
    
      // All attempts exhausted
      throw new Error(
        `MSP did not confirm storage request for ${fileKey} within ${
          maxAttempts * delayMs
        } ms`
      );
    }
    
    export async function waitForBackendFileReady(
      bucketId: string,
      fileKey: string
    ) {
      const maxAttempts = 15; // Number of polling attempts
      const delayMs = 2000; // Delay between attempts in milliseconds
    
      for (let i = 0; i < maxAttempts; i++) {
        console.log(
          `Checking for file in MSP backend, attempt ${i + 1} of ${maxAttempts}...`
        );
    
        try {
          // Query MSP backend for the file metadata
          const fileInfo = await mspClient.files.getFileInfo(bucketId, fileKey);
    
          // File is fully ready — backend has indexed it and can serve it
          if (fileInfo.status === 'ready') {
            console.log('File found in MSP backend:', fileInfo);
            return fileInfo;
          }
    
          // Failure statuses (irrecoverable for this upload lifecycle)
          if (fileInfo.status === 'revoked') {
            throw new Error('File upload was cancelled by user');
          } else if (fileInfo.status === 'rejected') {
            throw new Error('File upload was rejected by MSP');
          } else if (fileInfo.status === 'expired') {
            throw new Error('File upload request expired before MSP processed it');
          }
    
          // Otherwise still pending (indexer not done, MSP still syncing, etc.)
          console.log(`File status is "${fileInfo.status}", waiting...`);
        } catch (error: any) {
          if (error?.status === 404 || error?.body?.error === 'Not found: Record') {
            // Handle "not yet indexed" as a *non-fatal* condition
            console.log(
              'File not yet indexed in MSP backend (404 Not Found). Waiting before retry...'
            );
          } else {
            // Any unexpected backend error should stop the workflow and surface to the caller
            console.log('Unexpected error while fetching file from MSP:', error);
            throw error;
          }
        }
    
        // Wait before polling again
        await new Promise((r) => setTimeout(r, delayMs));
      }
    
      // All attempts exhausted
      throw new Error('Timed out waiting for MSP backend to mark file as ready');
    }
    
  2. Update the index.ts file to trigger the helper method you just implemented:

    index.ts // **PLACEHOLDER FOR STEP 6: WAIT FOR BACKEND TO HAVE FILE**
    // 6. Wait until indexer/backend knows about the file
    await waitForMSPConfirmOnChain(fileKey.toHex());
    await waitForBackendFileReady(bucketId, fileKey.toHex());
    

    The response should look something like this:

    ts-node index.ts
    Check storage request has been confirmed by the MSP on-chain, attempt 1 of 10...
    Check storage request has been confirmed by the MSP on-chain, attempt 2 of 10...
    Check storage request has been confirmed by the MSP on-chain, attempt 3 of 10...
    Storage request confirmed by MSP on-chain
    Checking for file in MSP backend, attempt 1 of 15...
    File not yet indexed in MSP backend (404 Not Found). Waiting before retry...
    Checking for file in MSP backend, attempt 2 of 15...
    File not yet indexed in MSP backend (404 Not Found). Waiting before retry...
    Checking for file in MSP backend, attempt 3 of 15...
    File not yet indexed in MSP backend (404 Not Found). Waiting before retry...
    Checking for file in MSP backend, attempt 4 of 15...
    File not yet indexed in MSP backend (404 Not Found). Waiting before retry...
    Checking for file in MSP backend, attempt 5 of 15...
    File status is "inProgress", waiting...
    Checking for file in MSP backend, attempt 6 of 15...
    File status is "inProgress", waiting...
    Checking for file in MSP backend, attempt 7 of 15...
    File status is "inProgress", waiting...
    Checking for file in MSP backend, attempt 8 of 15...
    File status is "inProgress", waiting...
    Checking for file in MSP backend, attempt 9 of 15...
    File status is "inProgress", waiting...
    Checking for file in MSP backend, attempt 10 of 15...
    File found in MSP backend: {
      fileKey: '0xd80ba1a305f49240f0c18adb00532f284941455cb2e46c137ccd38755be198dd',
      fingerprint: '0x1bc3a71173c16c1eee04f7e7cf2591678b0b6cdf08eb81c638ae60a38b706aad',
      bucketId: '0x750337cba34ddcfdec3101cf8cc5ae09042a921b5571971533af2aab372604b9',
      location: 'helloworld.txt',
      size: 18n,
      isPublic: true,
      uploadedAt: 2025-12-10T12:03:01.033Z,
      status: 'ready',
      blockHash: '0x07f5319641faf4f30a225223d056adc7026e13a73d20a548b7a3a91d15e30fef',
      txHash: '0xf3acbdf55fbcadfb17ec90a9fe507b4d5d529fdd9b36aec1e173ffadc61877ea'
    }

    View complete index.ts file up until this point
    src/index.ts
    import '@storagehub/api-augment';
    import { initWasm } from '@storagehub-sdk/core';
    import { polkadotApi } from './services/clientService.js';
    import {
      downloadFile,
      uploadFile,
      verifyDownload,
      waitForBackendFileReady,
      waitForMSPConfirmOnChain,
    } from './operations/fileOperations.js';
    import { HealthStatus } from '@storagehub-sdk/msp-client';
    import { mspClient } from './services/mspService.js';
    import {
      createBucket,
      verifyBucketCreation,
      waitForBackendBucketReady,
    } from './operations/bucketOperations.js';
    
    async function run() {
      // For anything from @storagehub-sdk/core to work, initWasm() is required
      // on top of the file
      await initWasm();
    
      // --- End-to-end storage flow ---
      // 1. Check MSP Health
      const mspHealth: HealthStatus = await mspClient.info.getHealth();
      console.log('MSP Health Status:', mspHealth);
      // 2. Create Bucket
      const bucketName = 'init-bucket';
      const { bucketId, txReceipt } = await createBucket(bucketName);
      console.log(`Created Bucket ID: ${bucketId}`);
      console.log(`createBucket() txReceipt: ${txReceipt}`);
      // 3. Verify bucket exists on chain
      const bucketData = await verifyBucketCreation(bucketId);
      console.log('Bucket data:', bucketData);
      // 4. Wait until indexer/backend knows about the bucket
      await waitForBackendBucketReady(bucketId);
      // 5. Upload file
      const fileName = 'helloworld.txt';
      const filePath = new URL(`./files/${fileName}`, import.meta.url).pathname;
    
      const { fileKey, uploadReceipt } = await uploadFile(
        bucketId,
        filePath,
        fileName
      );
      console.log(`File uploaded: ${fileKey}`);
      console.log(`Status: ${uploadReceipt.status}`);
      // 6. Wait until indexer/backend knows about the file
      await waitForMSPConfirmOnChain(fileKey.toHex());
      await waitForBackendFileReady(bucketId, fileKey.toHex());
    
      // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE**
      // **PLACEHOLDER FOR STEP 8: VERIFY FILE**
    
      // Disconnect the Polkadot API at the very end
      await polkadotApi.disconnect();
    }
    
    await run();
    

Download and Save File

In this step, you'll fetch your file from the DataHaven network via the MSP and save it locally on your machine.

To do this, add a downloadFile helper method to the fileOperations.ts file, then update index.ts to call it.

Add Method to Download File

To create the downloadFile helper method, add the following code:

src/operations/fileOperations.ts
export async function downloadFile(
  fileKey: H256,
  downloadPath: string
): Promise<{ path: string; size: number; mime?: string }> {
  // Download file from MSP
  const downloadResponse: DownloadResult = await mspClient.files.downloadFile(
    fileKey.toHex()
  );

  // Check if the download response was successful
  if (downloadResponse.status !== 200) {
    throw new Error(`Download failed with status: ${downloadResponse.status}`);
  }

  // Save downloaded file

  // Create a writable stream to the target file path
  // This stream will receive binary data chunks and write them to disk.
  const writeStream = createWriteStream(downloadPath);
  // Convert the Web ReadableStream into a Node.js-readable stream
  const readableStream = Readable.fromWeb(downloadResponse.stream as any);

  // Pipe the readable (input) stream into the writable (output) stream
  // This transfers the file data chunk by chunk and closes the write stream automatically
  // when finished.
  return new Promise((resolve, reject) => {
    readableStream.pipe(writeStream);
    writeStream.on('finish', async () => {
      const { size } = await import('node:fs/promises').then((fs) =>
        fs.stat(downloadPath)
      );
      const mime =
        downloadResponse.contentType === null
          ? undefined
          : downloadResponse.contentType;

      resolve({
        path: downloadPath,
        size,
        mime, // if available
      });
    });
    writeStream.on('error', reject);
  });
}
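One caveat with the manual `pipe` above: only the write stream's events are handled, so if the readable stream errors mid-transfer, the returned promise never settles. Node's `pipeline` from `node:stream/promises` rejects on an error from either side. A minimal sketch, where `saveStream` is a hypothetical helper and `webStream` stands in for `downloadResponse.stream`:

```typescript
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Streams a Web ReadableStream to disk; pipeline() resolves on success
// and rejects if either the source or the file stream errors.
async function saveStream(
  webStream: ReadableStream<Uint8Array>,
  path: string
): Promise<void> {
  await pipeline(Readable.fromWeb(webStream as any), createWriteStream(path));
}
```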

Call the Download File Helper Method

Replace the placeholder // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE** with the following code:

src/index.ts // **PLACEHOLDER FOR STEP 7: DOWNLOAD FILE**
// 7. Download file
const downloadedFilePath = new URL(
  './files/helloworld_downloaded.txt',
  import.meta.url
).pathname;
const downloadedFile = await downloadFile(fileKey, downloadedFilePath);
console.log(`File type: ${downloadedFile.mime}`);
console.log(
  `Downloaded ${downloadedFile.size} bytes to ${downloadedFile.path}`
);
View complete index.ts file up until this point
src/index.ts
import '@storagehub/api-augment';
import { initWasm } from '@storagehub-sdk/core';
import { polkadotApi } from './services/clientService.js';
import {
  downloadFile,
  uploadFile,
  verifyDownload,
  waitForBackendFileReady,
  waitForMSPConfirmOnChain,
} from './operations/fileOperations.js';
import { HealthStatus } from '@storagehub-sdk/msp-client';
import { mspClient } from './services/mspService.js';
import {
  createBucket,
  verifyBucketCreation,
  waitForBackendBucketReady,
} from './operations/bucketOperations.js';

async function run() {
  // For anything from @storagehub-sdk/core to work, initWasm() is required
  // on top of the file
  await initWasm();

  // --- End-to-end storage flow ---
  // 1. Check MSP Health
  const mspHealth: HealthStatus = await mspClient.info.getHealth();
  console.log('MSP Health Status:', mspHealth);
  // 2. Create Bucket
  const bucketName = 'init-bucket';
  const { bucketId, txReceipt } = await createBucket(bucketName);
  console.log(`Created Bucket ID: ${bucketId}`);
  console.log(`createBucket() txReceipt: ${txReceipt}`);
  // 3. Verify bucket exists on chain
  const bucketData = await verifyBucketCreation(bucketId);
  console.log('Bucket data:', bucketData);
  // 4. Wait until indexer/backend knows about the bucket
  await waitForBackendBucketReady(bucketId);
  // 5. Upload file
  const fileName = 'helloworld.txt';
  const filePath = new URL(`./files/${fileName}`, import.meta.url).pathname;

  const { fileKey, uploadReceipt } = await uploadFile(
    bucketId,
    filePath,
    fileName
  );
  console.log(`File uploaded: ${fileKey}`);
  console.log(`Status: ${uploadReceipt.status}`);
  // 6. Wait until indexer/backend knows about the file
  await waitForMSPConfirmOnChain(fileKey.toHex());
  await waitForBackendFileReady(bucketId, fileKey.toHex());
  // 7. Download file
  const downloadedFilePath = new URL(
    './files/helloworld_downloaded.txt',
    import.meta.url
  ).pathname;
  const downloadedFile = await downloadFile(fileKey, downloadedFilePath);
  console.log(`File type: ${downloadedFile.mime}`);
  console.log(
    `Downloaded ${downloadedFile.size} bytes to ${downloadedFile.path}`
  );

  // **PLACEHOLDER FOR STEP 8: VERIFY FILE**

  // Disconnect the Polkadot API at the very end
  await polkadotApi.disconnect();
}

await run();

Upon a successful file download, you'll see output similar to:

ts-node index.ts
Downloaded 18 bytes to /Users/username/Documents/dh-project/src/files/helloworld_downloaded.txt

Verify Downloaded File

Verify that the downloaded file exactly matches the file you've uploaded.

Add Method to Verify Download

Implement the verifyDownload helper method in your fileOperations.ts file by adding the following code:

src/operations/fileOperations.ts
// Compares an original file with a downloaded file byte-for-byte
export async function verifyDownload(
  originalPath: string,
  downloadedPath: string
): Promise<boolean> {
  const originalBuffer = await import('node:fs/promises').then((fs) =>
    fs.readFile(originalPath)
  );
  const downloadedBuffer = await import('node:fs/promises').then((fs) =>
    fs.readFile(downloadedPath)
  );

  return originalBuffer.equals(downloadedBuffer);
}
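Buffering both files is fine at this tutorial's 5 MB testnet limit; for larger files, comparing streaming SHA-256 digests avoids holding either file fully in memory. A sketch (`sha256File` is a hypothetical helper, not part of the tutorial's code):

```typescript
import { createHash } from 'node:crypto';
import { createReadStream } from 'node:fs';

// Streams a file through SHA-256 chunk by chunk instead of buffering it.
async function sha256File(path: string): Promise<string> {
  const hash = createHash('sha256');
  for await (const chunk of createReadStream(path)) {
    hash.update(chunk as Buffer);
  }
  return hash.digest('hex');
}

// Two files match when their digests match:
//   (await sha256File(originalPath)) === (await sha256File(downloadedPath))
```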

Call the Verify Download Helper Method

Replace the placeholder // **PLACEHOLDER FOR STEP 8: VERIFY FILE** with the following code:

src/index.ts // **PLACEHOLDER FOR STEP 8: VERIFY FILE**
const isValid = await verifyDownload(filePath, downloadedFilePath);
console.log(`File integrity verified: ${isValid ? 'PASSED' : 'FAILED'}`);

Putting It All Together

The complete script, covering every step from creating a bucket to retrieving the data, is available below. As a reminder, before running the full script, ensure you have the following:

  • Tokens to pay for the storage request on your account
  • A file to upload, such as helloworld.txt
View complete src/index.ts script
src/index.ts
import '@storagehub/api-augment';
import { initWasm } from '@storagehub-sdk/core';
import { polkadotApi } from './services/clientService.js';
import {
  downloadFile,
  uploadFile,
  verifyDownload,
  waitForBackendFileReady,
  waitForMSPConfirmOnChain,
} from './operations/fileOperations.js';
import { HealthStatus } from '@storagehub-sdk/msp-client';
import { mspClient } from './services/mspService.js';
import {
  createBucket,
  verifyBucketCreation,
  waitForBackendBucketReady,
} from './operations/bucketOperations.js';

async function run() {
  // Initialize WASM
  await initWasm();

  console.log('🚀 Starting DataHaven Storage End-to-End Script...');

  // 1. Check MSP Health
  const mspHealth: HealthStatus = await mspClient.info.getHealth();
  console.log('MSP Health Status:', mspHealth);

  // 2. Create Bucket
  const bucketName = 'init-bucket';
  const { bucketId, txReceipt } = await createBucket(bucketName);
  console.log(`Created Bucket ID: ${bucketId}`);
  console.log(`createBucket() txReceipt: ${txReceipt}`);

  // 3. Verify bucket exists on chain
  const bucketData = await verifyBucketCreation(bucketId);
  console.log('Bucket data:', bucketData);

  // 4. Wait until indexer/backend knows about the bucket
  await waitForBackendBucketReady(bucketId);

  // 5. Upload file
  const fileName = 'helloworld.txt';
  const filePath = new URL(`./files/${fileName}`, import.meta.url).pathname;

  const { fileKey, uploadReceipt } = await uploadFile(
    bucketId,
    filePath,
    fileName
  );
  console.log(`File uploaded: ${fileKey}`);
  console.log(`Status: ${uploadReceipt.status}`);

  // 6. Wait until indexer/backend knows about the file
  await waitForMSPConfirmOnChain(fileKey.toHex());
  await waitForBackendFileReady(bucketId, fileKey.toHex());

  // 7. Download file
  const downloadedFilePath = new URL(
    './files/helloworld_downloaded.txt',
    import.meta.url
  ).pathname;
  const downloadedFile = await downloadFile(fileKey, downloadedFilePath);
  console.log(`File type: ${downloadedFile.mime}`);
  console.log(
    `Downloaded ${downloadedFile.size} bytes to ${downloadedFile.path}`
  );

  // 8. Verify download integrity
  const isValid = await verifyDownload(filePath, downloadedFilePath);
  console.log(`File integrity verified: ${isValid ? 'PASSED' : 'FAILED'}`);

  console.log('🚀 DataHaven Storage End-to-End Script Completed Successfully.');

  await polkadotApi.disconnect();
}

await run();
View complete src/operations/fileOperations.ts
src/operations/fileOperations.ts
import { createReadStream, statSync, createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { FileManager, ReplicationLevel } from '@storagehub-sdk/core';
import { TypeRegistry } from '@polkadot/types';
import { AccountId20, H256 } from '@polkadot/types/interfaces';
import {
  storageHubClient,
  address,
  publicClient,
  polkadotApi,
  account,
} from '../services/clientService.js';
import {
  mspClient,
  getMspInfo,
  authenticateUser,
} from '../services/mspService.js';
import { DownloadResult } from '@storagehub-sdk/msp-client';
import { PalletFileSystemStorageRequestMetadata } from '@polkadot/types/lookup';

export async function uploadFile(
  bucketId: string,
  filePath: string,
  fileName: string
) {
  //   ISSUE STORAGE REQUEST

  // Set up FileManager
  const fileSize = statSync(filePath).size;
  const fileManager = new FileManager({
    size: fileSize,
    stream: () =>
      Readable.toWeb(createReadStream(filePath)) as ReadableStream<Uint8Array>,
  });

  // Get file details

  const fingerprint = await fileManager.getFingerprint();
  console.log(`Fingerprint: ${fingerprint.toHex()}`);

  const fileSizeBigInt = BigInt(fileManager.getFileSize());
  console.log(`File size: ${fileSize} bytes`);

  // Get MSP details

  // Fetch MSP details from the backend (includes its on-chain ID and libp2p addresses)
  const { mspId, multiaddresses } = await getMspInfo();
  // Ensure the MSP exposes at least one multiaddress (required to reach it over libp2p)
  if (!multiaddresses?.length) {
    throw new Error('MSP multiaddresses are missing');
  }
  // Extract the MSP’s libp2p peer IDs from the multiaddresses
  // Each address should contain a `/p2p/<peerId>` segment
  const peerIds: string[] = extractPeerIDs(multiaddresses);
  // Validate that at least one valid peer ID was found
  if (peerIds.length === 0) {
    throw new Error('MSP multiaddresses had no /p2p/<peerId> segment');
  }

  // Extracts libp2p peer IDs from a list of multiaddresses.
  // A multiaddress commonly ends with `/p2p/<peerId>`, so this function
  // splits on that delimiter and returns the trailing segment when present.
  function extractPeerIDs(multiaddresses: string[]): string[] {
    return (multiaddresses ?? [])
      .map((addr) => addr.split('/p2p/').pop())
      .filter((id): id is string => !!id);
  }

  // Set the redundancy policy for this request.
  // Custom replication allows the client to specify an exact replica count.
  const replicationLevel = ReplicationLevel.Custom;
  const replicas = 1;

  // Issue storage request
  const txHash: `0x${string}` | undefined =
    await storageHubClient.issueStorageRequest(
      bucketId as `0x${string}`,
      fileName,
      fingerprint.toHex() as `0x${string}`,
      fileSizeBigInt,
      mspId as `0x${string}`,
      peerIds,
      replicationLevel,
      replicas
    );
  console.log('issueStorageRequest() txHash:', txHash);
  if (!txHash) {
    throw new Error('issueStorageRequest() did not return a transaction hash');
  }

  // Wait for storage request transaction
  const receipt = await publicClient.waitForTransactionReceipt({
    hash: txHash,
  });
  if (receipt.status !== 'success') {
    throw new Error(`Storage request failed: ${txHash}`);
  }
  console.log('issueStorageRequest() txReceipt:', receipt);

  //   VERIFY STORAGE REQUEST ON CHAIN

  // Compute file key
  const registry = new TypeRegistry();
  const owner = registry.createType(
    'AccountId20',
    account.address
  ) as AccountId20;
  const bucketIdH256 = registry.createType('H256', bucketId) as H256;
  const fileKey = await fileManager.computeFileKey(
    owner,
    bucketIdH256,
    fileName
  );

  // Verify storage request on chain
  const storageRequest = await polkadotApi.query.fileSystem.storageRequests(
    fileKey
  );
  if (!storageRequest.isSome) {
    throw new Error('Storage request not found on chain');
  }

  // Read the storage request data
  const storageRequestData = storageRequest.unwrap().toHuman();
  console.log('Storage request data:', storageRequestData);
  console.log(
    'Storage request bucketId matches initial bucketId:',
    storageRequestData.bucketId === bucketId
  );
  console.log(
    'Storage request fingerprint matches initial fingerprint:',
    storageRequestData.fingerprint === fingerprint.toString()
  );

  //   UPLOAD FILE TO MSP

  // Authenticate bucket owner address with MSP prior to uploading file
  const authProfile = await authenticateUser();
  console.log('Authenticated user profile:', authProfile);

  // Upload file to MSP
  const uploadReceipt = await mspClient.files.uploadFile(
    bucketId,
    fileKey.toHex(),
    await fileManager.getFileBlob(),
    address,
    fileName
  );
  console.log('File upload receipt:', uploadReceipt);

  if (uploadReceipt.status !== 'upload_successful') {
    throw new Error('File upload to MSP failed');
  }

  return { fileKey, uploadReceipt };
}

export async function downloadFile(
  fileKey: H256,
  downloadPath: string
): Promise<{ path: string; size: number; mime?: string }> {
  // Download file from MSP
  const downloadResponse: DownloadResult = await mspClient.files.downloadFile(
    fileKey.toHex()
  );

  // Check if the download response was successful
  if (downloadResponse.status !== 200) {
    throw new Error(`Download failed with status: ${downloadResponse.status}`);
  }

  // Save downloaded file

  // Create a writable stream to the target file path
  // This stream will receive binary data chunks and write them to disk.
  const writeStream = createWriteStream(downloadPath);
  // Convert the Web ReadableStream into a Node.js-readable stream
  const readableStream = Readable.fromWeb(downloadResponse.stream as any);

  // Pipe the readable (input) stream into the writable (output) stream
  // This transfers the file data chunk by chunk and closes the write stream automatically
  // when finished.
  return new Promise((resolve, reject) => {
    readableStream.pipe(writeStream);
    writeStream.on('finish', async () => {
      const { size } = await import('node:fs/promises').then((fs) =>
        fs.stat(downloadPath)
      );
      const mime =
        downloadResponse.contentType === null
          ? undefined
          : downloadResponse.contentType;

      resolve({
        path: downloadPath,
        size,
        mime, // if available
      });
    });
    writeStream.on('error', reject);
  });
}

// Compares an original file with a downloaded file byte-for-byte
export async function verifyDownload(
  originalPath: string,
  downloadedPath: string
): Promise<boolean> {
  const originalBuffer = await import('node:fs/promises').then((fs) =>
    fs.readFile(originalPath)
  );
  const downloadedBuffer = await import('node:fs/promises').then((fs) =>
    fs.readFile(downloadedPath)
  );

  return originalBuffer.equals(downloadedBuffer);
}

export async function waitForMSPConfirmOnChain(fileKey: string) {
  const maxAttempts = 10; // Number of polling attempts
  const delayMs = 2000; // Delay between attempts in milliseconds

  for (let i = 0; i < maxAttempts; i++) {
    console.log(
      `Check storage request has been confirmed by the MSP on-chain, attempt ${
        i + 1
      } of ${maxAttempts}...`
    );

    // Query the runtime for the StorageRequest entry associated with this fileKey
    const req = await polkadotApi.query.fileSystem.storageRequests(fileKey);

    // StorageRequest removed from state before confirmation is an error
    if (req.isNone) {
      throw new Error(
        `StorageRequest for ${fileKey} no longer exists on-chain.`
      );
    }

    // Decode the on-chain metadata struct
    const data: PalletFileSystemStorageRequestMetadata = req.unwrap();

    // Extract the MSP confirmation tuple (mspId, bool)
    const mspTuple = data.msp.isSome ? data.msp.unwrap() : null;

    // The second value in the tuple is a SCALE Bool (codec), so convert using .isTrue
    const mspConfirmed = mspTuple ? (mspTuple[1] as any).isTrue : false;

    // If MSP has confirmed the storage request, we’re good to proceed
    if (mspConfirmed) {
      console.log('Storage request confirmed by MSP on-chain');
      return;
    }

    // Wait before polling again
    await new Promise((r) => setTimeout(r, delayMs));
  }

  // All attempts exhausted
  throw new Error(
    `MSP did not confirm storage request for ${fileKey} within ${
      maxAttempts * delayMs
    } ms`
  );
}
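The wait helpers in this file all follow the same poll-with-timeout shape: check a condition, sleep, retry, and throw once the attempt budget is spent. A generic helper (hypothetical; not part of the DataHaven SDK) could factor that pattern out:

```typescript
// Hypothetical reusable polling helper; not part of the DataHaven SDK.
// Calls `check` until it returns a non-null value or the attempt budget runs out.
async function pollUntil<T>(
  check: () => Promise<T | null>,
  maxAttempts = 10,
  delayMs = 2000
): Promise<T> {
  for (let i = 0; i < maxAttempts; i++) {
    const result = await check();
    if (result !== null) return result;
    // Wait before polling again
    await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error(`Condition not met after ${maxAttempts * delayMs} ms`);
}

// Example: the check succeeds on the third attempt.
let attempts = 0;
const value = await pollUntil(
  async () => (++attempts >= 3 ? 'ready' : null),
  5,
  10
);
console.log(value, attempts); // ready 3
```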

export async function waitForBackendFileReady(
  bucketId: string,
  fileKey: string
) {
  const maxAttempts = 15; // Number of polling attempts
  const delayMs = 2000; // Delay between attempts in milliseconds

  for (let i = 0; i < maxAttempts; i++) {
    console.log(
      `Checking for file in MSP backend, attempt ${i + 1} of ${maxAttempts}...`
    );

    try {
      // Query MSP backend for the file metadata
      const fileInfo = await mspClient.files.getFileInfo(bucketId, fileKey);

      // File is fully ready — backend has indexed it and can serve it
      if (fileInfo.status === 'ready') {
        console.log('File found in MSP backend:', fileInfo);
        return fileInfo;
      }

      // Failure statuses (irrecoverable for this upload lifecycle)
      if (fileInfo.status === 'revoked') {
        throw new Error('File upload was cancelled by user');
      } else if (fileInfo.status === 'rejected') {
        throw new Error('File upload was rejected by MSP');
      } else if (fileInfo.status === 'expired') {
        throw new Error('File upload request expired before MSP processed it');
      }

      // Otherwise still pending (indexer not done, MSP still syncing, etc.)
      console.log(`File status is "${fileInfo.status}", waiting...`);
    } catch (error: any) {
      if (error?.status === 404 || error?.body?.error === 'Not found: Record') {
        // Handle "not yet indexed" as a *non-fatal* condition
        console.log(
          'File not yet indexed in MSP backend (404 Not Found). Waiting before retry...'
        );
      } else {
        // Any unexpected backend error should stop the workflow and surface to the caller
        console.log('Unexpected error while fetching file from MSP:', error);
        throw error;
      }
    }

    // Wait before polling again
    await new Promise((r) => setTimeout(r, delayMs));
  }

  // All attempts exhausted
  throw new Error('Timed out waiting for MSP backend to mark file as ready');
}
View complete src/operations/bucketOperations.ts
src/operations/bucketOperations.ts
import {
  storageHubClient,
  address,
  publicClient,
  polkadotApi,
} from '../services/clientService.js';
import {
  getMspInfo,
  getValueProps,
  mspClient,
} from '../services/mspService.js';

export async function createBucket(bucketName: string) {
  // Get basic MSP information from the MSP including its ID
  const { mspId } = await getMspInfo();

  // Choose one of the value props retrieved from the MSP through the helper function
  const valuePropId = await getValueProps();
  console.log(`Value Prop ID: ${valuePropId}`);

  // Derive bucket ID
  const bucketId = (await storageHubClient.deriveBucketId(
    address,
    bucketName
  )) as string;
  console.log(`Derived bucket ID: ${bucketId}`);

  // Check that the bucket doesn't exist yet
  const bucketBeforeCreation = await polkadotApi.query.providers.buckets(
    bucketId
  );
  console.log('Bucket before creation is empty', bucketBeforeCreation.isEmpty);
  if (!bucketBeforeCreation.isEmpty) {
    throw new Error(`Bucket already exists: ${bucketId}`);
  }

  const isPrivate = false;

  // Create bucket on chain
  const txHash: `0x${string}` | undefined = await storageHubClient.createBucket(
    mspId as `0x${string}`,
    bucketName,
    isPrivate,
    valuePropId
  );

  console.log('createBucket() txHash:', txHash);
  if (!txHash) {
    throw new Error('createBucket() did not return a transaction hash');
  }

  // Wait for transaction receipt
  const txReceipt = await publicClient.waitForTransactionReceipt({
    hash: txHash,
  });
  if (txReceipt.status !== 'success') {
    throw new Error(`Bucket creation failed: ${txHash}`);
  }

  return { bucketId, txReceipt };
}

// Verify bucket creation on chain and return bucket data
export async function verifyBucketCreation(bucketId: string) {
  const { mspId } = await getMspInfo();

  const bucket = await polkadotApi.query.providers.buckets(bucketId);
  if (bucket.isEmpty) {
    throw new Error('Bucket not found on chain after creation');
  }

  const bucketData = bucket.unwrap().toHuman() as Record<string, any>;
  console.log(
    'Bucket userId matches initial bucket owner address',
    bucketData.userId === address
  );
  console.log(
    `Bucket MSPId matches initial MSPId: ${bucketData.mspId === mspId}`
  );
  return bucketData;
}

export async function waitForBackendBucketReady(bucketId: string) {
  const maxAttempts = 10; // Number of polling attempts
  const delayMs = 2000; // Delay between attempts in milliseconds

  for (let i = 0; i < maxAttempts; i++) {
    console.log(
      `Checking for bucket in MSP backend, attempt ${
        i + 1
      } of ${maxAttempts}...`
    );
    try {
      // Query the MSP backend for the bucket metadata.
      // If the backend has synced the bucket, this call resolves successfully.
      const bucket = await mspClient.buckets.getBucket(bucketId);

      if (bucket) {
        // Bucket is now available and the script can safely continue
        console.log('Bucket found in MSP backend:', bucket);
        return;
      }
    } catch (error: any) {
      // Backend hasn’t indexed the bucket yet
      if (error?.status === 404 || error?.body?.error === 'Not found: Record') {
        console.log(`Bucket not found in MSP backend yet (404).`);
      } else {
        // Any other error is unexpected and should fail the entire workflow
        console.log('Unexpected error while fetching bucket from MSP:', error);
        throw error;
      }
    }
    // Wait before polling again
    await new Promise((r) => setTimeout(r, delayMs));
  }
  // All attempts exhausted
  throw new Error(`Bucket ${bucketId} not found in MSP backend after waiting`);
}

Notes on Data Safety

Uploading a file does not guarantee network-wide replication. Files are considered secured by DataHaven only after replication to a Backup Storage Provider (BSP) is complete. Tooling to surface replication status is in active development.

Next Steps

Last update: December 16, 2025
| Created: October 17, 2025