
McMMoKing
Apr 10, 2025
Expert Questions & Answers

How to upload large files to Walrus without speed issues?

I'm trying to upload files to Walrus via the API and running into issues with a 300 MB file. The upload works with the CLI, but the speed is slow. Is there a file size limit in Walrus, and how can I improve upload speeds for larger files?

  • Walrus
  • TypeScript SDK

Answers

4
Santorini
Apr 11 2025, 15:11

Walrus has a maximum blob size of 13.6 GiB, so your 300 MB file is well within the system's capabilities. However, PUT requests are limited to 10 MiB by default; you can raise this limit by running your own publisher and adjusting the --max-body-size option.
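
As a minimal TypeScript sketch of what that looks like in practice: storing a blob through a self-hosted publisher over HTTP, assuming the daemon's PUT /v1/blobs endpoint. The address, port, and epochs value are placeholders; use your publisher's actual bind address.

    // Store a file as a blob via a self-hosted publisher's HTTP API.
    // Assumes PUT /v1/blobs; the URL below is a placeholder.
    import { readFile } from "node:fs/promises";

    const PUBLISHER_URL = "http://127.0.0.1:31415";

    export async function storeBlob(path: string, epochs = 5): Promise<unknown> {
      const body = await readFile(path);
      const res = await fetch(`${PUBLISHER_URL}/v1/blobs?epochs=${epochs}`, {
        method: "PUT",
        body,
      });
      if (!res.ok) {
        throw new Error(`store failed: ${res.status} ${await res.text()}`);
      }
      return res.json(); // response describes the newly stored blob
    }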

Mister_CocaCola
Apr 11 2025, 15:18

Running a local setup with the appropriate configuration can help you bypass some network limitations and potentially improve speeds. For future uploads, you can run the walrus info command to check the current system parameters for any limits or conditions that might affect file uploads.

Grizzly
Apr 11 2025, 20:19

The slow upload speed you're experiencing with the CLI could be influenced by several factors, such as network conditions, server throttling, or limits the server sets to manage high load.

lite.vue
Aug 31 2025, 10:56

You're definitely not hitting any file size limit—a 300 MB upload is well under the current maximum blob size of around 13.3 GiB (or 13.6 GiB by some accounts). So yes, Walrus can handle your file size just fine. ([docs.wal.app][1], [walrus.peera.ai][2])

However, the default HTTP upload limit is just 10 MiB per request. So unless you're using the CLI (which bypasses this), your large uploads will be throttled or even rejected. ([mystenlabs.github.io][3], [walrus.peera.ai][4])


Speed issues and how to fix them

To improve performance when uploading larger files via the API, here are some effective strategies:

1. Run your own publisher (or daemon)

By default, public publishers enforce small upload limits and can be slower. Running your own instance gives you full control (see the example invocation under "Suggested Smooth Setup" below).

  • Increase the body size limit using the --max-body-size flag (default: 10 MiB).
  • For quilt (multipart) uploads (batch small files), increase with --max-quilt-body-size (default: 100 MiB). ([mystenlabs.github.io][5])

2. Tune concurrency with sub-wallets and request settings

Performance benefits come from parallelism: each Walrus publisher uses multiple sub-wallets for concurrent uploads (a client-side counterpart is sketched after this list):

  • By default, 8 sub-wallets are created (--n-clients), allowing up to 8 simultaneous blob stores.
  • You can increase --n-clients to boost parallelism, but remember you'll also need more SUI/WAL tokens to fund them.
  • Tune the --max-concurrent-requests (how many uploads can run at once) and --max-buffer-size (queue depth before rejecting new requests with HTTP 429). ([mystenlabs.github.io][3])
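
To drive that parallelism from the client side, keep a bounded number of uploads in flight. A sketch under the same assumptions, reusing the hypothetical storeBlob helper from the earlier answer; the default limit of 8 mirrors the default sub-wallet count:

    // Keep at most `limit` uploads in flight: enough to saturate the
    // publisher's sub-wallets without overflowing its buffer (HTTP 429).
    import { storeBlob } from "./store-blob"; // hypothetical earlier sketch

    async function storeAll(paths: string[], limit = 8): Promise<unknown[]> {
      const results: unknown[] = new Array(paths.length);
      let next = 0;
      const workers = Array.from({ length: limit }, async () => {
        while (next < paths.length) {
          const i = next++; // safe: no await between read and increment
          results[i] = await storeBlob(paths[i]);
        }
      });
      await Promise.all(workers);
      return results;
    }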

3. Use multipart/quilt uploads

Walrus supports multipart (termed "quilt") uploads—breaking files into parts and uploading them together. This can be more efficient than one huge payload. Available since v1.29. ([docs.wal.app][6], [walrus.peera.ai][2])
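
As a rough sketch of how that could look over HTTP, assuming your publisher version exposes a multipart quilt endpoint (PUT /v1/quilts in recent daemon docs; verify against your deployment) and that form field names become the quilt identifiers:

    // Batch several files into one quilt store. The endpoint and the
    // field-name convention are assumptions -- check your version's docs.
    import { readFile } from "node:fs/promises";
    import { basename } from "node:path";

    const PUBLISHER_URL = "http://127.0.0.1:31415"; // placeholder address

    async function storeQuilt(paths: string[], epochs = 5): Promise<unknown> {
      const form = new FormData();
      for (const p of paths) {
        form.append(basename(p), new Blob([await readFile(p)]));
      }
      const res = await fetch(`${PUBLISHER_URL}/v1/quilts?epochs=${epochs}`, {
        method: "PUT",
        body: form,
      });
      if (!res.ok) throw new Error(`quilt store failed: ${res.status}`);
      return res.json();
    }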

4. Leverage upload relays

The CLI supports upload relays (built in as of v1.29+), which offload the heavy sliver-upload work to a relay service:

  • Use --upload-relay <URL> to specify a relay.
  • Public relays exist (both for mainnet and testnet), though they may require tipping. ([docs.wal.app][7], [mystenlabs.github.io][8])

5. Optimize network and system performance

Client-side processing, especially the erasure coding of your data, can slow things down; a quick throughput check is sketched after this list:

  • Ensure your hardware and network have sufficient bandwidth.
  • Monitor if CPU or memory is bottlenecked due to erasure encoding.
  • Adjust your local concurrency (e.g., parallel uploads) to match available bandwidth. ([walrus.peera.ai][9])
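
One simple way to tell whether you're CPU-bound (encoding) or network-bound is to time the upload and compare the measured throughput with your link speed. A sketch, again reusing the hypothetical storeBlob helper:

    // Rough throughput check: if MB/s is far below your link speed while
    // CPU usage is high, client-side encoding is the bottleneck; if both
    // are low, look at network or publisher-side limits instead.
    import { stat } from "node:fs/promises";
    import { storeBlob } from "./store-blob"; // hypothetical earlier sketch

    async function timedStore(path: string): Promise<void> {
      const { size } = await stat(path);
      const start = performance.now();
      await storeBlob(path);
      const seconds = (performance.now() - start) / 1000;
      console.log(`${(size / 1e6 / seconds).toFixed(1)} MB/s in ${seconds.toFixed(1)} s`);
    }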

As a community user summed it up:

“The slowness … is partly due to the erasure coding process handled on the client side… currently, Walrus nodes can handle up to 8 concurrent HTTP requests… Increasing this limit may improve upload speed…” ([walrus.peera.ai][9])


Recap Table

| Strategy | Benefit | Requirements / Notes |
| --- | --- | --- |
| Use your own publisher | Bypass small public limits; full control | Must manage sub-wallets, fund with SUI/WAL, network/security setup |
| Increase --max-body-size | Upload larger chunks per HTTP request | Applies to both blob and quilt uploads |
| Increase --n-clients | Boost concurrency for faster parallel uploads | Requires extra tokens, more sub-wallets |
| Increase --max-concurrent-requests and --max-buffer-size | Queue management, fewer rejections (429s) | Tune for upload volume and system capacity |
| Use quilts (multipart) | Efficient batch uploads of large or small files | Requires v1.29+, supports metadata, similar tuning flags |
| Use an upload relay | Offload encoding/upload overhead | Supported in CLI/SDK; may involve relay tipping |
| Optimize network/hardware | Reduce client-side bottlenecks | Check your system's CPU, memory, and bandwidth |

Suggested Smooth Setup

  1. Run your own Walrus publisher or daemon locally or in your infrastructure.

  2. Increase upload limits:

    # Flag meanings (defaults in parentheses):
    #   --max-body-size            per-request blob limit (10 MiB)
    #   --max-quilt-body-size      multipart/quilt request limit (100 MiB)
    #   --n-clients                sub-wallets, i.e. parallel blob stores (8)
    #   --max-concurrent-requests  simultaneous uploads accepted
    #   --max-buffer-size          queue depth before rejecting with HTTP 429
    walrus publisher \
      --max-body-size 500MiB \
      --max-quilt-body-size 2GiB \
      --n-clients 16 \
      --max-concurrent-requests 16 \
      --max-buffer-size 100
    
  3. Upload using multipart/quilt if working with multiple files or breaking big files into segments.

  4. Consider using an upload relay if you're in a resource-constrained environment (like a browser or mobile).

  5. Monitor your network and system usage—CPU, memory, bandwidth—and adjust concurrency accordingly.
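
Putting the pieces together, assuming the publisher above is listening on its bind address and reusing the hypothetical helpers from the earlier sketches:

    // Hypothetical end-to-end run against the publisher configured above.
    await storeBlob("./video-300mb.bin", 10);           // one large blob
    await storeAll(["a.json", "b.json", "c.json"], 16); // batch, matches --n-clients 16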


TL;DR

  • No, 300 MB doesn’t exceed Walrus limits (max ~13 GiB).
  • Yes, default HTTP PUT limit is just 10 MiB—use your own publisher and bump it.
  • Improve speed with increased concurrency (--n-clients, --max-concurrent-requests), multipart uploads, upload relays, and network optimization.
