How to upload large files to Walrus without speed issues?
I'm trying to use Walrus for uploading files via API and I'm running into issues when trying to upload a 300MB file. While it works with the CLI, the upload speed is slow. Is there a file upload limit with Walrus, and how can I improve upload speeds for larger files?
- Walrus
- TypeScript SDK
Answers
Walrus has a maximum blob size limit of 13.6 GiB, so your 300 MB file is well within the system's capabilities. However, default PUT requests are limited to 10 MiB; you can raise this limit by running your own publisher and adjusting its `--max-body-size` option.
Using a local setup with the appropriate configuration can help bypass some network limitations, potentially improving speeds. For future uploads, you can run the `walrus info` command to check the current configuration for any predefined limits or conditions that might affect file uploads.
The slow upload speed you're experiencing with the CLI could be influenced by several factors, such as network issues, server throttling, or limits the server sets to manage high load.
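For instance, assuming a standard Walrus CLI setup, you can check the current system parameters before uploading:

```sh
# Print the parameters of your configured Walrus network,
# including the maximum blob size.
walrus info
```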
You're definitely not hitting any file size limit—a 300 MB upload is well under the current maximum blob size of around 13.3 GiB (or 13.6 GiB by some accounts). So yes, Walrus can handle your file size just fine. ([docs.wal.app][1], [walrus.peera.ai][2])
However, the default HTTP upload limit is just 10 MiB per request. So unless you're using the CLI (which bypasses this), your large uploads will be throttled or even rejected. ([mystenlabs.github.io][3], [walrus.peera.ai][4])
Speed issues and how to fix them
To improve performance when uploading larger files via the API, here are some effective strategies:
1. Run your own publisher (or daemon)
By default, public publishers enforce small upload limits and can be slower. Running your own instance gives you full control.
- Increase the body size limit using the `--max-body-size` flag (default: 10 MiB), as shown in the sketch below.
- For quilt (multipart) uploads that batch small files, increase `--max-quilt-body-size` (default: 100 MiB). ([mystenlabs.github.io][5])
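As a rough sketch of what that looks like in practice (the bind address, port, file name, and exact value formats here are illustrative assumptions; check `walrus publisher --help` for your version):

```sh
# Start a local publisher that accepts larger request bodies.
walrus publisher \
  --bind-address 127.0.0.1:31415 \
  --max-body-size 500MiB \
  --max-quilt-body-size 1GiB

# Upload a ~300 MB file through the local publisher.
# PUT /v1/blobs with an `epochs` query parameter follows the
# publisher HTTP API; adjust the epoch count to your needs.
curl -X PUT --upload-file ./video-300mb.bin \
  "http://127.0.0.1:31415/v1/blobs?epochs=5"
```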
2. Tune concurrency with sub-wallets and request settings
Performance benefits come from parallelism—each Walrus publisher uses multiple sub-wallets for concurrent uploads:
- By default, 8 sub-wallets are created (`--n-clients`), allowing up to 8 simultaneous blob stores.
- You can increase `--n-clients` to boost parallelism, but remember you'll also need more SUI/WAL tokens to fund them.
- Tune `--max-concurrent-requests` (how many uploads can run at once) and `--max-buffer-size` (queue depth before new requests are rejected with HTTP 429); see the sketch below. ([mystenlabs.github.io][3])
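To actually benefit from those sub-wallets, the client has to issue uploads in parallel. A minimal illustration using background shell jobs against a local publisher (the address and file names are placeholders):

```sh
# With the default --n-clients 8, up to 8 of these PUTs
# can be processed by the publisher simultaneously.
for f in part-01.bin part-02.bin part-03.bin part-04.bin; do
  curl -s -X PUT --upload-file "$f" \
    "http://127.0.0.1:31415/v1/blobs?epochs=5" &
done
wait  # block until every background upload has finished
```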
3. Use multipart/quilt uploads
Walrus supports multipart (termed "quilt") uploads—breaking files into parts and uploading them together. This can be more efficient than one huge payload. Available since v1.29. ([docs.wal.app][6], [walrus.peera.ai][2])
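For example, a quilt upload through a publisher might look like the following; the multipart `/v1/quilts` endpoint shown here is an assumption based on the documented publisher HTTP API, so verify the exact shape against your version's docs:

```sh
# Store several files as one quilt in a single request.
# Each form field name becomes that file's identifier in the quilt.
curl -X PUT \
  -F "thumb-1=@thumb-1.png" \
  -F "thumb-2=@thumb-2.png" \
  "http://127.0.0.1:31415/v1/quilts?epochs=5"
```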
4. Leverage upload relays
The CLI has support for upload relays (built in as of version v1.29+), which offload the heavy sliver upload work to a relay service:
- Use `--upload-relay <URL>` to specify a relay (example below).
- Public relays exist for both mainnet and testnet, though they may require tipping. ([docs.wal.app][7], [mystenlabs.github.io][8])
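With the CLI, that looks roughly like this (the relay URL is a placeholder; the docs list the current public relay addresses):

```sh
# Store a blob while delegating the fan-out of slivers to a relay.
walrus store ./video-300mb.bin \
  --epochs 5 \
  --upload-relay https://relay.example.com
```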
5. Optimize network and system performance
Client-side processing, especially the erasure coding of your data, can slow things down:
- Ensure your hardware and network have sufficient bandwidth.
- Monitor if CPU or memory is bottlenecked due to erasure encoding.
- Adjust your local concurrency (e.g., parallel uploads) to match available bandwidth. ([walrus.peera.ai][9])
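One quick way to tell which side is the bottleneck is to time an upload (the file name is a placeholder): if CPU time (`user` + `sys`) accounts for most of the wall-clock time, you're bound by erasure coding; if the command mostly sits waiting, the network is the limit.

```sh
# Compare "real" (wall clock) against "user"/"sys" (CPU) time to
# distinguish encoding-bound from network-bound uploads.
time walrus store ./video-300mb.bin --epochs 5
```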
As a community user summed it up:
“The slowness … is partly due to the erasure coding process handled on the client side… currently, Walrus nodes can handle up to 8 concurrent HTTP requests… Increasing this limit may improve upload speed…” ([walrus.peera.ai][9])
Recap Table
| Strategy | Benefit | Requirements / Notes |
| --- | --- | --- |
| Use your own publisher | Bypass small public limits; full control | Must manage sub-wallets, fund with SUI/WAL, network/security setup |
| Increase `--max-body-size` | Upload larger chunks per HTTP request | Applies to both blob and quilt uploads |
| Increase `--n-clients` | Boost concurrency: faster parallel uploads | Requires extra tokens, more sub-wallets |
| Increase `--max-concurrent-requests` and `--max-buffer-size` | Queue management, fewer rejections (429s) | Tune for upload volume and system capacity |
| Use quilts (multipart) | Efficient batch uploads of large or small files | Requires v1.29+, supports metadata, similar tuning flags |
| Use an upload relay | Offload encoding/upload overhead | Supported in CLI/SDK; may involve relay tipping |
| Optimize network/hardware | Reduce client-side bottlenecks | Check your system's CPU, memory, and bandwidth |
Suggested Smooth Setup
- Run your own Walrus publisher or daemon locally or in your infrastructure.
- Increase the upload limits:

```sh
walrus publisher \
  --max-body-size 500MiB \
  --max-quilt-body-size 2GiB \
  --n-clients 16 \
  --max-concurrent-requests 16 \
  --max-buffer-size 100
```

- Upload using multipart/quilt if you're working with multiple files or breaking big files into segments.
- Consider an upload relay if you're in a resource-constrained environment (like a browser or mobile).
- Monitor your network and system usage (CPU, memory, bandwidth) and adjust concurrency accordingly.
TL;DR
- No, 300 MB doesn’t exceed Walrus limits (max ~13 GiB).
- Yes, the default HTTP PUT limit is just 10 MiB; run your own publisher and bump it.
- Improve speed with increased concurrency (`--n-clients`, `--max-concurrent-requests`), multipart uploads, upload relays, and network optimization.