This page is the maintainer-side reference run for the AeroFTP Community Benchmark initiative announced in issue #177. It is a single-host, single-residential-uplink sample published as a baseline, not as a population-level benchmark. Selection bias is explicit in section 9.
When AeroFTP claims that one protocol is faster than another on a given provider, that claim should be defensible. A single fiber line in a single timezone is not a credible base for a public protocol comparison page. Issue #177 invites the community to run the same matrix against their own profiles and submit a sanitized JSON report.
Before asking other people to do that, the maintainer ran the matrix on every profile saved on the development host. The output of that exercise is the dataset and bar charts below, plus the bug fixes the sweep surfaced.
The main sweep uses the new `benchmark custom` subcommand introduced in v3.7.3:
```sh
aeroftp-cli --profile "<name>" benchmark custom \
  --sizes 1M,10M,100M,1G --runs 3 \
  --consent-publish --report <out>.json
```

Each (size, run) tuple exercises five operations: upload, download, list, stat, delete. Numbers are reported as p50/p95/min/max/stddev per operation, not as arithmetic means. The CLI runs a sanitization pass before writing the JSON: if any path, hostname, account, bucket name, IP, MAC, token, or fingerprint slips through, the report is rejected, not anonymized post hoc.
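The reject-don't-redact rule can be sketched as a gate that scans the serialized report and refuses to write it on the first hit. This is a minimal illustration with hypothetical helpers (`check_report`, `looks_like_ipv4`); the real CLI's redactor is per-provider and checks far more classes than shown here.

```rust
// Hypothetical sketch of a reject-don't-redact sanitization gate.
// On any suspected leak the report is refused, never rewritten.
fn looks_like_ipv4(tok: &str) -> bool {
    let parts: Vec<&str> = tok.split('.').collect();
    parts.len() == 4
        && parts.iter().all(|p| {
            !p.is_empty()
                && p.chars().all(|c| c.is_ascii_digit())
                && p.parse::<u32>().map_or(false, |n| n <= 255)
        })
}

/// Returns Err with the first leak class found; the caller then refuses
/// to write the report instead of anonymizing it post hoc.
fn check_report(json: &str) -> Result<(), &'static str> {
    for tok in json.split(|c: char| c == '"' || c == ',' || c.is_whitespace()) {
        if looks_like_ipv4(tok) {
            return Err("ip address");
        }
        if tok.starts_with("/home/") || tok.starts_with("/Users/") {
            return Err("local path");
        }
    }
    Ok(())
}

fn main() {
    assert!(check_report(r#"{"op":"upload","p50_mbps":4.77}"#).is_ok());
    assert!(check_report(r#"{"host":"192.168.1.10"}"#).is_err());
    assert!(check_report(r#"{"dir":"/home/dev/bench"}"#).is_err());
    println!("ok");
}
```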
| Class | Profiles | Notes |
|---|---|---|
| Full matrix (1M + 10M + 100M + 1G all OK) | AWS S3, Cloudflare R2, Alibaba OSS, S3 Storj, Tencent COS, S5 FileLu, Mega S3, Google S3, S3Drive, Drime, Internxt, Jotta, Koofr (native), FileLu, My Mega, Google Drive, Dropbox, OneDrive, pCloud, Azure Blob, Koofr WebDAV, FeliCloud, aeroftp.app FTPS | 27 profiles |
| Partial (provider quota / size cap) | jianguoyun (CN, 1G refused), WebDAV DriveHQ (free quota), MyBox (Box 250 MB cap on the free tier), Internxt (10 GB quota saturated), MyZoho (1G blocked, 100M OK) | 5 profiles |
| Hard-failed in v3.7.3, fixed in v3.7.4 | Filen Dev (chunked AES-GCM), Yandex (transient upload-target race), Drime (list() mutated current_path) | 3 profiles, re-run with v3.7.4 |
| Out of scope this round | idrive S3 (cold-storage timeout), InfiniCloud jp (1G stuck at upload-target), kDrive, SeaFile WebDAV (no operations on root, need a sub-path matrix), Lumo NAS (powered off), Wasabi / Quotaless (access expired), 8 GitHub-as-storage profiles, 7 Aruba FTP duplicates, 3 media CDNs (ImageKit, Uploadcare, Cloudinary) | covered by separate handoffs or future rounds |
The sweep itself acted as a stress test of the rest of the codebase. Every defect surfaced was fixed before publishing the dataset. The fixes are split between the v3.7.3 patch queue (commits 253f2cc2 + 8e0f0b8f) and the v3.7.4 release (22a4bd8f, d16a63cc, cb1e80b6).
| Fix | Provider | Commit | Verified by |
|---|---|---|---|
| benchmark_sanitize substitutes PII before assertion | all | 8e0f0b8f | FeliCloud, Azure Blob (reports written instead of rejected) |
| SigpipeIgnoreGuard wraps cmd_benchmark | all | 8e0f0b8f | Azure Blob, S3 Backblaze, Yandex (no more rc=141) |
| OneDrive nested mkdir splits relative path on / | OneDrive | 253f2cc2 | OneDrive (full matrix in 178 s) |
| Drime::list() no longer mutates current_path | Drime | 253f2cc2 | Drime (full matrix in 65 s post-fix) |
| HTTP read_timeout 300 s -> 1800 s on all 24 providers | all HTTP-based | 253f2cc2 | Koofr WebDAV (1G upload in 1010 s instead of dying at 5 min) |
| Chunked AES-GCM upload (1 MiB / index=N) | Filen native | 22a4bd8f | Filen Dev (10M+ no longer hits 413) |
| Per-chunk retry on egest body decode failures | Filen native | d16a63cc | Filen Dev (transient decode body no longer fatal) |
| Retry upload PUT with fresh upload-target on transient failures | Yandex Disk | cb1e80b6 | Yandex (100M no longer single-PUT race) |
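The Drime fix in 253f2cc2 follows a pattern worth naming: a read-only operation must never mutate navigation state. A minimal before/after sketch of that shape, with a hypothetical `Provider` type (the real provider trait differs):

```rust
struct Provider {
    current_path: String,
}

impl Provider {
    // Buggy shape: list() "cd"s into the target and never restores it,
    // so every operation after the next upload resolves against the
    // wrong directory.
    fn list_buggy(&mut self, path: &str) -> Vec<String> {
        self.current_path = path.to_string();
        Vec::new() // pretend listing
    }

    // Fixed shape: the target is threaded through as a value and
    // current_path is left untouched.
    fn list(&self, _path: &str) -> Vec<String> {
        Vec::new() // listing would use `_path` directly
    }
}

fn main() {
    let mut p = Provider { current_path: "/bench".into() };
    p.list_buggy("/bench/run1");
    assert_eq!(p.current_path, "/bench/run1"); // state corrupted

    let p2 = Provider { current_path: "/bench".into() };
    p2.list("/bench/run1");
    assert_eq!(p2.current_path, "/bench"); // state preserved
    println!("ok");
}
```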
Bars are p50 across 3 runs and are scaled per chart (the longest bar in each chart sets that chart's maximum, so bar lengths are not comparable across charts or to an absolute Mbps scale). The number after each bar is p50; the smaller right-aligned number is p95.
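For reference, the p50/p95 statistics can be computed from the per-run samples roughly as below. The nearest-rank method is an assumption for illustration; the CLI's exact interpolation is not documented here.

```rust
// Nearest-rank percentile over throughput samples (Mbps).
// Assumption: nearest-rank, 1-indexed; the CLI may interpolate differently.
fn percentile(samples: &mut Vec<f64>, p: f64) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = samples.len();
    // rank = ceil(p/100 * n), clamped to at least 1
    let rank = ((p / 100.0 * n as f64).ceil() as usize).max(1);
    samples[rank - 1]
}

fn main() {
    // three runs of a single operation at one size
    let runs = vec![24.1, 19.8, 22.6];
    assert_eq!(percentile(&mut runs.clone(), 50.0), 22.6);
    assert_eq!(percentile(&mut runs.clone(), 95.0), 24.1);
    println!("ok");
}
```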
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | Alibaba OSS | |
| 2 | AzureBlob | |
| 3 | AWS | |
| 4 | Tencent | |
| 5 | S3Drive | |
| 6 | S3 Storj | |
| 7 | | |
| 8 | Cloudflare R2 | |
| 9 | MyZoho | |
| 10 | Jotta | |
| 11 | WebDAV DriveHQ | |
| 12 | Koofr | |
| 13 | pCloud | |
| 14 | Mega S3 | |
| 15 | aeroftp.app | |
| 16 | My Mega | |
| 17 | My Koofr WebDAV | |
| 18 | My Dropbox | |
| 19 | Filen Dev | |
| 20 | Internxt | |
| 21 | My Google Drive | |
| 22 | MyBox | |
| 23 | jianguoyun | |
| 24 | Drime | |
| 25 | FileLu | |
| 26 | S5 FileLu | |
| 27 | Yandex |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | Mega S3 | |
| 2 | Tencent | |
| 3 | | |
| 4 | Alibaba OSS | |
| 5 | S3 Storj | |
| 6 | Cloudflare R2 | |
| 7 | S3Drive | |
| 8 | AzureBlob | |
| 9 | Koofr | |
| 10 | AWS | |
| 11 | Jotta | |
| 12 | Filen Dev | |
| 13 | jianguoyun | |
| 14 | My Koofr WebDAV | |
| 15 | aeroftp.app | |
| 16 | pCloud | |
| 17 | MyZoho | |
| 18 | WebDAV DriveHQ | |
| 19 | My Dropbox | |
| 20 | My Mega | |
| 21 | Internxt | |
| 22 | My Google Drive | |
| 23 | Yandex | |
| 24 | FileLu | |
| 25 | MyBox | |
| 26 | Drime | |
| 27 | S5 FileLu |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | Alibaba OSS | |
| 2 | AzureBlob | |
| 3 | S3 Storj | |
| 4 | Tencent | |
| 5 | pCloud | |
| 6 | Cloudflare R2 | |
| 7 | AWS | |
| 8 | My Dropbox | |
| 9 | MyZoho | |
| 10 | | |
| 11 | S3Drive | |
| 12 | Koofr | |
| 13 | Jotta | |
| 14 | My Koofr WebDAV | |
| 15 | My Mega | |
| 16 | Mega S3 | |
| 17 | aeroftp.app | |
| 18 | My Google Drive | |
| 19 | WebDAV DriveHQ | |
| 20 | Internxt | |
| 21 | FileLu | |
| 22 | MyBox | |
| 23 | S5 FileLu | |
| 24 | jianguoyun | |
| 25 | Yandex |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | | |
| 2 | Alibaba OSS | |
| 3 | Tencent | |
| 4 | Cloudflare R2 | |
| 5 | S3 Storj | |
| 6 | Jotta | |
| 7 | My Mega | |
| 8 | aeroftp.app | |
| 9 | Koofr | |
| 10 | My Koofr WebDAV | |
| 11 | Mega S3 | |
| 12 | My Dropbox | |
| 13 | MyZoho | |
| 14 | My Google Drive | |
| 15 | FileLu | |
| 16 | MyBox | |
| 17 | pCloud | |
| 18 | jianguoyun | |
| 19 | AzureBlob | |
| 20 | S3Drive | |
| 21 | Internxt | |
| 22 | AWS | |
| 23 | Yandex | |
| 24 | WebDAV DriveHQ | |
| 25 | S5 FileLu |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | AzureBlob | |
| 2 | pCloud | |
| 3 | Alibaba OSS | |
| 4 | My Dropbox | |
| 5 | S3 Storj | |
| 6 | Tencent | |
| 7 | Jotta | |
| 8 | Cloudflare R2 | |
| 9 | Mega S3 | |
| 10 | Internxt | |
| 11 | | |
| 12 | S3Drive | |
| 13 | aeroftp.app | |
| 14 | AWS | |
| 15 | Koofr | |
| 16 | MyZoho | |
| 17 | My Google Drive | |
| 18 | FileLu | |
| 19 | My Mega | |
| 20 | My Koofr WebDAV | |
| 21 | MyBox | |
| 22 | S5 FileLu | |
| 23 | WebDAV DriveHQ | |
| 24 | jianguoyun |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | | |
| 2 | Alibaba OSS | |
| 3 | Mega S3 | |
| 4 | Tencent | |
| 5 | Cloudflare R2 | |
| 6 | Jotta | |
| 7 | My Mega | |
| 8 | FileLu | |
| 9 | My Dropbox | |
| 10 | S3 Storj | |
| 11 | Koofr | |
| 12 | My Koofr WebDAV | |
| 13 | My Google Drive | |
| 14 | MyBox | |
| 15 | aeroftp.app | |
| 16 | MyZoho | |
| 17 | Internxt | |
| 18 | AWS | |
| 19 | S5 FileLu | |
| 20 | pCloud | |
| 21 | jianguoyun | |
| 22 | AzureBlob | |
| 23 | S3Drive | |
| 24 | WebDAV DriveHQ | |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | pCloud | |
| 2 | Jotta | |
| 3 | My Dropbox | |
| 4 | Alibaba OSS | |
| 5 | aeroftp.app | |
| 6 | S3 Storj | |
| 7 | Cloudflare R2 | |
| 8 | FileLu | |
| 9 | Tencent | |
| 10 | | |
| 11 | S3Drive | |
| 12 | AzureBlob | |
| 13 | Mega S3 | |
| 14 | AWS | |
| 15 | My Google Drive | |
| 16 | My Koofr WebDAV | |
| 17 | My Mega | |
| 18 | S5 FileLu |
| # | Profile | Throughput (Mbps) |
|---|---|---|
| 1 | | |
| 2 | Mega S3 | |
| 3 | Tencent | |
| 4 | Alibaba OSS | |
| 5 | Jotta | |
| 6 | FileLu | |
| 7 | My Google Drive | |
| 8 | Cloudflare R2 | |
| 9 | My Mega | |
| 10 | My Koofr WebDAV | |
| 11 | S3 Storj | |
| 12 | My Dropbox | |
| 13 | S5 FileLu | |
| 14 | aeroftp.app | |
| 15 | S3Drive | |
| 16 | pCloud | |
| 17 | AWS | |
| 18 | AzureBlob |
Two non-obvious takeaways:
Three profiles failed in the main v3.7.3 sweep with errors that were AeroFTP bugs, not provider limits. The v3.7.4 release plus one in-queue commit (cb1e80b6) fixed them. The verify round below was run on a v3.7.4 build that includes that pending commit, with the 100 MiB cap matrix (`custom --sizes 1M,10M,100M --runs 2`) to confirm the fixes hold without re-burning bandwidth on 1 GiB payloads.
| Profile | v3.7.3 status | v3.7.4 result (verify, 100 MiB cap) | Fix commit |
|---|---|---|---|
| Filen Dev | failed at 10 MB upload (413 Payload Too Large) | OK: 21 runs / 444 MB / 195 s, 0 transient + 0 fatal errors. Upload p50: 4.77 / 24.29 / 30.83 Mbps at 1M/10M/100M. Download p50: 34.22 / 28.99 / 29.22 Mbps | 22a4bd8f, d16a63cc |
| Drime | mid-sweep dir corruption: list() mutated current_path, every operation after upload landed on a non-existent path | OK: 21 runs / 444 MB / 87 s, 0 transient + 0 fatal errors. Upload p50: 4.01 / 19.26 / 64.95 Mbps at 1M/10M/100M. Download p50: 6.38 / 53.47 / 259.88 Mbps | 253f2cc2 |
| Yandex | failed at 100 MB upload (no retry on transient upload-target) | OK at 1 MB (7 runs, 36 s, 0 errors). Upload p50: 0.87 Mbps, download p50: 5.92 Mbps. The Yandex free-tier server caps upload throughput around 1 Mbps server-side, so 10 MB and 100 MB single-shot PUTs are closed mid-stream regardless of retry logic. The cb1e80b6 fix is therefore validated where it was meant to apply (transient failures); larger payloads need chunked resumable upload with Content-Range, which stays open as a separate work item | cb1e80b6 |
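The Filen fix describes fixed 1 MiB chunks addressed by index (1 MiB / index=N). The chunk arithmetic can be sketched as below; the `chunk_plan` helper is hypothetical, and the AES-GCM encryption and actual per-chunk upload are out of scope here.

```rust
// Hypothetical sketch of a fixed-size chunk plan: each chunk is
// (index, byte offset, length), with only the last chunk short.
const CHUNK: usize = 1024 * 1024; // 1 MiB, as in the Filen fix

fn chunk_plan(total: usize) -> Vec<(usize, usize, usize)> {
    (0..)
        .map(|i| (i, i * CHUNK))
        .take_while(|&(_, off)| off < total)
        .map(|(i, off)| (i, off, CHUNK.min(total - off)))
        .collect()
}

fn main() {
    // 10 MiB + 1 byte -> 11 chunks, the last carrying a single byte
    let plan = chunk_plan(10 * CHUNK + 1);
    assert_eq!(plan.len(), 11);
    assert_eq!(plan[0], (0, 0, CHUNK));
    assert_eq!(plan[10], (10, 10 * CHUNK, 1));
    println!("ok");
}
```

Retrying a failed chunk then means re-uploading one (index, offset, len) triple instead of restarting a single oversized PUT, which is what turned the 413 hard failure into a recoverable path.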
While re-running Drime, a smaller bug surfaced and was fixed in the same session: Drime's mkdir() returned a generic ServerError when the API answered 422 Unprocessable Entity with "Folder with same name already exists.", so the benchmark command logged a cosmetic warning even though the directory existed and the run continued normally. The provider now maps that exact response to ProviderError::AlreadyExists, which the benchmark already treats as idempotent. The fix is queued for the next release.
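The shape of that mapping is a narrow status-plus-body match. This is a simplified sketch with a reduced error enum and a hypothetical `map_mkdir_error` helper; the real provider's error type carries more variants:

```rust
// Reduced error type for illustration; the real ProviderError has more variants.
#[derive(Debug, PartialEq)]
enum ProviderError {
    AlreadyExists,
    Server(u16, String),
}

// Map the exact Drime 422 duplicate-folder response to AlreadyExists;
// everything else stays a generic server error.
fn map_mkdir_error(status: u16, body: &str) -> ProviderError {
    if status == 422 && body.contains("Folder with same name already exists.") {
        ProviderError::AlreadyExists
    } else {
        ProviderError::Server(status, body.to_string())
    }
}

fn main() {
    assert_eq!(
        map_mkdir_error(422, r#"{"message":"Folder with same name already exists."}"#),
        ProviderError::AlreadyExists
    );
    assert!(matches!(map_mkdir_error(500, "boom"), ProviderError::Server(500, _)));
    println!("ok");
}
```

Matching the full message string keeps the mapping conservative: any other 422 still surfaces as a real error instead of being silently treated as idempotent.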
| Bug | Provider | Status |
|---|---|---|
| Y3 (emerged after Y2 close) | Yandex Disk | OPEN |
| Benchmark assumes overwrite-on-PUT | 4shared | OPEN: CLI should delete between runs for strict providers |
| 1G timeout cap for slow storage | idrive S3, InfiniCloud jp | OPEN: bench should accept --per-profile-timeout flag |
| No-root path matrix | kDrive, SeaFile WebDAV | OPEN: both providers refuse operations on /, need a sub-path benchmark variant |
| S5 FileLu native delete returns 500 | FileLu native | OPEN: provider-side intermittent, not reproducible on demand |
```sh
# Same CLI binary or rebuild from main post-v3.7.4
CLI=/path/to/aeroftp-cli

# Single profile
"$CLI" --profile "AWS" benchmark custom \
  --sizes 1M,10M,100M,1G --runs 3 \
  --consent-publish --report ./AWS.json
```

The --consent-publish flag wraps the JSON in BEGIN AEROFTP BENCHMARK REPORT / END AEROFTP BENCHMARK REPORT markers and runs the sanitization sweep. If anything that would identify the host or the account makes it past the per-provider redactor, the command refuses to write the report. You will not produce a poisoned report by accident.
This dataset was produced on a single development host over a single residential uplink. It is a baseline for what one developer measures from one location with one binary. The community benchmark page on docs.aeroftp.app will only draw protocol-level conclusions after aggregating community submissions across at least three regions and two connection types. Any conclusion drawn from this dataset alone is provisional.