
Community Benchmark, first round (v3.7.3 / v3.7.4)

Date: 2026-05-07 / 2026-05-08
CLI: aeroftp-cli 3.7.3 (main sweep) + 3.7.4 (verify round)
Issue: #177
Schema: v1
Profiles: 35 sanitized reports

This page is the maintainer-side reference run for the AeroFTP Community Benchmark initiative announced in issue #177. It is a single-host, single-residential-uplink sample published as a baseline, not as a population-level benchmark. Selection bias is explicit in section 9.


1. Why this round exists

When AeroFTP claims that one protocol is faster than another on a given provider, that claim should be defensible. A single fiber line in a single timezone is not a credible basis for a public protocol comparison page. Issue #177 invites the community to run the same matrix against their own profiles and submit a sanitized JSON report.

Before asking other people to do that, the maintainer ran the matrix on every profile saved on the development host. The output of that exercise is the dataset and bar charts below, plus the bug fixes the sweep surfaced.

2. Matrix

The main sweep uses the new benchmark custom subcommand introduced in v3.7.3:

```bash
aeroftp-cli --profile "<name>" benchmark custom \
  --sizes 1M,10M,100M,1G --runs 3 \
  --consent-publish --report <out>.json
```

Each (size, run) tuple exercises five operations: upload, download, list, stat, delete. Numbers are reported as p50/p95/min/max/stddev per operation, not as arithmetic means. The CLI runs a sanitization pass before writing the JSON: if any path, hostname, account, bucket name, IP, MAC, token or fingerprint slips through, the report is rejected, not anonymized post-hoc.
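For readers checking the numbers: the per-operation summary can be recomputed from the raw run timings in any report. Below is an illustrative Python sketch; the `summarize` helper, its field names, and the nearest-rank percentile choice are assumptions for illustration, not the CLI's actual implementation.

```python
import statistics

def percentile(samples, p):
    # Nearest-rank percentile over a small sample set (3 runs per size).
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def summarize(samples):
    # One summary record per (operation, size), as the report stores them.
    return {
        "p50": percentile(samples, 50),
        "p95": percentile(samples, 95),
        "min": min(samples),
        "max": max(samples),
        "stddev": statistics.pstdev(samples) if len(samples) > 1 else 0.0,
    }

# e.g. three upload durations (seconds) for one payload size
print(summarize([1.2, 0.9, 1.1]))
```

With only three runs, p95 effectively reports the worst run, which is why the tables below sometimes show large p50/p95 gaps.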

3. Coverage

| Class | Profiles | Notes |
| --- | --- | --- |
| Full matrix (1M + 10M + 100M + 1G all OK) | AWS S3, Cloudflare R2, Alibaba OSS, S3 Storj, Tencent COS, S5 FileLu, Mega S3, Google S3, S3Drive, Drime, Internxt, Jotta, Koofr (native), FileLu, My Mega, Google Drive, Dropbox, OneDrive, pCloud, Azure Blob, Koofr WebDAV, FeliCloud, aeroftp.app FTPS | 27 profiles |
| Partial (provider quota / size cap) | jianguoyun (CN, 1G refused), WebDAV DriveHQ (free quota), MyBox (Box 250 MB cap on the free tier), Internxt (10 GB quota saturated), MyZoho (1G blocked, 100M OK) | 5 profiles |
| Hard-failed in v3.7.3, fixed in v3.7.4 | Filen Dev (chunked AES-GCM), Yandex (transient upload-target race), Drime (list() mutated current_path) | 3 profiles, re-run with v3.7.4 |
| Out of scope this round | idrive S3 (cold-storage timeout), InfiniCloud jp (1G stuck at upload-target), kDrive, SeaFile WebDAV (no operations on root, need a sub-path matrix), Lumo NAS (powered off), Wasabi / Quotaless (access expired), 8 GitHub-as-storage profiles, 7 Aruba FTP duplicates, 3 media CDNs (ImageKit, Uploadcare, Cloudinary) | covered by separate handoffs or future rounds |

4. Bug fixes shipped because of this sweep

The sweep itself acted as a stress test of the rest of the codebase. Every defect surfaced was fixed before publishing the dataset. The fixes are split between the v3.7.3 patch queue (commits 253f2cc2 + 8e0f0b8f) and the v3.7.4 release (22a4bd8f, d16a63cc, cb1e80b6).
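One class of failure fixed in this batch, the rc=141 exits, comes from SIGPIPE terminating the process when a server closes a socket mid-write. A guard in the spirit of SigpipeIgnoreGuard can be sketched in Python; the real guard lives in the CLI's own codebase, so this is only an analogue of the idea (ignore SIGPIPE for the benchmark's duration, restore the previous handler afterwards).

```python
import contextlib
import signal

@contextlib.contextmanager
def sigpipe_ignore_guard():
    # While active, a peer-closed socket surfaces as a normal write
    # error instead of killing the whole process with exit code 141.
    old = signal.signal(signal.SIGPIPE, signal.SIG_IGN)
    try:
        yield
    finally:
        signal.signal(signal.SIGPIPE, old)

with sigpipe_ignore_guard():
    pass  # run the benchmark matrix here
```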

| Fix | Provider | Commit | Verified by |
| --- | --- | --- | --- |
| benchmark_sanitize substitutes PII before assertion | all | 8e0f0b8f | FeliCloud, Azure Blob (reports written instead of rejected) |
| SigpipeIgnoreGuard wraps cmd_benchmark | all | 8e0f0b8f | Azure Blob, S3 Backblaze, Yandex (no more rc=141) |
| OneDrive nested mkdir splits relative path on / | OneDrive | 253f2cc2 | OneDrive (full matrix in 178 s) |
| Drime::list() no longer mutates current_path | Drime | 253f2cc2 | Drime (full matrix in 65 s post-fix) |
| HTTP read_timeout 300 s -> 1800 s on all 24 providers | all HTTP-based | 253f2cc2 | Koofr WebDAV (1G upload in 1010 s instead of dying at 5 min) |
| Chunked AES-GCM upload (1 MiB / index=N) | Filen native | 22a4bd8f | Filen Dev (10M+ no longer hits 413) |
| Per-chunk retry on egest body decode failures | Filen native | d16a63cc | Filen Dev (transient decode body no longer fatal) |
| Retry upload PUT with fresh upload-target on transient failures | Yandex Disk | cb1e80b6 | Yandex (100M no longer single-PUT race) |
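The Filen fix splits uploads into 1 MiB chunks addressed by index=N instead of sending one oversized request. The slicing arithmetic, stripped of the AES-GCM encryption and the HTTP layer, looks roughly like this sketch (the function name is illustrative, not the actual code):

```python
CHUNK = 1 * 1024 * 1024  # 1 MiB, per the fix above

def chunks(payload: bytes):
    # Yield (index, chunk) pairs; in the real flow each chunk is
    # encrypted and uploaded separately under its index=N parameter.
    for index in range((len(payload) + CHUNK - 1) // CHUNK):
        yield index, payload[index * CHUNK:(index + 1) * CHUNK]

parts = list(chunks(b"\x00" * (10 * 1024 * 1024 + 5)))
print(len(parts), len(parts[-1][1]))  # 11 chunks, last one 5 bytes
```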

5. Throughput rankings

Each entry is the p50 across 3 runs, with p95 alongside. The original charts scale bars per chart (the longest bar in each chart is that chart's max, not an absolute Mbps), so compare positions within one chart, not bar lengths across charts.
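For anyone replicating the charts from their own reports, the arithmetic is simple: throughput is payload size over wall time, and bars are scaled to each chart's own maximum. A minimal sketch, where the helper names and the 40-character chart width are assumptions:

```python
def mbps(payload_bytes, seconds):
    # Throughput in megabits per second (decimal megabits).
    return payload_bytes * 8 / seconds / 1e6

def bar_widths(p50_values, chart_width=40):
    # Scale every bar to the chart's own max, as described above.
    top = max(p50_values)
    return [round(v / top * chart_width) for v in p50_values]

# hypothetical: 1 MiB uploaded in 0.078 s
print(round(mbps(1 * 1024 * 1024, 0.078), 1))  # 107.5
print(bar_widths([107.7, 53.8, 26.9]))         # [40, 20, 10]
```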

Payload 1 MiB

Upload

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Alibaba OSS | 107.7 | 107.7 |
| 2 | AzureBlob | 91.3 | 94.5 |
| 3 | AWS | 65.1 | 68.2 |
| 4 | Tencent | 64.1 | 69.3 |
| 5 | S3Drive | 42.1 | 53.1 |
| 6 | S3 Storj | 39.5 | 41.7 |
| 7 | Google | 32.4 | 32.7 |
| 8 | Cloudflare R2 | 29.4 | 34.5 |
| 9 | MyZoho | 24.3 | 26.3 |
| 10 | Jotta | 20.8 | 24.6 |
| 11 | WebDav DriveHQ | 18.5 | 19.8 |
| 12 | Koofr | 18.3 | 24.4 |
| 13 | pCloud | 17.0 | 17.1 |
| 14 | Mega S3 | 15.7 | 41.1 |
| 15 | aeroftp.app | 13.4 | 13.6 |
| 16 | My Mega | 12.9 | 16.1 |
| 17 | My Koofr WebDAV | 12.6 | 12.9 |
| 18 | My Dropbox | 12.5 | 12.7 |
| 19 | Filen Dev | 6.2 | 7.0 |
| 20 | Internxt | 4.3 | 5.2 |
| 21 | My Google Drive | 3.8 | 3.9 |
| 22 | MyBox | 3.2 | 3.8 |
| 23 | jianguoyun | 2.1 | 2.1 |
| 24 | Drime | 2.0 | 2.0 |
| 25 | FileLu | 1.4 | 1.4 |
| 26 | S5 FileLu | 1.4 | 1.4 |
| 27 | Yandex | 0.8 | 0.9 |

Download

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Mega S3 | 83.7 | 84.9 |
| 2 | Tencent | 72.3 | 74.1 |
| 3 | Google | 62.3 | 67.0 |
| 4 | Alibaba OSS | 60.5 | 83.7 |
| 5 | S3 Storj | 50.0 | 51.6 |
| 6 | Cloudflare R2 | 46.6 | 48.2 |
| 7 | S3Drive | 43.9 | 46.7 |
| 8 | AzureBlob | 35.4 | 38.3 |
| 9 | Koofr | 32.7 | 37.2 |
| 10 | AWS | 31.1 | 37.9 |
| 11 | Jotta | 29.9 | 37.2 |
| 12 | Filen Dev | 25.7 | 27.5 |
| 13 | jianguoyun | 21.4 | 21.4 |
| 14 | My Koofr WebDAV | 20.4 | 27.1 |
| 15 | aeroftp.app | 18.5 | 19.2 |
| 16 | pCloud | 16.6 | 16.7 |
| 17 | MyZoho | 11.4 | 12.5 |
| 18 | WebDav DriveHQ | 10.7 | 10.9 |
| 19 | My Dropbox | 9.6 | 10.1 |
| 20 | My Mega | 9.6 | 11.7 |
| 21 | Internxt | 9.2 | 9.2 |
| 22 | My Google Drive | 6.8 | 8.2 |
| 23 | Yandex | 6.1 | 6.7 |
| 24 | FileLu | 5.4 | 5.6 |
| 25 | MyBox | 5.3 | 5.4 |
| 26 | Drime | 3.2 | 3.2 |
| 27 | S5 FileLu | 1.8 | 1.8 |

Payload 10 MiB

Upload

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Alibaba OSS | 173.5 | 179.1 |
| 2 | AzureBlob | 161.7 | 161.9 |
| 3 | S3 Storj | 121.1 | 122.6 |
| 4 | Tencent | 108.3 | 119.4 |
| 5 | pCloud | 94.3 | 101.2 |
| 6 | Cloudflare R2 | 89.1 | 91.0 |
| 7 | AWS | 76.9 | 80.2 |
| 8 | My Dropbox | 69.3 | 87.6 |
| 9 | MyZoho | 67.7 | 68.3 |
| 10 | Google | 67.4 | 68.9 |
| 11 | S3Drive | 63.1 | 73.6 |
| 12 | Koofr | 51.5 | 56.8 |
| 13 | Jotta | 50.4 | 96.0 |
| 14 | My Koofr WebDAV | 45.9 | 57.9 |
| 15 | My Mega | 40.6 | 56.4 |
| 16 | Mega S3 | 39.4 | 44.9 |
| 17 | aeroftp.app | 30.7 | 31.7 |
| 18 | My Google Drive | 26.8 | 29.1 |
| 19 | WebDav DriveHQ | 23.8 | 25.0 |
| 20 | Internxt | 21.6 | 23.3 |
| 21 | FileLu | 13.7 | 13.8 |
| 22 | MyBox | 13.1 | 13.4 |
| 23 | S5 FileLu | 6.7 | 6.8 |
| 24 | jianguoyun | 2.1 | 2.1 |
| 25 | Yandex | 1.0 | 1.0 |

Download

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Google | 284.5 | 297.2 |
| 2 | Alibaba OSS | 272.2 | 363.3 |
| 3 | Tencent | 223.5 | 298.2 |
| 4 | Cloudflare R2 | 185.4 | 208.8 |
| 5 | S3 Storj | 155.8 | 158.5 |
| 6 | Jotta | 121.9 | 134.9 |
| 7 | My Mega | 109.3 | 109.5 |
| 8 | aeroftp.app | 89.0 | 99.3 |
| 9 | Koofr | 87.1 | 98.3 |
| 10 | My Koofr WebDAV | 77.9 | 84.0 |
| 11 | Mega S3 | 67.9 | 260.8 |
| 12 | My Dropbox | 64.2 | 79.9 |
| 13 | MyZoho | 50.7 | 53.1 |
| 14 | My Google Drive | 50.0 | 52.9 |
| 15 | FileLu | 48.3 | 50.1 |
| 16 | MyBox | 45.9 | 48.5 |
| 17 | pCloud | 43.9 | 44.9 |
| 18 | jianguoyun | 43.7 | 43.9 |
| 19 | AzureBlob | 42.0 | 43.3 |
| 20 | S3Drive | 38.3 | 44.6 |
| 21 | Internxt | 35.8 | 42.4 |
| 22 | AWS | 34.1 | 36.3 |
| 23 | Yandex | 27.5 | 34.5 |
| 24 | WebDav DriveHQ | 14.2 | 14.3 |
| 25 | S5 FileLu | 13.3 | 15.2 |

Payload 100 MiB

Upload

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | AzureBlob | 281.3 | 281.6 |
| 2 | pCloud | 236.5 | 254.8 |
| 3 | Alibaba OSS | 229.6 | 239.2 |
| 4 | My Dropbox | 223.2 | 223.9 |
| 5 | S3 Storj | 194.3 | 195.5 |
| 6 | Tencent | 181.3 | 181.6 |
| 7 | Jotta | 168.1 | 212.2 |
| 8 | Cloudflare R2 | 163.7 | 164.9 |
| 9 | Mega S3 | 163.6 | 166.9 |
| 10 | Internxt | 149.8 | 167.1 |
| 11 | Google | 141.4 | 166.8 |
| 12 | S3Drive | 131.1 | 138.6 |
| 13 | aeroftp.app | 129.2 | 142.4 |
| 14 | AWS | 102.9 | 109.0 |
| 15 | Koofr | 101.5 | 110.5 |
| 16 | MyZoho | 99.1 | 102.2 |
| 17 | My Google Drive | 96.7 | 99.5 |
| 18 | FileLu | 92.5 | 96.9 |
| 19 | My Mega | 72.5 | 77.9 |
| 20 | My Koofr WebDAV | 48.7 | 49.8 |
| 21 | MyBox | 37.9 | 39.6 |
| 22 | S5 FileLu | 29.3 | 29.3 |
| 23 | WebDav DriveHQ | 23.4 | 25.8 |
| 24 | jianguoyun | 2.1 | 2.1 |

Download

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Google | 615.3 | 631.5 |
| 2 | Alibaba OSS | 594.0 | 609.6 |
| 3 | Mega S3 | 546.7 | 551.2 |
| 4 | Tencent | 516.4 | 525.0 |
| 5 | Cloudflare R2 | 401.3 | 406.8 |
| 6 | Jotta | 383.8 | 417.4 |
| 7 | My Mega | 377.3 | 388.7 |
| 8 | FileLu | 229.2 | 229.4 |
| 9 | My Dropbox | 183.7 | 199.7 |
| 10 | S3 Storj | 178.2 | 186.0 |
| 11 | Koofr | 156.2 | 161.5 |
| 12 | My Koofr WebDAV | 144.0 | 156.3 |
| 13 | My Google Drive | 136.6 | 161.4 |
| 14 | MyBox | 119.0 | 121.5 |
| 15 | aeroftp.app | 82.3 | 84.7 |
| 16 | MyZoho | 66.4 | 82.8 |
| 17 | Internxt | 64.9 | 66.2 |
| 18 | AWS | 62.6 | 71.4 |
| 19 | S5 FileLu | 57.5 | 58.0 |
| 20 | pCloud | 52.3 | 56.3 |
| 21 | jianguoyun | 44.0 | 44.4 |
| 22 | AzureBlob | 41.4 | 42.7 |
| 23 | S3Drive | 29.4 | 54.5 |
| 24 | WebDav DriveHQ | 14.8 | 14.8 |

Payload 1 GiB

Upload

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | pCloud | 281.8 | 285.3 |
| 2 | Jotta | 271.2 | 276.6 |
| 3 | My Dropbox | 240.5 | 241.3 |
| 4 | Alibaba OSS | 209.3 | 238.5 |
| 5 | aeroftp.app | 201.0 | 212.3 |
| 6 | S3 Storj | 188.1 | 189.9 |
| 7 | Cloudflare R2 | 185.0 | 190.1 |
| 8 | FileLu | 183.8 | 187.9 |
| 9 | Tencent | 178.0 | 198.3 |
| 10 | Google | 167.8 | 184.2 |
| 11 | S3Drive | 165.6 | 168.5 |
| 12 | AzureBlob | 156.9 | 157.1 |
| 13 | Mega S3 | 152.7 | 164.7 |
| 14 | AWS | 144.2 | 155.8 |
| 15 | My Google Drive | 129.4 | 138.9 |
| 16 | My Koofr WebDAV | 113.6 | 142.8 |
| 17 | My Mega | 79.2 | 86.8 |
| 18 | S5 FileLu | 40.2 | 40.4 |

Download

| # | Profile | p50 (Mbps) | p95 (Mbps) |
| --- | --- | --- | --- |
| 1 | Google | 585.1 | 616.3 |
| 2 | Mega S3 | 584.9 | 588.4 |
| 3 | Tencent | 569.3 | 596.0 |
| 4 | Alibaba OSS | 556.5 | 561.2 |
| 5 | Jotta | 471.1 | 475.2 |
| 6 | FileLu | 412.8 | 423.5 |
| 7 | My Google Drive | 304.5 | 336.2 |
| 8 | Cloudflare R2 | 286.6 | 291.4 |
| 9 | My Mega | 229.8 | 235.4 |
| 10 | My Koofr WebDAV | 210.0 | 213.9 |
| 11 | S3 Storj | 187.9 | 188.6 |
| 12 | My Dropbox | 139.8 | 176.0 |
| 13 | S5 FileLu | 97.2 | 114.0 |
| 14 | aeroftp.app | 88.8 | 95.0 |
| 15 | S3Drive | 86.4 | 91.0 |
| 16 | pCloud | 80.1 | 84.8 |
| 17 | AWS | 76.8 | 86.8 |
| 18 | AzureBlob | 43.1 | 43.1 |

Two non-obvious takeaways:

  1. Asymmetry is real. The same provider can rank top-3 on upload and bottom-3 on download (Azure Blob), or vice versa (Google S3). A single-direction benchmark would mislead.
  2. The S3 protocol is not universally fastest. Native APIs (pCloud, Jotta, Dropbox) win the 1 GiB upload bracket. The S3-as-backend assumption breaks here.

6. Verification round (post-v3.7.4)

Three profiles failed in the main v3.7.3 sweep with errors that were AeroFTP bugs, not provider limits. The v3.7.4 release plus one in-queue commit (cb1e80b6) fixed them. The verify round below was run on a v3.7.4 build that includes that pending commit, with the 100 MiB cap matrix (custom --sizes 1M,10M,100M --runs 2) to confirm the fixes hold without re-burning bandwidth on 1 GiB payloads.

| Profile | v3.7.3 status | v3.7.4 result (verify, 100 MiB cap) | Fix commit |
| --- | --- | --- | --- |
| Filen Dev | failed at 10 MB upload (413 Payload Too Large) | OK: 21 runs / 444 MB / 195 s, 0 transient + 0 fatal errors. Upload p50: 4.77 / 24.29 / 30.83 Mbps at 1M/10M/100M. Download p50: 34.22 / 28.99 / 29.22 Mbps | 22a4bd8f, d16a63cc |
| Drime | mid-sweep dir corruption: list() mutated current_path, so every operation after upload landed on a non-existent path | OK: 21 runs / 444 MB / 87 s, 0 transient + 0 fatal errors. Upload p50: 4.01 / 19.26 / 64.95 Mbps at 1M/10M/100M. Download p50: 6.38 / 53.47 / 259.88 Mbps | 253f2cc2 |
| Yandex | failed at 100 MB upload (no retry on transient upload-target) | OK at 1 MB (7 runs, 36 s, 0 errors). Upload p50: 0.87 Mbps, download p50: 5.92 Mbps. The Yandex free-tier server caps upload throughput around 1 Mbps server-side, so 10 MB and 100 MB single-shot PUTs are closed mid-stream regardless of retry logic. The cb1e80b6 fix is therefore validated where it was meant to apply (transient failures); larger payloads need chunked resumable upload with Content-Range, which stays open as a separate work item | cb1e80b6 |
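The open Yandex work item, chunked resumable upload, would send each piece under a Content-Range header naming its byte span. A sketch of how those headers could be planned (the chunk size, function name, and the decision to precompute the list are illustrative assumptions, not the eventual implementation):

```python
def content_range_headers(total, chunk):
    # One inclusive byte span per chunk, plus the total payload size,
    # per the Content-Range syntax "bytes start-end/total".
    ranges = []
    start = 0
    while start < total:
        end = min(start + chunk, total) - 1
        ranges.append(f"bytes {start}-{end}/{total}")
        start = end + 1
    return ranges

print(content_range_headers(250, 100))
# ['bytes 0-99/250', 'bytes 100-199/250', 'bytes 200-249/250']
```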

While re-running Drime, a smaller bug surfaced and was fixed in the same session: Drime's mkdir() returned a generic ServerError when the API answered 422 Unprocessable Entity with "Folder with same name already exists.", so the benchmark command logged a cosmetic warning even though the directory existed and the run continued normally. The provider now maps that exact response to ProviderError::AlreadyExists, which the benchmark already treats as idempotent. The fix is pending in the queue for the next release.
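The mapping described above amounts to classifying one specific 422 response as an idempotent success rather than a server error. A hypothetical Python sketch of that classification (the function and the string return values are illustrative; the real code returns ProviderError::AlreadyExists):

```python
def classify_mkdir_response(status, body):
    # A 422 "already exists" answer is idempotent for the benchmark:
    # the directory is there, which is all mkdir needed to ensure.
    if status == 422 and "already exists" in body.lower():
        return "AlreadyExists"
    if status >= 400:
        return "ServerError"  # the old catch-all behavior
    return "Ok"

print(classify_mkdir_response(422, "Folder with same name already exists."))
# AlreadyExists
```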

7. Known open bugs not in this round

| Bug | Provider | Status |
| --- | --- | --- |
| Y3 (emerged after Y2 close) | Yandex Disk | OPEN |
| Benchmark assumes overwrite-on-PUT | 4shared | OPEN: CLI should delete between runs for strict providers |
| 1G timeout cap for slow storage | idrive S3, InfiniCloud jp | OPEN: bench should accept --per-profile-timeout flag |
| No-root path matrix | kDrive, SeaFile WebDAV | OPEN: both providers refuse operations on /, need a sub-path benchmark variant |
| S5 FileLu native delete returns 500 | FileLu native | OPEN: provider-side intermittent, not reproducible on demand |

8. How to reproduce

```bash
# Same CLI binary or rebuild from main post-v3.7.4
CLI=/path/to/aeroftp-cli

# Single profile
"$CLI" --profile "AWS" benchmark custom \
  --sizes 1M,10M,100M,1G --runs 3 \
  --consent-publish --report ./AWS.json
```

The --consent-publish flag wraps the JSON in BEGIN AEROFTP BENCHMARK REPORT / END AEROFTP BENCHMARK REPORT markers and runs the sanitization sweep. If anything that would identify the host or the account makes it past the per-provider redactor, the command refuses to write the report. You will not produce a poisoned report by accident.
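The refuse-rather-than-redact behavior can be approximated with a small sketch. Everything here is illustrative: the real redactor is per-provider and far broader than these three patterns, and the function name is made up for this example.

```python
import json
import re

# Hypothetical patterns; the real sanitizer covers paths, hostnames,
# accounts, bucket names, IPs, MACs, tokens, and fingerprints.
PII_PATTERNS = [
    re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),               # IPv4
    re.compile(r"\b[0-9a-f]{2}(?::[0-9a-f]{2}){5}\b", re.I),  # MAC
    re.compile(r"/home/[^\s\"]+"),                            # local path
]

def refuse_if_identifying(report: dict) -> str:
    # Scan the serialized report; refuse to write if anything survives.
    blob = json.dumps(report)
    for pat in PII_PATTERNS:
        if pat.search(blob):
            raise ValueError("identifying data survived sanitization; report rejected")
    return blob
```

The key design choice mirrored here is that the check runs on the final serialized output, so a leak anywhere in the structure blocks the whole report instead of being patched field by field.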

9. Selection bias and disclaimers

This dataset was produced on:

  • Single host (Linux x86_64, kernel 6.8, Ubuntu 24.04 LTS)
  • Single uplink (residential WE fiber, asymmetric ~4 MB/s up / ~3.5 MB/s down nominal)
  • Single timezone (UTC+02, all runs in one ~36 h window)
  • Single AeroFTP version (v3.7.3 build for the main sweep, v3.7.4 build for the verify round)

It is a baseline for what one developer measures from one location with one binary. The community benchmark page on docs.aeroftp.app will only draw protocol-level conclusions after aggregating community submissions across at least three regions and two connection types. Any conclusion drawn from this dataset alone is provisional.

aeroftp.app - Released under the GPL-3.0 License.