Commit bf7e81c

Rclone improvements (#366)
Co-authored-by: Alexey A. Leonov <[email protected]>
1 parent 5c49716 commit bf7e81c

7 files changed: +70 −26 lines changed


app/(docs)/dcs/buckets/object-listings/page.md

Lines changed: 1 addition & 1 deletion
@@ -49,5 +49,5 @@ Avoid using access grants or S3 credentials with different path encryption setti
 {% /callout %}
 
 {% callout type="info" %}
-The [](docId:4oDAezF-FcfPr0WPl7knd) in the Satellite Console cannot list objects with unencrypted object keys yet. If you try to open a bucket with such objects, you'll see it empty with a message "You have objects locked with a different passphrase". Support for unencrypted object keys in the Object Browser will be added in a future release. Until then, you can use the [](docId:TC-N6QQVQg8w2cRqvEqEf) or a S3-compatible app to list such objects.
+The [](docId:4oDAezF-FcfPr0WPl7knd) in the Satellite Console cannot list objects with unencrypted object keys yet. If you try to open a bucket with such objects, you'll see it empty with a message "You have objects locked with a different passphrase". Support for unencrypted object keys in the Object Browser will be added in a future release. Until then, you can use the [](docId:TC-N6QQVQg8w2cRqvEqEf) or an S3-compatible app to list such objects.
 {% /callout %}

app/(docs)/dcs/third-party-tools/file-transfer-performance/page.md

Lines changed: 4 additions & 4 deletions
@@ -42,7 +42,7 @@ So, for the purposes of demonstrating uploads and downloads for many smaller fil
 
 When working with small and medium-sized files, the optimal parallelism is limited by the segment, or "chunk", size. With [](docId:WayQo-4CZXkITaHiGeQF_), this segmentation is referred to as "concurrency." So, for example, a 1GB file would be optimally uploaded to Storj with the following command:
 
-```Text
+```bash
 rclone copy --progress --s3-upload-concurrency 16 --s3-chunk-size 64M 1gb.zip remote:bucket
 ```
 
@@ -60,7 +60,7 @@ For example, a 10GB file could theoretically be transferred with 160 concurrency
 
 Rclone also offers the advantage of being able to transfer multiple files in parallel with the `--transfers` flag. For example, multiple 1GB files could be transferred simultaneously with this command, modified from the single file example above:
 
-```Text
+```bash
 rclone copy --progress --transfers 4 --s3-upload-concurrency 16 --s3-chunk-size 64M 1gb.zip remote:bucket
 ```
 
@@ -72,7 +72,7 @@ The relationship of constant chunk size to variable file size is the determining
 
 The same basic mathematical calculations for uploads are also relevant for downloads. However, since the Uplink CLI supports parallelism with downloads, it is often the better choice for performance. This can be achieved using the `--parallelism` flag, as shown below:
 
-```Text
+```bash
 uplink cp sj://bucket/bighugefile.zip ~/Downloads/bighugefile.zip --parallelism 4
 ```
 
@@ -82,7 +82,7 @@ Because Uplink bypasses the Storj edge network layer, this is the best option fo
 
 With small files, [](docId:Mk51zylAE6xmqP7jUYAuX) is still the best option to use for downloads as well. This is again thanks to the `--transfers` flag that allows Rclone to download multiple files in parallel, taking advantage of concurrency even when files are smaller than the Storj segment size. To download 10 small files at once with Rclone, the command would be:
 
-```Text
+```bash
 rclone copy --progress --transfers 10 remote:bucket /tmp
 ```
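The chunk-size arithmetic these hunks rely on can be sketched in shell: the `16` in `--s3-upload-concurrency 16` is simply the 1GB example file divided into 64M chunks. The file and chunk sizes below mirror the examples above and are illustrative, not a prescribed formula:

```shell
# Sketch: derive the upload concurrency used in the examples above.
# A 1 GiB file split into 64 MiB chunks yields 16 chunks, so a
# concurrency of 16 lets every chunk upload in parallel.
FILE_SIZE=$((1024 * 1024 * 1024))   # 1gb.zip, in bytes
CHUNK_SIZE=$((64 * 1024 * 1024))    # --s3-chunk-size 64M, in bytes
CONCURRENCY=$(( (FILE_SIZE + CHUNK_SIZE - 1) / CHUNK_SIZE ))  # ceiling division
echo "optimal --s3-upload-concurrency: $CONCURRENCY"
```

For larger files the same division produces larger concurrency values, which is why the surrounding text caps practical concurrency well below the theoretical maximum.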

app/(docs)/dcs/third-party-tools/rclone/page.md

Lines changed: 12 additions & 5 deletions
@@ -15,24 +15,31 @@ metadata:
 
 Follow the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) to setup Rclone.
 
-The following is more details about the 2 ways you can use Rclone with Storj.
+There are 2 ways to use Rclone with Storj:
+
+1. **S3 Compatible:** Connect to the Storj network via the S3 protocol/S3 gateway.
+2. **Native:** Connect over the Storj protocol to access your bucket.
 
 ## S3 Compatible
 
-Use our [S3 compatible API](docId:eZ4caegh9queuQuaazoo) to increase upload performance and reduce the load on your systems and network. A 1GB upload will result in only 1GB of data being uploaded
+Use our [S3 compatible API](docId:eZ4caegh9queuQuaazoo) to increase upload performance and reduce the load on your systems and network. A 1GB upload will result in only 1GB of data being uploaded.
 
 - Faster upload
 - Reduction in network load
 - Server-side encryption
 
+[See common commands](docId:AsyYcUJFbO1JI8-Tu8tW3) to get started!
+
 ## Native
 
-Use our native integration pattern to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded [](docId:Pksf8d0TCLY2tBgXeT18d), thus a 1GB upload will result in 2.68GB of data being uploaded to storage nodes across the network.
+Use our native Rclone integration to take advantage of client-side encryption and to achieve the best possible download performance. Note that uploads are erasure-coded locally [](docId:Pksf8d0TCLY2tBgXeT18d); thus, uploading a 1GB file will result in 2.68GB of data being uploaded from your network (to storage nodes across the network).
 
 - End-to-end encryption
 - Faster download speed
 
+[See common commands](docId:Mk51zylAE6xmqP7jUYAuX) to get started!
+
 {% quick-links %}
-{% quick-link title="Rclone S3 compatible" href="docId:AsyYcUJFbO1JI8-Tu8tW3" /%}
-{% quick-link title="Rclone native" href="docId:Mk51zylAE6xmqP7jUYAuX" /%}
+{% quick-link title="Rclone - S3 Compatible" href="docId:AsyYcUJFbO1JI8-Tu8tW3" /%}
+{% quick-link title="Rclone - Native" href="docId:Mk51zylAE6xmqP7jUYAuX" /%}
 {% /quick-links %}
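The two patterns correspond to two different remote types in `rclone.conf`. A rough sketch of what the two configurations look like side by side; the remote names, placeholder credentials, and the gateway endpoint here are illustrative assumptions, not output of this commit:

```ini
# Hypothetical rclone.conf sketch showing both flavours.

[storj-s3]                ; S3 Compatible: via the Storj S3 gateway
type = s3
provider = Storj
access_key_id = <your-access-key>
secret_access_key = <your-secret-key>
endpoint = gateway.storjshare.io

[storj-native]            ; Native: over the Storj protocol
type = storj
access_grant = <your-access-grant>
```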

app/(docs)/dcs/third-party-tools/rclone/rclone-native/page.md

Lines changed: 7 additions & 5 deletions
@@ -13,18 +13,20 @@ metadata:
 
 ## Selecting an Integration Pattern
 
-Use our native integration pattern to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1GB upload will result in 2.68GB of data being uploaded to storage nodes across the network.
+Use our native Rclone integration to take advantage of client-side encryption and to achieve the best possible download performance. Note that uploads are erasure-coded locally [](docId:Pksf8d0TCLY2tBgXeT18d); thus, uploading a 1GB file will result in 2.68GB of data being uploaded from your network (to storage nodes across the network).
 
-## Use this pattern for
+Use this pattern (native integration) for:
 
 - The strongest security
 - The best download speeds
 
+Alternatively, you can use the [S3 compatible integration](docId:eZ4caegh9queuQuaazoo) with Rclone to increase upload performance and reduce the load on your systems and network.
+
 ## Setup
 
-First, [Download](https://rclone.org/downloads/) and extract the rclone binary onto your system.
+First, [download rclone](https://rclone.org/downloads/) and extract the rclone binary onto your system.
 
-Execute the config command:
+Execute the config command to set up a new Storj "remote" configuration:
 
 ```bash
 rclone config
@@ -179,4 +181,4 @@ q) Quit config
 e/n/d/r/c/s/q> q
 ```
 
-For additional commands you can do, see [](docId:WayQo-4CZXkITaHiGeQF_).
+For a listing of Rclone commands for general use, see [](docId:WayQo-4CZXkITaHiGeQF_).
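The 2.68GB figure quoted above is just the native erasure-coding expansion factor applied to the file size; a quick shell sketch of the arithmetic (the 2.68 factor is taken from the page's own text and should be treated as approximate):

```shell
# Sketch: estimate bytes leaving your network for a native upload.
FILE_GB=1
# 2.68 is the approximate expansion factor quoted in the docs above.
UPLOADED=$(awk -v gb="$FILE_GB" 'BEGIN { printf "%.2f", gb * 2.68 }')
echo "~${UPLOADED}GB uploaded for a ${FILE_GB}GB file"
```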

app/(docs)/dcs/third-party-tools/rclone/rclone-s3/page.md

Lines changed: 43 additions & 8 deletions
@@ -1,28 +1,28 @@
 ---
-title: Rclone additional commands
+title: Rclone Commands
 docId: WayQo-4CZXkITaHiGeQF_
 redirects:
   - /dcs/how-tos/sync-files-with-rclone/rclone-with-hosted-gateway
 metadata:
-  title: Rclone with S3 Compatibility Guide
-  description: Step-by-step guide to configure Rclone pointed to Storj's S3 compatible API, providing better upload performance and lower network load.
+  title: Rclone Command Guide
+  description: Step-by-step guide to use Rclone with common commands.
 ---
 
 {% callout type="info" %}
 Follow the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) to setup Rclone.
 {% /callout %}
 
-The follow are additional commands or options you can consider when using Rclone
+The following are additional commands and options you can consider when using Rclone.
 
-## Configuration password
+## Configuration Password
 
-For additional security, you should consider using the `s) Set configuration password` option. It will encrypt the `rclone.conf` configuration file. This way secrets like the [](docId:OXSINcFRuVMBacPvswwNU), the encryption passphrase, and the access grant can't be easily stolen.
+For additional security, you should consider using the `s) Set configuration password` option. It will encrypt the `rclone.conf` configuration file. This way, secrets like the [](docId:OXSINcFRuVMBacPvswwNU), the encryption passphrase, and the access grant can't be easily stolen.
 
 ## Create a Bucket
 
 Use the `mkdir` command to create new bucket, e.g., `mybucket`.
 
-```yaml
+```bash
 rclone mkdir waterbear:mybucket
 ```
 
@@ -162,8 +162,43 @@ Or between two Storj buckets.
 rclone sync --progress waterbear-us:mybucket/videos/ waterbear-europe:mybucket/videos/
 ```
 
-Or even between another cloud storage and Storj.
+Or even between another cloud storage (e.g., an AWS S3 connection named `s3`) and Storj.
 
 ```bash
 rclone sync --progress s3:mybucket/videos/ waterbear:mybucket/videos/
 ```
+
+## Mounting a Bucket
+
+Use the `mount` command to mount a bucket to a folder (Mac, Windows and Linux) or as a disk drive (Windows). When mounted, you can use the bucket as a local folder (drive).
+
+{% tabs %}
+{% tab label="Windows" %}
+```powershell
+mkdir ~/mybucket
+rclone mount waterbear:mybucket ~/mybucket --vfs-cache-mode full
+```
+{% /tab %}
+
+{% tab label="Linux" %}
+```bash
+sudo mkdir /mnt/mybucket
+sudo chown $USER: /mnt/mybucket
+rclone mount waterbear:mybucket /mnt/mybucket --vfs-cache-mode full
+```
+{% /tab %}
+
+{% tab label="macOS" %}
+```shell
+sudo mkdir /mnt/mybucket
+sudo chown $USER: /mnt/mybucket
+rclone mount waterbear:mybucket /mnt/mybucket --vfs-cache-mode full
+```
+{% /tab %}
+{% /tabs %}
+
+{% callout type="info" %}
+The `--vfs-cache-mode full` flag means that all reads and writes are cached to disk. Without it, reads and writes are done directly to the Storj bucket.
+{% /callout %}
+
+To unmount the bucket, use the `Ctrl-C` keystroke to stop rclone.
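Besides `Ctrl-C`, a mounted bucket can also be released explicitly. A sketch for Linux, where `fusermount` comes from the FUSE utilities that `rclone mount` relies on; the mount point is the hypothetical `/mnt/mybucket` from the tabs above:

```shell
# Sketch: unmount /mnt/mybucket if it is currently mounted (Linux).
MOUNTPOINT="/mnt/mybucket"
if grep -qs " $MOUNTPOINT " /proc/mounts; then
  fusermount -u "$MOUNTPOINT"   # ask the FUSE layer to detach rclone
else
  echo "not mounted: $MOUNTPOINT"
fi
```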

app/(docs)/dcs/third-party-tools/s3fs/page.md

Lines changed: 1 addition & 1 deletion
@@ -87,4 +87,4 @@ Now you can use the mounted bucket almost as any folder.
 
 We recommend having a look at [](docId:LdrqSoECrAyE_LQMvj3aF) and its [`rclone mount` command](https://rclone.org/commands/rclone_mount/) as well.
 
-Please note - you can configure a native connector in rclone, (see: [](docId:Mk51zylAE6xmqP7jUYAuX)) and use [](docId:Pksf8d0TCLY2tBgXeT18d), unlike [](docId:yYCzPT8HHcbEZZMvfoCFa) which uses [](docId:hf2uumViqYvS1oq8TYbeW) to provide a S3-compatible protocol (the S3 protocol does not use client side encryption by design).
+Please note - you can configure a native connector in rclone (see: [](docId:Mk51zylAE6xmqP7jUYAuX)) and use [](docId:Pksf8d0TCLY2tBgXeT18d), unlike [](docId:yYCzPT8HHcbEZZMvfoCFa), which uses [](docId:hf2uumViqYvS1oq8TYbeW) to provide an S3-compatible protocol (the S3 protocol does not use client-side encryption by design).

app/(docs)/learn/self-host/gateway-st/page.md

Lines changed: 2 additions & 2 deletions
@@ -488,13 +488,13 @@ If you use `localhost` or `127.0.0.1` as your `local_IP`, you will not be able to
 
 You can use the [Minio caching technology](https://docs.min.io/docs/minio-disk-cache-guide.html) in conjunction with the hosting of a static website.
 
-> The following example uses `/mnt/drive1`, `/mnt/drive2`, `/mnt/cache1` ... `/mnt/cache3` for caching, while excluding all objects under bucket `mybucket` and all objects with '.pdf' extensions on a S3 Gateway setup. Objects are cached if they have been accessed three times or more. Cache max usage is restricted to 80% of disk capacity in this example. Garbage collection is triggered when the high watermark is reached (i.e. at 72% of cache disk usage) and will clear the least recently accessed entries until the disk usage drops to the low watermark - i.e. cache disk usage drops to 56% (70% of 80% quota).
+> The following example uses `/mnt/drive1`, `/mnt/drive2`, `/mnt/cache1` ... `/mnt/cache3` for caching, while excluding all objects under bucket `mybucket` and all objects with '.pdf' extensions on an S3 Gateway setup. Objects are cached if they have been accessed three times or more. Cache max usage is restricted to 80% of disk capacity in this example. Garbage collection is triggered when the high watermark is reached (i.e. at 72% of cache disk usage) and will clear the least recently accessed entries until the disk usage drops to the low watermark - i.e. cache disk usage drops to 56% (70% of 80% quota).
 
 Export the environment variables before running the Gateway:
 
 {% tabs %}
 {% tab label="Windows" %}
-Cache disks are not supported, because caching requires the [`atime`](http://kerolasa.github.io/filetimes.html) function to be enabled.
+Cache disks are not supported, because caching requires the [`atime`](https://kerolasa.github.io/filetimes.html) function to be enabled.
 
 ```Text
 $env:MINIO_CACHE="on"
