Merged
30 changes: 19 additions & 11 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -39,6 +39,9 @@ jobs:
- name: Setup Rust
uses: dtolnay/rust-toolchain@stable

# - name: Set up QEMU
# uses: docker/setup-qemu-action@v3

- name: Install Protoc
uses: arduino/setup-protoc@v2
with:
@@ -70,17 +73,13 @@ jobs:
with:
context: .
file: anvil/Dockerfile
load: true
# Triggers Error: buildx failed with: ERROR: failed to build: docker exporter does not currently support exporting manifest lists
# https://github.com/docker/buildx/issues/59
load: true # When ARM support is re-enabled, this needs to be disabled
push: false
tags: ${{ steps.img.outputs.tag }}
tags: anvil:test
# # platforms: linux/amd64,linux/arm64
platforms: linux/amd64
build-args: |
BINARY_PATH=./target/release

- name: Validate runtime binary in image
run: |
docker run --rm ${{ steps.img.outputs.tag }} ls -l /usr/local/bin
docker run --rm ${{ steps.img.outputs.tag }} /usr/local/bin/anvil --help >/dev/null

- name: Wait for PostgreSQL to be ready
run: |
@@ -130,12 +129,18 @@ jobs:
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
# platforms: linux/amd64,linux/arm64
platforms: linux/amd64
build-args: |
BINARY_PATH=./target/release
cache-from: type=gha
cache-to: type=gha,mode=max

- name: Prepare Release Assets
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
run: |
mkdir -p release
docker cp $(docker create ${{ steps.meta.outputs.tags }}):/usr/local/bin/anvil-cli release/anvil
docker cp $(docker create ${{ steps.meta.outputs.tags }}):/usr/local/bin/admin release/anvil-admin
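
Note that each `docker cp $(docker create …)` call above spawns a fresh container that is never removed. A hedged sketch of the same step that creates the container once and cleans up afterwards (same image tag and binary paths assumed):

```yaml
- name: Prepare Release Assets
  if: github.event_name == 'push' && github.ref == 'refs/heads/main'
  run: |
    # Create one temporary container, copy both binaries out, then remove it
    cid=$(docker create ${{ steps.meta.outputs.tags }})
    mkdir -p release
    docker cp "$cid":/usr/local/bin/anvil-cli release/anvil
    docker cp "$cid":/usr/local/bin/admin release/anvil-admin
    docker rm "$cid"
```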

- name: Create GitHub Release
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
uses: softprops/action-gh-release@v1
@@ -150,3 +155,6 @@ jobs:
```sh
docker pull ghcr.io/${{ github.repository }}:${{ steps.tag.outputs.tag_name }}
```
files: |
release/anvil
release/anvil-admin
2 changes: 2 additions & 0 deletions anvil/Dockerfile
@@ -11,6 +11,7 @@ COPY . .

# Build the anvil server and the admin CLI in release mode
RUN cargo build --release --bin anvil --bin admin
RUN cargo build --release -p anvil-cli

# Stage 2: Create the final, minimal image
FROM rust:latest
@@ -24,6 +25,7 @@ RUN apt-get update && apt-get purge -y build-essential pkg-config libssl-dev pro
# Copy the compiled binaries from the builder stage
COPY --from=builder /usr/src/anvil/target/release/anvil /usr/local/bin/anvil
COPY --from=builder /usr/src/anvil/target/release/admin /usr/local/bin/admin
COPY --from=builder /usr/src/anvil/target/release/anvil-cli /usr/local/bin/anvil-cli

# Expose the default gRPC/S3 port and a potential swarm port
EXPOSE 50051
53 changes: 25 additions & 28 deletions docs/01-getting-started.md
@@ -1,20 +1,20 @@
---
slug: /anvil/getting-started
title: 'Getting Started: Anvil in 10 Minutes'
description: A hands-on guide to launching a single-node Anvil instance with Docker and interacting with it using an S3 client.
tags: [getting-started, docker, s3]
description: A hands-on guide to launching a single-node Anvil instance with Docker and interacting with it using the Anvil CLI.
tags: [getting-started, docker, cli]
---

# Chapter 1: Anvil in 10 Minutes

> **TL;DR:** Use our `docker-compose.yml` to launch a single-node Anvil instance and its Postgres database. Use the `anvil-cli` or any S3 client to create a bucket and upload your first file.
> **TL;DR:** Use our `docker-compose.yml` to launch a single-node Anvil instance and its Postgres database. Use the `anvil` CLI to create a bucket and upload your first file.

This guide will walk you through the fastest way to get a fully functional, single-node Anvil instance running on your local machine. By the end, you will have created a bucket, uploaded a file, and downloaded it back.

### 1.1. Prerequisites

- **Docker and Docker Compose:** Anvil is packaged as a Docker container for easy deployment. Ensure you have both [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) installed.
- **An S3 Client:** You will need a client tool that can speak the S3 protocol. We recommend the [AWS Command Line Interface (CLI)](https://aws.amazon.com/cli/).
- **The `anvil` CLI:** The Anvil command-line interface is the primary tool for interacting with your Anvil cluster. It is provided as part of your Anvil distribution.

### 1.2. Launching Anvil with Docker Compose

@@ -61,7 +61,7 @@ services:
retries: 5

anvil1:
image: ghcr.io/worka-ai/anvil:main
image: ghcr.io/worka-ai/anvil:v2025.11.14-001012
depends_on:
postgres-global:
condition: service_healthy
@@ -112,7 +112,7 @@ This command will download the necessary images and start the Anvil and Postgres

### 1.3. Creating Your First Tenant and API Key

Anvil is a multi-tenant system. Before you can create buckets, you need a **Tenant** and an **App** with an API key. You can create these using the `admin` CLI, which we will run inside the running Docker container.
Anvil is a multi-tenant system. Before you can create buckets, you need a **Tenant** and an **App** with an API key. You can create these using the `admin` tool, which we will run inside the running Docker container.

**Step 1: Create the Region and Tenant**

@@ -126,14 +126,14 @@ docker compose exec anvil1 admin tenants create my-first-tenant

**Step 2: Create an App**

Next, create an App for this tenant. This will generate the credentials needed to interact with the S3 API.
Next, create an App for this tenant. This will generate the credentials needed to interact with the API.

```bash
# Create an app and get its credentials (uses named flags)
docker compose exec anvil1 admin apps create --tenant-name my-first-tenant --app-name my-s3-app
docker compose exec anvil1 admin apps create --tenant-name my-first-tenant --app-name my-cli-app
```

This command will output a **Client ID** and a **Client Secret**. **Save these securely!** They are your S3 access credentials.
This command will output a **Client ID** and a **Client Secret**. **Save these securely!** They are your API credentials.

**Step 3: Grant Permissions**

@@ -143,31 +143,28 @@ By default, a new app has **no permissions**. You must explicitly grant it the r

```bash
# Grant the app full permissions on all resources
docker compose exec anvil1 admin policies grant --app-name my-s3-app --action "*" --resource "*"
docker compose exec anvil1 admin policies grant --app-name my-cli-app --action "*" --resource "*"
```

### 1.4. Using an S3 Client to Create a Bucket
### 1.4. Using the `anvil` CLI to Create a Bucket

Now you can configure your S3 client to connect to Anvil. For the AWS CLI, you can set the credentials and endpoint URL using environment variables.
Now you can configure the `anvil` CLI to connect to your new Anvil instance.

Replace `YOUR_CLIENT_ID` and `YOUR_CLIENT_SECRET` with the values you saved in the previous step.
**Step 1: Configure the CLI**

```bash
export AWS_ACCESS_KEY_ID=YOUR_CLIENT_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=europe-west-1
Run the `configure` command and provide the host and the credentials you saved.

# The Anvil S3 endpoint (note the port)
ANVIL_ENDPOINT="http://localhost:50051"
```bash
# Replace YOUR_CLIENT_ID and YOUR_CLIENT_SECRET with the values from the previous step
anvil configure --host http://localhost:50051 --client-id YOUR_CLIENT_ID --client-secret YOUR_CLIENT_SECRET
```

Now, create a bucket. Bucket names must be globally unique.
**Step 2: Create a Bucket**

Now, create a bucket.

```bash
aws s3api create-bucket \
--bucket my-first-anvil-bucket \
--region europe-west-1 \
--endpoint-url $ANVIL_ENDPOINT
anvil bucket create --name my-first-anvil-bucket --region europe-west-1
```

### 1.5. Uploading and Downloading Your First Object
@@ -178,22 +175,22 @@ Create a sample file to upload:
echo "Hello, Anvil!" > hello.txt
```

Upload it to your new bucket:
Upload it to your new bucket using an S3-style path:

```bash
aws s3 cp hello.txt s3://my-first-anvil-bucket/hello.txt --endpoint-url $ANVIL_ENDPOINT
anvil object put --src hello.txt --dest s3://my-first-anvil-bucket/hello.txt
```

You can list the objects in your bucket to confirm the upload was successful:

```bash
aws s3 ls s3://my-first-anvil-bucket/ --endpoint-url $ANVIL_ENDPOINT
anvil object ls --path s3://my-first-anvil-bucket/
```

Finally, download the file back to verify its contents:

```bash
aws s3 cp s3://my-first-anvil-bucket/hello.txt downloaded_hello.txt --endpoint-url $ANVIL_ENDPOINT
anvil object get --src s3://my-first-anvil-bucket/hello.txt --dest downloaded_hello.txt

cat downloaded_hello.txt
# Expected output: Hello, Anvil!
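
# Optional: verify the round trip byte-for-byte with sha256sum (a sketch;
# the `cp` below is a stand-in for the `anvil object get` call above, so
# this snippet also runs on its own).
echo "Hello, Anvil!" > hello.txt
cp hello.txt downloaded_hello.txt
orig_sum=$(sha256sum hello.txt | cut -d' ' -f1)
copy_sum=$(sha256sum downloaded_hello.txt | cut -d' ' -f1)
[ "$orig_sum" = "$copy_sum" ] && echo "round trip OK"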
8 changes: 4 additions & 4 deletions docs/03-user-guide-authentication.md
@@ -25,7 +25,7 @@ This model ensures that you can issue, rotate, and revoke credentials for differ

### 3.2. Creating an App and Getting Credentials

You create an App using the `anvil admin` CLI (as shown in the Getting Started guide) or via the administrative API.
You create an App using the `admin` tool (as shown in the Getting Started guide) or via the administrative API.

```bash
# This command is run by an administrator
@@ -49,7 +49,7 @@ Permissions in Anvil are defined by policies that connect an App to an **action*
* `write`: Permission to create, update, or delete resources.
* `grant`: Permission to manage the permissions of other apps (a highly privileged action).

A policy is granted using the admin CLI:
A policy is granted using the admin tool:

```bash
# Grant the app permission to read and write objects in 'my-data-bucket'
@@ -75,9 +75,9 @@ When a bucket is public:
- `GetObject` and `HeadObject` operations are allowed for anonymous users (without any authentication).
- All other operations (`PutObject`, `DeleteObject`, `ListObjects`) still require valid, authorized credentials.

You can set a bucket's public status using the `anvil admin` CLI or the gRPC API.
You can set a bucket's public status using the `admin` tool or the gRPC API.

```bash
# Make a bucket public (requires 'grant' permission on the bucket)
docker compose exec anvil1 admin buckets set-public-access --bucket my-public-assets --allow
```
```
44 changes: 8 additions & 36 deletions docs/04-user-guide-s3-gateway.md
@@ -11,11 +11,13 @@ tags: [user-guide, s3, aws-cli, rclone, sdk]

One of Anvil's most powerful features is its S3-compatible API gateway. This allows you to leverage the vast ecosystem of existing S3 tools, libraries, and SDKs to interact with your Anvil cluster without needing to write any custom code.

> **Note:** While this guide focuses on S3-compatible tools, the `anvil` CLI is the recommended primary interface for most operations. See the [Getting Started](./getting-started) guide for `anvil` examples.

### 4.1. Configuring S3 Clients

To connect an S3 client to Anvil, you need to configure three things:

1. **Endpoint URL:** The HTTP address of your Anvil node (e.g., `http://localhost:9000`).
1. **Endpoint URL:** The HTTP address of your Anvil node (e.g., `http://localhost:50051`).
2. **Access Key ID:** Your Anvil App's **Client ID**.
3. **Secret Access Key:** Your Anvil App's **Client Secret**.

@@ -29,10 +31,10 @@ export AWS_ACCESS_KEY_ID="YOUR_CLIENT_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_CLIENT_SECRET"

# The region your bucket is in
export AWS_DEFAULT_REGION="DOCKER_TEST"
export AWS_DEFAULT_REGION="europe-west-1"

# The Anvil S3 endpoint
ANVIL_ENDPOINT="http://localhost:9000"
ANVIL_ENDPOINT="http://localhost:50051"
```

Alternatively, you can create a dedicated profile in your `~/.aws/config` and `~/.aws/credentials` files.
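
A sketch of such a profile (the profile name `anvil` is an arbitrary choice; substitute your real credentials):

```ini
# ~/.aws/config
[profile anvil]
region = europe-west-1

# ~/.aws/credentials
[anvil]
aws_access_key_id = YOUR_CLIENT_ID
aws_secret_access_key = YOUR_CLIENT_SECRET
```

You can then pass `--profile anvil` instead of exporting environment variables; note that the `--endpoint-url` flag is still required on each command, e.g. `aws s3 ls --profile anvil --endpoint-url $ANVIL_ENDPOINT`.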
@@ -56,10 +58,10 @@ import boto3

s3_client = boto3.client(
's3',
endpoint_url='http://localhost:9000',
endpoint_url='http://localhost:50051',
aws_access_key_id='YOUR_CLIENT_ID',
aws_secret_access_key='YOUR_CLIENT_SECRET',
region_name='DOCKER_TEST'
region_name='europe-west-1'
)
```

@@ -72,7 +74,7 @@ Once configured, you can use the standard S3 commands to manage your buckets and
```bash
aws s3api create-bucket \
--bucket my-s3-bucket \
--region DOCKER_TEST \
--region europe-west-1 \
--endpoint-url $ANVIL_ENDPOINT
```

@@ -93,33 +95,3 @@ aws s3 ls s3://my-s3-bucket/ --endpoint-url $ANVIL_ENDPOINT
```bash
aws s3 cp s3://my-s3-bucket/remote-file.txt downloaded-file.txt --endpoint-url $ANVIL_ENDPOINT
```

### 4.3. Generating Presigned URLs

Anvil's S3 gateway supports generating presigned URLs, which provide temporary, credential-less access to your objects. This is the most secure way to grant a user temporary access to download or upload a specific file.

**Generate a Presigned URL for Download (GET)**

```bash
aws s3 presign s3://my-s3-bucket/remote-file.txt --expires-in 300 --endpoint-url $ANVIL_ENDPOINT
```

This will return a long URL that can be used by anyone to download `remote-file.txt` for the next 5 minutes (300 seconds).

```bash
# Anyone can use this URL to download the file
curl "THE_PRESIGNED_URL"
```

**Generate a Presigned URL for Upload (PUT)**

```bash
aws s3 presign s3://my-s3-bucket/new-object.txt --expires-in 600 --endpoint-url $ANVIL_ENDPOINT
```

This URL can be used to upload a file to the specified key.

```bash
curl -T "local-upload.txt" "THE_PRESIGNED_URL"
```

8 changes: 4 additions & 4 deletions docs/06-operational-guide-deployment.md
@@ -15,7 +15,7 @@ This chapter covers the fundamentals of deploying Anvil. The architecture is fle

A single-node deployment is the simplest way to run Anvil and is perfect for development, testing, or small-scale use cases. It consists of one Anvil instance and two PostgreSQL databases (which can run on the same Postgres server).

See the `docker-compose.yml` in the [Getting Started](/docs/anvil/getting-started) guide for a complete, working example.
See the `docker-compose.yml` in the [Getting Started](../getting-started) guide for a complete, working example.

**Key Configuration Parameters:**

@@ -92,7 +92,7 @@ sudo firewall-cmd --reload

### 6.3. Configuration Reference

Anvil is configured entirely through environment variables. The following is a reference for the most important variables, defined in `src/config.rs`.
Anvil is configured entirely through environment variables. The following is a reference for the most important variables.

| Variable | Description |
| ------------------------------- | --------------------------------------------------------------------------- |
@@ -101,7 +101,7 @@ Anvil is configured entirely through environment variables. The following is a r
| `REGION` | **Required.** The name of the region this node belongs to. |
| `JWT_SECRET` | **Required.** Secret key for minting and verifying JWTs. |
| `ANVIL_SECRET_ENCRYPTION_KEY` | **Required.** A 64-character hex-encoded string for AES-256 encryption. <br/><br/> **CRITICAL:** This key is used to encrypt sensitive data at rest. It **MUST** be a cryptographically secure, 64-character hexadecimal string (representing 32 bytes). Loss of this key will result in permanent data loss. <br/><br/> Generate a secure key with: <br/> `openssl rand -hex 32` |
| `ANVIL_CLUSTER_SECRET` | A shared secret to authenticate and encrypt inter-node gossip messages. |
| `CLUSTER_SECRET` | A shared secret to authenticate and encrypt inter-node gossip messages. |
| `API_LISTEN_ADDR` | The local IP and port for the unified S3 Gateway and gRPC service (e.g., `0.0.0.0:50051`). |
| `CLUSTER_LISTEN_ADDR` | The local multiaddress for the QUIC P2P listener. |
| `PUBLIC_CLUSTER_ADDRS` | Comma-separated list of public-facing multiaddresses for this node. |
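
Generating and sanity-checking the `ANVIL_SECRET_ENCRYPTION_KEY` can be scripted; a minimal sketch, assuming `openssl` is on the PATH:

```bash
# Generate a 32-byte key, hex-encoded to 64 lowercase hex characters.
key=$(openssl rand -hex 32)

# Fail fast if the key is not exactly 64 hex characters.
[ "${#key}" -eq 64 ] || { echo "unexpected key length" >&2; exit 1; }
case "$key" in *[!0-9a-f]*) echo "not hex" >&2; exit 1 ;; esac

export ANVIL_SECRET_ENCRYPTION_KEY="$key"
echo "generated ${#key}-character key"
```

Store the generated key in your secret manager before first boot; as the table notes, losing it makes data encrypted at rest unrecoverable.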
@@ -116,4 +116,4 @@ The separation of databases is a key scaling feature.

- **Global Database:** This is the single source of truth for low-volume, globally relevant data. It contains tables for `tenants`, `buckets`, `apps`, `policies`, and `regions`. Because all nodes access this, it can become a bottleneck if not managed correctly, but the data it holds changes infrequently.

- **Regional Database:** This database handles the high-volume traffic of object metadata. Each region has its own, containing the `objects` table. This allows object listing and searching to be handled locally within a region, preventing a single database from having to index billions of objects from around the world.
- **Regional Database:** This database handles the high-volume traffic of object metadata. Each region has its own, containing the `objects` table. This allows object listing and searching to be handled locally within a region, preventing a single database from having to index billions of objects from around the world.
20 changes: 10 additions & 10 deletions docs/07-operational-guide-admin-cli.md
@@ -1,19 +1,19 @@
---
slug: /anvil/operational-guide/admin-cli
title: 'Operational Guide: The Anvil Admin CLI'
description: A reference guide for using the `anvil admin` command-line interface to manage tenants, apps, policies, and regions.
tags: [operational-guide, admin, cli, tenants, apps, policies]
---
slug: /anvil/operational-guide/admin-tool
title: 'Operational Guide: The Admin Tool'
description: A reference guide for using the `admin` tool to manage tenants, apps, policies, and regions.
tags: [operational-guide, admin, tenants, apps, policies]
---

# Chapter 7: The Anvil Admin CLI
# Chapter 7: The Admin Tool

> **TL;DR:** Use the `anvil admin` CLI for core administrative tasks. It connects directly to the global database to manage tenants, regions, apps, and policies.
> **TL;DR:** Use the `admin` tool for core administrative tasks. It connects directly to the global database to manage tenants, regions, apps, and policies.

Anvil includes a powerful command-line interface (CLI) for performing essential administrative tasks. This tool is the primary way to bootstrap the system and manage high-level resources. It works by connecting directly to the global PostgreSQL database.
Anvil includes a powerful command-line tool for performing essential administrative tasks. This tool is the primary way to bootstrap the system and manage high-level resources. It works by connecting directly to the global PostgreSQL database.

### Running the Admin CLI
### Running the Admin Tool

When running Anvil via Docker Compose, you can execute the admin CLI using `docker-compose exec`. Note that the command is `admin`, not `anvil admin`.
When running Anvil via Docker Compose, you can execute the admin tool with `docker compose exec`. The command to run is `admin`.

```bash
docker compose exec anvil1 admin <COMMAND>