docs(self-hosted): reference architectures #13893

10 changes: 10 additions & 0 deletions develop-docs/self-hosted/reference-architecture/index.mdx
---
title: Self-Hosted Reference Architectures
sidebar_title: Reference Architectures
sidebar_order: 3
---

This section contains reference architectures for self-hosted Sentry. These are not meant to be used as-is, but as a reference for how to deploy self-hosted Sentry around your existing infrastructure. This section can be used to create a scaling strategy if you have higher traffic loads over time.
Please note that these reference architectures do not take into account external data storage dependencies such as Kafka, Postgres, Redis, S3, or other services. If you wish to externalize those, refer to the [Experimental Configurations](/self-hosted/experimental/) section.
<PageGrid />
---
title: Separate Ingest Box
sidebar_title: Separate Ingest Box
sidebar_order: 2
---

In addition to having a [separate domain](/self-hosted/experimental/reverse-proxy/#expose-only-ingest-endpoint-publicly) for viewing the web UI and ingesting data, you can deploy a dedicated server for data ingestion that relays information to your main server. This setup is recommended for high-traffic installations and environments with multiple data centers.

This architecture helps mitigate DDoS attacks by distributing ingestion across multiple endpoints, while your main Sentry instance with the web UI should be protected on a private network (accessible via VPN). Invalid payloads sent to your Relay instances will be dropped immediately. If your main server becomes unreachable, your Relay will continue attempting to send the data.
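With this topology, applications keep their project's DSN but point its host at the nearest Relay endpoint instead of the main server. A small sketch of what changes in the DSN (the hostnames here are illustrative, not real endpoints):

```python
from urllib.parse import urlparse

# Hypothetical DSN: the public key and project ID (/1) come from the
# main Sentry server, but the host is a regional Relay endpoint.
dsn = "https://examplePublicKey@ingest-us.yourcompany.com/1"

parts = urlparse(dsn)
print(parts.hostname)          # where SDKs send envelopes: the Relay
print(parts.path.lstrip("/"))  # the project ID, resolved by the upstream
```

Only the host portion differs per region; the key and project ID remain those issued by the main Sentry server.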

Note that the region names in the diagram below are used for illustration purposes.

```mermaid
graph TB
    subgraph main [Main Sentry Server]
        direction TB
        nginx[External Nginx]
        sentry[Self-Hosted Sentry]

        nginx --> sentry
    end

    subgraph "US Ingest Server"
        direction TB
        internet1[Public Internet]
        relay1[Sentry Relay]
    end

    subgraph "Asia Ingest Server"
        direction TB
        internet2[Public Internet]
        relay2[Sentry Relay]
    end

    subgraph "Europe Ingest Server"
        direction TB
        internet3[Public Internet]
        relay3[Sentry Relay]
    end

    internet1 --> relay1 -- Through VPN or tunnel --> main
    internet2 --> relay2 -- Through VPN or tunnel --> main
    internet3 --> relay3 -- Through VPN or tunnel --> main
```

To set up the relay, install Sentry Relay on your machine by following the [Relay Getting Started Guide](https://docs.sentry.io/product/relay/getting-started/). Configure the Relay to run in `managed` mode and point it to your main Sentry server. You can customize the port and protocol (HTTP or HTTPS) as needed.

After installing Relay (via Docker or as an executable) and running the `config init` command, you can configure it with the following settings:
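As a sketch of that step (the config directory path is your choice), initializing the configuration with the official Docker image per the Relay getting started guide looks like this:

```shell
# Create a directory on the host to hold the Relay configuration
mkdir -p config/relay

# Generate config.yml and credentials.json interactively;
# the container writes them into the mounted directory
docker run --rm -it \
  -v $(pwd)/config/relay/:/work/.relay/ \
  getsentry/relay \
  config init
```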

```yaml
# Please see the relevant documentation.
# Performance tuning: https://docs.sentry.io/product/relay/operating-guidelines/
# All config options: https://docs.sentry.io/product/relay/options/
relay:
  mode: managed
  instance: default
  upstream: https://sentry.yourcompany.com/
  host: 0.0.0.0
  port: 3000

limits:
  max_concurrent_requests: 20

# To avoid having Out Of Memory issues,
# it's recommended to enable the envelope spooler.
spool:
  envelopes:
    path: /var/lib/sentry-relay/spool.db # make sure this path exists
    max_memory_size: 200MB
    max_disk_size: 1000MB

# metrics:
#   statsd: "100.100.123.123:8125"

sentry:
  enabled: true
  # Relay reports its own errors to this DSN (placeholder key shown)
  dsn: "https://examplePublicKey@sentry.yourcompany.com/1"
```

While it's possible to run Relay on a different version than your self-hosted instance, we recommend keeping both Relay and Sentry on the same version. Remember to upgrade Relay whenever you upgrade your self-hosted Sentry installation.

<Alert level="info" title="Fun Fact">
Sentry SaaS uses a similar setup for its ingestion servers, behind Google Anycast IP addresses.
</Alert>
---
title: Simple Single Node
sidebar_title: Simple Single Node
sidebar_order: 1
---

This is the simplest setup for self-hosted Sentry. It is recommended for small to medium-sized installations. This setup follows [the minimum requirements](/self-hosted/#required-minimum-system-resources) for running Sentry.

It is highly recommended to put an external load balancer (or reverse proxy) in front of your self-hosted Sentry deployment. That way, you can tweak rate limiting, TLS termination, and other features without changing the built-in `nginx` configuration file. It is recommended to install the load balancer directly on your host machine instead of running it as a Docker container; this protects you in the event of a Docker engine failure.
If running an external load balancer on the host is not possible, you can run it as a Docker container pointing to the `nginx` service on port `80`. In that case, whatever value you set for the `SENTRY_BIND` environment variable won't matter.
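As a minimal sketch of a host-level reverse proxy (the hostname, certificate paths, and upstream port are assumptions, not self-hosted Sentry defaults), an external nginx could terminate TLS and forward to the bundled `nginx` listening on the `SENTRY_BIND` port (`9000` by default):

```nginx
# /etc/nginx/conf.d/sentry.conf -- hypothetical external reverse proxy
server {
    listen 443 ssl;
    server_name sentry.yourcompany.com;  # assumed hostname

    ssl_certificate     /etc/ssl/certs/sentry.crt;     # assumed paths
    ssl_certificate_key /etc/ssl/private/sentry.key;

    # Sentry event payloads and sourcemap uploads can be large
    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:9000;  # default SENTRY_BIND port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Rate limiting, caching, and TLS settings can then be tuned here without touching the configuration shipped inside the Sentry containers.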
```mermaid
graph TB
    subgraph Server
        direction TB
        nginx[External Nginx]
        sentry[Self-Hosted Sentry]

        nginx --> sentry
    end

    internet[Public Internet]

    internet --> Server
```

For more information regarding configuring your external load balancer, please refer to the [External Load Balancer](/self-hosted/experimental/reverse-proxy/) section.