docs/sources/alerting/_index.md (+3 −1)
@@ -7,7 +7,7 @@ weight: 700
Loki includes a component called the Ruler, adapted from our upstream project, Cortex. The Ruler is responsible for continually evaluating a set of configurable queries and then alerting when certain conditions occur, e.g. a high percentage of error logs.
-First, ensure the Ruler component is enabled. The following is a basic configuration which loads rules from configuration files (it requires `/tmp/rules` and `/tmp/scratch` to exist):
+First, ensure the Ruler component is enabled. The following is a basic configuration which loads rules from configuration files:
```yaml
ruler:
  # ... (rest of the ruler block unchanged in this hunk)
```
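The ruler block is truncated in this diff. As a hedged sketch only, a file-based ruler configuration might look like the following; the Alertmanager URL, the `inmemory` ring store, and `enable_api` are illustrative assumptions, not taken from this diff (the `/tmp/rules` and `/tmp/scratch` paths are the ones the removed sentence mentioned):

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /tmp/rules      # directory scanned for Prometheus-style rule files
  rule_path: /tmp/scratch        # scratch space for temporary rule files
  alertmanager_url: http://localhost:9093  # where firing alerts are sent (assumed URL)
  ring:
    kvstore:
      store: inmemory            # single-process ring; use consul/etcd/memberlist when clustered
  enable_api: true
```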
@@ -168,6 +168,8 @@ Because the rule files are identical to Prometheus rule files, we can interact w
> **Note:** Not all commands in cortextool currently support Loki.
+> **Note:** cortextool was designed to run against multi-tenant Loki, so commands need an `--id=` flag set to the Loki tenant ID, or the environment variable `CORTEX_TENANT_ID` set. If Loki is running in single-tenant mode, the required ID is `fake` (yes, we know this might seem alarming, but it's totally fine; no, it can't be changed).
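As a usage sketch, a cortextool invocation against a single-tenant Loki instance might look like the following; the `rules list` subcommand and `--address` flag are assumptions about cortextool's CLI, not taken from this diff:

```shell
# Point cortextool at Loki; in single-tenant mode the tenant ID must be "fake".
cortextool rules list --address=http://localhost:3100 --id=fake

# Equivalently, set the tenant via the environment variable instead of the flag:
export CORTEX_TENANT_ID=fake
cortextool rules list --address=http://localhost:3100
```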
docs/sources/operations/storage/boltdb-shipper.md (+2 −4)
@@ -1,9 +1,7 @@
---
-title: BoltDB Shipper
+title: Single Store (boltdb-shipper)
---
-# Loki with BoltDB Shipper
-
-:warning: BoltDB Shipper is still an experimental feature. It is not recommended to be used in production environments.
+# Single Store Loki (boltdb-shipper index type)
BoltDB Shipper lets you run Loki without any dependency on NoSQL stores for storing the index.
Instead, it stores the index locally in BoltDB files and keeps shipping those files to a shared object store, i.e. the same object store that is used for storing chunks.
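As an illustrative sketch only (the start date, paths, and the exact index `period` are assumptions, not from this page), enabling boltdb-shipper touches both the schema and the storage config:

```yaml
schema_config:
  configs:
    - from: 2020-07-01           # assumed start date for the boltdb-shipper schema period
      store: boltdb-shipper      # index kept in local BoltDB files, shipped to object storage
      object_store: filesystem   # the same object store that already holds the chunks
      schema: v11
      index:
        prefix: index_
        period: 24h              # index file rotation period (assumed value)

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index   # where new BoltDB index files are written
    cache_location: /loki/boltdb-cache    # local cache of index files fetched from the store
    shared_store: filesystem              # object store the index files are shipped to
  filesystem:
    directory: /loki/chunks
```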
docs/sources/operations/storage/filesystem.md (−100)
@@ -42,103 +42,3 @@ The durability of the objects is at the mercy of the filesystem itself
### High Availability
Running Loki clustered is not possible with the filesystem store unless the filesystem is shared in some fashion (NFS, for example). However, using shared filesystems is likely to be a bad experience with Loki, just as it is for almost every other application.
-
-## New AND VERY EXPERIMENTAL in 1.5.0: Horizontal scaling of the filesystem store
-
-**WARNING** As the title suggests, this is very new and potentially buggy, and it is also very likely that the configs around this feature will change over time.
-
-With that warning out of the way, the addition of the [boltdb-shipper](../boltdb-shipper/) index store has added capabilities making it possible to overcome many of the limitations listed above with the filesystem store, specifically running Loki with the filesystem store on separate machines that still operate as a cluster, supporting replication and write distribution via the hash ring.
-
-As mentioned in the title, this is very alpha at this point, but we would love for people to try it and help us flush out bugs.
-
-Here is an example config to run with Loki.
-
-Use this config on multiple computers (or containers); do not run the copies on the same computer, as Loki uses the hostname as the ID in the ring.
-
-Do not use a shared filesystem such as NFS for this; each machine should have its own filesystem.
-
-```yaml
-auth_enabled: false # single tenant mode
-
-server:
-  http_listen_port: 3100
-
-ingester:
-  max_transfer_retries: 0     # Disable blocks transfers on ingesters shutdown or rollout.
-  chunk_idle_period: 2h       # Let chunks sit idle for at least 2h before flushing; this helps to reduce total chunks in the store
-  max_chunk_age: 2h           # Let chunks get at least 2h old before flushing due to age; this helps to reduce total chunks in the store
-  chunk_target_size: 1048576  # Target chunks of 1MB; this helps to reduce total chunks in the store
-  chunk_retain_period: 30s
-
-  query_store_max_look_back_period: -1 # This will allow the ingesters to query the store for all data
-  lifecycler:
-    heartbeat_period: 5s
-    interface_names:
-      - eth0
-    join_after: 30s
-    num_tokens: 512
-    ring:
-      heartbeat_timeout: 1m
-      kvstore:
-        consul:
-          consistent_reads: true
-          host: localhost:8500
-          http_client_timeout: 20s
-        store: consul
-      replication_factor: 1 # This can be increased, and probably should be if you are running multiple machines!
-
-schema_config:
-  configs:
-    - from: 2018-04-15
-      store: boltdb-shipper
-      object_store: filesystem
-      schema: v11
-      index:
-        prefix: index_
-        period: 168h
-
-storage_config:
-  boltdb_shipper:
-    shared_store: filesystem
-    active_index_directory: /tmp/loki/index
-    cache_location: /tmp/loki/boltdb-cache
-  filesystem:
-    directory: /tmp/loki/chunks
-
-limits_config:
-  enforce_metric_name: false
-  reject_old_samples: true
-  reject_old_samples_max_age: 168h
-
-chunk_store_config:
-  max_look_back_period: 0s # No limit on how far back we can look in the store
-
-table_manager:
-  retention_deletes_enabled: false
-  retention_period: 0s # No deletions, infinite retention
-```
-
-This does require Consul to be running for the ring (any of the ring stores will work: consul, etcd, memberlist; Consul is used in this example).
-
-Consul must also be reachable from each machine. This example only specifies `host: localhost:8500`; you will likely need to change this to the correct hostname/IP and port of your Consul server.
-
-**The config needs to be the same on every Loki instance!**
-
-The important piece of this config is `query_store_max_look_back_period: -1`; this tells Loki to allow the ingesters to look in the store for all the data.
-
-Traffic can be sent to any of the Loki servers and can be round-robin load balanced if desired.
-
-Each Loki instance will use Consul to properly route both reads and writes to the correct Loki instance.
-
-Scaling up is as easy as adding more Loki instances and letting them talk to the same ring.
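The example above uses Consul, and the text notes that any ring store (consul, etcd, memberlist) would work. As a hedged sketch of the memberlist alternative, which removes the external Consul dependency by letting the Loki instances gossip among themselves (the hostnames and the default gossip port 7946 are illustrative assumptions), the `kvstore` section could be replaced like so:

```yaml
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist   # gossip-based ring; no external KV store needed

memberlist:
  join_members:             # seed instances to join the gossip cluster (hypothetical hosts)
    - loki-1:7946
    - loki-2:7946
```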
-
-Scaling down is harder but possible. You would need to shut down a Loki server and then take everything in:
-
-```yaml
-filesystem:
-  directory: /tmp/loki/chunks
-```
-
-and copy it to the same directory on another Loki server. There is currently no way to split the chunks between servers; you must move them all. We expect to provide more options here in the future.
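That scaling-down step could be sketched as follows, using the paths from the example config above; the surviving node's hostname is hypothetical, and Loki on the drained node must be stopped first:

```shell
# Stop Loki on the node being removed so no new chunks are written,
# then copy all of its chunks to one of the remaining nodes:
rsync -av /tmp/loki/chunks/ loki-2:/tmp/loki/chunks/
```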