
Commit 9535e3c

agup006 authored and gitbook-bot committed
GitBook: [1.6] 139 pages and 22 assets modified
1 parent 46a39bb · commit 9535e3c

31 files changed (+49 / -11 lines)
5.98 KB

.gitbook/assets/image (1).png: 25.9 KB
.gitbook/assets/image (2).png: 12.7 KB
.gitbook/assets/image (3).png: 12.7 KB
.gitbook/assets/image (4).png: 11 KB
.gitbook/assets/image (5).png: 17.4 KB
.gitbook/assets/image (6).png: 14.1 KB
.gitbook/assets/image (7).png: 65.8 KB
.gitbook/assets/image (8).png: 27.9 KB
.gitbook/assets/image (9).png: 14.1 KB

SUMMARY.md

Lines changed: 2 additions & 2 deletions
@@ -118,9 +118,9 @@
 * [Amazon CloudWatch](pipeline/outputs/cloudwatch.md)
 * [Amazon Kinesis Data Firehose](pipeline/outputs/firehose.md)
 * [Amazon S3](pipeline/outputs/s3.md)
-* [Azure](pipeline/outputs/azure.md)
+* [Azure Log Analytics](pipeline/outputs/azure.md)
 * [Azure Blob](pipeline/outputs/azure_blob.md)
-* [BigQuery](pipeline/outputs/bigquery.md)
+* [Google Cloud BigQuery](pipeline/outputs/bigquery.md)
 * [Counter](pipeline/outputs/counter.md)
 * [Datadog](pipeline/outputs/datadog.md)
 * [Elasticsearch](pipeline/outputs/elasticsearch.md)

administration/configuring-fluent-bit/configuration-file.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ The _Service_ section defines global properties of the service, the keys availab
 | HTTP\_Server | Enable built-in HTTP Server | Off |
 | HTTP\_Listen | Set listening interface for HTTP Server when it's enabled | 0.0.0.0 |
 | HTTP\_Port | Set TCP Port for the HTTP Server | 2020 |
-| Coro\_Stack\_Size | Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small value (say 4096), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. | 24576 |
+| Coro\_Stack\_Size | Set the coroutines stack size in bytes. The value must be greater than the page size of the running system. Don't set too small value \(say 4096\), or coroutine threads can overrun the stack buffer. Do not change the default value of this parameter unless you know what you are doing. | 24576 |

 The following is an example of a _SERVICE_ section:

concepts/data-pipeline/buffer.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ Previously defined in the [Buffering](../buffering.md) concept section, the `buf

 The `buffer` phase already contains the data in an immutable state, meaning, no other filter can be applied.

-![](../../.gitbook/assets/logging_pipeline_buffer%20%281%29%20%281%29.png)
+![](../../.gitbook/assets/logging_pipeline_buffer%20%281%29%20%281%29%20%281%29.png)

 {% hint style="info" %}
 Note that buffered data is not raw text, it's in Fluent Bit's internal binary representation.

concepts/data-pipeline/input.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ description: The way to gather data from your sources

 [Fluent Bit](http://fluentbit.io) provides different _Input Plugins_ to gather information from different sources, some of them just collect data from log files while others can gather metrics information from the operating system. There are many plugins for different needs.

-![](../../.gitbook/assets/logging_pipeline_input%20%281%29.png)
+![](../../.gitbook/assets/logging_pipeline_input%20%281%29%20%281%29.png)

 When an input plugin is loaded, an internal _instance_ is created. Every instance has its own and independent configuration. Configuration keys are often called **properties**.

concepts/data-pipeline/output.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ description: 'Destinations for your data: databases, cloud services and more!'

 The output interface allows us to define destinations for the data. Common destinations are remote services, local file system or standard interface with others. Outputs are implemented as plugins and there are many available.

-![](../../.gitbook/assets/logging_pipeline_output.png)
+![](../../.gitbook/assets/logging_pipeline_output%20%281%29.png)

 When an output plugin is loaded, an internal _instance_ is created. Every instance has its own independent configuration. Configuration keys are often called **properties**.

concepts/data-pipeline/parser.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ description: Convert Unstructured to Structured messages

 Dealing with raw strings or unstructured messages is a constant pain; having a structure is highly desired. Ideally we want to set a structure to the incoming data by the Input Plugins as soon as they are collected:

-![](../../.gitbook/assets/logging_pipeline_parser%20%281%29.png)
+![](../../.gitbook/assets/logging_pipeline_parser%20%281%29%20%281%29.png)

 The Parser allows you to convert from unstructured to structured data. As a demonstrative example consider the following Apache \(HTTP Server\) log entry:

concepts/data-pipeline/router.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ description: Create flexible routing rules

 Routing is a core feature that allows to **route** your data through Filters and finally to one or multiple destinations. The router relies on the concept of [Tags](../key-concepts.md) and [Matching](../key-concepts.md) rules

-![](../../.gitbook/assets/logging_pipeline_routing%20%281%29%20%281%29.png)
+![](../../.gitbook/assets/logging_pipeline_routing%20%281%29%20%281%29%20%281%29.png)

 There are two important concepts in Routing:

pipeline/outputs/azure.md

Lines changed: 9 additions & 1 deletion
@@ -1,4 +1,12 @@
-# Azure
+---
+description: 'Send logs, metrics to Azure Log Analytics'
+---
+
+# Azure Log Analytics
+
+
+
+![](../../.gitbook/assets/image%20%287%29.png)

 Azure output plugin allows to ingest your records into [Azure Log Analytics](https://azure.microsoft.com/en-us/services/log-analytics/) service.
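For orientation on the renamed page, a minimal azure output stanza might look like the following sketch; the Customer_ID, Shared_Key and Log_Type keys come from the plugin's parameter table (not shown in this hunk) and the values are placeholders:

[OUTPUT]
    Name        azure
    Match       *
    # Log Analytics workspace ID and primary key (placeholders)
    Customer_ID <log-analytics-workspace-id>
    Shared_Key  <log-analytics-workspace-key>
    # Name of the custom log type created in Log Analytics
    Log_Type    fluentbit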

pipeline/outputs/bigquery.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# BigQuery
+# Google Cloud BigQuery

 BigQuery output plugin is an _experimental_ plugin that allows you to stream records into [Google Cloud BigQuery](https://cloud.google.com/bigquery/) service. The implementation does not support the following, which would be expected in a full production version:
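As a rough sketch of how the experimental plugin is wired up, assuming the credential and destination keys from the plugin's own page (paths and IDs are placeholders):

[OUTPUT]
    Name                       bigquery
    Match                      *
    # Service account JSON used to authenticate against BigQuery (placeholder path)
    google_service_credentials /path/to/service-account-credentials.json
    # Destination: project.dataset.table (placeholders)
    project_id                 <gcp-project>
    dataset_id                 <dataset>
    table_id                   <table>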

pipeline/outputs/cloudwatch.md

Lines changed: 6 additions & 0 deletions
@@ -1,5 +1,11 @@
+---
+description: Send logs and metrics to Amazon CloudWatch
+---
+
 # Amazon CloudWatch

+![](../../.gitbook/assets/image%20%282%29.png)
+
 The Amazon CloudWatch output plugin allows to ingest your records into the [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) service. Support for CloudWatch Metrics is also provided via [EMF](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html).

 This is the documentation for the core Fluent Bit CloudWatch plugin written in C. It can replace the [aws/amazon-cloudwatch-logs-for-fluent-bit](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit) Golang Fluent Bit plugin released last year. The Golang plugin was named `cloudwatch`; this new high performance CloudWatch plugin is called `cloudwatch_logs` to prevent conflicts/confusion. Check the amazon repo for the Golang plugin for details on the deprecation/migration plan for the original plugin.
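The `cloudwatch_logs` name matters in practice: it is what goes in the Name key. A minimal sketch, assuming the group/stream keys documented for the C plugin (names and region are illustrative):

[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    fluent-bit-cloudwatch
    # Stream name is derived from this prefix plus the record tag
    log_stream_prefix from-fluent-bit-
    # Create the log group if it does not exist yet
    auto_create_group On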

pipeline/outputs/datadog.md

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+---
+description: Send logs to Datadog
+---
+
 # Datadog

 The Datadog output plugin allows to ingest your logs into [Datadog](https://app.datadoghq.com/signup).
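A minimal sketch of the stanza, assuming the apikey parameter from the plugin's page (the key value is a placeholder):

[OUTPUT]
    Name   datadog
    Match  *
    # Datadog API key used to authenticate the intake requests (placeholder)
    apikey <your-datadog-api-key>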

pipeline/outputs/elasticsearch.md

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+---
+description: Send logs to Elasticsearch (including Amazon Elasticsearch Service)
+---
+
 # Elasticsearch

 The **es** output plugin, allows to ingest your records into a [Elasticsearch](http://www.elastic.co) database. The following instructions assumes that you have a fully operational Elasticsearch service running in your environment.
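A minimal sketch of an es output stanza, assuming the Host/Port/Index/Type keys from the plugin's parameter table (host and names are illustrative):

[OUTPUT]
    Name  es
    Match *
    # Address of the Elasticsearch node (illustrative)
    Host  192.168.2.3
    Port  9200
    # Target index and document type
    Index my_index
    Type  my_type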

pipeline/outputs/firehose.md

Lines changed: 6 additions & 0 deletions
@@ -1,5 +1,11 @@
+---
+description: Send logs to Amazon Kinesis Firehose
+---
+
 # Amazon Kinesis Data Firehose

+![](../../.gitbook/assets/image%20%288%29.png)
+
 The Amazon Kinesis Data Firehose output plugin allows to ingest your records into the [Firehose](https://aws.amazon.com/kinesis/data-firehose/) service.

 This is the documentation for the core Fluent Bit Firehose plugin written in C. It can replace the [aws/amazon-kinesis-firehose-for-fluent-bit](https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit) Golang Fluent Bit plugin released last year. The Golang plugin was named `firehose`; this new high performance and highly efficient firehose plugin is called `kinesis_firehose` to prevent conflicts/confusion.
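As with CloudWatch, the plugin name `kinesis_firehose` is what goes in the Name key. A minimal sketch, assuming the region and delivery_stream parameters from the plugin's page (values are illustrative):

[OUTPUT]
    Name            kinesis_firehose
    Match           *
    region          us-east-1
    # Name of an existing Firehose delivery stream (placeholder)
    delivery_stream my-delivery-stream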

pipeline/outputs/s3.md

Lines changed: 7 additions & 1 deletion
@@ -1,6 +1,12 @@
+---
+description: 'Send logs, data, metrics to Amazon S3'
+---
+
 # Amazon S3

-The Amazon S3 output plugin allows to ingest your records into the [S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) cloud object store.
+![](../../.gitbook/assets/image%20%286%29.png)
+
+The Amazon S3 output plugin allows you to ingest your records into the [S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) cloud object store.

 The plugin can upload data to S3 using the [multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) or using S3 [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html). Multipart is the default and is recommended; Fluent Bit will stream data in a series of 'parts'. This limits the amount of data it has to buffer on disk at any point in time. By default, every time 5 MiB of data have been received, a new 'part' will be uploaded. The plugin can create files up to gigabytes in size from many small chunks/parts using the multipart API. All aspects of the upload process are configurable using the configuration options.
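To make the multipart behavior above concrete, a minimal sketch of an s3 stanza, assuming the bucket/region and size keys from the plugin's parameter table (bucket name is a placeholder):

[OUTPUT]
    Name              s3
    Match             *
    bucket            <your-s3-bucket>
    region            us-east-1
    # Upload a new 'part' every 5 MiB received (the default mentioned above)
    upload_chunk_size 5M
    # Complete the multipart upload once the object reaches this size
    total_file_size   100M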

pipeline/outputs/splunk.md

Lines changed: 4 additions & 0 deletions
@@ -1,3 +1,7 @@
+---
+description: Send logs to Splunk HTTP Event Collector
+---
+
 # Splunk

 Splunk output plugin allows to ingest your records into a [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) service through the HTTP Event Collector \(HEC\) interface.
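A minimal sketch of a splunk output stanza speaking to the HEC interface, assuming the Host/Port/Splunk_Token keys from the plugin's page (host and token are placeholders; 8088 is the usual HEC port):

[OUTPUT]
    Name         splunk
    Match        *
    Host         <splunk-host>
    Port         8088
    # HTTP Event Collector token (placeholder)
    Splunk_Token <hec-token>
    TLS          On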
