Commit 3f73a9c

maycmlee and buraizu authored
Move OP destination shortcodes (#33278)
* move destination shortcodes
* fix datadog logs
* Apply suggestions from code review
* apply image and link fixes
* Fix link

Co-authored-by: Bryce Eadie <[email protected]>
1 parent 348bcd1 commit 3f73a9c

17 files changed: +292 −28 lines changed

content/en/logs/log_configuration/archives.md

Lines changed: 1 addition & 1 deletion
@@ -427,7 +427,7 @@ This directory structure simplifies the process of querying your historical log
 [1]: /logs/indexes/#exclusion-filters
 [2]: /logs/archives/rehydrating/
 [3]: https://app.datadoghq.com/logs/pipelines/log-forwarding
-[4]: /observability_pipelines/archive_logs/
+[4]: /observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs
 [5]: /account_management/rbac/permissions/?tab=ui#logs_write_archives
 [6]: https://app.datadoghq.com/logs/pipelines/archives
 [7]: /integrations/azure/

content/en/observability_pipelines/configuration/install_the_worker/advanced_worker_configurations.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ further_reading:
 - link: "/observability_pipelines/dual_ship_logs/"
   tag: "Documentation"
   text: "Dual ship logs with Observability Pipelines"
-- link: "/observability_pipelines/archive_logs/"
+- link: "/observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs"
   tag: "Documentation"
   text: "Archive logs with Observability Pipelines"
 - link: "/observability_pipelines/split_logs/"

content/en/observability_pipelines/destinations/amazon_s3.md

Lines changed: 44 additions & 2 deletions
@@ -44,7 +44,46 @@ You need to have Datadog's [AWS integration][3] installed to set up Datadog Log
 
 Set up the Amazon S3 destination and its environment variables when you [set up an Archive Logs pipeline][4]. The information below is configured in the pipelines UI.
 
-{{% observability_pipelines/destination_settings/datadog_archives_amazon_s3 %}}
+1. Enter your S3 bucket name. If you configured Log Archives, it's the name of the bucket you created earlier.
+1. Enter the AWS region the S3 bucket is in.
+1. Enter the key prefix.
+   - Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If you use a prefix for this purpose, it must end in `/` to act as a directory path; a trailing `/` is not automatically added.
+   - See [template syntax][8] if you want to route logs to different object keys based on specific fields in your logs.
+   - **Note**: Datadog recommends that you start your prefixes with the directory name and without a leading slash (`/`). For example, `app-logs/` or `service-logs/`.
+1. Select the storage class for your S3 bucket in the **Storage Class** dropdown menu. If you are going to archive and rehydrate your logs:
+   - **Note**: Rehydration only supports the following [storage classes][9]:
+     - Standard
+     - Intelligent-Tiering, only if [the optional asynchronous archive access tiers][10] are both disabled.
+     - Standard-IA
+     - One Zone-IA
+   - If you want to rehydrate from archives in another storage class, you must first move them to one of the supported storage classes above.
+   - See the [Example destination and log archive setup](#example-destination-and-log-archive-setup) section of this page for how to configure your Log Archive based on your Amazon S3 destination setup.
+1. Optionally, select an AWS authentication option. If you are only using the [user or role you created earlier](#set-up-an-iam-policy-that-allows-workers-to-write-to-the-s3-bucket) for authentication, do not select **Assume role**. Select **Assume role** only if the user or role you created earlier needs to assume a different role to access the specific AWS resource, and that permission has to be explicitly defined.<br>If you select **Assume role**:
+   1. Enter the ARN of the IAM role you want to assume.
+   1. Optionally, enter the assumed role session name and external ID.
+   - **Note**: The [user or role you created earlier](#set-up-an-iam-policy-that-allows-workers-to-write-to-the-s3-bucket) must have permission to assume this role so that the Worker can authenticate with AWS.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
+
+#### Example destination and log archive setup
+
+If you enter the following values for your Amazon S3 destination:
+- S3 Bucket Name: `test-op-bucket`
+- Prefix to apply to all object keys: `op-logs`
+- Storage class for the created objects: `Standard`
+
+{{< img src="observability_pipelines/setup/amazon_s3_destination.png" alt="The Amazon S3 destination setup with the example values" style="width:40%;" >}}
+
+Then these are the values you enter when configuring the S3 bucket for Log Archives:
+
+- S3 bucket: `test-op-bucket`
+- Path: `op-logs`
+- Storage class: `Standard`
+
+{{< img src="observability_pipelines/setup/amazon_s3_archive.png" alt="The log archive configuration with the example values" style="width:70%;" >}}
 
 ### Set the environment variables
 
@@ -78,7 +117,10 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 [1]: /logs/log_configuration/archives/
 [2]: /logs/log_configuration/rehydrating/
 [3]: /integrations/amazon_web_services/#setup
-[4]: /observability_pipelines/archive_logs/
+[4]: /observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs
 [5]: /observability_pipelines/configuration/set_up_pipelines/
 [6]: https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3
 [7]: /observability_pipelines/destinations/#event-batching
+[8]: /observability_pipelines/destinations/#template-syntax
+[9]: /logs/log_configuration/archives/?tab=awss3#storage-class
+[10]: https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
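
To make the key-prefix and template-syntax steps in this file concrete, here is a minimal sketch using the example values above (`test-op-bucket`, `op-logs`). The `{{service}}` placeholder and the `dt=`/`hour=` object layout are assumptions for illustration, not output of this commit; check the linked template-syntax and storage-class pages for the authoritative forms.

```sh
# A hedged sketch, assuming the documented Log Archives layout.
# Prefix examples:
#   op-logs/              -> fixed "directory"; the trailing slash is needed,
#                            since it is not added automatically
#   op-logs/{{service}}/  -> hypothetical template routing on a log's
#                            `service` field (assumed syntax)
# List what the Worker wrote under the prefix:
aws s3 ls "s3://test-op-bucket/op-logs/" --recursive | head
# expected shape (assumed): op-logs/dt=20240610/hour=14/archive_140000.<id>.json.gz
```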

content/en/observability_pipelines/destinations/amazon_security_lake.md

Lines changed: 23 additions & 2 deletions
@@ -17,7 +17,25 @@ Set up the Amazon Security Lake destination and its environment variables when y
 
 ### Set up the destination
 
-{{% observability_pipelines/destination_settings/amazon_security_lake %}}
+1. Enter your S3 bucket name.
+1. Enter the AWS region.
+1. Enter the custom source name.
+1. Optionally, select an [AWS authentication][5] option.
+   1. Enter the ARN of the IAM role you want to assume.
+   1. Optionally, enter the assumed role session name and external ID.
+1. Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required.<br>**Note**: All file paths are made relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations][4] for more information. The files must be owned by the `observability-pipelines-worker` group and `observability-pipelines-worker` user, or at least readable by the group or user.
+   - `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
+   - `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
+   - `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
+
+**Notes**:
+- When you add the Amazon Security Lake destination, the OCSF processor is automatically added so that you can convert your logs to Parquet before they are sent to Amazon Security Lake. See the [Remap to OCSF][3] documentation for setup instructions.
+- Only logs formatted by the OCSF processor are converted to Parquet.
 
 ### Set the environment variables
 
@@ -42,4 +60,7 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 | None | 256,000,000 | 300 |
 
 [1]: https://app.datadoghq.com/observability-pipelines
-[2]: /observability_pipelines/destinations/#event-batching
+[2]: /observability_pipelines/destinations/#event-batching
+[3]: /observability_pipelines/processors/remap_ocsf
+[4]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/
+[5]: /observability_pipelines/destinations/amazon_security_lake/#aws-authentication
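
The TLS note in this file (repeated for CrowdStrike NG-SIEM below) requires the certificate and key files to be readable by the Worker's user and group. Here is a minimal sketch of what that implies on a Linux host, assuming the default data directory from the doc; the `tls/` subdirectory and file names are hypothetical:

```sh
# Place the TLS files under the configuration data directory (default shown).
sudo mkdir -p /var/lib/observability-pipelines-worker/config/tls
sudo cp server.crt ca.crt server.key /var/lib/observability-pipelines-worker/config/tls/
# Make them owned by, or at least readable by, the Worker's user and group.
sudo chown -R observability-pipelines-worker:observability-pipelines-worker \
  /var/lib/observability-pipelines-worker/config/tls
sudo chmod 640 /var/lib/observability-pipelines-worker/config/tls/server.key
# In the UI, enter paths relative to the config directory, e.g. tls/server.crt.
```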

content/en/observability_pipelines/destinations/azure_storage.md

Lines changed: 13 additions & 3 deletions
@@ -17,7 +17,16 @@ You need to have Datadog's [Azure integration][3] installed to set up Datadog Lo
 
 Set up the Azure Storage destination and its environment variables when you [set up an Archive Logs pipeline][4]. The information below is configured in the pipelines UI.
 
-{{% observability_pipelines/destination_settings/datadog_archives_azure_storage %}}
+1. Enter the name of the Azure container you created earlier.
+1. Optionally, enter a prefix.
+   - Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If you use a prefix for this purpose, it must end in `/` to act as a directory path; a trailing `/` is not automatically added.
+   - See [template syntax][6] if you want to route logs to different object keys based on specific fields in your logs.
+   - **Note**: Datadog recommends that you start your prefixes with the directory name and without a leading slash (`/`). For example, `app-logs/` or `service-logs/`.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
 
 ### Set the environment variables
 
@@ -36,5 +45,6 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 [1]: /logs/log_configuration/archives/
 [2]: /logs/log_configuration/rehydrating/
 [3]: /integrations/azure/#setup
-[4]: /observability_pipelines/archive_logs/
-[5]: /observability_pipelines/destinations/#event-batching
+[4]: /observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs
+[5]: /observability_pipelines/destinations/#event-batching
+[6]: /observability_pipelines/destinations/#template-syntax
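
As a rough check of the prefix behavior described in this file, you could list blobs under the configured prefix with the Azure CLI; the storage account and container names below are placeholders:

```sh
# A sketch with placeholder names; confirms blobs land under the app-logs/
# prefix (trailing slash included deliberately, as the doc requires).
az storage blob list \
  --account-name examplestorageaccount \
  --container-name example-op-container \
  --prefix "app-logs/" \
  --query "[].name" \
  --output tsv
```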

content/en/observability_pipelines/destinations/crowdstrike_ng_siem.md

Lines changed: 16 additions & 2 deletions
@@ -11,7 +11,19 @@ Set up the CrowdStrike NG-SIEM destination and its environment variables when yo
 
 ### Set up the destination
 
-{{% observability_pipelines/destination_settings/crowdstrike_ng_siem %}}
+To use the CrowdStrike NG-SIEM destination, you need to set up a CrowdStrike data connector using the HEC/HTTP Event Connector. See [Step 1: Set up the HEC/HTTP event data connector][3] for instructions. When you set up the data connector, you are given a HEC API key and URL, which you use when you configure the Observability Pipelines Worker later on.
+
+1. Select **JSON** or **Raw** encoding in the dropdown menu.
+1. Optionally, enable compression and select an algorithm (**gzip** or **zlib**) in the dropdown menu.
+1. Optionally, toggle the switch to enable TLS. If you enable TLS, the following certificate and key files are required.<br>**Note**: All file paths are made relative to the configuration data directory, which is `/var/lib/observability-pipelines-worker/config/` by default. See [Advanced Worker Configurations][4] for more information. The files must be owned by the `observability-pipelines-worker` group and `observability-pipelines-worker` user, or at least readable by the group or user.
+   - `Server Certificate Path`: The path to the certificate file that has been signed by your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
+   - `CA Certificate Path`: The path to the certificate file that is your Certificate Authority (CA) Root File, in DER or PEM (X.509) format.
+   - `Private Key Path`: The path to the `.key` private key file that belongs to your Server Certificate Path, in DER or PEM (PKCS#8) format.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
 
 ### Set the environment variables
 
@@ -28,4 +40,6 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 | None | 1,000,000 | 1 |
 
 [1]: https://app.datadoghq.com/observability-pipelines
-[2]: /observability_pipelines/destinations/#event-batching
+[2]: /observability_pipelines/destinations/#event-batching
+[3]: https://falcon.us-2.crowdstrike.com/documentation/page/bdded008/hec-http-event-connector-guide
+[4]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/
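
The connector setup in this file yields a HEC API key and URL. A hypothetical connectivity check is sketched below; both variables are placeholders, and the exact endpoint path and auth header should be taken from the CrowdStrike HEC/HTTP Event Connector guide linked above rather than from this sketch:

```sh
# Hypothetical smoke test; a Splunk-style HEC endpoint shape is assumed.
# CROWDSTRIKE_HEC_URL and CROWDSTRIKE_HEC_API_KEY hold the values issued
# by the data connector.
curl -sS -X POST "$CROWDSTRIKE_HEC_URL/services/collector/event" \
  -H "Authorization: Bearer $CROWDSTRIKE_HEC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"event": {"message": "observability pipelines connectivity test"}}'
```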

content/en/observability_pipelines/destinations/datadog_logs.md

Lines changed: 5 additions & 1 deletion
@@ -11,7 +11,11 @@ Set up the Datadog Logs destination and its environment variables when you [set
 
 ### Set up the destination
 
-{{% observability_pipelines/destination_settings/datadog %}}
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
 
 ### Set the environment variables
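
Every destination in this commit repeats the same buffering steps, and enabling them means picking a buffer size. A back-of-envelope sizing sketch follows; all traffic numbers are assumptions for illustration, not recommendations from the docs:

```sh
# Assume 5,000 events/s at ~1.5 KB (1536 bytes) per event, buffered through
# a 10-minute (600 s) downstream outage:
echo $(( 5000 * 1536 * 600 )) bytes   # 4608000000 bytes, roughly 4.6 GB of disk
```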

content/en/observability_pipelines/destinations/google_chronicle.md

Lines changed: 20 additions & 2 deletions
@@ -12,7 +12,21 @@ Set up the Google Chronicle destination and its environment variables when you [
 
 ### Set up the destination
 
-{{% observability_pipelines/destination_settings/chronicle %}}
+To set up the Worker's Google Chronicle destination:
+
+1. Enter the customer ID for your Google Chronicle instance.
+1. If you have a credentials JSON file, enter the path to your credentials JSON file. The credentials file must be placed under `DD_OP_DATA_DIR/config`. Alternatively, you can use the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to provide the credentials path.
+   - If you're using [workload identity][6] on Google Kubernetes Engine (GKE), `GOOGLE_APPLICATION_CREDENTIALS` is provided for you.
+   - The Worker uses standard [Google authentication methods][7].
+1. Select **JSON** or **Raw** encoding in the dropdown menu.
+1. Enter the log type. See [template syntax][4] if you want to route logs to different log types based on specific fields in your logs.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
+
+**Note**: Logs sent to the Google Chronicle destination must have ingestion labels. For example, if the logs are from an A10 load balancer, they must have the ingestion label `A10_LOAD_BALANCER`. See Google Cloud's [Supported log types with a default parser][5] for a list of available log types and their respective ingestion labels.
 
 ### Set the environment variables
 
@@ -30,4 +44,8 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 
 [1]: https://app.datadoghq.com/observability-pipelines
 [2]: /observability_pipelines/destinations/#event-batching
-[3]: https://cloud.google.com/docs/authentication#auth-flowchart
+[3]: https://cloud.google.com/docs/authentication#auth-flowchart
+[4]: /observability_pipelines/destinations/#template-syntax
+[5]: https://cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers#with-default-parser
+[6]: https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
+[7]: https://cloud.google.com/docs/authentication#auth-flowchart
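
For the credentials step in this file, here is a minimal sketch of the environment-variable alternative; the key file name is hypothetical, and `DD_OP_DATA_DIR` is shown at its documented default:

```sh
# gcp-credentials.json is a placeholder name; the doc requires the file to
# live under DD_OP_DATA_DIR/config.
export DD_OP_DATA_DIR=/var/lib/observability-pipelines-worker
export GOOGLE_APPLICATION_CREDENTIALS="$DD_OP_DATA_DIR/config/gcp-credentials.json"
# On GKE with workload identity, skip this; credentials are provided for you.
```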

content/en/observability_pipelines/destinations/google_cloud_storage.md

Lines changed: 21 additions & 3 deletions
@@ -23,7 +23,22 @@ You need to have Datadog's [Google Cloud Platform integration][3] installed to s
 
 Set up the Google Cloud Storage destination and its environment variables when you [set up an Archive Logs pipeline][4]. The information below is configured in the pipelines UI.
 
-{{% observability_pipelines/destination_settings/datadog_archives_google_cloud_storage %}}
+1. Enter the name of your Google Cloud storage bucket. If you configured Log Archives, it's the bucket you created earlier.
+1. If you have a credentials JSON file, enter the path to your credentials JSON file. If you configured Log Archives, it's the credentials you downloaded [earlier](#create-a-service-account-to-allow-workers-to-write-to-the-bucket). The credentials file must be placed under `DD_OP_DATA_DIR/config`. Alternatively, you can use the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to provide the credentials path.
+   - If you're using [workload identity][9] on Google Kubernetes Engine (GKE), `GOOGLE_APPLICATION_CREDENTIALS` is provided for you.
+   - The Worker uses standard [Google authentication methods][8].
+1. Select the storage class for the created objects.
+1. Select the access level of the created objects.
+1. Optionally, enter a prefix.
+   - Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If you use a prefix for this purpose, it must end in `/` to act as a directory path; a trailing `/` is not automatically added.
+   - See [template syntax][7] if you want to route logs to different object keys based on specific fields in your logs.
+   - **Note**: Datadog recommends that you start your prefixes with the directory name and without a leading slash (`/`). For example, `app-logs/` or `service-logs/`.
+1. Optionally, click **Add Header** to add metadata.
+1. Optionally, toggle the switch to enable **Buffering Options**.<br>**Note**: Buffering Options is in Preview. Contact your account manager to request access.
+   - If left disabled, the maximum size for buffering is 500 events.
+   - If enabled:
+     1. Select the buffer type you want to set (**Memory** or **Disk**).
+     1. Enter the buffer size and select the unit.
 
 ### Set the environment variables
 
@@ -42,6 +57,9 @@ A batch of events is flushed when one of these parameters is met. See [event bat
 [1]: /logs/log_configuration/archives/
 [2]: /logs/log_configuration/rehydrating/
 [3]: /integrations/google_cloud_platform/#setup
-[4]: /observability_pipelines/archive_logs/
+[4]: /observability_pipelines/configuration/explore_templates/?tab=logs#archive-logs
 [5]: /observability_pipelines/destinations/#event-batching
-[6]: https://cloud.google.com/docs/authentication#auth-flowchart
+[6]: https://cloud.google.com/docs/authentication#auth-flowchart
+[7]: /observability_pipelines/destinations/#template-syntax
+[8]: https://cloud.google.com/docs/authentication#auth-flowchart
+[9]: https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
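
To sanity-check the bucket, prefix, and credentials configured in this file, you could list the written objects; the bucket name below is a placeholder:

```sh
# Lists objects under the configured prefix, using credentials resolved by
# the standard Google authentication methods the doc references.
gcloud storage ls --recursive "gs://example-op-archive-bucket/app-logs/"
```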
