Firehose is a cloud-native service for delivering real-time streaming data to destinations such as service endpoints (HTTP or gRPC) and managed databases (Postgres, InfluxDB, Redis, Elasticsearch, Prometheus, and MongoDB). With Firehose, you don't need to write applications or manage resources. It can be scaled up to match the throughput of your data. If your data is present in Kafka, Firehose delivers it to the destination (sink) that you specify.
Explore the following resources to get started with Firehose:
## Run with Docker
Use Docker Hub to download the Firehose [docker image](https://hub.docker.com/r/gotocompany/firehose/). You need to have Docker installed on your system.
```sh
# Download the docker image from Docker Hub
$ docker pull gotocompany/firehose
```

Run the Docker container for a simple log sink.

**Note:** Make sure your protos (.jar file) are located in `work-dir`; this is required for the Filter functionality to work.
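As a sketch only, a log-sink container might be configured with environment variables along these lines; the variable names and values below are assumptions for illustration, so check the configuration reference for the exact ones:

```sh
# Run Firehose with a simple log sink (all values are illustrative)
docker run \
  -e SOURCE_KAFKA_BROKERS=localhost:9092 \
  -e SOURCE_KAFKA_TOPIC=test-topic \
  -e SOURCE_KAFKA_CONSUMER_GROUP_ID=sample-group-id \
  -e SINK_TYPE=log \
  -e INPUT_SCHEMA_PROTO_CLASS=com.example.SampleLogMessage \
  gotocompany/firehose:latest
```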
## Run with Kubernetes
- Create a Firehose deployment using the Helm chart available [here](https://github.com/goto/charts/tree/main/stable/firehose)
- The deployment also includes a Telegraf container which pushes stats metrics
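A minimal install using that chart might look like the following; the repository URL, release name, and value overrides are assumptions for illustration, so consult the chart's README for the exact ones:

```sh
# Add the chart repository (URL is an assumption based on the charts repo)
helm repo add goto https://goto.github.io/charts
helm repo update

# Install Firehose with illustrative value overrides
helm install my-firehose goto/firehose \
  --set firehose.config.SOURCE_KAFKA_BROKERS=kafka:9092 \
  --set firehose.config.SINK_TYPE=log
```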
## Running locally
```sh
# Clone the repo
$ git clone https://github.com/goto/firehose.git
# Build the jar
$ ./gradlew clean build
```
Read our [contributing guide](docs/docs/contribute/contribution.md) to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to Firehose.
To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/goto/firehose/labels/good%20first%20issue) that contain bugs which have a relatively limited scope. This is a great place to get started.
## Credits
This project exists thanks to all the [contributors](https://github.com/goto/firehose/graphs/contributors).
---

**docs/docs/concepts/monitoring.md**
#### Firehose deployed on Kubernetes
1. Follow [this guide](https://github.com/goto/charts/tree/main/stable/firehose#readme) for deploying Firehose on a Kubernetes cluster using a Helm chart.
2. Configure the following parameters in the default [values.yaml](https://github.com/goto/charts/blob/main/stable/firehose/values.yaml) file and run:
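As an illustration only — the authoritative keys live in the chart's `values.yaml` linked above, and the structure below is an assumption — the Telegraf/InfluxDB metrics parameters might look something like:

```yaml
telegraf:
  enabled: true
  config:
    output:
      influxdb:
        enabled: true
        urls:
          - "http://influxdb.monitoring:8086"
        database: "firehose"
```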
---

**docs/docs/contribute/contribution.md**
The following is a set of guidelines for contributing to Firehose. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. Here are some important resources:
- The [Concepts](../guides/create_firehose.md) section explains the Firehose architecture,
- Our [roadmap](https://github.com/goto/firehose/blob/main/docs/roadmap.md) is the 10k foot view of where we're going, and
- GitHub [issues](https://github.com/goto/firehose/issues) track the ongoing and reported issues.
Development of Firehose happens in the open on GitHub, and we are grateful to the community for contributing bug fixes and improvements. Read below to learn how you can take part in improving Firehose.
The following parts are open for contribution:
- Provide suggestions to make the user experience better
- Provide suggestions to improve the documentation
To help you get your feet wet and get you familiar with our contribution process, we have a list of [good first issues](https://github.com/goto/firehose/labels/good%20first%20issue) that contain bugs that have a relatively limited scope. This is a great place to get started.
## How can I contribute?
We use RFCs and GitHub issues to communicate ideas.
- You can report a bug, suggest a feature enhancement, or just ask questions. Reach out on GitHub Discussions for this purpose.
- You are also welcome to add a new common sink in [depot](https://github.com/goto/depot), improve monitoring and logging, and improve code quality.
- You can help with documenting new features or improving existing documentation.
- You can also review and accept other contributions if you are a maintainer.
Please follow these practices for your change to get merged fast and smoothly:
- If you are introducing a completely new feature or making any major changes to an existing one, we recommend starting with an RFC and getting consensus on the basic design first.
- Make sure your local build runs with all the tests and checkstyle passing.
- If your change is related to user-facing protocols/configurations, you need to make the corresponding change in the documentation as well.
- Docs live in the code repo under [`docs`](https://github.com/goto/firehose/tree/main/docs/docs/README.md) so that changes to the docs can be made in the same PR as changes to the code.
---

**docs/docs/contribute/development.md**
When `INPUT_SCHEMA_DATA_TYPE` is set to `protobuf`, Firehose uses the Stencil server as its schema registry for hosting Protobuf descriptors. The environment variable `SCHEMA_REGISTRY_STENCIL_ENABLE` must be set to `true`, and the Stencil server URL must be specified in the variable `SCHEMA_REGISTRY_STENCIL_URLS`. The Proto descriptor set file of the Kafka messages must be uploaded to the Stencil server.
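Putting those variables together, a protobuf-with-Stencil configuration might look like the following; the variable names come from the paragraph above, but the URL value is a placeholder:

```sh
# Enable protobuf parsing backed by a Stencil schema registry
export INPUT_SCHEMA_DATA_TYPE=protobuf
export SCHEMA_REGISTRY_STENCIL_ENABLE=true
# Placeholder URL; point this at your actual Stencil server
export SCHEMA_REGISTRY_STENCIL_URLS="http://localhost:8000/descriptors.bin"
```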
Refer to [this guide](https://github.com/goto/stencil/tree/master/server#readme) on how to set up and configure the Stencil server, and how to generate and upload the Proto descriptor set file to the server.
Firehose uses [Stencil](https://github.com/goto/stencil) as the schema registry, which enables dynamic proto schemas. For the sake of this quick-setup guide, we can work around the Stencil setup by running a simple local HTTP server that serves the static descriptor for the TestMessage schema.
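One way to do that, sketched under the assumption that `protoc` is installed and that the proto file and output paths below are yours to choose, is to generate a descriptor set and serve it with Python's built-in HTTP server:

```sh
# Generate a descriptor set file from the proto definition
# (file names and paths are illustrative)
protoc --include_imports \
  --descriptor_set_out=descriptors/TestMessage.desc \
  test_message.proto

# Serve the descriptor directory over HTTP on port 8000
python3 -m http.server 8000 --directory descriptors
```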
- For `INPUT_SCHEMA_DATA_TYPE = protobuf`, this sink generates the BigQuery schema from the protobuf message schema and updates the BigQuery table with the latest generated schema.
- A `google.protobuf.Timestamp` field in the protobuf message might be needed when table partitioning is enabled.
- For `INPUT_SCHEMA_DATA_TYPE = json`, this sink generates the BigQuery schema by inferring it from the incoming JSON. In the future, we will also add support for JSON schemas coming from Stencil.
- The timestamp column is needed in the case of a partitioned table. It can be generated at the time of ingestion by setting a config; please refer to `SINK_BIGQUERY_ADD_EVENT_TIMESTAMP_ENABLE` in the [depot BigQuery sink config section](https://github.com/goto/depot/blob/main/docs/reference/configuration/bigquery-sink.md#sink_bigquery_add_event_timestamp_enable).
- A Google Cloud credential with the required BigQuery permissions is needed to run this sink.
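To illustrate the JSON inference mentioned above, here is a simplified sketch — not Firehose's actual implementation, which also handles nested and repeated fields — of mapping incoming JSON values onto BigQuery column types:

```python
import json

# Simplified sketch of inferring BigQuery column types from a flat JSON record.
def infer_bq_type(value) -> str:
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return "BOOL"
    if isinstance(value, int):
        return "INT64"
    if isinstance(value, float):
        return "FLOAT64"
    return "STRING"

def infer_schema(record: dict) -> dict:
    return {name: infer_bq_type(v) for name, v in record.items()}

message = json.loads('{"order_id": "abc-1", "amount": 12.5, "is_valid": true}')
print(infer_schema(message))
# {'order_id': 'STRING', 'amount': 'FLOAT64', 'is_valid': 'BOOL'}
```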
## Create a Bigtable sink
- It requires the following environment [variables](https://github.com/goto/depot/blob/main/docs/reference/configuration/bigtable.md), which are required by the Depot library, to be set along with the generic Firehose variables.
If you'd like to connect to a sink which is not yet supported, you can create a new sink by following the [contribution guidelines](../contribute/contribution.md).