Commit 836e48f

committed: REST gRPC updated
1 parent cc9555b commit 836e48f

28 files changed: +1131 −13 lines

.gitmodules

Lines changed: 3 additions & 0 deletions

```diff
@@ -7,3 +7,6 @@
 [submodule "src/microservices/vcpkg"]
 	path = src/microservices/vcpkg
 	url = https://github.com/microsoft/vcpkg
+[submodule "src/microservices/REST/vcpkg"]
+	path = src/microservices/REST/vcpkg
+	url = https://github.com/microsoft/vcpkg
```

README.md

Lines changed: 10 additions & 6 deletions

```diff
@@ -317,12 +317,16 @@ This change ensures that VSCode uses the "Ninja Multi-Config" generator by default
 * [XML](docs/tinyxml2.md)

 ## [REST API and Microservices with C++](#)
-* [Monolithic Architecture vs REST API and Microservices](docs/REST_API_microservices.md)
-* [mock APIs with Mockoon](docs/mockoon.md)
-* [gRPC Microservice](docs/gRPC_microservice.md)
-* [REST API with Crow](docs/REST_API_with_crow.md)
-* [Test API with Postman](docs/test_API_with_postman.md)
-
+* [Monolithic Architecture vs REST API and Microservices](docs/microservices/REST_API_microservices.md)
+* [gRPC](docs/microservices/grpc.md)
+* [REST API with crow](docs/microservices/REST_API_with_crow.md)
+* [mocking APIs with Mockoon](docs/microservices/mockoon.md)
+* [Test API with Postman](docs/microservices/test_API_with_postman.md)
+
+## [Event Streaming and Message Queuing with C++](#)
+* [Apache Kafka](docs/event_streaming_message/apache_kafka.md)
+* [RabbitMQ](docs/event_streaming_message/rabbitMQ.md)
+
 ## [Logging With C++](#)
 * [spdlog](docs/spdlog.md)
```

Lines changed: 124 additions & 0 deletions

## RabbitMQ and Kafka

RabbitMQ and Kafka are both designed for messaging and communication between systems, but they serve different purposes and operate in different environments. Let's break down how they compare (including, where relevant, how each differs from a real-time protocol such as MAVLink).

### **RabbitMQ**
- **Purpose**: RabbitMQ is a **message broker** designed for **asynchronous communication** between distributed systems. It is widely used for managing the distribution of tasks or messages between producer and consumer applications.
- **Architecture**: RabbitMQ follows a **producer-consumer** architecture with queues that can hold and forward messages. Producers send messages to the broker, which distributes them to consumers. It is based on **AMQP** (Advanced Message Queuing Protocol).
- **Transport**: RabbitMQ handles communication over **TCP/IP** using AMQP as the underlying protocol. It offers features like **message durability**, **acknowledgements**, and **routing** mechanisms (such as topic and direct exchanges).
- **Message Structure**: Messages are loosely structured and can contain any type of payload (JSON, XML, binary data). RabbitMQ is well suited for **reliable delivery** and **high-throughput** tasks, but with some latency compared to MAVLink's real-time focus.
### **Apache Kafka**
- **Purpose**: Kafka is designed for **distributed event streaming** and high-throughput message handling, especially for applications requiring event logging, monitoring, and real-time data analytics. It is used to handle large volumes of data in real time, like **log aggregation**, **event sourcing**, and **stream processing**.
- **Architecture**: Kafka uses a **publish-subscribe** model where producers publish messages to topics, and consumers subscribe to those topics to receive messages. Kafka is highly distributed and fault-tolerant, designed for scalability.
- **Transport**: Kafka uses its own custom protocol built on **TCP/IP**. It supports **distributed storage** of messages and has mechanisms for **replication** and **high availability**.
- **Message Structure**: Kafka messages are organized in **topics** and **partitions**, allowing for massive parallelism. Kafka emphasizes **durability**, **event ordering**, and **real-time data streaming**.
### **Key Comparisons**

| Feature | **RabbitMQ** | **Kafka** |
|-------------------------|------------------------------------------------|----------------------------------------------|
| **Primary Use Case** | Asynchronous messaging between systems | Distributed event streaming and data logging |
| **Message Model** | Producer-consumer with broker-based queues | Publish-subscribe, distributed architecture |
| **Transport** | TCP/IP, AMQP | TCP/IP, custom Kafka protocol |
| **Message Durability** | Yes, with message persistence | Yes, messages are persisted in partitions |
| **Real-Time Focus** | Not real-time, but supports asynchronous tasks | Real-time stream processing |
| **Scalability** | Scales horizontally, but not for huge volumes | Designed for high scalability and throughput |
| **Latency** | Higher latency, but ensures reliability | Low latency with high-throughput focus |
### **Summary**

- **RabbitMQ** is ideal for **asynchronous task distribution** and managing message delivery between producer-consumer applications. It's reliable and offers strong **durability** features but isn't designed for real-time telemetry.
- **Kafka** is designed for handling **massive amounts of streaming data** with high throughput and **durability**, often in **big data** or **log aggregation** scenarios. It's less suited for low-latency, real-time control systems like those served by MAVLink.
## Real-World Scenarios for RabbitMQ and Kafka

---

### **Scenario 1: Asynchronous Task Processing (RabbitMQ)**
#### Example: Processing Image Uploads in a Web Application
- **Use Case**: A user uploads an image to a web application, and the image needs to be processed (resized, watermarked, etc.). The web server sends the image to a **task queue**, where worker services process it asynchronously and then store the processed version.
- **RabbitMQ Fit**: RabbitMQ excels here because the tasks (image processing) are:
  - **Discrete**: Each image is an independent task.
  - **Asynchronous**: Processing happens independently of the user interaction; the user doesn't need to wait for the result immediately.
  - **Reliable**: If the system crashes or a worker fails, RabbitMQ ensures tasks are not lost. It supports **message durability**, **acknowledgements**, and **retry mechanisms**.

- **Why Kafka Could Introduce Complexity**:
  - Kafka is designed for **high-throughput event streams** and doesn't handle discrete, one-off tasks as efficiently. With Kafka, you would need custom logic to ensure each message (image) is processed exactly once and is neither left behind nor duplicated. Kafka doesn't provide the **task routing** or **priority** management that RabbitMQ does.
  - **Overhead**: Kafka requires more infrastructure setup, such as managing topics and partitions. For a simple task queue like this, Kafka's added complexity is unnecessary.

#### Why RabbitMQ is a Better Fit:
- **Worker Queue Pattern**: RabbitMQ's core strength is distributing tasks across workers using queues. It is simple to use and scales well for small to medium task-based processing systems.
- **Routing and Prioritization**: RabbitMQ supports routing rules that Kafka doesn't natively support, such as sending high-priority messages to faster queues.
---

### **Scenario 2: Event-Driven Microservices (RabbitMQ)**
#### Example: Order Processing in an E-Commerce System
- **Use Case**: When a customer places an order, several systems (e.g., inventory, billing, shipping) need to be notified and act on the event.
- **RabbitMQ Fit**: RabbitMQ is often used in **event-driven architectures** to:
  - Distribute events between various microservices.
  - Allow fine-grained control over how messages are routed to each service (e.g., fanout or topic exchanges).
  - Support scenarios where different services process events at different speeds, with built-in message persistence and retry mechanisms.

- **Why Kafka Could Cause Problems**:
  - Kafka is better suited for **log aggregation** or event streaming than for microservice orchestration.
  - Kafka doesn't support **message priority**, so time-sensitive orders could get stuck behind less critical events if all services subscribe to the same topic.
  - Kafka doesn't offer built-in mechanisms for complex **routing** of messages, requiring additional infrastructure and logic to replicate RabbitMQ's flexible routing capabilities.

#### Why RabbitMQ is a Better Fit:
- **Microservice Communication**: RabbitMQ supports both **fanout** exchanges (broadcast to all services) and **topic** exchanges (route based on pattern matching), which is ideal for microservices that need to handle specific events based on their role.
- **Message TTL and Dead-Letter Exchanges**: RabbitMQ offers message time-to-live (TTL) and dead-letter exchanges to handle delayed or failed messages, which is critical for systems like e-commerce.

---
### **Scenario 3: Real-Time Data Streaming (Kafka)**
#### Example: Log Aggregation and Real-Time Monitoring
- **Use Case**: A system needs to collect logs from thousands of servers, process them in real time (e.g., alerting on error rates), and analyze them for patterns over time (e.g., anomaly detection).
- **Kafka Fit**: Kafka is designed to handle **high-throughput** log aggregation and **streaming** events in real time. It excels in:
  - **Durability**: Kafka stores all logs for a configured retention period, allowing you to replay logs or process them at a later time.
  - **Horizontal Scalability**: Kafka handles huge streams of data by distributing logs across partitions, making it well suited for systems that need to scale to millions of events per second.
  - **Replayability**: Kafka's ability to replay streams is valuable for debugging or for replaying logs to test new processing pipelines.

- **Why RabbitMQ Would Struggle**:
  - **Throughput**: RabbitMQ would struggle to handle the same level of throughput efficiently; it was not designed for massive, persistent streams of data the way Kafka is.
  - **Data Retention**: RabbitMQ messages are typically discarded after they are consumed, whereas Kafka retains messages for a configured period, which is critical for scenarios like log analysis or event stream processing.
  - **Scalability**: RabbitMQ's architecture isn't as horizontally scalable as Kafka's, especially when dealing with large-scale, distributed event streams.

#### Why Kafka is a Better Fit:
- **Event Streaming**: Kafka is designed for **continuous, high-volume event streams**, making it ideal for real-time log aggregation.
- **Durability and Reprocessing**: Kafka keeps the event log for a long time, so consumers can reprocess or analyze historical data, a capability RabbitMQ doesn't offer.
- **Scaling**: Kafka's partitioned architecture allows it to scale horizontally with very high throughput, ideal for applications that process millions of events per second.

---
### **Scenario 4: Financial Transaction Streaming (Kafka)**
#### Example: Tracking Financial Transactions for Real-Time Fraud Detection
- **Use Case**: A bank wants to track all financial transactions in real time and analyze them for patterns of fraudulent activity. Thousands of events are generated per second across multiple systems.
- **Kafka Fit**:
  - **High Throughput**: Kafka can ingest millions of financial transactions in real time, allowing immediate analysis by fraud detection systems.
  - **Stream Processing**: Kafka works well with stream processing frameworks (like Kafka Streams or Apache Flink) to apply business rules in real time and flag suspicious transactions.
  - **Durability**: Kafka's ability to store streams of events means that any missed transactions can be replayed and re-analyzed.

- **Why RabbitMQ Would Be a Poor Fit**:
  - RabbitMQ lacks the throughput and retention features needed for this kind of high-volume data ingestion and analysis.
  - The lack of **message replay** in RabbitMQ would be problematic if historical data needed to be re-analyzed.
  - RabbitMQ's more complex routing and delivery guarantees are unnecessary here, where Kafka's simpler event log and partitioning are better suited.

#### Why Kafka is a Better Fit:
- **Massive Scale and Replay**: Kafka is built for high-velocity data and allows events to be reprocessed, making it a natural fit for financial fraud detection.
- **Durable Event Streams**: Kafka retains the entire event log, allowing for both real-time processing and later analysis, whereas RabbitMQ deletes messages after consumption.

---

### **Conclusion: Where Each Solution Fits Best**

| Solution | **Best for** | **Problems if Misused** |
|-------------|-----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| **RabbitMQ**| **Asynchronous task processing**, **microservices**, **event-driven apps** | **Limited scalability** for high-throughput event streaming; doesn't support replay or long-term storage |
| **Kafka** | **High-throughput event streaming**, **log aggregation**, **real-time analytics** | **Overly complex** for simple tasks; lacks **routing** and **priority** features; more setup and management |

RabbitMQ is ideal for cases where **reliability, routing, and discrete task management** are important, while Kafka excels at **high-throughput, durable event streams** where **scalability** and **real-time analytics** are the focus. Each platform serves different types of workloads, and using the wrong one for a particular job can result in added complexity and inefficiencies.
File renamed without changes.

docs/gRPC_microservice.md

Whitespace-only changes.

docs/REST_API_microservices.md renamed to docs/microservices/REST_API_microservices.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -196,7 +196,7 @@ int main() {
 In the microservice architecture, services would typically communicate using HTTP or gRPC. For example, `OrderService` might call `UserService` to retrieve user info, or `PaymentService` might be called to process payments after an order is placed.

 The implementation of communication could be handled using REST API frameworks like:
-- **CppRestSDK** (for RESTful communication)
+- **Crow** (for RESTful communication)
 - **gRPC** (for high-performance communication)
```
Lines changed: 129 additions & 0 deletions

### Microservice Example with Crow

First, bootstrap vcpkg:

```bash
./vcpkg/bootstrap-vcpkg.sh
```

and then configure and build:

```bash
cmake -S . -B build -G "Ninja Multi-Config" -DCMAKE_TOOLCHAIN_FILE=./vcpkg/scripts/buildsystems/vcpkg.cmake
cmake --build build
```
#### 1. **User Service** (user_service.cpp)

```cpp
#include "crow.h"
#include <string>

std::string getUserInfo() {
    return "{\"name\": \"John Doe\", \"email\": \"[email protected]\"}";
}

int main() {
    crow::SimpleApp app;

    CROW_ROUTE(app, "/user")
    ([]() { return crow::response(getUserInfo()); });

    app.port(18080).multithreaded().run();
}
```

then run:

```bash
curl http://localhost:18080/user
```
#### 2. **Product Service** (product_service.cpp)

```cpp
#include "crow.h"
#include <string>

std::string getProductCatalog() {
    return "[{\"name\": \"Laptop\", \"price\": 1200.50}, {\"name\": \"Headphones\", \"price\": 200.99}]";
}

int main() {
    crow::SimpleApp app;

    CROW_ROUTE(app, "/products")
    ([]() {
        return crow::response(getProductCatalog());
    });

    app.port(18081).multithreaded().run();
}
```

then run:

```bash
curl http://localhost:18081/products
```
#### 3. **Order Service** (order_service.cpp)

```cpp
#include "crow.h"
#include <iostream>

void createOrder() {
    std::cout << "Order created for John Doe with Laptop and Headphones" << std::endl;
}

int main() {
    crow::SimpleApp app;

    CROW_ROUTE(app, "/order")
    ([]() {
        createOrder();
        return crow::response(200);
    });

    app.port(18082).multithreaded().run();
}
```

then run:

```bash
curl http://localhost:18082/order
```
#### 4. **Payment Service** (payment_service.cpp)

```cpp
#include "crow.h"
#include <iostream>

void processPayment(double amount) {
    std::cout << "Processing payment of $" << amount << std::endl;
}

int main() {
    crow::SimpleApp app;

    CROW_ROUTE(app, "/payment/<double>")
    ([](double amount) {
        processPayment(amount);
        return crow::response(200);
    });

    app.port(18083).multithreaded().run();
}
```

then run:

```bash
curl http://localhost:18083/payment/10
```
### Running Services

- **User Service** runs on port 18080
- **Product Service** runs on port 18081
- **Order Service** runs on port 18082
- **Payment Service** runs on port 18083

Each service runs independently and listens for HTTP requests. They can be deployed separately, scaled, or updated independently without affecting other services.
