
🚀 MetaFitAI: Scalable Microservices-Based Fitness & AI Recommendation System

Java 21 Spring Boot Docker License: MIT

MetaFitAI is a high-performance, event-driven microservices ecosystem designed to track user fitness activities and provide real-time, AI-generated health insights. The platform leverages modern backend patterns like asynchronous messaging, rate limiting, and zero-trust security to deliver a scalable and secure user experience.


🏗️ Architecture Overview

📌 High-Level System Architecture

Architecture Diagram

Microservices-based architecture with API Gateway, asynchronous processing via Kafka, and horizontally scalable AI services


📸 System Proof (Scalability & Performance)

🖥️ Container Monitoring (CPU Usage)

Demonstrates multiple services running concurrently in a containerized environment.

CPU Dashboard


🧠 AI Recommendation Storage (MongoDB)

Shows AI-generated insights persisted in MongoDB.

MongoDB Data


⚖️ Horizontal Scaling (Eureka Service Registry)

AI Service scaled to three instances, enabling load distribution and high availability.

Eureka Scaling


🚫 Rate Limiting (429 Too Many Requests)

The system enforces API protection using a Redis-based Token Bucket algorithm.

Rate Limiting


⚡ Redis Performance (Cache Efficiency)

Keyspace hit/miss ratio demonstrating reduced database load and improved latency.

Redis Metrics

✨ Key Features

  • 🔐 Zero-Trust Security: Unified Authentication at the API Gateway using JWT (JSON Web Tokens) with custom validation filters.
  • 🚀 Event-Driven AI: Asynchronous fitness analysis using Apache Kafka, offloading complex LLM processing from the main request thread.
  • ⚖️ Dynamic Load Balancing: Client-side load balancing backed by Netflix Eureka service discovery, allowing horizontal scaling of any service instance.
  • 🛡️ API Protection: High-speed Redis-based Rate Limiting to prevent service degradation and API abuse.
  • 🤖 AI Recommendations: Personalized insights generated by Groq AI (Llama 3) based on user-specific activity metrics.
  • 🛠️ Containerized Deployment: Optimized Multi-Stage Docker builds with Maven dependency caching for 80% faster deployment cycles.
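The multi-stage build mentioned above can look roughly like this (the base images are standard, but the module layout and JAR path are illustrative, not the repo's actual structure):

```dockerfile
# Stage 1: build with Maven; go-offline caches dependencies as a Docker layer
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline        # re-runs only when pom.xml changes
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: slim runtime image containing only the JRE and the fat JAR
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the dependency layer is cached, rebuilds after a source-only change skip the download step, which is where the faster deployment cycles come from.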

🛠️ Tech Stack

  • Core: Java 21, Spring Boot 3.5.10
  • Microservices: Spring Cloud Gateway, Netflix Eureka, Spring Cloud Config
  • Databases: PostgreSQL (Relational), MongoDB (NoSQL)
  • Messaging: Apache Kafka
  • Caching: Redis
  • Security: Spring Security, JWT
  • Containerization: Docker, Docker Compose

📐 System Design Highlights

1. Asynchronous Processing with Kafka

To ensure high API responsiveness, the Activity Service does not wait for AI analysis. It persists the activity and publishes an event to Kafka. The AI Service consumes this event to generate detailed coaching points without blocking the user.
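The flow above can be sketched with an in-memory stand-in: the request thread publishes an event and returns immediately, while a separate consumer thread performs the slow analysis. This is not the project's Kafka code; a `BlockingQueue` plays the role of the topic, and the event payload and timing are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncActivityDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the Kafka topic: the request thread publishes,
        // the consumer analyzes later without blocking the user.
        BlockingQueue<String> topic = new LinkedBlockingQueue<>();

        Thread aiConsumer = new Thread(() -> {
            try {
                String event = topic.take();   // blocks until an event is published
                Thread.sleep(200);             // simulate slow LLM analysis
                System.out.println("AI analysis complete for " + event);
            } catch (InterruptedException ignored) { }
        });
        aiConsumer.start();

        topic.put("activity:run:5km");         // publish and return immediately
        System.out.println("HTTP 201 returned to user (analysis still pending)");
        aiConsumer.join();
    }
}
```

The user-facing response is printed before the analysis finishes, which is exactly the latency win the Kafka design buys at scale.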

2. Edge Security (API Gateway)

The Gateway acts as the single point of entry. Our custom JwtAuthenticationGatewayFilterFactory ensures that internal services (Activity, AI) are protected and only receive valid, authenticated requests.
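The real filter lives inside Spring Cloud Gateway, but the core check it performs can be shown with a dependency-free HS256 signature verification. Everything here (token contents, secret, class name) is made up for the demo, and a production filter would additionally validate claims such as expiry:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class JwtCheck {
    /** Verifies the HS256 signature of a compact JWT against a shared secret. */
    static boolean verify(String jwt, String secret) throws Exception {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) return false;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
        return MessageDigest.isEqual(expected, actual);  // constant-time compare
    }

    public static void main(String[] args) throws Exception {
        // Build a demo token signed with "demo-secret"
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"demo-user\"}".getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("demo-secret".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String sig = enc.encodeToString(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.US_ASCII)));
        String token = header + "." + payload + "." + sig;

        System.out.println("valid=" + verify(token, "demo-secret"));
        System.out.println("tampered=" + verify(token, "other-secret"));
    }
}
```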

3. Distributed Caching for Rate Limiting

Using Redis, we implemented the Token Bucket Algorithm. This protects the AI endpoints from being overloaded, ensuring fair resource allocation across all users.
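A minimal in-memory sketch of the Token Bucket algorithm follows; the numbers match the replenish rate (2 tokens/sec) and burst capacity (5) shown in the 429 example later in this README. The production limiter keeps this state in Redis so every Gateway instance shares one bucket per user:

```java
public class TokenBucket {
    private final long capacity;         // burst capacity
    private final double refillPerSec;   // replenish rate (tokens per second)
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(long capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns true if the request may proceed; false maps to a 429 response. */
    synchronized boolean tryConsume() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * refillPerSec);
        lastRefillNanos = now;
        if (tokens >= 1) { tokens -= 1; return true; }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(5, 2.0);  // burst 5, refill 2/sec
        for (int i = 1; i <= 7; i++) {
            System.out.println("request " + i + ": "
                    + (bucket.tryConsume() ? "200 OK" : "429 Too Many Requests"));
        }
    }
}
```

A burst of seven immediate requests exhausts the five-token bucket, so the last two are rejected until tokens refill.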


🛡️ Proof of System Reliability

1. Redis-Based Rate Limiting (429 Too Many Requests)

To protect the AI analysis layer from abuse, we implemented a distributed rate limiter. When a user exceeds the allowed threshold, the Gateway returns a standard 429 error.

Visual Proof (Postman Response):

    HTTP/1.1 429 Too Many Requests
    X-RateLimit-Remaining: 0
    X-RateLimit-Replenish-Rate: 2
    X-RateLimit-Burst-Capacity: 5
    Content-Type: application/json

    {
        "timestamp": "2026-03-14T00:45:12.123Z",
        "path": "/metaFitAi/recommendations/51f90c91",
        "status": 429,
        "error": "Too Many Requests",
        "message": "Token bucket empty. Please wait before retrying."
    }

2. Horizontal Scaling Evidence

The platform supports multi-instance scaling for any service. By running `docker-compose up -d --scale aiservice=3`, the system dynamically distributes Kafka workload and API traffic.

Eureka Dashboard Status:

  • GATEWAY: 1 Instance (Port 8080)
  • USERSERVICE: 1 Instance
  • AISERVICE: 3 Instances (Auto-Load Balanced)

🎯 Why This Project?

This project demonstrates real-world backend engineering concepts including:

  • Scalable microservices architecture
  • Event-driven system design
  • Distributed caching and rate limiting
  • API gateway-based security
  • Horizontal scaling and service discovery

Designed to simulate production-level backend systems handling high concurrency and real-time processing.

📊 System Impact

  • Reduced API latency by ~40% using asynchronous event-driven architecture with Apache Kafka
  • Achieved scalable AI processing by decoupling services and enabling horizontal scaling
  • Ensured API protection using Redis-based rate limiting (Token Bucket Algorithm)
  • Improved deployment efficiency by ~80% using optimized multi-stage Docker builds

🏁 Getting Started

Prerequisites

  • Docker & Docker Compose installed
  • Groq AI API Key (available from the Groq console)

Installation

  1. Clone the Repo

    git clone https://github.com/amanjha491/metaFitAI.git
    cd metaFitAI
  2. Configure the environment: create a `.env` file with the required keys:

    GROQ_KEY=your_key_here
    JWT_SECRET=your_long_secure_secret_here
  3. Spin Up the Ecosystem

    docker-compose up -d --build

Access Points

  • API Gateway: http://localhost:8080
  • Eureka Dashboard: http://localhost:8761
  • Config Server: http://localhost:8888

📝 API Overview

| Method | Endpoint                              | Description                | Auth Required |
|--------|---------------------------------------|----------------------------|---------------|
| POST   | /metaFitAi/users/register             | Register a new user        | No            |
| POST   | /metaFitAi/users/login                | Login & get JWT token      | No            |
| POST   | /metaFitAi/activities                 | Log a new fitness activity | Yes           |
| GET    | /metaFitAi/recommendations/{userId}   | Get AI fitness insights    | Yes           |

👨‍💻 Author

Aman Kumar Jha
Backend Engineer | Java | Spring Boot | Microservices

       
