Visual Product Intelligence Platform - Analyze products, assess impact, and make informed decisions with AI.
VisionProbe-AI is an advanced multi-agent AI system designed to analyze product images and provide deep insights. Powered by OpenAI's GPT-4 Vision, it moves beyond simple object detection to offer health impact analysis, environmental sustainability scores, and ethical purchase recommendations.
VisionProbe-AI acts as a pipeline of intelligent systems functioning in unison. The architecture is built on a Micro-Agent Orchestrator Pattern, where a central "Brain" (the Orchestrator) manages state, directs data flow, and handles failures throughout the analysis lifecycle.
The system architecture is composed of the following key modules connected in a secure pipeline:
- Client Layer:
  - Frontend: A React application (Vite) that handles user interaction and image uploads.
  - Authentication: Neon Auth validates user sessions before requests reach the API.
- Backend Layer:
  - API: A Django Rest Framework application that exposes endpoints for analysis.
  - Database: Neon Postgres stores user history, product reports, and authentication data.
- AI Core (Orchestrator):
  - The central brain that directs the analysis logic. It communicates sequentially with specific agents (Visual, Knowledge, Usage, Impact, Buy Link) to build the final report.
From the moment a user uploads an image to the delivery of the intelligence report, the data flows as follows:
- User Initiation: The user selects a product image on the frontend dashboard.
- Security Check: Neon Auth validates the user's session token.
- Submission: The image is sent to the Django Backend via a secure POST request.
- AI Orchestration:
  - Phase 1 (Identification): The Visual Agent scans the image. Constraint: if confidence is < 50%, the analysis aborts immediately.
  - Phase 2 (Context): If the product is identified, the Knowledge and Use Case agents enrich the data with facts and demographics.
  - Phase 3 (Analysis): The Impact Agent calculates Health and Environmental scores.
  - Phase 4 (Commerce): The Buy Link Agent finds purchase options. Constraint: if the risk level is High, this step is skipped to protect the user.
- Delivery: The Orchestrator compiles a JSON report, saves it to Neon Postgres, and returns it to the frontend for display.
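The flow above can be sketched as a minimal orchestrator loop. This is purely illustrative: the agent names, signatures, and dictionary keys are assumptions, not the actual implementation in `backend/core/agents/`.

```python
# Minimal sketch of the orchestration flow described above.
# Agent callables and field names are illustrative, not the real API.

def run_pipeline(image_bytes, agents):
    """agents: dict of callables keyed by role (visual, knowledge, ...)."""
    # Phase 1: Identification (fail-safe: abort on low confidence)
    visual = agents["visual"](image_bytes)
    if visual["confidence"] < 0.5:
        return {"status": "aborted", "reason": "low identification confidence"}

    # Phase 2: Context enrichment
    facts = agents["knowledge"](visual["product_name"], visual["brand"])
    usage = agents["use_case"](visual, facts)

    # Phase 3: Impact analysis
    impact = agents["impact"](facts)

    # Phase 4: Commerce (skipped for high-risk products)
    if impact["risk_level"] == "High":
        links = "Purchase disabled due to high risk"
    else:
        links = agents["buy_link"](visual["product_name"])

    return {"status": "ok", "product": visual, "facts": facts,
            "usage": usage, "impact": impact, "buy_links": links}
```

Keeping each phase behind a plain callable is what makes the Single Responsibility split (described below) easy to test: any agent can be swapped for a stub.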
The core of VisionProbe-AI is its modular agent system. Each agent has a specific responsibility (Single Responsibility Principle) and contributes to the final report.
Orchestrator
- Role: Conductor.
- Function: It does not "think" about the product; it thinks about the process. It manages the order of agent execution, passes data from one agent to the next (e.g., passing the product name from the Visual Agent to the context agents), and handles error states.
Visual Identification Agent
- Input: Raw image.
- Task: Identifies the primary object, detects the brand, and extracts text (OCR).
- Output: Structured JSON, e.g.:

```json
{ "product_name": "Coke Zero", "brand": "Coca-Cola", "category": "Beverage", "confidence": 0.98 }
```
Knowledge Enrichment Agent
- Input: Product name & brand.
- Task: Retrieves factual data such as ingredients (for food), specs (for tech), or material composition.
- Output: Detailed factual summary.
Use Case Agent
- Input: Product context.
- Task: Identifies who the product is for (demographics) and how it should be used.
- Output: Structured JSON, e.g.:

```json
{ "target_audience": ["Gamers", "Students"], "use_cases": ["Energy boost", "Late night study"] }
```
Impact Analysis Agent
- Input: Product ingredients/materials.
- Task:
  - Health: Checks for processed sugars, allergens, or carcinogens.
  - Environment: Analyzes packaging recyclability and carbon footprint.
- Output: Risk Score (0-100) and Sustainability Rating.
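To make the scoring concrete, here is a rule-based sketch of how a risk score and rating might be derived. It is purely illustrative: the real Impact Analysis Agent delegates this reasoning to the OpenAI API, and the flagged-ingredient weights below are invented for the example.

```python
# Illustrative rule-based risk scorer. The actual agent uses GPT-4;
# this sketch only shows the shape of the Impact Agent's output.

FLAGGED = {
    "processed sugar": 25,
    "aspartame": 15,
    "palm oil": 20,
    "peanuts": 30,   # common allergen
}

def impact_report(ingredients, recyclable_packaging):
    # Risk Score is capped at 100, per the 0-100 scale described above.
    risk = min(100, sum(FLAGGED.get(i.lower(), 0) for i in ingredients))
    rating = "A" if recyclable_packaging and risk < 30 else "C"
    level = "High" if risk >= 70 else "Medium" if risk >= 40 else "Low"
    return {"risk_score": risk, "risk_level": level, "sustainability": rating}
```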
Buy Link Agent
- Input: Product name + risk level.
- Task: Searches for purchase links.
- Logic: If the risk level is High, it suppresses purchase links to avoid promoting harmful products.
- Output: List of URLs, or "Purchase disabled due to high risk".
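The gating rule fits in a few lines. This is a hypothetical helper, not the agent's actual code; `search` stands in for whatever link-search mechanism the real agent uses.

```python
def buy_links_for(product_name, risk_level, search):
    """Return purchase links, or a suppression notice for high-risk items.

    `search` is any callable taking a product name and returning URLs;
    it is a placeholder for the real agent's link search.
    """
    if risk_level == "High":
        return "Purchase disabled due to high risk"
    return search(product_name)
```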
```
VisionProbe-AI/
├── backend/              # Django Backend API
│   ├── config/           # Project Logic & Settings
│   ├── core/             # Main Application Logic
│   │   ├── agents/       # AI Agents & Orchestrator Logic
│   │   ├── models.py     # Database Models
│   │   └── views.py      # API Views
│   ├── manage.py         # Django Entry Point
│   ├── .env.example      # Environment Variables Template
│   └── requirements.txt  # Python Dependencies
├── frontend/             # React Frontend Application
│   ├── src/
│   │   ├── components/   # Reusable UI Components
│   │   ├── pages/        # Application Pages
│   │   ├── lib/          # Utilities & Helpers
│   │   ├── api.js        # API Integration
│   │   ├── App.jsx       # Main Component
│   │   └── main.jsx      # Entry Point
│   ├── package.json      # Node.js Dependencies
│   └── vite.config.ts    # Vite Configuration
├── docs/                 # Additional Documentation
├── render.yaml           # Render Deployment Config
└── README.md             # Project Documentation
```
Backend
- Framework: Django 5.0 & Django Rest Framework
- Authentication: Neon Auth (Serverless Postgres Authentication)
- AI Engine: OpenAI API (GPT-4 Vision Preview)
- Database: Neon Postgres (Serverless)
- Image Processing: Pillow (PIL)
- Server: Gunicorn with Whitenoise for static files
- HTTP Client: Requests
Frontend
- Framework: React 19
- Build Tool: Vite
- Styling: TailwindCSS
- Icons: Lucide React
- Animations: Framer Motion
- HTTP Client: Axios
- Routing: React Router DOM
Infrastructure
- Hosting: Render
- Versioning: Git
- Multi-Agent Orchestration: Sequential processing by the Visual, Knowledge, Use Case, Impact, and Buy Link agents.
- Fail-Safe Mechanism: Automatically aborts analysis if product identification confidence is low.
- Health & Environment Scoring: Detailed breakdown of product risks and sustainability.
- Ethical Shopping: Suggests alternatives and disables buy links for high-risk items.
- Real-time Status: Live updates on the frontend as each agent completes its task.
- Responsive Dashboard: Modern, glassmorphism-inspired UI built for all devices.
Follow these steps to set up VisionProbe-AI locally.
- Python 3.10 or higher
- Node.js 18 or higher
- Git
1. Clone the repository:

```shell
git clone https://github.com/your-username/VisionProbe-AI.git
cd VisionProbe-AI/backend
```

2. Create and activate a virtual environment:

```shell
# Windows
python -m venv venv
venv\Scripts\activate

# Mac/Linux
python3 -m venv venv
source venv/bin/activate
```

3. Install dependencies:

```shell
pip install -r requirements.txt
```

4. Run migrations:

```shell
python manage.py migrate
```

5. Navigate to the frontend directory:

```shell
cd ../frontend
```

6. Install dependencies:

```shell
npm install
```
To run the project, you need to configure environment variables.

Create a `.env` file in the `backend/` directory:

| Variable | Description | Example |
|---|---|---|
| `SECRET_KEY` | Django secret key | `django-insecure-...` |
| `DEBUG` | Debug mode (True/False) | `True` |
| `OPENAI_API_KEY` | Your OpenAI API key | `sk-...` |
| `DATABASE_URL` | (Optional) Postgres URL | `postgres://user:pass@host/db` |
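Putting the table's example values together, a `backend/.env` might look like the following (all values are placeholders; substitute your own keys):

```env
SECRET_KEY=django-insecure-change-me
DEBUG=True
OPENAI_API_KEY=sk-your-key-here
DATABASE_URL=postgres://user:pass@host/db
```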
Create a `.env` file in the `frontend/` directory:

| Variable | Description | Example |
|---|---|---|
| `VITE_API_URL` | Backend API URL | `http://localhost:8000/api/v1` |
1. Start the backend:

```shell
# In backend/
python manage.py runserver
```

The API will be available at `http://localhost:8000`.

2. Start the frontend:

```shell
# In frontend/
npm run dev
```

The application will launch at `http://localhost:5173`.
- Navigate to `http://localhost:5173`.
- Upload a product image (e.g., a soda can, a gadget, a snack).
- Watch the "Agent Status" panel as the analysis progresses.
- Review the final report for Health, Environment, and Purchase recommendations.
This project is configured for easy deployment on Render.
- Push code to GitHub.
- Create a New Web Service on Render.
- Connect your repository.
- Configuration:
- Runtime: Python 3
- Build Command: `pip install -r backend/requirements.txt`
- Start Command: `cd backend && gunicorn config.wsgi:application`
- Environment Variables: Add your `OPENAI_API_KEY`, `SECRET_KEY`, and `DATABASE_URL` in the Render dashboard.
Note: For the frontend, create a separate Static Site on Render with Build Command `npm run build` and Publish Directory `dist`.
| Method | Endpoint | Description |
|---|---|---|
| `POST` | `/api/v1/auth/login/` | User authentication |
| `POST` | `/api/v1/analysis/analyze/` | Upload image for multi-agent analysis |
| `GET` | `/api/v1/analysis/history/` | Retrieve the user's analysis history |
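As a sketch, the analysis endpoint can be called from Python with the `requests` library. The multipart field name (`image`) and the Bearer auth scheme are assumptions here, not confirmed details of the API.

```python
import requests

API = "http://localhost:8000/api/v1"

def build_analyze_request(image_bytes, token):
    """Prepare (but do not send) the multi-agent analysis request.

    The "image" field name and Bearer header are assumptions for this
    sketch; check the backend's serializers for the exact contract.
    """
    req = requests.Request(
        "POST",
        f"{API}/analysis/analyze/",
        files={"image": ("product.jpg", image_bytes)},
        headers={"Authorization": f"Bearer {token}"},
    )
    return req.prepare()

def analyze(image_bytes, token):
    # Send the prepared request; a generous timeout is used because the
    # backend runs several sequential agent calls before responding.
    with requests.Session() as session:
        resp = session.send(build_analyze_request(image_bytes, token), timeout=120)
        resp.raise_for_status()
        return resp.json()
```

Splitting request construction from sending makes the integration easy to inspect or unit-test without a live server.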
Contributions are welcome! Please fork the repository and submit a pull request for any enhancements or bug fixes.
1. Fork the project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
© 2025 VisionProbe-AI. Built with ❤️ and passion.