diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000000..02c9f56962 Binary files /dev/null and b/.DS_Store differ diff --git a/README.md b/README.md index a9049b4779..f23dc46d77 100644 --- a/README.md +++ b/README.md @@ -148,6 +148,22 @@ Copy the content and use that as password / secret key. And run that again to ge ## How To Use It - Alternative With Copier +## PostgreSQL 18 + pgvector (local Docker) + +This project includes a Docker image to run PostgreSQL 18 with the `pgvector` v0.8 extension built-in. + +- Build and start the full stack (uses the custom image for the `db` service): + +```bash +docker compose up --build -d +``` + +- The container image is built from `docker/postgres-pgvector/Dockerfile` and the initialization SQL + `docker/postgres-pgvector/initdb/01-enable-pgvector.sql` creates the `vector` extension on first + initialization. + +If you prefer to use a pre-built image, modify `docker-compose.yml` to point `db.image` to your image. + This repository also supports generating a new project using [Copier](https://copier.readthedocs.io). It will copy all the files, ask you configuration questions, and update the `.env` files with your answers. diff --git a/backend/README.md b/backend/README.md index c217000fc2..3363cc9e11 100644 --- a/backend/README.md +++ b/backend/README.md @@ -2,8 +2,8 @@ ## Requirements -* [Docker](https://www.docker.com/). -* [uv](https://docs.astral.sh/uv/) for Python package and environment management. +- [Docker](https://www.docker.com/). +- [uv](https://docs.astral.sh/uv/) for Python package and environment management. ## Docker Compose @@ -127,23 +127,23 @@ As during local development your app directory is mounted as a volume inside the Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them. As this is what will update the tables in your database. Otherwise, your application will have errors. -* Start an interactive session in the backend container: +- Start an interactive session in the backend container: ```console $ docker compose exec backend bash ``` -* Alembic is already configured to import your SQLModel models from `./backend/app/models.py`. +- Alembic is already configured to import your SQLModel models from `./backend/app/models.py`. -* After changing a model (for example, adding a column), inside the container, create a revision, e.g.: +- After changing a model (for example, adding a column), inside the container, create a revision, e.g.: ```console $ alembic revision --autogenerate -m "Add column last_name to User model" ``` -* Commit to the git repository the files generated in the alembic directory. +- Commit to the git repository the files generated in the alembic directory. -* After creating the revision, run the migration in the database (this is what will actually change the database): +- After creating the revision, run the migration in the database (this is what will actually change the database): ```console $ alembic upgrade head @@ -170,3 +170,44 @@ The email templates are in `./backend/app/email-templates/`. Here, there are two Before continuing, ensure you have the [MJML extension](https://marketplace.visualstudio.com/items?itemName=attilabuti.vscode-mjml) installed in your VS Code. Once you have the MJML extension installed, you can create a new email template in the `src` directory. 
 After creating the new email template and with the `.mjml` file open in your editor, open the command palette with `Ctrl+Shift+P` and search for `MJML: Export to HTML`. This will convert the `.mjml` file to a `.html` file and now you can save it in the build directory.
+
+## Background Tasks (Celery) and Upstash Redis
+
+This project supports running background tasks using Celery with Redis as
+broker/result backend. You can use a local Redis (via Docker Compose) or a
+hosted provider such as Upstash. The project reads these settings from the
+environment via the settings in `app.core.config`.
+
+- **Configure via `.env` or environment**: set either `REDIS_URL` (recommended)
+  or `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND` explicitly. For Upstash
+  use the `rediss://` URL provided by Upstash (it contains the host and token).
+
+Example `.env` entries for Upstash (replace with your values):
+
+```
+REDIS_URL=rediss://default:REPLACE_WITH_YOUR_TOKEN@global-xxxx.upstash.io:6379
+# or explicit celery vars
+CELERY_BROKER_URL=rediss://default:REPLACE_WITH_YOUR_TOKEN@global-xxxx.upstash.io:6379
+CELERY_RESULT_BACKEND=rediss://default:REPLACE_WITH_YOUR_TOKEN@global-xxxx.upstash.io:6379
+```
+
+- **Run worker (recommended)**: from the `backend/` directory either use the
+  Celery CLI or the lightweight Python entrypoint:
+
+```
+# using Celery CLI (preferred)
+celery -A app.core.celery_app.celery_app worker --loglevel=info
+
+# quick start via python entrypoint (run from the `backend/` directory)
+# module form:
+python -m app.workers.celery_worker
+```
+
+- **Test a task**: from a shell (with your virtualenv activated):
+
+```
+python -c "from app.workers import add; res = add.delay(2,3); print(res.get(timeout=10))"
+```
+
+The example tasks are in `app/workers/tasks.py` (re-exported from `app.workers`). Replace `send_welcome_email` with
+your real email sending logic to run it asynchronously.
diff --git a/backend/WEBSOCKETS.md b/backend/WEBSOCKETS.md
new file mode 100644
index 0000000000..3f1253043a
--- /dev/null
+++ b/backend/WEBSOCKETS.md
@@ -0,0 +1,32 @@
+**WebSocket Infrastructure**: Quick setup
+
+- **Purpose**: Provide real-time sync across connected clients and across multiple app instances using Redis pub/sub.
+- **Components**:
+
+  - `app.api.websocket_manager.WebSocketManager`: manages local WebSocket connections and subscribes to Redis channels `ws:{room}`.
+  - `app.api.routes.ws`: WebSocket endpoint at `GET /api/v1/ws/{room}` (path under API prefix).
+  - Uses existing Redis client configured via `REDIS_URL` in `app.core.config.Settings`.
+
+- **How it works**:
+
+  - Each connected client opens a WebSocket to `/api/v1/ws/{room}`.
+  - When a client sends a text message, the endpoint publishes the message to Redis channel `ws:{room}`.
+  - The `WebSocketManager` subscribes to `ws:*` and forwards published messages to all local WebSocket connections in the given room.
+  - This allows multiple app instances to broadcast to each other's connected clients.
+
+- **Env / Config**:
+
+  - Ensure `REDIS_URL` is configured in the project's environment (default: `redis://redis:6379/0`).
+
+- **Frontend example** (browser JS):
+
+```js
+const ws = new WebSocket(`wss://your-backend.example.com/api/v1/ws/room-123`);
+ws.addEventListener("message", (ev) => console.log("msg", ev.data));
+ws.addEventListener("open", () => ws.send(JSON.stringify({ type: "hello" })));
+```
+
+- **Notes & next steps**:
+  - Messages are sent/received as plain text; consider JSON schema enforcement and auth.
+ - Add authentication (JWT in query param/header) and room access checks as needed. + - Consider rate limiting and maximum connections per client. diff --git a/backend/app/api/main.py b/backend/app/api/main.py index eac18c8e8f..01521bf336 100644 --- a/backend/app/api/main.py +++ b/backend/app/api/main.py @@ -1,6 +1,6 @@ from fastapi import APIRouter -from app.api.routes import items, login, private, users, utils +from app.api.routes import items, login, private, users, utils, ws from app.core.config import settings api_router = APIRouter() @@ -8,6 +8,7 @@ api_router.include_router(users.router) api_router.include_router(utils.router) api_router.include_router(items.router) +api_router.include_router(ws.router) if settings.ENVIRONMENT == "local": diff --git a/backend/app/api/routes/ws.py b/backend/app/api/routes/ws.py new file mode 100644 index 0000000000..47f71b7499 --- /dev/null +++ b/backend/app/api/routes/ws.py @@ -0,0 +1,21 @@ +from fastapi import APIRouter, WebSocket, WebSocketDisconnect + +router = APIRouter() + + +@router.websocket("/ws/{room}") +async def websocket_endpoint(websocket: WebSocket, room: str): + """Simple WebSocket endpoint that forwards client messages to Redis + and receives published messages via the WebSocketManager (attached to + the app state) to broadcast to local clients. + """ + manager = websocket.app.state.ws_manager + await manager.connect(websocket, room) + try: + while True: + # receive text from client and publish to Redis so other instances + # receive it and forward to their connected clients + data = await websocket.receive_text() + await manager.publish(room, data) + except WebSocketDisconnect: + await manager.disconnect(websocket, room) diff --git a/backend/app/api/websocket_manager.py b/backend/app/api/websocket_manager.py new file mode 100644 index 0000000000..d826d832de --- /dev/null +++ b/backend/app/api/websocket_manager.py @@ -0,0 +1,102 @@ +import asyncio +import logging +from typing import Dict, Set + +from fastapi import WebSocket + +logger = logging.getLogger(__name__) + + +class WebSocketManager: + """Manage WebSocket connections and Redis pub/sub bridging. + + - Keeps in-memory mapping of rooms -> WebSocket connections for local broadcasts. + - Subscribes to Redis channels `ws:{room}` and broadcasts published messages + to local connections so multiple app instances stay in sync. 
+ """ + + def __init__(self, redis_client): + self.redis = redis_client + self.connections: Dict[str, Set[WebSocket]] = {} + self._pubsub = None + self._listen_task: asyncio.Task | None = None + + async def start(self) -> None: + try: + self._pubsub = self.redis.pubsub() + # Subscribe to all ws channels using pattern subscription + await self._pubsub.psubscribe("ws:*") + self._listen_task = asyncio.create_task(self._reader_loop()) + logger.info("WebSocketManager redis listener started") + except Exception as e: + logger.warning(f"WebSocketManager start failed: {e}") + + async def _reader_loop(self) -> None: + try: + async for message in self._pubsub.listen(): + if not message: + continue + mtype = message.get("type") + # handle pmessage (pattern) and message + if mtype not in ("pmessage", "message"): + continue + # redis.asyncio returns bytes for channel/data in some setups + channel = message.get("channel") or message.get("pattern") + data = message.get("data") + if isinstance(channel, (bytes, bytearray)): + channel = channel.decode() + if isinstance(data, (bytes, bytearray)): + data = data.decode() + # channel format: ws: + try: + room = str(channel).split("ws:", 1)[1] + except Exception: + continue + await self._broadcast_to_local(room, data) + except asyncio.CancelledError: + logger.info("WebSocketManager listener task cancelled") + except Exception as e: + logger.exception(f"WebSocketManager listener error: {e}") + + async def publish(self, room: str, message: str) -> None: + try: + await self.redis.publish(f"ws:{room}", message) + except Exception as e: + logger.warning(f"Failed to publish websocket message: {e}") + + async def connect(self, websocket: WebSocket, room: str) -> None: + await websocket.accept() + self.connections.setdefault(room, set()).add(websocket) + + async def disconnect(self, websocket: WebSocket, room: str) -> None: + conns = self.connections.get(room) + if not conns: + return + conns.discard(websocket) + if not conns: + self.connections.pop(room, None) + + async def send_personal(self, websocket: WebSocket, message: str) -> None: + await websocket.send_text(message) + + async def _broadcast_to_local(self, room: str, message: str) -> None: + conns = list(self.connections.get(room, [])) + for ws in conns: + try: + await ws.send_text(message) + except Exception: + # ignore send errors; disconnect will clean up + pass + + async def stop(self) -> None: + if self._listen_task: + self._listen_task.cancel() + try: + await self._listen_task + except Exception: + pass + if self._pubsub: + try: + await self._pubsub.close() + except Exception: + pass diff --git a/backend/app/core/celery_app.py b/backend/app/core/celery_app.py new file mode 100644 index 0000000000..903311b9a1 --- /dev/null +++ b/backend/app/core/celery_app.py @@ -0,0 +1,28 @@ +from __future__ import annotations + +from celery import Celery + +from .config import settings + + +broker_url = settings.CELERY_BROKER_URL or settings.REDIS_URL +result_backend = settings.CELERY_RESULT_BACKEND or settings.REDIS_URL + + +celery_app = Celery( + settings.PROJECT_NAME if getattr(settings, "PROJECT_NAME", None) else "app", + broker=broker_url, + backend=result_backend, +) + + +celery_app.conf.update( + result_expires=3600, + task_serializer="json", + result_serializer="json", + accept_content=["json"], + timezone="UTC", + enable_utc=True, +) + +celery_app.autodiscover_tasks(["app.workers"]) diff --git a/backend/app/core/config.py b/backend/app/core/config.py index 650b9f7910..d7dfedb8b7 100644 --- 
a/backend/app/core/config.py +++ b/backend/app/core/config.py @@ -55,6 +55,21 @@ def all_cors_origins(self) -> list[str]: POSTGRES_USER: str POSTGRES_PASSWORD: str = "" POSTGRES_DB: str = "" + # Redis connection URL. Default points to the compose service `redis`. + REDIS_URL: str = "redis://redis:6379/0" + # Celery broker/result backend. By default reuse `REDIS_URL` so you can + # configure an Upstash or other hosted Redis via `REDIS_URL` or explicitly + # via `CELERY_BROKER_URL` / `CELERY_RESULT_BACKEND` env vars. + CELERY_BROKER_URL: str | None = None + CELERY_RESULT_BACKEND: str | None = None + + # Cloudflare R2 (S3 compatible) settings + R2_ENABLED: bool = False + R2_ACCOUNT_ID: str | None = None + R2_ACCESS_KEY_ID: str | None = None + R2_SECRET_ACCESS_KEY: str | None = None + R2_BUCKET: str | None = None + R2_ENDPOINT_URL: AnyUrl | None = None @computed_field # type: ignore[prop-decorator] @property @@ -90,6 +105,39 @@ def _set_default_emails_from(self) -> Self: def emails_enabled(self) -> bool: return bool(self.SMTP_HOST and self.EMAILS_FROM_EMAIL) + @computed_field # type: ignore[prop-decorator] + @property + def r2_endpoint(self) -> str | None: + """Return explicit endpoint URL if set, otherwise construct from account id.""" + if self.R2_ENDPOINT_URL: + return str(self.R2_ENDPOINT_URL) + if self.R2_ACCOUNT_ID: + return f"https://{self.R2_ACCOUNT_ID}.r2.cloudflarestorage.com" + return None + + @computed_field # type: ignore[prop-decorator] + @property + def r2_enabled(self) -> bool: + """Whether R2 integration is configured/enabled.""" + if not self.R2_ENABLED: + return False + return bool(self.R2_BUCKET and self.R2_ACCESS_KEY_ID and self.R2_SECRET_ACCESS_KEY) + + @computed_field # type: ignore[prop-decorator] + @property + def r2_boto3_config(self) -> dict[str, Any]: + """Return a dict of kwargs suitable for boto3/aioboto3 client creation.""" + if not self.r2_enabled: + return {} + cfg: dict[str, Any] = { + "aws_access_key_id": self.R2_ACCESS_KEY_ID, + "aws_secret_access_key": self.R2_SECRET_ACCESS_KEY, + } + endpoint = self.r2_endpoint + if endpoint: + cfg["endpoint_url"] = endpoint + return cfg + EMAIL_TEST_USER: EmailStr = "test@example.com" FIRST_SUPERUSER: EmailStr FIRST_SUPERUSER_PASSWORD: str diff --git a/backend/app/core/r2.py b/backend/app/core/r2.py new file mode 100644 index 0000000000..2afeacb49d --- /dev/null +++ b/backend/app/core/r2.py @@ -0,0 +1,77 @@ +"""Simple async Cloudflare R2 (S3-compatible) helpers using aioboto3. + +This module provides small wrappers for common operations used by the +application: upload, download, delete and generating presigned URLs. 
+ +Usage: + await upload_bytes("path/to/key", b"data") + data = await download_bytes("path/to/key") +""" +from __future__ import annotations + +from typing import Optional + +import aioboto3 +from botocore.exceptions import ClientError + +from .config import settings + + +async def upload_bytes( + key: str, + data: bytes, + bucket: Optional[str] = None, + content_type: Optional[str] = None, +) -> None: + bucket = bucket or settings.R2_BUCKET + if not settings.r2_enabled: + raise RuntimeError("R2 is not configured") + + async with aioboto3.client("s3", **settings.r2_boto3_config) as client: + params = {"Bucket": bucket, "Key": key, "Body": data} + if content_type: + params["ContentType"] = content_type + await client.put_object(**params) + + +async def download_bytes(key: str, bucket: Optional[str] = None) -> bytes: + bucket = bucket or settings.R2_BUCKET + if not settings.r2_enabled: + raise RuntimeError("R2 is not configured") + + async with aioboto3.client("s3", **settings.r2_boto3_config) as client: + resp = await client.get_object(Bucket=bucket, Key=key) + async with resp["Body"] as stream: + return await stream.read() + + +async def delete_object(key: str, bucket: Optional[str] = None) -> None: + bucket = bucket or settings.R2_BUCKET + if not settings.r2_enabled: + raise RuntimeError("R2 is not configured") + + async with aioboto3.client("s3", **settings.r2_boto3_config) as client: + await client.delete_object(Bucket=bucket, Key=key) + + +async def generate_presigned_url(key: str, expires_in: int = 3600, bucket: Optional[str] = None) -> str: + bucket = bucket or settings.R2_BUCKET + if not settings.r2_enabled: + raise RuntimeError("R2 is not configured") + + session = aioboto3.Session() + async with session.client("s3", **settings.r2_boto3_config) as client: + # generate_presigned_url is provided by botocore client + return client.generate_presigned_url( + "get_object", + Params={"Bucket": bucket, "Key": key}, + ExpiresIn=expires_in, + ) + + +__all__ = [ + "upload_bytes", + "download_bytes", + "delete_object", + "generate_presigned_url", +] diff --git a/backend/app/core/redis.py b/backend/app/core/redis.py new file mode 100644 index 0000000000..b4a485f582 --- /dev/null +++ b/backend/app/core/redis.py @@ -0,0 +1,66 @@ +import redis.asyncio as aioredis +from typing import Optional +import json +import logging +from app.core.config import settings + +logger = logging.getLogger(__name__) + + +class RedisClient: + _instance: Optional[aioredis.Redis] = None + + @classmethod + async def get_client(cls) -> aioredis.Redis: + if cls._instance is None: + cls._instance = await aioredis.from_url( + settings.REDIS_URL, + encoding="utf-8", + decode_responses=True, + max_connections=50 + ) + logger.info("Redis client initialized") + return cls._instance + + @classmethod + async def close(cls): + if cls._instance: + await cls._instance.close() + cls._instance = None + logger.info("Redis client closed") + + +async def get_redis() -> aioredis.Redis: + return await RedisClient.get_client() + + +class CacheService: + def __init__(self, redis_client: aioredis.Redis): + self.redis = redis_client + + async def get(self, key: str) -> Optional[dict]: + try: + value = await self.redis.get(key) + return json.loads(value) if value else None + except Exception as e: + logger.error(f"Redis GET error: {e}") + return None + + async def set(self, key: str, value: dict, expire: int = 3600): + try: + await self.redis.set(key, json.dumps(value), ex=expire) + except Exception as e: + logger.error(f"Redis SET error: {e}") 
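+    # Illustrative usage sketch only (nothing in this diff wires it up yet),
+    # using `get_redis` defined above:
+    #     cache = CacheService(await get_redis())
+    #     await cache.set("user:42", {"name": "Alice"}, expire=600)
+    #     profile = await cache.get("user:42")  # -> {"name": "Alice"} or None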
+ + async def delete(self, key: str): + try: + await self.redis.delete(key) + except Exception as e: + logger.error(f"Redis DELETE error: {e}") + + async def exists(self, key: str) -> bool: + try: + return await self.redis.exists(key) > 0 + except Exception as e: + logger.error(f"Redis EXISTS error: {e}") + return False diff --git a/backend/app/main.py b/backend/app/main.py index 9a95801e74..9f99baca2f 100644 --- a/backend/app/main.py +++ b/backend/app/main.py @@ -1,11 +1,22 @@ +import logging import sentry_sdk from fastapi import FastAPI from fastapi.routing import APIRoute from starlette.middleware.cors import CORSMiddleware +import asyncio from app.api.main import api_router from app.core.config import settings +# middlewares +from app.middlewares.logger import RequestLoggerMiddleware +from app.middlewares.rate_limiter import RateLimiterMiddleware + +# redis client and threading utils +from app.core.redis import RedisClient +from app.utils_helper.threading import ThreadingUtils +from app.api.websocket_manager import WebSocketManager + def custom_generate_unique_id(route: APIRoute) -> str: return f"{route.tags[0]}-{route.name}" @@ -31,3 +42,43 @@ def custom_generate_unique_id(route: APIRoute) -> str: ) app.include_router(api_router, prefix=settings.API_V1_STR) + +# Register additional middlewares +app.add_middleware(RequestLoggerMiddleware) +app.add_middleware(RateLimiterMiddleware, requests_per_minute=100) + + +@app.on_event("startup") +async def startup_event(): + # Configure basic logging + logging.basicConfig(level=logging.INFO) + + # Initialize redis client and attach to app.state + try: + app.state.redis = await RedisClient.get_client() + # Initialize WebSocket manager and start Redis listener + try: + app.state.ws_manager = WebSocketManager(app.state.redis) + # start the manager which spawns a background redis subscription + await app.state.ws_manager.start() + except Exception as e: + logging.getLogger(__name__).warning(f"WS manager init failed: {e}") + except Exception as e: + logging.getLogger(__name__).warning(f"Redis init failed: {e}") + + # Attach threading utilities to app state for global access + app.state.threading = ThreadingUtils + + +@app.on_event("shutdown") +async def shutdown_event(): + try: + await RedisClient.close() + except Exception as e: + logging.getLogger(__name__).warning(f"Redis close failed: {e}") + # stop websocket manager if present + try: + if getattr(app.state, "ws_manager", None): + await app.state.ws_manager.stop() + except Exception as e: + logging.getLogger(__name__).warning(f"WS manager stop failed: {e}") diff --git a/backend/app/middlewares/cors.py b/backend/app/middlewares/cors.py new file mode 100644 index 0000000000..44dea926e0 --- /dev/null +++ b/backend/app/middlewares/cors.py @@ -0,0 +1,13 @@ +from fastapi.middleware.cors import CORSMiddleware +from app.config.settings import settings + + +def setup_cors(app): + app.add_middleware( + CORSMiddleware, + allow_origins=settings.CORS_ORIGINS, + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + expose_headers=["X-Request-ID", "X-Process-Time"] + ) diff --git a/backend/app/middlewares/error_handler.py b/backend/app/middlewares/error_handler.py new file mode 100644 index 0000000000..94febd737b --- /dev/null +++ b/backend/app/middlewares/error_handler.py @@ -0,0 +1,70 @@ +import logging +from fastapi import Request, status +from fastapi.responses import JSONResponse +from fastapi.exceptions import RequestValidationError +from starlette.exceptions import HTTPException as 
StarletteHTTPException +from app.core.exceptions import AppException +from app.schemas.response import ResponseSchema + +logger = logging.getLogger(__name__) + + +async def app_exception_handler(request: Request, exc: AppException): + logger.error(f"AppException: {exc.message} - Details: {exc.details}") + return JSONResponse( + status_code=exc.status_code, + content=ResponseSchema( + success=False, + message=exc.message, + errors=exc.details, + data=None + ).model_dump() + ) + + +async def validation_exception_handler(request: Request, exc: RequestValidationError): + errors = [ + { + "field": ".".join(str(loc) for loc in err["loc"]), + "message": err["msg"], + "type": err["type"] + } + for err in exc.errors() + ] + + logger.warning(f"Validation error: {errors}") + return JSONResponse( + status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, + content=ResponseSchema( + success=False, + message="Validation error", + errors=errors, + data=None + ).model_dump() + ) + + +async def http_exception_handler(request: Request, exc: StarletteHTTPException): + logger.error(f"HTTPException: {exc.status_code} - {exc.detail}") + return JSONResponse( + status_code=exc.status_code, + content=ResponseSchema( + success=False, + message=exc.detail, + errors=None, + data=None + ).model_dump() + ) + + +async def unhandled_exception_handler(request: Request, exc: Exception): + logger.exception(f"Unhandled exception: {str(exc)}") + return JSONResponse( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + content=ResponseSchema( + success=False, + message="Internal server error", + errors=str(exc) if request.app.state.settings.DEBUG else None, + data=None + ).model_dump() + ) diff --git a/backend/app/middlewares/logger.py b/backend/app/middlewares/logger.py new file mode 100644 index 0000000000..0dd2a990d9 --- /dev/null +++ b/backend/app/middlewares/logger.py @@ -0,0 +1,33 @@ +import time +import logging +from fastapi import Request +from starlette.middleware.base import BaseHTTPMiddleware +from typing import Callable + +logger = logging.getLogger(__name__) + + +class RequestLoggerMiddleware(BaseHTTPMiddleware): + async def dispatch(self, request: Request, call_next: Callable): + request_id = request.headers.get("X-Request-ID", "N/A") + start_time = time.time() + + logger.info( + f"Request started: {request.method} {request.url.path} " + f"[Request ID: {request_id}]" + ) + + response = await call_next(request) + + process_time = time.time() - start_time + response.headers["X-Process-Time"] = str(process_time) + response.headers["X-Request-ID"] = request_id + + logger.info( + f"Request completed: {request.method} {request.url.path} " + f"Status: {response.status_code} " + f"Duration: {process_time:.3f}s " + f"[Request ID: {request_id}]" + ) + + return response diff --git a/backend/app/middlewares/rate_limiter.py b/backend/app/middlewares/rate_limiter.py new file mode 100644 index 0000000000..57053082ca --- /dev/null +++ b/backend/app/middlewares/rate_limiter.py @@ -0,0 +1,55 @@ +import time +from fastapi import Request, HTTPException, status +from starlette.middleware.base import BaseHTTPMiddleware +from typing import Callable, Dict +from collections import defaultdict +import asyncio + + +class RateLimiterMiddleware(BaseHTTPMiddleware): + def __init__(self, app, requests_per_minute: int = 100): + super().__init__(app) + self.requests_per_minute = requests_per_minute + self.requests: Dict[str, list] = defaultdict(list) + self.cleanup_interval = 60 + self._start_cleanup_task() + + def _start_cleanup_task(self): + 
asyncio.create_task(self._cleanup_old_requests()) + + async def _cleanup_old_requests(self): + while True: + await asyncio.sleep(self.cleanup_interval) + current_time = time.time() + for ip in list(self.requests.keys()): + self.requests[ip] = [ + req_time for req_time in self.requests[ip] + if current_time - req_time < 60 + ] + if not self.requests[ip]: + del self.requests[ip] + + async def dispatch(self, request: Request, call_next: Callable): + client_ip = request.client.host + current_time = time.time() + + self.requests[client_ip] = [ + req_time for req_time in self.requests[client_ip] + if current_time - req_time < 60 + ] + + if len(self.requests[client_ip]) >= self.requests_per_minute: + raise HTTPException( + status_code=status.HTTP_429_TOO_MANY_REQUESTS, + detail="Rate limit exceeded. Please try again later." + ) + + self.requests[client_ip].append(current_time) + response = await call_next(request) + + response.headers["X-RateLimit-Limit"] = str(self.requests_per_minute) + response.headers["X-RateLimit-Remaining"] = str( + self.requests_per_minute - len(self.requests[client_ip]) + ) + + return response diff --git a/backend/app/middlewares/response.py b/backend/app/middlewares/response.py new file mode 100644 index 0000000000..c2d7f7c10a --- /dev/null +++ b/backend/app/middlewares/response.py @@ -0,0 +1,14 @@ +from fastapi import Request +from starlette.middleware.base import BaseHTTPMiddleware +from starlette.responses import JSONResponse +from typing import Callable +import json + + +class ResponseFormatterMiddleware(BaseHTTPMiddleware): + async def dispatch(self, request: Request, call_next: Callable): + response = await call_next(request) + if not response.headers.get("content-type", "").startswith("application/json"): + return response + + return response diff --git a/backend/app/schemas/base.py b/backend/app/schemas/base.py new file mode 100644 index 0000000000..a479398fbc --- /dev/null +++ b/backend/app/schemas/base.py @@ -0,0 +1,12 @@ +from pydantic import BaseModel, ConfigDict +from datetime import datetime +from typing import Optional + + +class BaseSchema(BaseModel): + model_config = ConfigDict(from_attributes=True) + + +class TimestampMixin(BaseModel): + created_at: Optional[datetime] = None + updated_at: Optional[datetime] = None diff --git a/backend/app/schemas/response.py b/backend/app/schemas/response.py new file mode 100644 index 0000000000..c584594500 --- /dev/null +++ b/backend/app/schemas/response.py @@ -0,0 +1,34 @@ +from typing import Optional, Any, Generic, TypeVar +from pydantic import BaseModel, Field + +T = TypeVar('T') + + +class ResponseSchema(BaseModel, Generic[T]): + success: bool = Field(default=True, description="Operation success status") + message: str = Field(default="Success", description="Response message") + data: Optional[T] = Field(default=None, description="Response data") + errors: Optional[Any] = Field(default=None, description="Error details") + meta: Optional[dict] = Field(default=None, description="Additional metadata") + + class Config: + json_schema_extra = { + "example": { + "success": True, + "message": "Operation completed successfully", + "data": {"id": 1, "name": "Example"}, + "errors": None, + "meta": {"timestamp": "2024-01-01T00:00:00"} + } + } + + +class PaginationMeta(BaseModel): + page: int + page_size: int + total_items: int + total_pages: int + + +class PaginatedResponseSchema(ResponseSchema[T], Generic[T]): + meta: Optional[PaginationMeta] = None diff --git a/backend/app/tasks/tasks.py b/backend/app/tasks/tasks.py new 
file mode 100644 index 0000000000..d79faf3027 --- /dev/null +++ b/backend/app/tasks/tasks.py @@ -0,0 +1,5 @@ +from __future__ import annotations + +from app.workers import add, send_welcome_email + +__all__ = ["add", "send_welcome_email"] diff --git a/backend/app/utils_helper/helpers.py b/backend/app/utils_helper/helpers.py new file mode 100644 index 0000000000..29b6b5ec17 --- /dev/null +++ b/backend/app/utils_helper/helpers.py @@ -0,0 +1,28 @@ +import hashlib +import uuid +from datetime import datetime, timedelta +from typing import Any, Optional + + +def generate_uuid() -> str: + return str(uuid.uuid4()) + + +def generate_hash(data: str) -> str: + return hashlib.sha256(data.encode()).hexdigest() + + +def get_current_timestamp() -> datetime: + return datetime.utcnow() + + +def add_time(hours: int = 0, minutes: int = 0, days: int = 0) -> datetime: + return datetime.utcnow() + timedelta(hours=hours, minutes=minutes, days=days) + + +def format_datetime(dt: datetime, fmt: str = "%Y-%m-%d %H:%M:%S") -> str: + return dt.strftime(fmt) + + +def parse_datetime(dt_str: str, fmt: str = "%Y-%m-%d %H:%M:%S") -> datetime: + return datetime.strptime(dt_str, fmt) diff --git a/backend/app/utils_helper/messages.py b/backend/app/utils_helper/messages.py new file mode 100644 index 0000000000..40acb1b5a5 --- /dev/null +++ b/backend/app/utils_helper/messages.py @@ -0,0 +1,29 @@ +class Messages: + # Success messages + SUCCESS = "Operation completed successfully" + CREATED = "Resource created successfully" + UPDATED = "Resource updated successfully" + DELETED = "Resource deleted successfully" + + # Error messages + NOT_FOUND = "Resource not found" + ALREADY_EXISTS = "Resource already exists" + UNAUTHORIZED = "Unauthorized access" + FORBIDDEN = "Access forbidden" + BAD_REQUEST = "Invalid request" + INTERNAL_ERROR = "Internal server error" + VALIDATION_ERROR = "Validation error" + + # User messages + USER_CREATED = "User created successfully" + USER_NOT_FOUND = "User not found" + USER_UPDATED = "User updated successfully" + USER_DELETED = "User deleted successfully" + INVALID_CREDENTIALS = "Invalid credentials" + + # Rate limit + RATE_LIMIT_EXCEEDED = "Rate limit exceeded. 
Please try again later" + + @staticmethod + def custom(message: str) -> str: + return message diff --git a/backend/app/utils_helper/threading.py b/backend/app/utils_helper/threading.py new file mode 100644 index 0000000000..eed313db1c --- /dev/null +++ b/backend/app/utils_helper/threading.py @@ -0,0 +1,24 @@ +import asyncio +from concurrent.futures import ThreadPoolExecutor +from typing import Callable, Any +from functools import wraps + + +class ThreadingUtils: + executor = ThreadPoolExecutor(max_workers=10) + + @staticmethod + async def run_in_thread(func: Callable, *args, **kwargs) -> Any: + loop = asyncio.get_event_loop() + return await loop.run_in_executor( + ThreadingUtils.executor, + lambda: func(*args, **kwargs) + ) + + @staticmethod + def async_to_sync(func: Callable) -> Callable: + @wraps(func) + def wrapper(*args, **kwargs): + loop = asyncio.get_event_loop() + return loop.run_until_complete(func(*args, **kwargs)) + return wrapper diff --git a/backend/app/workers/__init__.py b/backend/app/workers/__init__.py new file mode 100644 index 0000000000..2cef3e8b56 --- /dev/null +++ b/backend/app/workers/__init__.py @@ -0,0 +1,6 @@ +from __future__ import annotations + +from .tasks import add, send_welcome_email +from .celery_worker import main as worker_main + +__all__ = ["add", "send_welcome_email", "worker_main"] diff --git a/backend/app/workers/celery_worker.py b/backend/app/workers/celery_worker.py new file mode 100644 index 0000000000..a1e28abe7c --- /dev/null +++ b/backend/app/workers/celery_worker.py @@ -0,0 +1,15 @@ +from __future__ import annotations + +from app.core.celery_app import celery_app + + +def main() -> None: + argv = [ + "worker", + "--loglevel=info", + ] + celery_app.worker_main(argv) + + +if __name__ == "__main__": + main() diff --git a/backend/pyproject.toml b/backend/pyproject.toml index d72454c28a..9735c2d11e 100644 --- a/backend/pyproject.toml +++ b/backend/pyproject.toml @@ -21,6 +21,10 @@ dependencies = [ "pydantic-settings<3.0.0,>=2.2.1", "sentry-sdk[fastapi]<2.0.0,>=1.40.6", "pyjwt<3.0.0,>=2.8.0", + "redis<5.0.0,>=4.6.0", + "celery[redis]<6,>=5.3.0", + "boto3>=1.26", + "aioboto3>=10.5", ] [tool.uv] diff --git a/docker-compose.override.test-dev.yml b/docker-compose.override.test-dev.yml new file mode 100644 index 0000000000..65d8ce0c30 --- /dev/null +++ b/docker-compose.override.test-dev.yml @@ -0,0 +1,83 @@ +services: + + # Local services are available on their ports, but also available on: + # http://api.localhost.tiangolo.com: backend + # http://dashboard.localhost.tiangolo.com: frontend + # etc. 
To enable it, update .env, set: + # DOMAIN=localhost.tiangolo.com + + db: + restart: "no" + ports: + - "5432:5432" + + backend: + restart: "no" + ports: + - "8000:8000" + build: + context: ./backend + command: + - fastapi + - run + - --reload + - "app/main.py" + develop: + watch: + - path: ./backend + action: sync + target: /app + ignore: + - ./backend/.venv + - .venv + - path: ./backend/pyproject.toml + action: rebuild + volumes: + - ./backend/htmlcov:/app/htmlcov + environment: + SMTP_HOST: "mailcatcher" + SMTP_PORT: "1025" + SMTP_TLS: "false" + EMAILS_FROM_EMAIL: "noreply@example.com" + + mailcatcher: + image: schickling/mailcatcher + ports: + - "1080:1080" + - "1025:1025" + + frontend: + restart: "no" + ports: + - "5173:80" + build: + context: ./frontend + args: + - VITE_API_URL=http://localhost:8000 + - NODE_ENV=development + + playwright: + build: + context: ./frontend + dockerfile: Dockerfile.playwright + args: + - VITE_API_URL=http://backend:8000 + - NODE_ENV=production + ipc: host + depends_on: + - backend + - mailcatcher + env_file: + - .env + environment: + - VITE_API_URL=http://backend:8000 + - MAILCATCHER_HOST=http://mailcatcher:1080 + - PLAYWRIGHT_HTML_HOST=0.0.0.0 + - CI=${CI} + volumes: + - ./frontend/blob-report:/app/blob-report + - ./frontend/test-results:/app/test-results + ports: + - 9323:9323 + +# Traefik network removed for local dev diff --git a/docker-compose.override.yml b/docker-compose.override.yml index 0751abe901..c10788bcc1 100644 --- a/docker-compose.override.yml +++ b/docker-compose.override.yml @@ -50,11 +50,6 @@ services: ports: - "5432:5432" - adminer: - restart: "no" - ports: - - "8080:8080" - backend: restart: "no" ports: diff --git a/docker-compose.test-dev.yml b/docker-compose.test-dev.yml new file mode 100644 index 0000000000..1a13330c0e --- /dev/null +++ b/docker-compose.test-dev.yml @@ -0,0 +1,121 @@ +services: + + db: + image: postgres:17 + restart: always + healthcheck: + test: [ "CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}" ] + interval: 10s + retries: 5 + start_period: 30s + timeout: 10s + volumes: + - app-db-data:/var/lib/postgresql/data/pgdata + env_file: + - .env + environment: + - PGDATA=/var/lib/postgresql/data/pgdata + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_DB=${POSTGRES_DB?Variable not set} + + prestart: + image: '${DOCKER_IMAGE_BACKEND?Variable not set}:${TAG-latest}' + build: + context: ./backend + networks: + - default + depends_on: + db: + condition: service_healthy + restart: true + command: bash scripts/prestart.sh + env_file: + - .env + environment: + - DOMAIN=${DOMAIN} + - FRONTEND_HOST=${FRONTEND_HOST?Variable not set} + - ENVIRONMENT=${ENVIRONMENT} + - BACKEND_CORS_ORIGINS=${BACKEND_CORS_ORIGINS} + - SECRET_KEY=${SECRET_KEY?Variable not set} + - FIRST_SUPERUSER=${FIRST_SUPERUSER?Variable not set} + - FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD?Variable not set} + - SMTP_HOST=${SMTP_HOST} + - SMTP_USER=${SMTP_USER} + - SMTP_PASSWORD=${SMTP_PASSWORD} + - EMAILS_FROM_EMAIL=${EMAILS_FROM_EMAIL} + - POSTGRES_SERVER=db + - POSTGRES_PORT=${POSTGRES_PORT} + - POSTGRES_DB=${POSTGRES_DB} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - SENTRY_DSN=${SENTRY_DSN} + + backend: + image: '${DOCKER_IMAGE_BACKEND?Variable not set}:${TAG-latest}' + restart: always + networks: + - default + depends_on: + db: + condition: service_healthy + restart: true + prestart: + 
condition: service_completed_successfully + env_file: + - .env + environment: + - DOMAIN=${DOMAIN} + - FRONTEND_HOST=${FRONTEND_HOST?Variable not set} + - ENVIRONMENT=${ENVIRONMENT} + - BACKEND_CORS_ORIGINS=${BACKEND_CORS_ORIGINS} + - SECRET_KEY=${SECRET_KEY?Variable not set} + - FIRST_SUPERUSER=${FIRST_SUPERUSER?Variable not set} + - FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD?Variable not set} + - SMTP_HOST=${SMTP_HOST} + - SMTP_USER=${SMTP_USER} + - SMTP_PASSWORD=${SMTP_PASSWORD} + - EMAILS_FROM_EMAIL=${EMAILS_FROM_EMAIL} + - POSTGRES_SERVER=db + - POSTGRES_PORT=${POSTGRES_PORT} + - POSTGRES_DB=${POSTGRES_DB} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - SENTRY_DSN=${SENTRY_DSN} + + healthcheck: + test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v1/utils/health-check/" ] + interval: 10s + timeout: 5s + retries: 5 + + build: + context: ./backend + # Traefik labels removed for dev environment + + frontend: + image: '${DOCKER_IMAGE_FRONTEND?Variable not set}:${TAG-latest}' + restart: always + networks: + - default + build: + context: ./frontend + args: + - VITE_API_URL=https://api.${DOMAIN?Variable not set} + - NODE_ENV=production + # Traefik labels removed for dev environment + + # Redis service added for dev/test usage + redis: + image: redis:7-alpine + restart: always + ports: + - "6379:6379" + networks: + - default + +volumes: + app-db-data: + + +networks: # Traefik network removed for dev/test environment diff --git a/docker-compose.yml b/docker-compose.yml index b1aa17ed43..09a91cff48 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,10 +1,12 @@ services: db: - image: postgres:17 + build: + context: ./docker/postgres-pgvector + image: organyz/postgres-pgvector:18 restart: always healthcheck: - test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"] + test: [ "CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}" ] interval: 10s retries: 5 start_period: 30s @@ -109,7 +111,7 @@ services: - SENTRY_DSN=${SENTRY_DSN} healthcheck: - test: ["CMD", "curl", "-f", "http://localhost:8000/api/v1/utils/health-check/"] + test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v1/utils/health-check/" ] interval: 10s timeout: 5s retries: 5 @@ -165,6 +167,7 @@ services: volumes: app-db-data: + networks: traefik-public: # Allow setting it to false for testing diff --git a/docker/postgres-pgvector/Dockerfile b/docker/postgres-pgvector/Dockerfile new file mode 100644 index 0000000000..df32d9682a --- /dev/null +++ b/docker/postgres-pgvector/Dockerfile @@ -0,0 +1,32 @@ +FROM postgres:18 + +ENV PGVECTOR_VERSION= + +RUN apt-get update \ + && apt-get install -y --no-install-recommends \ + ca-certificates \ + build-essential \ + git \ + gcc \ + make \ + wget \ + libssl-dev \ + postgresql-server-dev-18 \ + && rm -rf /var/lib/apt/lists/* + +RUN if [ -z "${PGVECTOR_VERSION}" ]; then \ + git clone --depth 1 https://github.com/pgvector/pgvector.git /tmp/pgvector; \ + else \ + git clone --depth 1 --branch ${PGVECTOR_VERSION} https://github.com/pgvector/pgvector.git /tmp/pgvector; \ + fi \ + && cd /tmp/pgvector \ + && make \ + && make install \ + && cd / \ + && rm -rf /tmp/pgvector + +# Copy initialization SQL scripts into the image so they run on first init +COPY initdb /docker-entrypoint-initdb.d/ +RUN chmod -R 755 /docker-entrypoint-initdb.d + +# Keep the default entrypoint from postgres image diff --git a/docker/postgres-pgvector/initdb/01-enable-pgvector.sql 
b/docker/postgres-pgvector/initdb/01-enable-pgvector.sql new file mode 100644 index 0000000000..ae36c0b39a --- /dev/null +++ b/docker/postgres-pgvector/initdb/01-enable-pgvector.sql @@ -0,0 +1,2 @@ +-- Enable pgvector extension in the default database on initialization +CREATE EXTENSION IF NOT EXISTS vector; diff --git a/dockercompose-dev.yml b/dockercompose-dev.yml new file mode 100644 index 0000000000..79841aee3d --- /dev/null +++ b/dockercompose-dev.yml @@ -0,0 +1,111 @@ +services: + + db: + build: + context: ./docker/postgres-pgvector + image: organyz/postgres-pgvector:18 + restart: always + healthcheck: + test: [ "CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}" ] + interval: 10s + retries: 5 + start_period: 30s + timeout: 10s + volumes: + - app-db-data:/var/lib/postgresql/data/pgdata + env_file: + - .env + environment: + - PGDATA=/var/lib/postgresql/data/pgdata + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_DB=${POSTGRES_DB?Variable not set} + + # adminer service removed for dev environment + + prestart: + image: '${DOCKER_IMAGE_BACKEND?Variable not set}:${TAG-latest}' + build: + context: ./backend + networks: + - default + depends_on: + db: + condition: service_healthy + restart: true + command: bash scripts/prestart.sh + env_file: + - .env + environment: + - DOMAIN=${DOMAIN} + - FRONTEND_HOST=${FRONTEND_HOST?Variable not set} + - ENVIRONMENT=${ENVIRONMENT} + - BACKEND_CORS_ORIGINS=${BACKEND_CORS_ORIGINS} + - SECRET_KEY=${SECRET_KEY?Variable not set} + - FIRST_SUPERUSER=${FIRST_SUPERUSER?Variable not set} + - FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD?Variable not set} + - SMTP_HOST=${SMTP_HOST} + - SMTP_USER=${SMTP_USER} + - SMTP_PASSWORD=${SMTP_PASSWORD} + - EMAILS_FROM_EMAIL=${EMAILS_FROM_EMAIL} + - POSTGRES_SERVER=db + - POSTGRES_PORT=${POSTGRES_PORT} + - POSTGRES_DB=${POSTGRES_DB} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - SENTRY_DSN=${SENTRY_DSN} + + backend: + image: '${DOCKER_IMAGE_BACKEND?Variable not set}:${TAG-latest}' + restart: always + networks: + - default + depends_on: + db: + condition: service_healthy + restart: true + prestart: + condition: service_completed_successfully + env_file: + - .env + environment: + - DOMAIN=${DOMAIN} + - FRONTEND_HOST=${FRONTEND_HOST?Variable not set} + - ENVIRONMENT=${ENVIRONMENT} + - BACKEND_CORS_ORIGINS=${BACKEND_CORS_ORIGINS} + - SECRET_KEY=${SECRET_KEY?Variable not set} + - FIRST_SUPERUSER=${FIRST_SUPERUSER?Variable not set} + - FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD?Variable not set} + - SMTP_HOST=${SMTP_HOST} + - SMTP_USER=${SMTP_USER} + - SMTP_PASSWORD=${SMTP_PASSWORD} + - EMAILS_FROM_EMAIL=${EMAILS_FROM_EMAIL} + - POSTGRES_SERVER=db + - POSTGRES_PORT=${POSTGRES_PORT} + - POSTGRES_DB=${POSTGRES_DB} + - POSTGRES_USER=${POSTGRES_USER?Variable not set} + - POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set} + - SENTRY_DSN=${SENTRY_DSN} + + healthcheck: + test: [ "CMD", "curl", "-f", "http://localhost:8000/api/v1/utils/health-check/" ] + interval: 10s + timeout: 5s + retries: 5 + + build: + context: ./backend + + frontend: + image: '${DOCKER_IMAGE_FRONTEND?Variable not set}:${TAG-latest}' + restart: always + networks: + - default + build: + context: ./frontend + args: + - VITE_API_URL=https://api.${DOMAIN?Variable not set} + - NODE_ENV=production + +volumes: + app-db-data: diff --git a/scripts/build-push.sh b/scripts/build-push.sh index 
3fa3aa7e6b..b8a5f2fab2 100644
--- a/scripts/build-push.sh
+++ b/scripts/build-push.sh
@@ -7,4 +7,4 @@ TAG=${TAG?Variable not set} \
 FRONTEND_ENV=${FRONTEND_ENV-production} \
 sh ./scripts/build.sh
 
-docker-compose -f docker-compose.yml push
+docker-compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml push
diff --git a/scripts/build.sh b/scripts/build.sh
index 21528c538e..a0c71bfb3d 100644
--- a/scripts/build.sh
+++ b/scripts/build.sh
@@ -3,8 +3,9 @@
 # Exit in case of error
 set -e
 
 TAG=${TAG?Variable not set} \
 FRONTEND_ENV=${FRONTEND_ENV-production} \
 docker-compose \
--f docker-compose.yml \
+-f docker-compose.test-dev.yml \
+-f docker-compose.override.test-dev.yml \
 build
diff --git a/scripts/docker-compose-dev.sh b/scripts/docker-compose-dev.sh
new file mode 100755
index 0000000000..ab60b4c403
--- /dev/null
+++ b/scripts/docker-compose-dev.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env zsh
+# Wrapper to run `docker compose` using the generated `dockercompose-dev.yml` file
+# Usage:
+#   ./scripts/docker-compose-dev.sh up -d
+#   ./scripts/docker-compose-dev.sh ps
+
+export COMPOSE_FILE="dockercompose-dev.yml"
+
+if [ "$#" -eq 0 ]; then
+  docker compose help
+else
+  docker compose "$@"
+fi
diff --git a/scripts/docker-compose-test-dev.sh b/scripts/docker-compose-test-dev.sh
new file mode 100755
index 0000000000..453c25f0cf
--- /dev/null
+++ b/scripts/docker-compose-test-dev.sh
@@ -0,0 +1,17 @@
+#!/usr/bin/env zsh
+# Wrapper to run `docker compose` using docker-compose.override.test-dev.yml
+# This avoids modifying the original compose files. Usage:
+#   ./scripts/docker-compose-test-dev.sh up -d
+#   ./scripts/docker-compose-test-dev.sh ps
+
+# Compose will read files in the order they are listed. We set COMPOSE_FILE
+# so `docker compose` uses `docker-compose.yml` together with
+# `docker-compose.override.test-dev.yml` instead of the default override file.
+export COMPOSE_FILE="docker-compose.yml:docker-compose.override.test-dev.yml"
+
+# Forward all args to docker compose. If no args provided, show help.
+if [ "$#" -eq 0 ]; then
+  docker compose help
+else
+  docker compose "$@"
+fi
diff --git a/scripts/test-local.sh b/scripts/test-local.sh
index 7f2fa9fbce..15bf13941a 100644
--- a/scripts/test-local.sh
+++ b/scripts/test-local.sh
@@ -3,13 +3,13 @@
 # Exit in case of error
 set -e
 
-docker-compose down -v --remove-orphans # Remove possibly previous broken stacks left hanging after an error
+docker-compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml down -v --remove-orphans # Remove possibly previous broken stacks left hanging after an error
 
 if [ $(uname -s) = "Linux" ]; then
     echo "Remove __pycache__ files"
     sudo find . -type d -name __pycache__ -exec rm -r {} \+
 fi
 
-docker-compose build
-docker-compose up -d
-docker-compose exec -T backend bash scripts/tests-start.sh "$@"
+docker-compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml build
+docker-compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml up -d
+docker-compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml exec -T backend bash scripts/tests-start.sh "$@"
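As a rough local smoke test for the services introduced above (assuming the compose files are used as shipped here; note the `redis` service is only defined in `docker-compose.test-dev.yml`), something along these lines should work:

```bash
# Build and start the default stack with the pgvector-enabled Postgres image
docker compose up --build -d

# Confirm the init script created the extension; POSTGRES_USER / POSTGRES_DB
# are already set inside the db container, so expand them there
docker compose exec db sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "\dx vector"'

# Check Redis when running the test-dev stack that defines the redis service
docker compose -f docker-compose.test-dev.yml -f docker-compose.override.test-dev.yml exec redis redis-cli ping
```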