Embedded Systems Engineer | Self-Hosted AI Infrastructure
Embedded systems development and maintenance. Building local AI workflows for code analysis, documentation, and automation. Focus on privacy, cost efficiency, and self-hosted solutions.
- Mac Studio M2 Ultra (64GB unified memory) - Primary LLM inference node
- Serving: LM Studio, MLX-LM
- Models: Qwen3-Next 80B, Nemotron 3 Nano 30B, Qwen3 Coder 30B
- Razer Blade 15 (i7-12800H, 64GB DDR5, RTX 3070 Ti) - GPU workloads, development
- Firebat MN56 (Ryzen 7 8745HS, 32GB DDR5) - Helper node
- Beelink SER5-MAX (Ryzen 7 6800U, 32GB DDR5) - Proxmox homelab core
- TP-Link Omada Smart SG2008 managed switch
- AdGuard for local DNS
- WireGuard VPN (VPS gateway) - secure remote access to homelab
- Docker/LXC containerization
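LM Studio serves models over an OpenAI-compatible HTTP API, so any machine on the LAN (or coming in over the WireGuard tunnel) can query the inference node. A minimal stdlib-only sketch; the base URL uses LM Studio's default port, and the model id is a placeholder for whatever the server reports under `/v1/models`:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    # OpenAI-style chat completion request body.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(prompt: str,
         base_url: str = "http://localhost:1234/v1",  # LM Studio default; adjust for your LAN
         model: str = "qwen3-coder-30b") -> str:      # placeholder model id
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Pointing `base_url` at the Mac Studio's LAN address makes the same call work from any of the helper nodes.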
Workflows
- Code analysis with large context windows (128k+ tokens)
- RAG pipelines for technical documentation
- Learning agent orchestration tools such as AutoGen and LangChain
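The retrieval step of the RAG pipeline above can be sketched without external dependencies. This toy version ranks documentation chunks by bag-of-words cosine similarity and prepends the best matches to the prompt; a real pipeline would swap `embed` for a local embedding model:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; stands in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks most similar to the query.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = [
    "WireGuard provides encrypted tunnels between peers.",
    "Proxmox manages LXC containers and virtual machines.",
    "MLX-LM serves quantized models on Apple silicon.",
]
context = retrieve("how are models served on the Mac Studio", docs, k=1)
prompt = "Answer using this context:\n" + "\n".join(context)
```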
- Languages: C++, Python, Bash
- AI/ML: LM Studio, Ollama, AnythingLLM, Open WebUI
- Infrastructure: Proxmox, Docker, Linux (Debian, Ubuntu), macOS
- Embedded: Firmware development, hardware integration
- Workflow: Neovim, tmux, Git, Zsh
- Local AI Infrastructure: Self-hosted LLM serving with privacy-first approach
- Homelab Automation: Orchestration and monitoring for distributed inference
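For distributed inference, a small scheduler can skip nodes that fail a health probe before spreading requests across the rest. A minimal sketch; the hostnames are hypothetical, and in practice `probe` would hit each node's health or `/v1/models` endpoint:

```python
import itertools
from collections.abc import Callable, Iterator

# Hypothetical LAN hostnames for the inference nodes.
NODES = ["http://mac-studio.lan:1234", "http://firebat.lan:8000"]

def healthy_nodes(nodes: list[str], probe: Callable[[str], bool]) -> list[str]:
    # probe() returns True when the node answers its health endpoint.
    return [n for n in nodes if probe(n)]

def round_robin(nodes: list[str]) -> Iterator[str]:
    # Cycle requests across the nodes that passed the health check.
    return itertools.cycle(nodes)

# Example with a stub probe that marks one node as down:
up = healthy_nodes(NODES, probe=lambda n: "firebat" not in n)
rr = round_robin(up)
```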
Location: Bydgoszcz, Poland
Languages: Polish (native), English (C1)



