IPFS Accelerate Python Documentation

Comprehensive documentation for the IPFS Accelerate Python framework, a complete solution for hardware-accelerated machine learning inference with IPFS network-based model distribution.

🎯 Quick Navigation

Essential Reading

Project Information

  • 📋 Changelog - Version history and release notes
  • 🔐 Security Policy - Security reporting and best practices
  • 🤝 Contributing Guide - How to contribute to the project
  • 📄 License - AGPLv3+ license details
  • 🗂️ Project Docs - production project status, summaries, dashboard, router, and migration docs
  • 🧠 MCP++ Docs - conformance, cutover, and unification records

Platform-Specific

Advanced Topics

What is IPFS Accelerate Python?

IPFS Accelerate Python is a comprehensive, enterprise-grade framework that combines:

  • Hardware Acceleration: Support for CPU, CUDA, ROCm, OpenVINO, Apple MPS, WebNN, and WebGPU
  • 🌐 IPFS Integration: Distributed model storage, caching, and peer-to-peer inference
  • 🌍 Browser Support: Client-side acceleration using WebNN and WebGPU
  • 🤖 300+ Models: Compatible with HuggingFace Transformers and custom models
  • 🔒 Enterprise Security: Zero-trust architecture with compliance validation
  • High Performance: Optimized inference pipelines with intelligent caching
  • 🚀 Cross-Platform: Works on Linux, macOS, and Windows

Why Choose IPFS Accelerate?

  • Multi-Hardware Support: Run on any device, from servers to browsers
  • Distributed Architecture: Scale horizontally with P2P networking
  • Zero Configuration: Sensible defaults; works out of the box
  • Production Ready: Battle-tested, with comprehensive monitoring
  • Open Source: AGPLv3+ license, community-driven

Documentation Structure

Getting Started

  1. Installation & Setup - Complete installation guide with hardware setup
  2. Usage Guide - Basic to advanced usage patterns with examples
  3. Examples - Practical examples and demos

Technical Reference

  1. API Reference - Complete API documentation with all methods and parameters
  2. Architecture Overview - System design, components, and data flow
  3. Testing Guide - Testing framework, benchmarks, and quality assurance

Specialization Guides

  1. Hardware Optimization - Platform-specific optimization strategies
  2. IPFS Integration - Distributed inference and content addressing
  3. P2P Architecture - P2P workflow scheduling and distributed computing
  4. WebNN/WebGPU Integration - Browser-based acceleration

Organized Guides

  1. GitHub Guides - GitHub Actions, autoscaling, authentication, P2P cache
  2. Docker Guides - Container deployment, caching, security
  3. P2P Guides - Distributed computing, libp2p, workflow scheduling
  4. Deployment Guides - Production deployment, cross-platform
  5. Project Documentation - permanent location for project status, summary, and migration records
  6. MCP++ Records - cutover evidence, conformance, and migration backlog

Key Features Covered

🔧 Hardware Acceleration

  • CPU Optimization: x86/x64 and ARM, with SIMD acceleration
  • NVIDIA CUDA: GPU acceleration with TensorRT optimization
  • AMD ROCm: AMD GPU support with HIP/ROCm
  • Intel OpenVINO: CPU and Intel GPU optimization
  • Apple Silicon: Metal Performance Shaders (MPS) for M1/M2/M3
  • WebNN/WebGPU: Browser-based hardware acceleration
  • Qualcomm: Mobile and edge device acceleration

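The backend list above implies a fallback order when a preferred accelerator is unavailable. A minimal sketch of that selection logic (the backend names and the probe mechanism here are illustrative assumptions, not the framework's actual API; a real implementation would detect availability at runtime):

```python
# Illustrative backend-selection sketch: pick the first available
# accelerator from a preference-ordered list. Availability is passed
# in explicitly here; real detection would probe drivers at runtime.

PREFERENCE = ["cuda", "rocm", "openvino", "mps", "webgpu", "cpu"]

def select_backend(available: set, preference=PREFERENCE) -> str:
    """Return the most-preferred backend that is actually available.

    Falls back to "cpu", which is assumed to always be present.
    """
    for backend in preference:
        if backend in available:
            return backend
    return "cpu"

print(select_backend({"cpu", "mps"}))           # Apple Silicon machine
print(select_backend({"cpu", "cuda", "rocm"}))  # NVIDIA + AMD box
```

The preference order itself would normally come from configuration, which is where the "zero configuration, sensible defaults" behavior lives.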
🌐 IPFS Network Features

  • Content Addressing: Cryptographically secure model storage
  • Distributed Inference: Peer-to-peer model sharing and processing
  • Intelligent Caching: Multi-level caching with LRU eviction
  • Provider Discovery: Automatic network peer discovery and selection
  • Fault Tolerance: Robust error handling and fallback mechanisms
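Content addressing and LRU caching compose naturally: the digest of the model bytes serves both as the cache key and as an integrity check. A toy sketch of the idea, assuming nothing about the framework's actual cache implementation:

```python
import hashlib
from collections import OrderedDict

def content_address(data: bytes) -> str:
    """Derive an address from the bytes themselves: identical content
    always yields the same key, so the key doubles as a checksum."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

class LRUCache:
    """Minimal LRU cache: the least-recently-used entry is evicted first."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the oldest entry

cache = LRUCache(capacity=2)
addr = content_address(b"model-weights")
cache.put(addr, b"model-weights")
print(cache.get(addr) == b"model-weights")  # prints True
```

A production cache would be multi-level (memory, disk, network peers) as described above, but each level can use the same keying scheme.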

🤖 Model Support

  • Text Models: BERT, GPT, T5, RoBERTa, DistilBERT, ALBERT, etc.
  • Vision Models: ViT, ResNet, EfficientNet, CLIP, DETR, etc.
  • Audio Models: Whisper, Wav2Vec2, WavLM, etc.
  • Multimodal: CLIP, BLIP, LLaVA, etc.
  • Custom Models: Support for custom model architectures

🌍 Browser Integration

  • Cross-Browser: Chrome, Firefox, Edge, Safari support
  • WebNN API: Native neural network acceleration
  • WebGPU: High-performance GPU compute in browsers
  • Precision Control: fp16, fp32, mixed precision support
  • Real-time Performance: Optimized for interactive applications
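Precision control trades accuracy for memory and bandwidth, which matters most in browsers. Python's standard `struct` module can pack IEEE 754 half-precision values, so the size/accuracy trade-off of fp16 can be demonstrated without any framework code:

```python
import struct

def to_fp16_bytes(x: float) -> bytes:
    """Pack a float into IEEE 754 half precision (2 bytes)."""
    return struct.pack("<e", x)

def from_fp16_bytes(b: bytes) -> float:
    """Unpack a half-precision value back to a Python float."""
    return struct.unpack("<e", b)[0]

# fp16 keeps only ~3 decimal digits of precision, in half the space of fp32
original = 0.1
roundtrip = from_fp16_bytes(to_fp16_bytes(original))
print(len(to_fp16_bytes(original)))        # 2 bytes, vs 4 for fp32
print(abs(roundtrip - original) < 1e-3)    # small rounding error remains
```

Mixed precision keeps numerically sensitive operations in fp32 while storing bulk weights in fp16, which is why both modes are listed above.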

Getting Help

Documentation Navigation

  • 📖 Getting Started Guide - Complete beginner's tutorial
  • FAQ - Frequently asked questions and quick answers
  • 📚 Full Documentation Index - Comprehensive guide listing
  • Use the Table of Contents in each document for quick navigation
  • Look for 🔗 Cross-references between related sections
  • Check 💡 Tips and Examples throughout the documentation
  • Reference ⚠️ Troubleshooting sections when needed

Community Support

Contributing

Documentation Organization

Current Documentation

All active, maintained documentation is organized in this directory:

  • Core Docs: Installation, Usage, API, Architecture, Testing
  • Guides: Topic-specific guides (GitHub, Docker, P2P, Deployment)
  • Architecture: System architecture and design docs
  • Project: project execution history, dashboard workstreams, router summaries, SDK-utilization records, and migration guides
  • MCP++: MCP++ cutover, conformance, and server-unification records

Historical Documentation

  • Archive: Historical session summaries and implementation reports
  • Development History: Major milestones and phase completions
  • Exports: HTML, PDF, and other non-markdown exports

Documentation Audit

A comprehensive audit was completed in January 2026:

  • Audit Report: Complete findings and recommendations
  • 200+ files reviewed, duplicates removed, links fixed
  • Archive organized and documented

Documentation Updates

This documentation was comprehensively updated to reflect the current state of the IPFS Accelerate Python framework, including recent additions such as:

  • P2P Workflow Scheduler: Distributed task execution with Merkle clocks and Fibonacci heaps
  • MCP Server: Model Context Protocol server with 14+ tools
  • CLI Endpoint Adapters: Direct integration with Claude, OpenAI, Gemini, VSCode CLIs
  • Enhanced Inference: Multi-backend routing (local, distributed, API, CLI modes)
  • GitHub Integration: P2P cache, autoscaler, workflow discovery
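A Merkle clock, as used by the workflow scheduler, orders distributed events by chaining content-addressed nodes: each new event hashes its payload together with the heads it has seen, so replicas can merge histories deterministically. A toy sketch of that idea (not the scheduler's actual data structures):

```python
import hashlib

def event_id(payload: bytes, parents: list) -> str:
    """Create a content-addressed event that links to its parent heads.
    Parents are sorted so the same causal history hashes identically
    regardless of the order in which replicas list the heads."""
    h = hashlib.sha256()
    for parent in sorted(parents):
        h.update(parent.encode())
    h.update(payload)
    return h.hexdigest()

# Two replicas record concurrent events on the same parent head...
root = event_id(b"genesis", [])
a = event_id(b"task-A", [root])
b = event_id(b"task-B", [root])
# ...then converge: an event citing both heads joins them deterministically.
merged = event_id(b"merge", [a, b])
print(merged == event_id(b"merge", [b, a]))  # prints True: order-independent
```

The Fibonacci heap mentioned above then serves as the priority queue over schedulable tasks; it is omitted here for brevity.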

All examples, APIs, and features have been verified and updated for accuracy.

Last Updated: January 2026
Last Audit: January 31, 2026
Framework Version: 0.0.45+
Documentation Coverage: Comprehensive (Core + Recent Features)


Start with the Installation Guide to begin using IPFS Accelerate Python! 🚀