Thank you for your interest in contributing to the Three-Layer AI Framework! This document provides guidelines for contributing to the project.
Be respectful, inclusive, and professional in all interactions.
**Reporting bugs:**

- Check if the bug has already been reported in Issues
- If not, create a new issue with:
  - Clear title and description
  - Steps to reproduce
  - Expected vs actual behavior
  - Environment details (OS, Python version, etc.)
  - Code samples or error messages
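A bug report following the checklist above might look like this (all details shown are purely illustrative):

```markdown
**Title:** RAGChatbot raises KeyError on empty knowledge base

**Steps to reproduce:**
1. Create an empty directory `./kb`
2. Run `RAGChatbot(knowledge_base="./kb").chat("Hello")`

**Expected:** A fallback response or a clear error message
**Actual:** `KeyError: 'documents'`

**Environment:** Ubuntu 22.04, Python 3.11
```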
**Suggesting features:**

- Check existing issues and discussions
- Create a new issue with:
  - Clear use case description
  - Proposed solution
  - Alternative solutions considered
  - Impact on existing functionality
1. **Fork the repository**

   ```bash
   git clone https://github.com/maree217/three-layer-ai-framework
   cd three-layer-ai-framework
   ```

2. **Create a feature branch**

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. **Make your changes**
   - Follow the coding standards below
   - Add tests for new functionality
   - Update documentation as needed

4. **Test your changes**

   ```bash
   # Run tests
   pytest tests/

   # Check code format
   black src/ examples/
   flake8 src/ examples/

   # Type checking
   mypy src/
   ```

5. **Commit your changes**

   ```bash
   git add .
   git commit -m "feat: add new feature"
   ```

   Follow Conventional Commits:
   - `feat:` new feature
   - `fix:` bug fix
   - `docs:` documentation changes
   - `test:` test additions/changes
   - `refactor:` code refactoring
   - `perf:` performance improvements

6. **Push to your fork**

   ```bash
   git push origin feature/your-feature-name
   ```

7. **Create Pull Request**
   - Describe your changes
   - Reference related issues
   - Include screenshots for UI changes
   - Ensure CI passes
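For instance, commit messages following the Conventional Commits format above might read (examples invented for illustration):

```
feat: add streaming responses to the Layer 1 chatbot
fix: handle missing target column in train_model
docs: expand Azure setup instructions
refactor: extract vector store client into its own module
```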
- Python 3.9+
- Git
- Azure account (for testing Azure integrations)
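Before setting up, you can confirm your interpreter meets the minimum version (a quick sanity check; substitute `python` for `python3` depending on your platform):

```shell
# Exits with an error if the interpreter is older than 3.9
python3 -c "import sys; assert sys.version_info >= (3, 9), sys.version"
```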
```bash
# Clone repository
git clone https://github.com/maree217/three-layer-ai-framework
cd three-layer-ai-framework

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -r requirements-dev.txt

# Install pre-commit hooks
pre-commit install
```

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src tests/

# Run specific test file
pytest tests/test_rag_chatbot.py

# Run with verbose output
pytest -v
```

```bash
# Format code
black src/ examples/ tests/

# Lint code
flake8 src/ examples/ tests/

# Type checking
mypy src/

# Sort imports
isort src/ examples/ tests/
```

Follow PEP 8 and use type hints:
```python
# Good
def process_query(query: str, max_tokens: int = 1000) -> str:
    """Process a user query and return response.

    Args:
        query: User query string
        max_tokens: Maximum tokens in response

    Returns:
        Generated response string
    """
    return response

# Bad
def process_query(query, max_tokens=1000):
    return response
```

Use Google-style docstrings:
```python
def train_model(data: pd.DataFrame, target: str) -> Model:
    """Train a machine learning model.

    Args:
        data: Training data as pandas DataFrame
        target: Name of target column

    Returns:
        Trained model instance

    Raises:
        ValueError: If target column not found in data

    Example:
        >>> model = train_model(df, target='revenue')
        >>> predictions = model.predict(test_data)
    """
    pass
```

Write tests for all new functionality:
```python
import pytest

from src.layer1.rag_chatbot import RAGChatbot


def test_chatbot_initialization():
    """Test chatbot initializes correctly."""
    bot = RAGChatbot(knowledge_base="./test_data")
    assert bot is not None


def test_chatbot_response():
    """Test chatbot generates response."""
    bot = RAGChatbot(knowledge_base="./test_data")
    response = bot.chat("Hello")
    assert isinstance(response, str)
    assert len(response) > 0


@pytest.mark.parametrize("query,expected_length", [
    ("short", 10),
    ("medium length query", 20),
    ("this is a much longer query that should generate a detailed response", 50),
])
def test_response_lengths(query, expected_length):
    """Test response lengths vary with query complexity."""
    bot = RAGChatbot(knowledge_base="./test_data")
    response = bot.chat(query)
    assert len(response) >= expected_length
```

```
three-layer-ai-framework/
├── src/                 # Source code
│   ├── layer1/          # UX Automation
│   ├── layer2/          # Data Intelligence
│   └── layer3/          # Strategic Systems
├── examples/            # Example implementations
├── tests/               # Test suite
├── docs/                # Documentation
├── templates/           # Deployment templates
└── requirements.txt     # Dependencies
```
**Layer 1 (UX Automation):**
- Add implementation to `src/layer1/`
- Add tests to `tests/layer1/`
- Add example to `examples/`
- Update `docs/layer1-ux-automation.md`

**Layer 2 (Data Intelligence):**
- Add implementation to `src/layer2/`
- Add connector to `src/layer2/connectors/` if needed
- Add tests to `tests/layer2/`
- Update `docs/layer2-data-intelligence.md`

**Layer 3 (Strategic Systems):**
- Add implementation to `src/layer3/`
- Add ML model to `src/layer3/models/` if needed
- Add tests to `tests/layer3/`
- Update `docs/layer3-strategic-systems.md`
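As a rough sketch, a new Layer 1 component could follow this shape (all names here, such as `KeywordRouter` and `RouteResult`, are hypothetical and not part of the framework's actual API):

```python
from dataclasses import dataclass


@dataclass
class RouteResult:
    """Outcome of routing a user message (illustrative only)."""
    intent: str
    confidence: float


class KeywordRouter:
    """Toy Layer 1 component: routes messages by keyword matching."""

    KEYWORDS = {"refund": "billing", "error": "support", "hello": "greeting"}

    def route(self, message: str) -> RouteResult:
        """Return the first matching intent, or 'unknown' if none match."""
        lowered = message.lower()
        for keyword, intent in self.KEYWORDS.items():
            if keyword in lowered:
                return RouteResult(intent=intent, confidence=0.9)
        return RouteResult(intent="unknown", confidence=0.0)
```

A matching test in `tests/layer1/` would then assert on `route()` results, mirroring the pytest examples above.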
- Keep `README.md` concise. Detailed docs go in `docs/`.
- Update `docs/api.md` for new public APIs.
- Add working examples to the `examples/` directory with:
  - A README.md explaining the example
  - Sample data (or instructions to generate it)
  - Expected output
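An example's README.md might be structured like this (contents and script names, such as `generate_faq_data.py`, are invented for illustration):

```markdown
# Customer FAQ Chatbot Example

Demonstrates a Layer 1 RAG chatbot over a small FAQ dataset.

## Setup
Generate the sample data: `python generate_faq_data.py`

## Run
`python run_example.py`

## Expected output
The bot answers FAQ-style questions and cites the source document.
```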
Maintainers will:
- Update version in `setup.py`
- Update `CHANGELOG.md`
- Create a git tag
- Build and publish to PyPI
- Create a GitHub release
- 📖 Read the documentation
- 💬 Open a GitHub Discussion
- 🐛 Report bugs via Issues
- 📧 Email: 2maree@gmail.com
By contributing, you agree that your contributions will be licensed under the MIT License.
Thank you for contributing! 🎉