Added comprehensive testing for visualization components #2737

Open · Ya-shh wants to merge 24 commits into main from add-visualization-tests

Commits (24):
- `d7fcfaf` Add comprehensive testing for visualization components (Ya-shh, Mar 27, 2025)
- `29524c0` Fix visualization test files and add comprehensive documentation (Ya-shh, Mar 27, 2025)
- `174c5b4` Fix formatting and trailing whitespace issues (Ya-shh, Mar 27, 2025)
- `00c6990` Fix CI tests by making browser tests optional and updating Python ver… (Ya-shh, Mar 27, 2025)
- `39ff395` Fixed issues (Ya-shh, Mar 27, 2025)
- `ed88a4a` updated (Ya-shh, Mar 27, 2025)
- `92000f8` Fixed (Ya-shh, Mar 27, 2025)
- `d544996` Fix Agent initialization in visualization tests and improve Python co… (Ya-shh, Mar 27, 2025)
- `ff0e53f` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Mar 27, 2025)
- `8e1425f` Fix Altair component to ensure post_process is called and add explici… (Ya-shh, Mar 27, 2025)
- `9047f0c` Add module-level test_performance_benchmarks function for CI compatib… (Ya-shh, Mar 27, 2025)
- `115b193` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Mar 27, 2025)
- `5e041c2` Fix datetime.UTC usage to be compatible with older Python versions (Ya-shh, Mar 27, 2025)
- `7d5b570` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Mar 27, 2025)
- `8451b72` Merge branch 'main' into add-visualization-tests (Ya-shh, Apr 1, 2025)
- `a98f6e4` Updated test (Ya-shh, Apr 1, 2025)
- `e8a4727` Fixed issues (Ya-shh, Apr 1, 2025)
- `bcaa4c6` [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Apr 1, 2025)
- `18d718b` Updated (Ya-shh, Apr 2, 2025)
- `a61443d` Add performance benchmark test (Ya-shh, Apr 2, 2025)
- `60ee350` Use more robust selectors for visualization components in browser tests (Ya-shh, Apr 2, 2025)
- `9200215` Improve tooltip type inference in altair components (Ya-shh, Apr 2, 2025)
- `18d7a45` Revert post-process exception handling while keeping tooltip type inf… (Ya-shh, Apr 2, 2025)
- `d99ae5d` Delete PR_DESCRIPTION.md (Ya-shh, Apr 2, 2025)

Files changed (8):

2 changes: 1 addition & 1 deletion .github/workflows/build_lint.yml
```diff
@@ -54,7 +54,7 @@ jobs:
 - name: Install Mesa and dependencies
   run: uv pip install --system .[dev]
 - name: Test with pytest
-  run: pytest --durations=10 --cov=mesa tests/ --cov-report=xml
+  run: pytest --durations=10 --cov=mesa tests/ --cov-report=xml --ignore=tests/ui/
 - if: matrix.os == 'ubuntu'
   name: Codecov
   uses: codecov/codecov-action@v5
```
36 changes: 36 additions & 0 deletions .github/workflows/ui-viz-tests.yml
```yaml
name: Browser-based Visualization Tests

on:
  workflow_dispatch: # Manual trigger only

jobs:
  browser-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.11
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e .[viz-test]
          playwright install chromium

      - name: Run UI visualization tests
        run: |
          pytest tests/ui/test_browser_viz.py -v

      - name: Upload test screenshots (on failure)
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: test-screenshots
          path: |
            tests/ui/screenshots
            tests/ui/diff-*.png
          if-no-files-found: ignore
```
48 changes: 48 additions & 0 deletions .github/workflows/viz-tests.yml
```yaml
name: Visualization Tests

on:
  push:
    branches: [ main, master, dev ]
  pull_request:
    branches: [ main, master, dev ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install pytest pytest-mock pytest-cov
          python -m pip install -e .[dev,viz]

      - name: Run visualization tests
        run: |
          pytest -xvs tests/test_visualization_components.py

      - name: Run visualization benchmarks
        run: |
          pytest -xvs tests/test_visualization_components.py::test_performance_benchmarks

      - name: Generate test coverage report
        run: |
          pytest --cov=mesa.visualization tests/test_visualization_components.py --cov-report=xml

      - name: Upload coverage report to Codecov
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          file: ./coverage.xml
          flags: visualization
          name: viz-coverage
```
Empty file added SUMMARY.md
Empty file.
126 changes: 126 additions & 0 deletions docs/VISUALIZATION_TESTING.md
# Testing Mesa's Visualization Components

This document explains how Mesa's visualization components are tested and how you can run or extend these tests.

## Testing Approach

Mesa's visualization components are tested using two complementary approaches:

1. **Browser-less Unit Tests**: Fast tests that validate component functionality without requiring a browser
2. **Browser-based UI Tests**: Full integration tests that validate rendering and interaction in an actual browser

### Browser-less Unit Tests

Located in `tests/test_visualization_components.py`, these tests:

- Render components using Solara's test utilities without a browser
- Test both Matplotlib and Altair visualization backends
- Verify component properties, rendering logic, and interactive features
- Include performance benchmarks for visualization rendering

These tests run quickly and are ideal for CI pipelines.
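
The CI entry point for the benchmarks is the module-level `test_performance_benchmarks` function (see `viz-tests.yml`). As a minimal sketch of a benchmark in that spirit, assuming a model that exposes a "Data" reporter; `MyModel` and the time budget are illustrative, not the suite's actual code:

```python
import time

import solara
from mesa.visualization import make_plot_component


def test_plot_render_benchmark():
    model = MyModel()  # illustrative stand-in for a Mesa example model
    component = make_plot_component({"Data": "blue"})
    start = time.perf_counter()
    solara.render(component(model), handle_error=False)
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, f"render took {elapsed:.2f}s"  # illustrative budget
```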

### Browser-based UI Tests

Located in `tests/ui/test_browser_viz.py`, these tests:

- Use Playwright to render components in an actual browser environment
- Perform visual regression testing via screenshot comparisons
- Test complex interactions that require a real browser
- Validate end-user experience with Mesa visualizations

These tests require additional dependencies and are designed to run less frequently, on demand rather than on every commit.

## Running Tests Locally

### Setup

1. Install the testing dependencies:

```bash
# Install base testing dependencies
pip install -e ".[test]"

# For browser-based tests, also install:
pip install -e ".[viz-test]"
playwright install chromium
```

### Running Tests

```bash
# Run all tests (excluding browser tests)
pytest tests/test_visualization_components.py -v

# Run browser-based tests
pytest tests/ui/test_browser_viz.py -v

# Run tests with the helper script
./tests/run_viz_tests.sh # Browser-less tests only
./tests/run_viz_tests.sh --ui # Include browser tests
./tests/run_viz_tests.sh --benchmarks # Run performance benchmarks
```

## CI Integration

Testing is integrated into the CI pipeline with two workflows:

1. **Standard Tests** (`viz-tests.yml`): Runs browser-less tests on every push and pull request targeting the main branches
2. **UI Tests** (`ui-viz-tests.yml`): Runs browser-based tests on demand via a manual trigger (`workflow_dispatch`)

## Test Coverage

The test suite covers:

- **Component Creation**: Tests for all visualization component types
- **Model Integration**: Tests with all example models (Schelling, Conway, Boids, etc.)
- **Interactive Features**: Tests for step buttons, sliders, reset buttons
- **Visual Appearance**: Tests for color schemes, layouts, and responsive behavior
- **Performance**: Benchmarks for rendering speed across different models and settings

## Adding New Tests

### Adding a Browser-less Test

1. Add your test to `tests/test_visualization_components.py`
2. Focus on component logic, properties, and basic rendering
3. Use Solara's test utilities: `solara.render()`

Example:

```python
import solara

from mesa.visualization import make_plot_component


def test_new_component():
    model = MyModel()  # stand-in for any Mesa model exposing a "Data" reporter
    component = make_plot_component({"Data": "blue"})
    box, rc = solara.render(component(model), handle_error=False)
    assert rc.find("div").widget is not None
```

### Adding a Browser-based Test

1. Add your test to `tests/ui/test_browser_viz.py`
2. Use the Playwright utilities for browser interaction
3. Consider adding visual regression tests for new components

Example:

```python
@pytest.mark.skip_if_no_browser
def test_new_component_browser(browser_page):
    # Render the component in a real browser, then compare against a reference.
    screenshot = browser_page.screenshot()
    assert_screenshot_matches(screenshot, "reference_screenshot.png")
```
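
The `assert_screenshot_matches` helper above comes from the test utilities rather than a library API. A minimal sketch of what such a pixel-diff helper can look like, using Pillow; the directory layout and exact behavior are assumptions:

```python
import io
from pathlib import Path

from PIL import Image, ImageChops

REFERENCE_DIR = Path("tests/ui/screenshots")  # assumed reference location


def assert_screenshot_matches(screenshot: bytes, reference_name: str) -> None:
    """Fail if the screenshot differs pixel-wise from the stored reference."""
    reference = Image.open(REFERENCE_DIR / reference_name).convert("RGB")
    actual = Image.open(io.BytesIO(screenshot)).convert("RGB")
    # getbbox() returns None when the difference image is entirely black,
    # i.e. the two images are identical.
    diff = ImageChops.difference(reference, actual)
    assert diff.getbbox() is None, f"{reference_name} differs from rendered output"
```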

## Best Practices

1. **Test Both Backends**: Always test both Matplotlib and Altair components
2. **Use Seeds**: Set random seeds in tests for deterministic results (see the sketch after this list)
3. **Test Responsiveness**: Ensure components work at different sizes
4. **Performance**: Include performance tests for computationally intensive visualizations
5. **Visual Testing**: Use screenshot assertions only for critical visual elements
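
For point 2, a minimal sketch of a determinism check; `MyModel` stands in for any Mesa model (Mesa's `Model` accepts a `seed` keyword):

```python
def test_seeded_runs_match():
    a, b = MyModel(seed=42), MyModel(seed=42)  # MyModel is a placeholder
    for _ in range(10):
        a.step()
        b.step()
    # Identical seeds plus identical steps should leave both RNG streams
    # in the same state, so the next draw must agree.
    assert a.random.random() == b.random.random()
```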

## Troubleshooting

- **Failed Visual Tests**: Examine the screenshot differences and update references if needed
- **Slow Tests**: Use the `--benchmark-only` flag to identify performance bottlenecks
- **Browser Issues**: Try running with the `--headed` flag to observe browser behavior
4 changes: 2 additions & 2 deletions mesa/discrete_space/property_layer.py
```diff
@@ -106,7 +106,7 @@ def from_data(cls, name: str, data: np.ndarray):
         layer = cls(
             name,
             data.shape,
-            default_value=data[*[0 for _ in range(len(data.shape))]],
+            default_value=data[tuple(0 for _ in range(len(data.shape)))],
             dtype=data.dtype.type,
         )
         layer.set_cells(data)
@@ -321,7 +321,7 @@ def get_neighborhood_mask(
         # Convert the neighborhood list to a NumPy array and use advanced indexing
         coords = np.array([c.coordinate for c in neighborhood])
         indices = [coords[:, i] for i in range(coords.shape[1])]
-        mask[*indices] = True
+        mask[tuple(indices)] = True
         return mask

     def select_cells(
```
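
These two edits replace starred subscripts, a Python 3.11+ syntax (PEP 646), with the equivalent tuple indexing. A quick standalone illustration of the equivalence (toy arrays, not project code):

```python
import numpy as np

mask = np.zeros((3, 3), dtype=bool)
rows, cols = np.array([0, 1]), np.array([1, 2])
# mask[*[rows, cols]] = True    # SyntaxError on Python < 3.11
mask[tuple([rows, cols])] = True  # equivalent, valid on older versions too
assert mask[0, 1] and mask[1, 2]
```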
6 changes: 6 additions & 0 deletions pyproject.toml
```diff
@@ -64,6 +64,12 @@ dev = [
     "sphinx",
     "pytest-mock",
 ]
+# Visualization testing dependencies
+viz-test = [
+    "mesa[viz]",
+    "pytest-playwright",
+    "pytest-ipywidgets[solara]",
+]
 examples = [
     "mesa[rec]",
     "pytest",
```
99 changes: 99 additions & 0 deletions tests/VISUALIZATION_TESTING.md
# Mesa Visualization Testing Strategy

This document outlines the approach to testing Mesa's visualization components (SolaraViz) to ensure visualization functionality works correctly across all example models.

## Testing Approach

The visualization testing strategy consists of three main components:

1. **Unit Tests (Browser-less)**: Fast tests that validate individual visualization components without requiring a browser.
2. **Performance Benchmarks**: Measure rendering performance across different visualization backends and model sizes.
3. **Browser-Based UI Tests**: Comprehensive tests that validate visualization rendering and interaction in a real browser environment.

## Test Organization

- `test_solara_viz.py`: Tests for the SolaraViz component's core functionality
- `test_visualization_components.py`: Tests for individual visualization components (matplotlib, altair)
- `ui/test_browser_viz.py`: Browser-based tests for visualization rendering and interaction
- `run_viz_tests.sh`: Helper script for running visualization tests

## Running Tests

To run all visualization tests:

```bash
./tests/run_viz_tests.sh
```

To include browser-based UI tests:

```bash
./tests/run_viz_tests.sh --ui
```

To run performance benchmarks:

```bash
./tests/run_viz_tests.sh --benchmarks
```

To update UI snapshot references:

```bash
./tests/run_viz_tests.sh --ui --update-snapshots
```

## Continuous Integration

Visualization tests are integrated into the CI pipeline:

1. `viz-tests.yml`: Runs browser-less tests on every PR and push to main branches
2. `ui-viz-tests.yml`: Runs browser-based tests on demand via a manual trigger (`workflow_dispatch`), intended for PRs that change visualization code

## Dependencies

Visualization tests require additional dependencies beyond the core Mesa package:

```bash
pip install -e .[viz,dev] # For browser-less tests
pip install -e .[viz-test] # For browser-based tests
playwright install chromium # For browser-based tests
```

## Test Coverage

The visualization tests cover:

1. **Component Rendering**: Testing that visualization components render correctly
2. **Model Interaction**: Testing model stepping, play/pause, and reset functionality (see the sketch after this list)
3. **Parameter Controls**: Testing that user input controls work correctly
4. **Performance**: Benchmarking visualization performance
5. **Cross-Model Compatibility**: Testing visualization components across all example models
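
For the model-interaction case (point 2 above), a hedged sketch of a Playwright check using pytest-playwright's `page` fixture; the URL and the button and counter labels are assumptions about the served SolaraViz page, not the suite's actual selectors:

```python
def test_step_button_advances_model(page):
    # Assumes a SolaraViz app is already being served locally for the test.
    page.goto("http://localhost:8765")
    page.get_by_role("button", name="Step").click()
    # Expect some step-counter text to update after one click.
    assert page.get_by_text("Step: 1").is_visible()
```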

## Snapshot Testing

Browser-based tests use snapshot testing to compare rendered visualizations against reference images. This ensures visual consistency across code changes.

To update reference snapshots:

```bash
pytest tests/ui/ --solara-update-snapshots --solara-runner=solara
```

## Adding New Tests

When adding new visualization components:

1. Add browser-less tests to `test_visualization_components.py`
2. Add performance benchmarks for significant new components
3. Add browser-based tests for components with complex UI interaction
4. Ensure CI workflows run the new tests

## Troubleshooting

If visualization tests fail:

1. Check dependency versions (especially solara, matplotlib, altair)
2. For snapshot failures, compare the test results in `test-results/` with the reference images
3. Update reference snapshots if the visual changes are expected
4. For browser-based test failures, try running with the `--headed` flag to see the browser