Conversation

@ANAMASGARD
Contributor

Fixes #873

  • The linearity and variance plots were hard to read when models had only
    categorical predictors - the confidence ribbons were huge and covered up
    the actual data pattern.

  • Now check_model() detects when all predictors are categorical and hides
    the dots automatically. Makes the variance across groups way easier to see.

  • Works with factor(x), as.factor(x), or variables that are already factors.

  • If your model has both categorical and continuous predictors, it still
    shows dots like before.

  • You can override this with show_dots = TRUE if you want the old behavior.

  • Added tests and updated docs. Everything passes locally on my system.

- Add show_dots auto-detection for models with only categorical predictors
- Implement .has_only_categorical_predictors() helper function
- Add comprehensive tests for new functionality
- Update documentation and NEWS.md
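The detection idea behind `.has_only_categorical_predictors()` could be sketched roughly as follows. This is a simplified illustration of the approach described above, not the exact helper merged in the PR; it leans on `insight::find_predictors()`, `insight::get_data()`, and `insight::find_terms()`, which the performance package already builds on:

```r
# Simplified sketch of the detection idea -- not the exact helper from the PR.
.has_only_categorical_predictors_sketch <- function(model) {
  predictors <- insight::find_predictors(model, flatten = TRUE)
  model_data <- insight::get_data(model)
  model_terms <- insight::find_terms(model, flatten = TRUE)
  # A predictor counts as categorical if it is a factor/character in the data,
  # or if it was wrapped in factor()/as.factor() inside the model formula.
  all(vapply(predictors, function(p) {
    in_formula <- grepl(
      paste0("(as\\.)?factor\\(\\s*", p, "\\s*\\)"),
      model_terms
    )
    is.factor(model_data[[p]]) || is.character(model_data[[p]]) || any(in_formula)
  }, logical(1)))
}
```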

Fixes easystats#873
@gemini-code-assist
Contributor

Summary of Changes

Hello @ANAMASGARD, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a visualization challenge within the check_model() function, specifically for models that rely solely on categorical predictors. It introduces an intelligent mechanism to automatically suppress the display of individual data points in diagnostic plots under these conditions. This change dramatically enhances the clarity and interpretability of linearity and variance plots by eliminating visual clutter, making model diagnostics more intuitive for users working with categorical variables. The update maintains backward compatibility for mixed-type models and provides an explicit override for user control.

Highlights

  • Improved check_model() plots for categorical predictors: The check_model() function now automatically hides data points (dots) in diagnostic plots when a model contains only categorical predictors. This significantly improves the readability of linearity and variance plots by reducing visual clutter.
  • Enhanced clarity for variance patterns: By hiding dots in categorical-only models, the visualization of variance patterns across groups becomes much clearer and easier to interpret.
  • Flexible categorical predictor detection: The automatic detection works for variables explicitly wrapped in factor() or as.factor(), as well as pre-existing factor variables in the dataset.
  • Preservation of existing behavior for mixed models: If a model includes both categorical and continuous predictors, check_model() will continue to display data points as it did previously.
  • User override option: Users can explicitly override the automatic dot-hiding behavior by setting the show_dots = TRUE argument in check_model().
  • Comprehensive updates: The pull request includes new tests to validate the functionality and updated documentation to reflect these changes.
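Given the `show_dots` argument described above, usage would look something like this (a sketch assuming the behavior proposed in this PR; plotting the result additionally requires the see package):

```r
library(performance)

# Categorical-only model: per this PR, dots would be auto-hidden
m <- lm(mpg ~ as.factor(cyl) + as.factor(am), data = mtcars)
res_auto <- check_model(m, verbose = FALSE)

# Explicitly restore the old behavior with show_dots = TRUE
res_dots <- check_model(m, show_dots = TRUE, verbose = FALSE)

# plot(res_auto)  # rendering the diagnostic panels needs the see package
```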

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a helpful feature to improve plot readability for models with only categorical predictors by automatically hiding data points. The implementation is mostly solid, with good documentation and initial tests.

My review focuses on improving the robustness of the new predictor detection logic and strengthening the tests:

  • I've identified and suggested a fix for a couple of bugs in the .has_only_categorical_predictors() helper function that could lead to misclassifying predictors, particularly for factors not explicitly wrapped in factor() and for binary factors created in the formula.
  • I've also suggested making one of the new tests more precise and adding another test case to cover a scenario that was initially buggy.

Overall, these are great changes that will improve the user experience. The suggested fixes will make the new feature more reliable across different model specifications.

expect_s3_class(result, "check_model")
# Should keep dots by default for mixed models
expect_true(is.null(attr(result, "show_dots")) || attr(result, "show_dots"))
})
Contributor


Severity: medium

It would be beneficial to add a test case for a model with a single binary predictor that is converted to a factor within the formula. This was a scenario where the original implementation had a bug, and adding a test for it would prevent regressions.

Here is a suggested test:

test_that("`check_model()` auto-disables dots for binary factor in formula", {
  data(mtcars)
  m <- lm(mpg ~ as.factor(am), data = mtcars)
  result <- check_model(m, verbose = FALSE)

  # Should auto-disable dots for categorical-only models
  expect_s3_class(result, "check_model")
  expect_false(attr(result, "show_dots"))
})

@bwiernik
Contributor

Can you show example before and after images? I'm not following why hiding the dots is the correct fix here.

ANAMASGARD and others added 2 commits November 16, 2025 20:58
- Fix binary factor detection (e.g., as.factor(am))
- Improve regex to distinguish continuous vs categorical predictors
- Add test for binary factors
- Make mixed model test more precise
@ANAMASGARD
Contributor Author

Sir @bwiernik
You're right to question this - let me be honest about the approach.
Looking at the plots in #873, the main visual problem is the huge confidence bands that dominate the plot when you have categorical predictors. The LOESS smooth creates massive CI ribbons between the discrete category positions.
I initially implemented hiding dots because:

  • Points stack at discrete positions anyway (not much info there)
  • It makes the smooth line more visible
  • Base R's diagnostic plots do something similar

But @strengejacke makes a good point - hiding the CI bands might be the better fix. The dots aren't really the problem; the CI is what's making the plots unreadable (as originally reported in #642).
I'm happy to switch the implementation to hide CI instead of dots if that's what is preferred. It might actually be more in line with what users originally requested.
What do you think?
Please feel free to correct me if I'm wrong. Thank you!

@bwiernik
Contributor

Could you paste examples of the before and after images here?

@ANAMASGARD
Contributor Author

⚠️ Important Discovery - PR is Incomplete

Hi @bwiernik, you're absolutely right that the images look identical. I discovered the issue:

The Problem

  1. ✅ Our PR correctly sets attr(result, "show_dots") = FALSE for categorical models
  2. ❌ But the see package is responsible for actually plotting, and it's not respecting this attribute

Verification

devtools::load_all() # Load our changes
star <- read.csv("https://drmankin.github.io/disc_stats/star.csv")
star$star2 <- as.factor(star$star2)
model <- lm(math2 ~ star2, data = star, na.action = na.exclude)

result <- check_model(model)
attr(result, "show_dots") # Returns FALSE ✅ (our code works)

plot(result) # But still shows dots ❌ (the see package doesn't respect it)

Complete Solution Requires Two PRs

  1. This PR (performance package): Sets the show_dots attribute ✅
  2. Companion PR (see package): Modifies plotting to respect the attribute ❌ (not done yet)
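On the see side, the companion fix would amount to reading the attribute before adding the points layer. A hypothetical sketch with ggplot2 (the real plotting code in see is structured differently, and the function and column names below are illustrative only):

```r
# Hypothetical sketch of the see-side change; not actual see internals.
library(ggplot2)

plot_homogeneity_sketch <- function(result_data, show_dots = TRUE) {
  p <- ggplot(result_data, aes(x = fitted, y = sqrt_abs_resid)) +
    geom_smooth(method = "loess", formula = y ~ x)
  if (isTRUE(show_dots)) {
    # Only add the points layer when dots are requested
    p <- p + geom_point(alpha = 0.5)
  }
  p
}

# The attribute set by performance::check_model() would be read like:
# show_dots <- isTRUE(attr(result, "show_dots", exact = TRUE))
```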

I also ran the before-and-after comparison, but the problem is:

BEFORE (with dots - old behavior)

[image: plot_BEFORE_with_dots]

AFTER (without dots - new auto-detected behavior)

[image: plot_AFTER_without_dots]

But the problem is that they are exactly the same; there is no difference, as you can see:

  • The same plots with data points visible
  • The same confidence ribbons (gray shaded areas)
  • No visible difference in the "Linearity" and "Homogeneity of Variance" plots

Our "AFTER" image should show NO DOTS (just the smooth lines and CI ribbons), but it still has all the dots visible, just like the "BEFORE" image.

What approach would you prefer? @strengejacke @bwiernik

I apologize for not catching this earlier - I should have verified the actual visual output, not just the attribute setting.


Successfully merging this pull request may close these issues.

check_model() linearity & variance for categorical predictors
