diff --git a/DESCRIPTION b/DESCRIPTION
index 1333066..bd0ba58 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -1,15 +1,15 @@
 Type: Package
 Package: lime
 Title: Local Interpretable Model-Agnostic Explanations
-Version: 0.5.3.9000
+Version: 0.5.4.9000
 Authors@R: c(
-    person("Emil", "Hvitfeldt", , "emilhhvitfeldt@gmail.com", role = c("aut", "cre"),
+    person("Emil", "Hvitfeldt", , "emil.hvitfeldt@posit.co", role = c("aut", "cre"),
            comment = c(ORCID = "0000-0002-0679-1945")),
     person("Thomas Lin", "Pedersen", , "thomasp85@gmail.com", role = "aut",
            comment = c(ORCID = "0000-0002-5147-4711")),
     person("Michaël", "Benesty", , "michael@benesty.fr", role = "aut")
   )
-Maintainer: Emil Hvitfeldt <emilhhvitfeldt@gmail.com>
+Maintainer: Emil Hvitfeldt <emil.hvitfeldt@posit.co>
 Description: When building complex models, it is often difficult to
     explain why the model should be trusted. While global measures such as
     accuracy are useful, they cannot be used for explaining why a model
@@ -17,7 +17,7 @@ Description: When building complex models, it is often difficult to
     package) is a method for explaining the outcome of black box models by
     fitting a local model around the point in question an perturbations of
     this point. The approach is described in more detail in the article by
-    Ribeiro et al. (2016) <arXiv:1602.04938>.
+    Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>.
 License: MIT + file LICENSE
 URL: https://lime.data-imaginist.com, https://github.com/tidymodels/lime,
     https://lime.data-imaginist.com/
diff --git a/NEWS.md b/NEWS.md
index 7956077..eacace0 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -1,5 +1,7 @@
 # lime (development version)
 
+# lime 0.5.4
+
 * Make package work with all versions of xgboost. (#202)
 
 # lime 0.5.3
diff --git a/README.Rmd b/README.Rmd
index 55a9e68..a3f6a09 100644
--- a/README.Rmd
+++ b/README.Rmd
@@ -19,8 +19,8 @@ knitr::opts_chunk$set(
 [![Codecov test coverage](https://codecov.io/gh/tidymodels/lime/graph/badge.svg)](https://app.codecov.io/gh/tidymodels/lime)
 [![CRAN_Release_Badge](http://www.r-pkg.org/badges/version-ago/lime)](https://CRAN.R-project.org/package=lime)
 [![CRAN_Download_Badge](http://cranlogs.r-pkg.org/badges/lime)](https://CRAN.R-project.org/package=lime)
-[![R-CMD-check](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml)
-[![Codecov test coverage](https://codecov.io/gh/thomasp85/lime/graph/badge.svg)](https://app.codecov.io/gh/thomasp85/lime)
+[![R-CMD-check](https://github.com/tidymodels/lime/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/tidymodels/lime/actions/workflows/R-CMD-check.yaml)
+[![Codecov test coverage](https://codecov.io/gh/tidymodels/lime/graph/badge.svg)](https://app.codecov.io/gh/tidymodels/lime)
 <!-- badges: end -->
 
 > There once was a package called lime,
diff --git a/README.md b/README.md
index 09f57e6..d786b93 100644
--- a/README.md
+++ b/README.md
@@ -10,9 +10,9 @@
 coverage](https://codecov.io/gh/tidymodels/lime/graph/badge.svg)](https://app.codecov.io/gh/tidymodels/lime)
 [![CRAN_Release_Badge](http://www.r-pkg.org/badges/version-ago/lime)](https://CRAN.R-project.org/package=lime)
 [![CRAN_Download_Badge](http://cranlogs.r-pkg.org/badges/lime)](https://CRAN.R-project.org/package=lime)
-[![R-CMD-check](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml)
+[![R-CMD-check](https://github.com/tidymodels/lime/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/tidymodels/lime/actions/workflows/R-CMD-check.yaml)
 [![Codecov test
-coverage](https://codecov.io/gh/thomasp85/lime/graph/badge.svg)](https://app.codecov.io/gh/thomasp85/lime)
+coverage](https://codecov.io/gh/tidymodels/lime/graph/badge.svg)](https://app.codecov.io/gh/tidymodels/lime)
 <!-- badges: end -->
 
 > There once was a package called lime,
@@ -80,16 +80,16 @@ explanation
 #> # A tibble: 10 × 13
 #>    model_type   case  label label_prob model_r2 model_intercept model_prediction
 #>    <chr>        <chr> <chr>      <dbl>    <dbl>           <dbl>            <dbl>
-#>  1 classificat… 1     seto…          1    0.686           0.125            0.978
-#>  2 classificat… 1     seto…          1    0.686           0.125            0.978
-#>  3 classificat… 2     seto…          1    0.696           0.119            0.982
-#>  4 classificat… 2     seto…          1    0.696           0.119            0.982
-#>  5 classificat… 3     seto…          1    0.682           0.123            0.977
-#>  6 classificat… 3     seto…          1    0.682           0.123            0.977
-#>  7 classificat… 4     seto…          1    0.679           0.124            0.979
-#>  8 classificat… 4     seto…          1    0.679           0.124            0.979
-#>  9 classificat… 5     seto…          1    0.688           0.123            0.988
-#> 10 classificat… 5     seto…          1    0.688           0.123            0.988
+#>  1 classificat… 1     seto…          1    0.700           0.120            0.984
+#>  2 classificat… 1     seto…          1    0.700           0.120            0.984
+#>  3 classificat… 2     seto…          1    0.681           0.128            0.978
+#>  4 classificat… 2     seto…          1    0.681           0.128            0.978
+#>  5 classificat… 3     seto…          1    0.686           0.126            0.976
+#>  6 classificat… 3     seto…          1    0.686           0.126            0.976
+#>  7 classificat… 4     seto…          1    0.708           0.119            0.982
+#>  8 classificat… 4     seto…          1    0.708           0.119            0.982
+#>  9 classificat… 5     seto…          1    0.682           0.126            0.981
+#> 10 classificat… 5     seto…          1    0.682           0.126            0.981
 #> # ℹ 6 more variables: feature <chr>, feature_value <dbl>, feature_weight <dbl>,
 #> #   feature_desc <chr>, data <list>, prediction <list>
diff --git a/cran-comments.md b/cran-comments.md
index a988a00..d1be1d9 100644
--- a/cran-comments.md
+++ b/cran-comments.md
@@ -1,4 +1,4 @@
-Small patch release with main event being a maintainer change
+Small patch release to fix breaking tests on CRAN
 
 ## revdepcheck results
diff --git a/man/figures/README-unnamed-chunk-2-1.png b/man/figures/README-unnamed-chunk-2-1.png
index 281ddf6..b1539c5 100644
Binary files a/man/figures/README-unnamed-chunk-2-1.png and b/man/figures/README-unnamed-chunk-2-1.png differ
diff --git a/man/lime-package.Rd b/man/lime-package.Rd
index 75b2561..051085a 100644
--- a/man/lime-package.Rd
+++ b/man/lime-package.Rd
@@ -7,7 +7,7 @@
 \description{
 \if{html}{\figure{logo.png}{options: style='float: right' alt='logo' width='120'}}
 
-When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question an perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) \href{https://arxiv.org/abs/1602.04938}{arXiv:1602.04938}.
+When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question an perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) \doi{10.48550/arXiv.1602.04938}.
 }
 \details{
 This package is a port of the original Python lime package implementing the
@@ -40,7 +40,7 @@ Useful links:
 }
 \author{
-\strong{Maintainer}: Emil Hvitfeldt \email{emilhhvitfeldt@gmail.com} (\href{https://orcid.org/0000-0002-0679-1945}{ORCID})
+\strong{Maintainer}: Emil Hvitfeldt \email{emil.hvitfeldt@posit.co} (\href{https://orcid.org/0000-0002-0679-1945}{ORCID})
 
 Authors:
 \itemize{
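
For context, the `explanation` tibble whose printed values the README hunk refreshes is produced by a workflow along these lines (a sketch based on the package README's iris example; the caret-based model fit and the exact split are assumptions, not part of this patch):

```r
# Sketch of the README example; the regenerated README-unnamed-chunk output
# above reflects re-running code like this under the new package version.
library(caret)   # for train()
library(lime)

# Hold out five rows to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit a random forest classifier (an assumption; any supported model works)
model <- train(iris_train, iris_lab, method = "rf")

# Build an explainer from the training data, then explain the held-out rows.
# n_labels = 1 keeps only the top predicted class; n_features = 2 makes each
# local model use two features, hence two rows per case (5 cases x 2 = 10 rows,
# matching the 10 x 13 tibble shown in the diff).
explainer   <- lime(iris_train, model)
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
explanation
```

Because the perturbation step is stochastic, re-rendering the README produces slightly different `model_r2`, `model_intercept`, and `model_prediction` values each time, which is why the hunk only shifts those numbers.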