diff --git a/CLAUDE.md b/CLAUDE.md
index 69658f3..4679c70 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -101,6 +101,7 @@ When `tables` are specified:
 
 ### Exported functions
 - `compile_acs_data(tables, ...)` - Pull and compute ACS data
+- `interpolate_acs(.data, target_geoid, weight, ...)` - Aggregate or interpolate ACS data to custom geographies. `weight = NULL` for complete nesting (direct aggregation); `weight = "col"` for fractional allocation via crosswalk.
 - `list_tables()` - Available table names for the `tables` parameter (construct-level names)
 - `get_acs_codebook(year, table)` - Browse ACS variables with clean names and table codes
 - `list_variables(year)` - Tibble mapping all variables (raw + computed) to their table name
diff --git a/DESCRIPTION b/DESCRIPTION
index 7659f67..1a38d81 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -28,6 +28,7 @@ Imports:
     tibble,
     sf
 Suggests:
+    crosswalk,
     distributional,
     ggdist,
     ggplot2,
@@ -39,7 +40,8 @@ Suggests:
     testthat (>= 3.0.0),
     scales,
     urbnthemes (>= 0.0.2)
-Remotes:
+Remotes:
+    UI-Research/crosswalk,
     UrbanInstitute/urbnthemes
 RoxygenNote: 7.3.3
 URL: https://ui-research.github.io/urbnindicators/
diff --git a/NAMESPACE b/NAMESPACE
index 32443b3..3b576b7 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -1,7 +1,6 @@
 # Generated by roxygen2: do not edit by hand
 
 export("%>%")
-export(calculate_custom_geographies)
 export(compile_acs_data)
 export(define_across_percent)
 export(define_across_sum)
@@ -10,6 +9,7 @@ export(define_one_minus)
 export(define_percent)
 export(filter_variables)
 export(get_acs_codebook)
+export(interpolate_acs)
 export(list_acs_variables)
 export(list_tables)
 export(list_variables)
diff --git a/R/calculate_custom_geographies.R b/R/calculate_custom_geographies.R
deleted file mode 100644
index 8f8a2c7..0000000
--- a/R/calculate_custom_geographies.R
+++ /dev/null
@@ -1,421 +0,0 @@
-#' @title Aggregate ACS data to custom geographies
-#' @description Aggregate tract-level ACS data to user-defined custom geographies
-#' by properly handling different variable types (counts, percentages, medians, etc.)
-#' and recalculating all error measures appropriately.
-#' @param .data A dataframe returned from \code{compile_acs_data()} at the tract level.
-#' Must have a codebook attribute attached.
-#' @param group_id Character. The name of a column in \code{.data} that contains
-#' the custom geography identifiers to aggregate to.
-#' @param spatial Logical. If TRUE, dissolve tract geometries to create custom geography
-#' boundaries using \code{sf::st_union()}. Default is FALSE.
-#' @param weight_variable Character. The variable name to use for population-weighted
-#' averages of non-aggregatable variables. Default is "total_population_universe".
-#' @returns A dataframe aggregated to custom geographies with recalculated estimates,
-#' MOEs, SEs, and CVs. A modified codebook is attached as an attribute.
-#' @examples
-#' \dontrun{
-#' # First, create tract-level data
-#' tract_data = compile_acs_data(
-#'   years = 2022,
-#'   geography = "tract",
-#'   states = "DC"
-#' )
-#'
-#' # Add a custom geography column (e.g., from a crosswalk)
-#' tract_data_with_neighborhoods = tract_data %>%
-#'   dplyr::left_join(neighborhood_crosswalk, by = "GEOID")
-#'
-#' # Aggregate to custom geographies
-#' neighborhood_data = calculate_custom_geographies(
-#'   .data = tract_data_with_neighborhoods,
-#'   group_id = "neighborhood_id",
-#'   spatial = TRUE
-#' )
-#' }
-#' @export
-#' @importFrom magrittr %>%
-#' @importFrom rlang .data
-calculate_custom_geographies = function(
-    .data,
-    group_id,
-    spatial = FALSE,
-    weight_variable = "total_population_universe") {
-
-  ####----Input Validation----####
-  codebook = attr(.data, "codebook")
-  if (is.null(codebook)) {
-    stop("Input data must have a codebook attribute. Use output from compile_acs_data().")
-  }
-
-  if (!group_id %in% colnames(.data)) {
-    stop(paste0("Column '", group_id, "' not found in .data."))
-  }
-
-  if (!weight_variable %in% colnames(.data)) {
-    stop(paste0("Weight variable '", weight_variable, "' not found in .data."))
-  }
-
-  ## Get resolved tables from attribute (for re-running definitions)
-  resolved_tables = attr(.data, "resolved_tables")
-  if (is.null(resolved_tables)) {
-    resolved_tables = names(.table_registry$tables)
-  }
-
-  ## Warn about NA values in group_id
-  na_count = sum(is.na(.data[[group_id]]))
-  if (na_count > 0) {
-    message(paste0(
-      "Warning: ", na_count, " rows have NA values in '", group_id,
-      "' and will be excluded from aggregation."))
-  }
-
-  ## Check for geometry
-  has_geometry = inherits(.data, "sf")
-
-  ## Filter out NA group_ids
-  data_filtered = .data %>%
-    dplyr::filter(!is.na(!!rlang::sym(group_id)))
-
-  if (has_geometry) {
-    data_no_geom = sf::st_drop_geometry(data_filtered)
-  } else {
-    data_no_geom = data_filtered
-  }
-
-  ####----Classify Variables (using pre-parsed codebook columns)----####
-  sum_variables = codebook %>%
-    dplyr::filter(aggregation_strategy == "sum") %>%
-    dplyr::pull(calculated_variable)
-
-  percent_variables = codebook %>%
-    dplyr::filter(aggregation_strategy == "recalculate_percent") %>%
-    dplyr::pull(calculated_variable)
-
-  weighted_avg_variables = codebook %>%
-    dplyr::filter(aggregation_strategy == "weighted_average") %>%
-    dplyr::pull(calculated_variable)
-
-  ## Filter to variables that actually exist in the data
-  sum_variables = sum_variables[sum_variables %in% colnames(data_no_geom)]
-  percent_variables = percent_variables[percent_variables %in% colnames(data_no_geom)]
-  weighted_avg_variables = weighted_avg_variables[weighted_avg_variables %in% colnames(data_no_geom)]
-
-  ## MOE variables for summed counts
-  sum_moe_variables = paste0(sum_variables, "_M")
-  sum_moe_variables = sum_moe_variables[sum_moe_variables %in% colnames(data_no_geom)]
-
-  ####----Aggregate Sum Variables----####
-  aggregated_sums = data_no_geom %>%
-    dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-    dplyr::summarise(
-      dplyr::across(dplyr::all_of(sum_variables), ~ sum(.x, na.rm = TRUE)),
-      .groups = "drop")
-
-  ## Calculate MOEs for summed variables using se_sum()
-  aggregated_sum_moes = data_no_geom %>%
-    dplyr::distinct(dplyr::across(dplyr::all_of(c(group_id, "data_source_year"))))
-
-  for (var in sum_variables) {
-    moe_var = paste0(var, "_M")
-    if (!moe_var %in% colnames(data_no_geom)) next
-
-    var_moes = data_no_geom %>%
-      dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-      dplyr::group_split() %>%
-      purrr::map(function(group_df) {
-        group_keys = group_df %>%
-          dplyr::distinct(dplyr::across(dplyr::all_of(c(group_id, "data_source_year"))))
-
-        estimates = group_df[[var]]
-        moes = group_df[[moe_var]]
-
-        se = se_sum(as.list(moes), as.list(estimates))
-
-        group_keys %>%
-          dplyr::mutate(!!moe_var := se * 1.645)
-      }) %>% purrr::list_rbind()
-
-    aggregated_sum_moes = aggregated_sum_moes %>%
-      dplyr::left_join(var_moes, by = c(group_id, "data_source_year"))
-  }
-
-  ####----Aggregate Weighted Average Variables----####
-  if (length(weighted_avg_variables) > 0) {
-    weight_moe_variable = paste0(weight_variable, "_M")
-    has_weight_moe = weight_moe_variable %in% colnames(data_no_geom)
-
-    if (!has_weight_moe) {
-      warning(paste0("MOE column '", weight_moe_variable, "' not found for weight variable. ",
-        "SE calculations for weighted averages will be skipped."))
-    }
-
-    aggregated_weighted = data_no_geom %>%
-      dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-      dplyr::summarise(
-        dplyr::across(
-          dplyr::all_of(weighted_avg_variables),
-          ~ sum(.x * .data[[weight_variable]], na.rm = TRUE) / sum(.data[[weight_variable]], na.rm = TRUE)),
-        .groups = "drop")
-
-    ## Calculate SEs for weighted averages
-    if (has_weight_moe) {
-      aggregated_weighted_ses = data_no_geom %>%
-        dplyr::distinct(dplyr::across(dplyr::all_of(c(group_id, "data_source_year"))))
-
-      for (var in weighted_avg_variables) {
-        moe_var = paste0(var, "_M")
-        if (!moe_var %in% colnames(data_no_geom)) next
-
-        se_col_name = paste0(var, "_SE")
-
-        var_ses = data_no_geom %>%
-          dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-          dplyr::group_split() %>%
-          purrr::map(function(group_df) {
-            group_keys = group_df %>%
-              dplyr::distinct(dplyr::across(dplyr::all_of(c(group_id, "data_source_year"))))
-
-            se_result = tryCatch({
-              se_weighted_mean(
-                values = group_df[[var]],
-                weights = group_df[[weight_variable]],
-                moe_values = group_df[[moe_var]],
-                moe_weights = group_df[[weight_moe_variable]])
-            }, error = function(e) NA_real_)
-
-            group_keys %>%
-              dplyr::mutate(!!se_col_name := se_result)
-          }) %>% purrr::list_rbind()
-
-        aggregated_weighted_ses = aggregated_weighted_ses %>%
-          dplyr::left_join(var_ses, by = c(group_id, "data_source_year"))
-      }
-
-      ## Convert SEs to MOEs
-      se_col_names = paste0(weighted_avg_variables, "_SE")
-      se_col_names = se_col_names[se_col_names %in% colnames(aggregated_weighted_ses)]
-      moe_new_names = stringr::str_replace(se_col_names, "_SE$", "_M")
-
-      aggregated_weighted_moes = aggregated_weighted_ses %>%
-        dplyr::mutate(
-          dplyr::across(
-            dplyr::all_of(se_col_names),
-            ~ .x * 1.645,
-            .names = "{stringr::str_replace(.col, '_SE$', '_M')}")) %>%
-        dplyr::select(dplyr::all_of(c(group_id, "data_source_year", moe_new_names)))
-    } else {
-      aggregated_weighted_ses = NULL
-      aggregated_weighted_moes = NULL
-    }
-  } else {
-    aggregated_weighted = NULL
-    aggregated_weighted_ses = NULL
-    aggregated_weighted_moes = NULL
-  }
-
-  ####----Handle Metadata Variables----####
-  area_variables = c("area_land_sq_kilometer", "area_water_sq_kilometer", "area_land_water_sq_kilometer")
-  area_variables = area_variables[area_variables %in% colnames(data_no_geom)]
-
-  if (length(area_variables) > 0) {
-    aggregated_areas = data_no_geom %>%
-      dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-      dplyr::summarise(
-        dplyr::across(dplyr::all_of(area_variables), ~ sum(.x, na.rm = TRUE)),
-        .groups = "drop")
-  } else {
-    aggregated_areas = data_no_geom %>%
-      dplyr::distinct(dplyr::across(dplyr::all_of(c(group_id, "data_source_year"))))
-  }
-
-  ####----Handle Geometry (if spatial = TRUE)----####
-  if (spatial && has_geometry) {
-    aggregated_geometry = data_filtered %>%
-      dplyr::group_by(dplyr::across(dplyr::all_of(c(group_id, "data_source_year")))) %>%
-      dplyr::summarize(geometry = sf::st_union(geometry), .groups = "drop")
-  }
-
-  ####----Combine Results----####
-  result = aggregated_sums %>%
-    dplyr::left_join(aggregated_sum_moes, by = c(group_id, "data_source_year")) %>%
-    dplyr::left_join(aggregated_areas, by = c(group_id, "data_source_year"))
-
-  if (!is.null(aggregated_weighted)) {
-    result = result %>%
-      dplyr::left_join(aggregated_weighted, by = c(group_id, "data_source_year"))
-    if (!is.null(aggregated_weighted_moes)) {
-      result = result %>%
-        dplyr::left_join(aggregated_weighted_moes, by = c(group_id, "data_source_year"))
-    }
-  }
-
-  ####----Recalculate Population Density----####
-  if ("total_population_universe" %in% colnames(result) && "area_land_sq_kilometer" %in% colnames(result)) {
-    result = result %>%
-      dplyr::mutate(
-        population_density_land_sq_kilometer = safe_divide(total_population_universe, area_land_sq_kilometer))
-
-    if ("total_population_universe_M" %in% colnames(result)) {
-      result = result %>%
-        dplyr::mutate(
-          population_density_land_sq_kilometer_M = (se_simple(total_population_universe_M) / area_land_sq_kilometer) * 1.645)
-    }
-  }
-
-  ####----Recalculate Percent Variables via Registry Definitions----####
-  ## Save MOE columns (execute_definitions doesn't use them, but regex patterns
-  ## in resolve_regex_columns exclude _M$ already, so this is a safety measure)
-  moe_cols = result %>%
-    dplyr::select(dplyr::all_of(c(group_id, "data_source_year")),
-      dplyr::matches("_M$"))
-
-  ## Strip MOE columns before re-running definitions
-  result_for_defs = result %>%
-    as.data.frame() %>%
-    dplyr::select(-dplyr::matches("_M$"))
-
-  ## Re-run definitions for each resolved table to recalculate percentages
-  result_for_defs = purrr::reduce(resolved_tables, function(.data, table_name) {
-    table_entry = get_table(table_name)
-    if (!is.null(table_entry) && !is.null(table_entry[["definitions"]]) && length(table_entry[["definitions"]]) > 0) {
-      execute_definitions(.data, table_entry[["definitions"]])
-    } else {
-      .data
-    }
-  }, .init = result_for_defs)
-
-  ## Re-attach MOE columns
-  result = result_for_defs %>%
-    dplyr::left_join(moe_cols, by = c(group_id, "data_source_year"))
-
-  ####----Calculate SEs/MOEs for Percent Variables----####
-  if (length(percent_variables) > 0) {
-    percent_components = codebook %>%
-      dplyr::filter(calculated_variable %in% percent_variables) %>%
-      dplyr::select(
-        calculated_variable,
-        numerator_add = numerator_vars,
-        numerator_subtract = numerator_subtract_vars,
-        denominator_add = denominator_vars,
-        denominator_subtract = denominator_subtract_vars)
-
-    ## Helper to calculate SE/MOE/CV for a single percent variable
-    calculate_percent_se = function(df, component_row) {
-      if (nrow(df) == 0) return(df)
-
-      var_name = component_row$calculated_variable
-      if (!var_name %in% colnames(df)) return(df)
-
-      num_add = component_row$numerator_add[[1]]
-      num_sub = component_row$numerator_subtract[[1]]
-      denom_add = component_row$denominator_add[[1]]
-      denom_sub = component_row$denominator_subtract[[1]]
-
-      num_est_cols = c(num_add, num_sub)
-      num_est_cols = num_est_cols[nchar(num_est_cols) > 0]
-      denom_est_cols = c(denom_add, denom_sub)
-      denom_est_cols = denom_est_cols[nchar(denom_est_cols) > 0]
-
-      if (length(num_est_cols) == 0 || length(denom_est_cols) == 0) return(df)
-
-      num_moe_cols = paste0(num_est_cols, "_M")
-      denom_moe_cols = paste0(denom_est_cols, "_M")
-
-      all_required = c(num_est_cols, denom_est_cols, num_moe_cols, denom_moe_cols)
-      if (!all(all_required %in% colnames(df))) return(df)
-
-      num_est = if (length(num_add) > 0) {
-        rowSums(as.matrix(dplyr::select(df, dplyr::all_of(num_add))), na.rm = TRUE)
-      } else { rep(0, nrow(df)) }
-      if (length(num_sub) > 0) {
-        num_est = num_est - rowSums(as.matrix(dplyr::select(df, dplyr::all_of(num_sub))), na.rm = TRUE)
-      }
-
-      denom_est = if (length(denom_add) > 0) {
-        rowSums(as.matrix(dplyr::select(df, dplyr::all_of(denom_add))), na.rm = TRUE)
-      } else { rep(0, nrow(df)) }
-      if (length(denom_sub) > 0) {
-        denom_est = denom_est - rowSums(as.matrix(dplyr::select(df, dplyr::all_of(denom_sub))), na.rm = TRUE)
-      }
-
-      num_se = tryCatch({
-        if (length(num_est_cols) > 0) {
-          se_sum(
-            purrr::map(num_moe_cols, ~ df[[.x]]),
-            purrr::map(num_est_cols, ~ df[[.x]]))
-        } else { rep(0, nrow(df)) }
-      }, error = function(e) rep(NA_real_, nrow(df)))
-
-      denom_se = tryCatch({
-        if (length(denom_est_cols) > 0) {
-          se_sum(
-            purrr::map(denom_moe_cols, ~ df[[.x]]),
-            purrr::map(denom_est_cols, ~ df[[.x]]))
-        } else { rep(0, nrow(df)) }
-      }, error = function(e) rep(NA_real_, nrow(df)))
-
-      if (all(is.na(num_se)) || all(is.na(denom_se))) return(df)
-
-      percent_se = tryCatch({
-        se_proportion_ratio(
-          estimate_numerator = num_est,
-          estimate_denominator = denom_est,
-          se_numerator = num_se,
-          se_denominator = denom_se)
-      }, error = function(e) rep(NA_real_, nrow(df)))
-
-      df %>%
-        dplyr::mutate(
-          !!paste0(var_name, "_M") := percent_se * 1.645)
-    }
-
-    result = purrr::reduce(
-      seq_len(nrow(percent_components)),
-      function(df, i) calculate_percent_se(df, percent_components[i, ]),
-      .init = result)
-  }
-
-  ####----Sum Variables Already Have MOEs----####
-  ## Sum variables already have _M columns from the aggregation step above;
-  ## no additional error derivation is needed for them.
-  {
-  }
-
-  ####----Attach Geometry if Spatial----####
-  if (spatial && has_geometry) {
-    result = result %>%
-      as.data.frame() %>%
-      dplyr::left_join(
-        aggregated_geometry %>% dplyr::select(dplyr::all_of(c(group_id, "data_source_year"))),
-        by = c(group_id, "data_source_year")) %>%
-      sf::st_as_sf()
-  }
-
-  ####----Rename Group ID to GEOID for Consistency----####
-  result = result %>%
-    dplyr::rename(GEOID = !!rlang::sym(group_id))
-
-  ####----Update Codebook----####
-  updated_codebook = codebook %>%
-    dplyr::mutate(
-      definition = dplyr::case_when(
-        aggregation_strategy == "sum" ~
-          paste0(definition, " [Aggregated via direct summation.]"),
-        aggregation_strategy == "recalculate_percent" ~
-          paste0(definition, " [Percentage recalculated from summed components.]"),
-        aggregation_strategy == "weighted_average" ~
-          paste0(definition, " [Aggregated via population-weighted average using ", weight_variable, ".]"),
-        TRUE ~ definition)) %>%
-    dplyr::select(calculated_variable, variable_type, definition, dplyr::everything())
-
-  attr(result, "codebook") = updated_codebook
-
-  return(result)
-}
-
-utils::globalVariables(c(
-  ":=", "variable_type", "aggregation_strategy", "calculated_variable",
-  "total_population_universe", "area_land_sq_kilometer",
-  "total_population_universe_M", "population_density_land_sq_kilometer",
-  "data_source_year", "geometry",
-  "numerator_vars", "numerator_subtract_vars", "denominator_vars", "denominator_subtract_vars"))
diff --git a/R/interpolate_acs.R b/R/interpolate_acs.R
new file mode 100644
index 0000000..7fc0d51
--- /dev/null
+++ b/R/interpolate_acs.R
@@ -0,0 +1,630 @@
+#' @importFrom magrittr %>%
+#' @importFrom rlang .data
+
+## Internal workhorse for aggregating data to target geographies.
+## interpolate_acs() calls this function after preparing data.
+## For fractional allocation mode, extensive variables are pre-multiplied
+## by crosswalk weights before this function is called.
+##
+## @param data_no_geom Data frame without geometry. For interpolation, count
+##   estimates, count MOEs, area variables, and the weight_variable (+ its MOE)
+##   have already been multiplied by the crosswalk allocation weight.
+## @param target_col Character. Column name to group by (the target geography ID).
+## @param weight_variable Character. Column name used for population-weighted
+##   averages of intensive variables.
+## @param codebook Data frame. The codebook attribute from compile_acs_data() output.
+## @param resolved_tables Character vector. Table names for re-running definitions.
+## @returns Data frame aggregated to target geographies with estimates and MOEs.
+## @keywords internal
+.aggregate_to_target = function(
+    data_no_geom,
+    target_col,
+    weight_variable,
+    codebook,
+    resolved_tables) {
+
+  ####----Ensure codebook has aggregation_strategy column----####
+  if (!"aggregation_strategy" %in% colnames(codebook)) {
+    codebook = codebook %>%
+      dplyr::mutate(
+        aggregation_strategy = dplyr::case_when(
+          variable_type %in% c("Count", "Sum") ~ "sum",
+          variable_type == "Percent" ~ "recalculate_percent",
+          variable_type %in% c("Median ($)", "Median", "Average", "Quintile ($)", "Index") ~ "weighted_average",
+          variable_type == "Metadata" ~ "metadata",
+          TRUE ~ "unknown"))
+  }
+
+  ####----Classify Variables (using pre-parsed codebook columns)----####
+  sum_variables = codebook %>%
+    dplyr::filter(aggregation_strategy == "sum") %>%
+    dplyr::pull(calculated_variable)
+
+  percent_variables = codebook %>%
+    dplyr::filter(aggregation_strategy == "recalculate_percent") %>%
+    dplyr::pull(calculated_variable)
+
+  weighted_avg_variables = codebook %>%
+    dplyr::filter(aggregation_strategy == "weighted_average") %>%
+    dplyr::pull(calculated_variable)
+
+  ## Filter to variables that actually exist in the data
+  sum_variables = sum_variables[sum_variables %in% colnames(data_no_geom)]
+  percent_variables = percent_variables[percent_variables %in% colnames(data_no_geom)]
+  weighted_avg_variables = weighted_avg_variables[weighted_avg_variables %in% colnames(data_no_geom)]
+
+  ####----Aggregate Sum Variables----####
+  aggregated_sums = data_no_geom %>%
+    dplyr::group_by(dplyr::across(dplyr::all_of(c(target_col, "data_source_year")))) %>%
+    dplyr::summarise(
+      dplyr::across(dplyr::all_of(sum_variables), ~ sum(.x, na.rm = TRUE)),
+      .groups = "drop")
+
+  ## Calculate MOEs for summed variables using se_sum()
+  aggregated_sum_moes = data_no_geom %>%
+    dplyr::distinct(dplyr::across(dplyr::all_of(c(target_col, "data_source_year"))))
+
+  for (var in sum_variables) {
+    moe_var = paste0(var, "_M")
+    if (!moe_var %in% colnames(data_no_geom)) next
+
+    var_moes = data_no_geom %>%
+      dplyr::group_by(dplyr::across(dplyr::all_of(c(target_col, "data_source_year")))) %>%
+      dplyr::group_split() %>%
+      purrr::map(function(group_df) {
+        group_keys = group_df %>%
+          dplyr::distinct(dplyr::across(dplyr::all_of(c(target_col, "data_source_year"))))
+
+        estimates = group_df[[var]]
+        moes = group_df[[moe_var]]
+
+        se = se_sum(as.list(moes), as.list(estimates))
+
+        group_keys %>%
+          dplyr::mutate(!!moe_var := se * 1.645)
+      }) %>% purrr::list_rbind()
+
+    aggregated_sum_moes = aggregated_sum_moes %>%
+      dplyr::left_join(var_moes, by = c(target_col, "data_source_year"))
+  }
+
+  ####----Aggregate Weighted Average Variables----####
+  if (length(weighted_avg_variables) > 0) {
+    weight_moe_variable = paste0(weight_variable, "_M")
+    has_weight_moe = weight_moe_variable %in% colnames(data_no_geom)
+
+    if (!has_weight_moe) {
+      warning(paste0("MOE column '", weight_moe_variable, "' not found for weight variable. ",
+        "SE calculations for weighted averages will be skipped."))
+    }
+
+    aggregated_weighted = data_no_geom %>%
+      dplyr::group_by(dplyr::across(dplyr::all_of(c(target_col, "data_source_year")))) %>%
+      dplyr::summarise(
+        dplyr::across(
+          dplyr::all_of(weighted_avg_variables),
+          ~ sum(.x * .data[[weight_variable]], na.rm = TRUE) / sum(.data[[weight_variable]], na.rm = TRUE)),
+        .groups = "drop")
+
+    ## Calculate SEs for weighted averages
+    if (has_weight_moe) {
+      aggregated_weighted_ses = data_no_geom %>%
+        dplyr::distinct(dplyr::across(dplyr::all_of(c(target_col, "data_source_year"))))
+
+      for (var in weighted_avg_variables) {
+        moe_var = paste0(var, "_M")
+        if (!moe_var %in% colnames(data_no_geom)) next
+
+        se_col_name = paste0(var, "_SE")
+
+        var_ses = data_no_geom %>%
+          dplyr::group_by(dplyr::across(dplyr::all_of(c(target_col, "data_source_year")))) %>%
+          dplyr::group_split() %>%
+          purrr::map(function(group_df) {
+            group_keys = group_df %>%
+              dplyr::distinct(dplyr::across(dplyr::all_of(c(target_col, "data_source_year"))))
+
+            se_result = tryCatch({
+              se_weighted_mean(
+                values = group_df[[var]],
+                weights = group_df[[weight_variable]],
+                moe_values = group_df[[moe_var]],
+                moe_weights = group_df[[weight_moe_variable]])
+            }, error = function(e) NA_real_)
+
+            group_keys %>%
+              dplyr::mutate(!!se_col_name := se_result)
+          }) %>% purrr::list_rbind()
+
+        aggregated_weighted_ses = aggregated_weighted_ses %>%
+          dplyr::left_join(var_ses, by = c(target_col, "data_source_year"))
+      }
+
+      ## Convert SEs to MOEs
+      se_col_names = paste0(weighted_avg_variables, "_SE")
+      se_col_names = se_col_names[se_col_names %in% colnames(aggregated_weighted_ses)]
+      moe_new_names = stringr::str_replace(se_col_names, "_SE$", "_M")
+
+      aggregated_weighted_moes = aggregated_weighted_ses %>%
+        dplyr::mutate(
+          dplyr::across(
+            dplyr::all_of(se_col_names),
+            ~ .x * 1.645,
+            .names = "{stringr::str_replace(.col, '_SE$', '_M')}")) %>%
+        dplyr::select(dplyr::all_of(c(target_col, "data_source_year", moe_new_names)))
+    } else {
+      aggregated_weighted_ses = NULL
+      aggregated_weighted_moes = NULL
+    }
+  } else {
+    aggregated_weighted = NULL
+    aggregated_weighted_ses = NULL
+    aggregated_weighted_moes = NULL
+  }
+
+  ####----Handle Metadata Variables (Area)----####
+  area_variables = c("area_land_sq_kilometer", "area_water_sq_kilometer", "area_land_water_sq_kilometer")
+  area_variables = area_variables[area_variables %in% colnames(data_no_geom)]
+
+  if (length(area_variables) > 0) {
+    aggregated_areas = data_no_geom %>%
+      dplyr::group_by(dplyr::across(dplyr::all_of(c(target_col, "data_source_year")))) %>%
+      dplyr::summarise(
+        dplyr::across(dplyr::all_of(area_variables), ~ sum(.x, na.rm = TRUE)),
+        .groups = "drop")
+  } else {
+    aggregated_areas = data_no_geom %>%
+      dplyr::distinct(dplyr::across(dplyr::all_of(c(target_col, "data_source_year"))))
+  }
+
+  ####----Combine Results----####
+  result = aggregated_sums %>%
+    dplyr::left_join(aggregated_sum_moes, by = c(target_col, "data_source_year")) %>%
+    dplyr::left_join(aggregated_areas, by = c(target_col, "data_source_year"))
+
+  if (!is.null(aggregated_weighted)) {
+    result = result %>%
+      dplyr::left_join(aggregated_weighted, by = c(target_col, "data_source_year"))
+    if (!is.null(aggregated_weighted_moes)) {
+      result = result %>%
+        dplyr::left_join(aggregated_weighted_moes, by = c(target_col, "data_source_year"))
+    }
+  }
+
+  ####----Recalculate Population Density----####
+  if ("total_population_universe" %in% colnames(result) && "area_land_sq_kilometer" %in% colnames(result)) {
+    result = result %>%
+      dplyr::mutate(
+        population_density_land_sq_kilometer = safe_divide(total_population_universe, area_land_sq_kilometer))
+
+    if ("total_population_universe_M" %in% colnames(result)) {
+      result = result %>%
+        dplyr::mutate(
+          population_density_land_sq_kilometer_M = (se_simple(total_population_universe_M) / area_land_sq_kilometer) * 1.645)
+    }
+  }
+
+  ####----Recalculate Percent Variables via Registry Definitions----####
+  ## Save MOE columns (execute_definitions doesn't use them, but regex patterns
+  ## in resolve_regex_columns exclude _M$ already, so this is a safety measure)
+  moe_cols = result %>%
+    dplyr::select(dplyr::all_of(c(target_col, "data_source_year")),
+      dplyr::matches("_M$"))
+
+  ## Strip MOE columns before re-running definitions
+  result_for_defs = result %>%
+    as.data.frame() %>%
+    dplyr::select(-dplyr::matches("_M$"))
+
+  ## Re-run definitions for each resolved table to recalculate percentages
+  result_for_defs = purrr::reduce(resolved_tables, function(.data, table_name) {
+    table_entry = get_table(table_name)
+    if (!is.null(table_entry) && !is.null(table_entry[["definitions"]]) && length(table_entry[["definitions"]]) > 0) {
+      execute_definitions(.data, table_entry[["definitions"]])
+    } else {
+      .data
+    }
+  }, .init = result_for_defs)
+
+  ## Re-attach MOE columns
+  result = result_for_defs %>%
+    dplyr::left_join(moe_cols, by = c(target_col, "data_source_year"))
+
+  ####----Calculate SEs/MOEs for Percent Variables----####
+  has_parsed_columns = all(c("numerator_vars", "numerator_subtract_vars",
+    "denominator_vars", "denominator_subtract_vars") %in% colnames(codebook))
+
+  if (length(percent_variables) > 0 && has_parsed_columns) {
+    percent_components = codebook %>%
+      dplyr::filter(calculated_variable %in% percent_variables) %>%
+      dplyr::select(
+        calculated_variable,
+        numerator_add = numerator_vars,
+        numerator_subtract = numerator_subtract_vars,
+        denominator_add = denominator_vars,
+        denominator_subtract = denominator_subtract_vars)
+
+    ## Helper to calculate SE/MOE for a single percent variable
+    calculate_percent_se = function(df, component_row) {
+      if (nrow(df) == 0) return(df)
+
+      var_name = component_row$calculated_variable
+      if (!var_name %in% colnames(df)) return(df)
+
+      num_add = component_row$numerator_add[[1]]
+      num_sub = component_row$numerator_subtract[[1]]
+      denom_add = component_row$denominator_add[[1]]
+      denom_sub = component_row$denominator_subtract[[1]]
+
+      num_est_cols = c(num_add, num_sub)
+      num_est_cols = num_est_cols[nchar(num_est_cols) > 0]
+      denom_est_cols = c(denom_add, denom_sub)
+      denom_est_cols = denom_est_cols[nchar(denom_est_cols) > 0]
+
+      if (length(num_est_cols) == 0 || length(denom_est_cols) == 0) return(df)
+
+      num_moe_cols = paste0(num_est_cols, "_M")
+      denom_moe_cols = paste0(denom_est_cols, "_M")
+
+      all_required = c(num_est_cols, denom_est_cols, num_moe_cols, denom_moe_cols)
+      if (!all(all_required %in% colnames(df))) return(df)
+
+      num_est = if (length(num_add) > 0) {
+        rowSums(as.matrix(dplyr::select(df, dplyr::all_of(num_add))), na.rm = TRUE)
+      } else { rep(0, nrow(df)) }
+      if (length(num_sub) > 0) {
+        num_est = num_est - rowSums(as.matrix(dplyr::select(df, dplyr::all_of(num_sub))), na.rm = TRUE)
+      }
+
+      denom_est = if (length(denom_add) > 0) {
+        rowSums(as.matrix(dplyr::select(df, dplyr::all_of(denom_add))), na.rm = TRUE)
+      } else { rep(0, nrow(df)) }
+      if (length(denom_sub) > 0) {
+        denom_est = denom_est - rowSums(as.matrix(dplyr::select(df, dplyr::all_of(denom_sub))), na.rm = TRUE)
+      }
+
+      num_se = tryCatch({
+        if (length(num_est_cols) > 0) {
+          se_sum(
+            purrr::map(num_moe_cols, ~ df[[.x]]),
+            purrr::map(num_est_cols, ~ df[[.x]]))
+        } else { rep(0, nrow(df)) }
+      }, error = function(e) rep(NA_real_, nrow(df)))
+
+      denom_se = tryCatch({
+        if (length(denom_est_cols) > 0) {
+          se_sum(
+            purrr::map(denom_moe_cols, ~ df[[.x]]),
+            purrr::map(denom_est_cols, ~ df[[.x]]))
+        } else { rep(0, nrow(df)) }
+      }, error = function(e) rep(NA_real_, nrow(df)))
+
+      if (all(is.na(num_se)) || all(is.na(denom_se))) return(df)
+
+      percent_se = tryCatch({
+        se_proportion_ratio(
+          estimate_numerator = num_est,
+          estimate_denominator = denom_est,
+          se_numerator = num_se,
+          se_denominator = denom_se)
+      }, error = function(e) rep(NA_real_, nrow(df)))
+
+      df %>%
+        dplyr::mutate(
+          !!paste0(var_name, "_M") := percent_se * 1.645)
+    }
+
+    result = purrr::reduce(
+      seq_len(nrow(percent_components)),
+      function(df, i) calculate_percent_se(df, percent_components[i, ]),
+      .init = result)
+  }
+
+  return(result)
+}
+
+
+#' @title Aggregate or interpolate ACS data to custom geographies
+#' @description Aggregate or interpolate ACS data from source geographies to
+#' user-defined target geographies. Supports two modes:
+#'
+#' **Complete nesting** (`weight = NULL`): Each source geography maps entirely
+#' to one target geography. Count variables are summed, percentages are
+#' recalculated from summed components, and intensive variables (medians,
+#' averages) are computed as population-weighted averages.
+#'
+#' **Fractional allocation** (`weight = "column_name"`): Source geographies
+#' can be split across multiple targets using crosswalk weights. Count
+#' variables and MOEs are multiplied by the weight before summing.
+#' Percentages are recalculated from interpolated components. Intensive
+#' variables use the allocated population as weights.
+#'
+#' MOE propagation uses Census Bureau approximation formulas throughout.
+#' Crosswalk weights are treated as constants (no sampling error).
+#' @param .data A dataframe returned from \code{compile_acs_data()}.
+#' Must have a codebook attribute attached.
+#' @param target_geoid Character. Column name for target geography identifiers.
+#' Must exist in \code{.data} or in \code{crosswalk}. The result renames
+#' this column to \code{GEOID}.
+#' @param weight Character or \code{NULL}. When \code{NULL} (default), assumes
+#' complete nesting where each source geography maps entirely to one target.
+#' When a column name is provided, performs fractional allocation using that
+#' column as weights. Weights should sum to approximately 1 per source
+#' geography.
+#' @param crosswalk A data frame containing the crosswalk mapping. Optional in
+#' both modes. When provided, joined to \code{.data} via \code{source_geoid}
+#' before processing. Must include columns for \code{source_geoid} and
+#' \code{target_geoid} (and \code{weight} if fractional allocation is used).
+#' @param source_geoid Character. Column name for source geography identifiers.
+#' Must exist in \code{.data} (and in \code{crosswalk} if provided).
+#' Default is \code{"GEOID"}.
+#' @param weight_variable Character. Variable name used for population-weighted
+#' averages of intensive variables (medians, averages, etc.).
+#' Default is \code{"total_population_universe"}.
+#' @returns A dataframe aggregated to target geographies with recalculated
+#' estimates, MOEs, and percentages. A modified codebook is attached as an
+#' attribute.
+#' @examples
+#' \dontrun{
+#' # First, create tract-level data
+#' tract_data = compile_acs_data(
+#'   tables = c("race", "snap"),
+#'   years = 2022,
+#'   geography = "tract",
+#'   states = "DC"
+#' )
+#'
+#' # Complete nesting: each tract belongs to exactly one neighborhood
+#' tract_data$neighborhood = c("Downtown", "Downtown", "Uptown", ...)
+#' neighborhood_data = interpolate_acs(
+#'   .data = tract_data,
+#'   target_geoid = "neighborhood"
+#' )
+#'
+#' # Fractional allocation with a crosswalk
+#' crosswalk = data.frame(
+#'   GEOID = c("11001000100", "11001000100", "11001000201"),
+#'   neighborhood = c("Downtown", "Chinatown", "Downtown"),
+#'   alloc_weight = c(0.6, 0.4, 1.0)
+#' )
+#'
+#' neighborhood_data = interpolate_acs(
+#'   .data = tract_data,
+#'   target_geoid = "neighborhood",
+#'   weight = "alloc_weight",
+#'   crosswalk = crosswalk
+#' )
+#' }
+#' @export
+#' @importFrom magrittr %>%
+#' @importFrom rlang .data
+interpolate_acs = function(
+    .data,
+    target_geoid,
+    weight = NULL,
+    crosswalk = NULL,
+    source_geoid = "GEOID",
+    weight_variable = "total_population_universe") {
+
+  ####----Input Validation----####
+  codebook = attr(.data, "codebook")
+  if (is.null(codebook)) {
+    stop("Input data must have a codebook attribute. Use output from compile_acs_data().")
+  }
+
+  if (!source_geoid %in% colnames(.data)) {
+    stop(paste0("Column '", source_geoid, "' not found in .data."))
+  }
+
+  if (!weight_variable %in% colnames(.data)) {
+    stop(paste0("Weight variable '", weight_variable, "' not found in .data."))
+  }
+
+  ## Get resolved tables from attribute (for re-running definitions)
+  resolved_tables = attr(.data, "resolved_tables")
+  if (is.null(resolved_tables)) {
+    resolved_tables = names(.table_registry$tables)
+  }
+
+  ## Drop geometry if present
+  if (inherits(.data, "sf")) {
+    .data = sf::st_drop_geometry(.data)
+  }
+
+  ## Join crosswalk if provided
+  if (!is.null(crosswalk)) {
+    if (!is.data.frame(crosswalk)) {
+      stop("`crosswalk` must be a data frame.")
+    }
+    if (!source_geoid %in% colnames(crosswalk)) {
+      stop(paste0("Column '", source_geoid, "' not found in crosswalk."))
+    }
+    if (!target_geoid %in% colnames(crosswalk)) {
+      stop(paste0("Column '", target_geoid, "' not found in crosswalk."))
+    }
+
+    xwalk_cols = c(source_geoid, target_geoid)
+    if (!is.null(weight)) {
+      if (!weight %in% colnames(crosswalk)) {
+        stop(paste0("Column '", weight, "' not found in crosswalk."))
+      }
+      xwalk_cols = c(xwalk_cols, weight)
+    }
+
+    .data = .data %>%
+      dplyr::inner_join(
+        crosswalk %>% dplyr::select(dplyr::all_of(xwalk_cols)),
+        by = source_geoid)
+  } else {
+    if (!target_geoid %in% colnames(.data)) {
+      stop(paste0("Column '", target_geoid, "' not found in .data. Provide a crosswalk or add the column."))
+    }
+    if (!is.null(weight) && !weight %in% colnames(.data)) {
+      stop(paste0("Column '", weight, "' not found in .data. Provide a crosswalk or add the column."))
+    }
+  }
+
+  ## Warn about NA values in target_geoid
+  na_count = sum(is.na(.data[[target_geoid]]))
+  if (na_count > 0) {
+    message(paste0(
+      "Warning: ", na_count, " rows have NA values in '", target_geoid,
+      "' and will be excluded from aggregation."))
+  }
+
+  ## Filter out NA target_geoids
+  data_filtered = .data %>%
+    dplyr::filter(!is.na(!!rlang::sym(target_geoid)))
+
+  ####----Ensure codebook has aggregation_strategy column----####
+  if (!"aggregation_strategy" %in% colnames(codebook)) {
+    codebook = codebook %>%
+      dplyr::mutate(
+        aggregation_strategy = dplyr::case_when(
+          variable_type %in% c("Count", "Sum") ~ "sum",
+          variable_type == "Percent" ~ "recalculate_percent",
+          variable_type %in% c("Median ($)", "Median", "Average", "Quintile ($)", "Index") ~ "weighted_average",
+          variable_type == "Metadata" ~ "metadata",
+          TRUE ~ "unknown"))
+  }
+
+  if (!is.null(weight)) {
+    ####----Fractional Allocation Mode----####
+
+    ## Validate weight values
+    weight_values = data_filtered[[weight]]
+    if (!is.numeric(weight_values)) {
+      stop("Weight column must be numeric.")
+    }
+    if (any(weight_values < 0, na.rm = TRUE)) {
+      stop("Weight column must contain non-negative values.")
+    }
+
+    ## Check if weights sum to ~1 per source geography and year
+    weight_sums = data_filtered %>%
+      dplyr::group_by(dplyr::across(dplyr::all_of(c(source_geoid, "data_source_year")))) %>%
+      dplyr::summarise(.weight_sum = 
sum(!!rlang::sym(weight), na.rm = TRUE), .groups = "drop") + + tolerance = 0.01 + non_unity = weight_sums %>% + dplyr::filter(abs(.weight_sum - 1) > tolerance) + if (nrow(non_unity) > 0) { + warning(paste0( + nrow(non_unity), " source geographies have weights that do not sum to 1 ", + "(tolerance: ", tolerance, "). ", + "Range of weight sums: [", + round(min(non_unity$.weight_sum), 4), ", ", + round(max(non_unity$.weight_sum), 4), "].")) + } + + ####----Classify Variables for Pre-allocation----#### + sum_variables = codebook %>% + dplyr::filter(aggregation_strategy == "sum") %>% + dplyr::pull(calculated_variable) + sum_variables = sum_variables[sum_variables %in% colnames(data_filtered)] + + ## MOE columns for count variables + sum_moe_variables = paste0(sum_variables, "_M") + sum_moe_variables = sum_moe_variables[sum_moe_variables %in% colnames(data_filtered)] + + ## Area variables + area_variables = c("area_land_sq_kilometer", "area_water_sq_kilometer", "area_land_water_sq_kilometer") + area_variables = area_variables[area_variables %in% colnames(data_filtered)] + + ####----Pre-allocate: multiply extensive variables by crosswalk weight----#### + ## MOE(w * X) = w * MOE(X) when w is a constant (no sampling error in the + ## crosswalk weight). This is the standard assumption for Census crosswalks. + data_allocated = data_filtered %>% + dplyr::mutate( + dplyr::across( + dplyr::all_of(sum_variables), + ~ .x * !!rlang::sym(weight)), + dplyr::across( + dplyr::all_of(sum_moe_variables), + ~ .x * !!rlang::sym(weight))) + + ## Area variables: allocate proportionally + if (length(area_variables) > 0) { + data_allocated = data_allocated %>% + dplyr::mutate( + dplyr::across( + dplyr::all_of(area_variables), + ~ .x * !!rlang::sym(weight))) + } + + ## Weight variable and its MOE — only allocate if not already in sum_variables. + ## total_population_universe is typically a sum variable, so it's already + ## pre-multiplied above. 
This handles the edge case where the user specifies + ## a weight_variable that isn't a count. + weight_moe_variable = paste0(weight_variable, "_M") + if (!weight_variable %in% sum_variables) { + data_allocated = data_allocated %>% + dplyr::mutate( + !!weight_variable := !!rlang::sym(weight_variable) * !!rlang::sym(weight)) + if (weight_moe_variable %in% colnames(data_allocated)) { + data_allocated = data_allocated %>% + dplyr::mutate( + !!weight_moe_variable := !!rlang::sym(weight_moe_variable) * !!rlang::sym(weight)) + } + } + + data_for_agg = data_allocated + codebook_tag = "interpolated" + } else { + ####----Complete Nesting Mode (no weights)----#### + data_for_agg = data_filtered + codebook_tag = "aggregated" + } + + ####----Aggregate via shared workhorse----#### + result = .aggregate_to_target( + data_no_geom = data_for_agg, + target_col = target_geoid, + weight_variable = weight_variable, + codebook = codebook, + resolved_tables = resolved_tables) + + ####----Rename target_geoid to GEOID for Consistency----#### + result = result %>% + dplyr::rename(GEOID = !!rlang::sym(target_geoid)) + + ####----Update Codebook----#### + if (codebook_tag == "aggregated") { + updated_codebook = codebook %>% + dplyr::mutate( + definition = dplyr::case_when( + aggregation_strategy == "sum" ~ + paste0(definition, " [Aggregated via direct summation.]"), + aggregation_strategy == "recalculate_percent" ~ + paste0(definition, " [Percentage recalculated from summed components.]"), + aggregation_strategy == "weighted_average" ~ + paste0(definition, " [Aggregated via population-weighted average using ", weight_variable, ".]"), + TRUE ~ definition)) + } else { + updated_codebook = codebook %>% + dplyr::mutate( + definition = dplyr::case_when( + aggregation_strategy == "sum" ~ + paste0(definition, " [Interpolated: allocated by crosswalk weight, then summed.]"), + aggregation_strategy == "recalculate_percent" ~ + paste0(definition, " [Interpolated: percentage recalculated from 
interpolated components.]"), + aggregation_strategy == "weighted_average" ~ + paste0(definition, " [Interpolated: population-weighted average using allocated ", weight_variable, ".]"), + TRUE ~ definition)) + } + + updated_codebook = updated_codebook %>% + dplyr::select(calculated_variable, variable_type, definition, dplyr::everything()) + + attr(result, "codebook") = updated_codebook + + return(result) +} + +utils::globalVariables(c( + ":=", "variable_type", "aggregation_strategy", "calculated_variable", + "total_population_universe", "area_land_sq_kilometer", + "total_population_universe_M", "population_density_land_sq_kilometer", + "data_source_year", + "numerator_vars", "numerator_subtract_vars", "denominator_vars", + "denominator_subtract_vars", ".weight_sum")) diff --git a/README.Rmd b/README.Rmd index dd2ac9a..6bb9b59 100644 --- a/README.Rmd +++ b/README.Rmd @@ -222,7 +222,7 @@ Confidence intervals are presented around each point but are extremely small"), ACS data are available for standard geographies (tracts, counties, states, etc.), but many analyses require non-standard areas like neighborhoods, school zones, or planning districts. 
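[Editor's note: the allocation-then-sum arithmetic implemented in `interpolate_acs()` above can be sketched in a few lines of base R. This is an illustrative sketch with made-up numbers, not the package API; the `sqrt(sum(moe^2))` and proportion formulas follow the Census Bureau approximations the roxygen docs reference, and all names here (`agg`, `pct_M`, etc.) are hypothetical.]

```r
## Hypothetical mini-data: tract t1 is split 60/40 across targets A and B,
## tract t2 falls wholly in A. Crosswalk weights are constants, so estimates
## and MOEs scale linearly; summed MOEs combine as sqrt(sum(moe_i^2)).
target = c("A", "B", "A")
w      = c(0.6, 0.4, 1.0)                    # crosswalk weights
pop    = c(100, 100, 50) * w                 # allocated counts: 60, 40, 50
pop_M  = c(10, 10, 5) * w                    # allocated MOEs:   6,  4,  5

agg    = tapply(pop, target, sum)            # A = 110, B = 40
agg_M  = sqrt(tapply(pop_M^2, target, sum))  # A = sqrt(61), B = 4

## Percentages are re-derived from aggregated components, not averaged.
## MOE of a proportion (Census approximation; if the radicand goes
## negative, the ratio form with "+" is used instead):
num    = c(A = 30, B = 10)                   # aggregated numerator counts
num_M  = c(A = 4,  B = 2)
p      = num / agg
rad    = num_M^2 - p^2 * agg_M^2
pct_M  = ifelse(rad >= 0, sqrt(rad), sqrt(num_M^2 + p^2 * agg_M^2)) / agg
```

Because the weight is treated as a constant, it contributes no additional variance; only the source estimates' sampling error propagates.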
-`calculate_custom_geographies()` aggregates tract-level data to
+`interpolate_acs()` aggregates tract-level data to
 any user-defined geography, properly re-deriving percentages and
 propagating margins of error:
@@ -248,10 +248,9 @@ dc_tracts = dc_tracts %>%
   select(-centroid, -lon, -lat)
 
 ## aggregate tracts to quadrants
-dc_quadrants = calculate_custom_geographies(
+dc_quadrants = interpolate_acs(
   .data = dc_tracts,
-  group_id = "quadrant",
-  spatial = TRUE)
+  target_geoid = "quadrant")
 
 dc_quadrants %>%
   sf::st_drop_geometry() %>%
diff --git a/README.md b/README.md
index 478df6e..5cc04bb 100644
--- a/README.md
+++ b/README.md
@@ -208,10 +208,9 @@ Confidence intervals are presented around each point but are extremely small"),
 ACS data are available for standard geographies (tracts, counties,
 states, etc.), but many analyses require non-standard areas like
-neighborhoods, school zones, or planning districts.
-`calculate_custom_geographies()` aggregates tract-level data to any
-user-defined geography, properly re-deriving percentages and propagating
-margins of error:
+neighborhoods, school zones, or planning districts. `interpolate_acs()`
+aggregates tract-level data to any user-defined geography, properly
+re-deriving percentages and propagating margins of error:
 
 ``` r
 dc_tracts = compile_acs_data(
@@ -235,10 +235,9 @@ dc_tracts = dc_tracts %>%
   select(-centroid, -lon, -lat)
 
 ## aggregate tracts to quadrants
-dc_quadrants = calculate_custom_geographies(
+dc_quadrants = interpolate_acs(
   .data = dc_tracts,
-  group_id = "quadrant",
-  spatial = TRUE)
+  target_geoid = "quadrant")
 
 dc_quadrants %>%
   sf::st_drop_geometry() %>%
diff --git a/_pkgdown.yml b/_pkgdown.yml
index cbaf1f7..8f7f208 100644
--- a/_pkgdown.yml
+++ b/_pkgdown.yml
@@ -7,7 +7,7 @@ reference:
 - title: Acquire data
   contents:
   - compile_acs_data
-  - calculate_custom_geographies
+  - interpolate_acs
   - define_percent
   - define_across_percent
   - define_across_sum
@@ -18,9 +18,9 @@
   - se_simple
   - se_sum
   - se_proportion_ratio
+  - cv
 - title: Helper functions
   contents:
-  - get_acs_codebook
   - list_tables
   - list_variables
   - select_variables_by_name
@@ -29,4 +29,5 @@
   - safe_divide
   - list_acs_variables
   - get_acs_codebook
+
diff --git a/man/calculate_custom_geographies.Rd b/man/calculate_custom_geographies.Rd
deleted file mode 100644
index e1cd270..0000000
--- a/man/calculate_custom_geographies.Rd
+++ /dev/null
@@ -1,56 +0,0 @@
-% Generated by roxygen2: do not edit by hand
-% Please edit documentation in R/calculate_custom_geographies.R
-\name{calculate_custom_geographies}
-\alias{calculate_custom_geographies}
-\title{Aggregate ACS data to custom geographies}
-\usage{
-calculate_custom_geographies(
-  .data,
-  group_id,
-  spatial = FALSE,
-  weight_variable = "total_population_universe"
-)
-}
-\arguments{
-\item{.data}{A dataframe returned from \code{compile_acs_data()} at the tract level.
-Must have a codebook attribute attached.}
-
-\item{group_id}{Character. The name of a column in \code{.data} that contains
-the custom geography identifiers to aggregate to.}
-
-\item{spatial}{Logical. If TRUE, dissolve tract geometries to create custom geography
-boundaries using \code{sf::st_union()}. Default is FALSE.}
-
-\item{weight_variable}{Character. The variable name to use for population-weighted
-averages of non-aggregatable variables. Default is "total_population_universe".}
-}
-\value{
-A dataframe aggregated to custom geographies with recalculated estimates,
-MOEs, SEs, and CVs. A modified codebook is attached as an attribute.
-}
-\description{
-Aggregate tract-level ACS data to user-defined custom geographies
-by properly handling different variable types (counts, percentages, medians, etc.)
-and recalculating all error measures appropriately.
-}
-\examples{
-\dontrun{
-# First, create tract-level data
-tract_data = compile_acs_data(
-  years = 2022,
-  geography = "tract",
-  states = "DC"
-)
-
-# Add a custom geography column (e.g., from a crosswalk)
-tract_data_with_neighborhoods = tract_data \%>\%
-  dplyr::left_join(neighborhood_crosswalk, by = "GEOID")
-
-# Aggregate to custom geographies
-neighborhood_data = calculate_custom_geographies(
-  .data = tract_data_with_neighborhoods,
-  group_id = "neighborhood_id",
-  spatial = TRUE
-)
-}
-}
diff --git a/man/interpolate_acs.Rd b/man/interpolate_acs.Rd
new file mode 100644
index 0000000..9cb9f03
--- /dev/null
+++ b/man/interpolate_acs.Rd
@@ -0,0 +1,97 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/interpolate_acs.R
+\name{interpolate_acs}
+\alias{interpolate_acs}
+\title{Aggregate or interpolate ACS data to custom geographies}
+\usage{
+interpolate_acs(
+  .data,
+  target_geoid,
+  weight = NULL,
+  crosswalk = NULL,
+  source_geoid = "GEOID",
+  weight_variable = "total_population_universe"
+)
+}
+\arguments{
+\item{.data}{A dataframe returned from \code{compile_acs_data()}.
+Must have a codebook attribute attached.}
+
+\item{target_geoid}{Character. Column name for target geography identifiers.
+Must exist in \code{.data} or in \code{crosswalk}. The result renames
+this column to \code{GEOID}.}
+
+\item{weight}{Character or \code{NULL}. When \code{NULL} (default), assumes
+complete nesting where each source geography maps entirely to one target.
+When a column name is provided, performs fractional allocation using that
+column as weights. Weights should sum to approximately 1 per source
+geography.}
+
+\item{crosswalk}{A data frame containing the crosswalk mapping. Optional in
+both modes. When provided, it is joined to \code{.data} via \code{source_geoid}
+before processing. Must include columns for \code{source_geoid} and
+\code{target_geoid} (and \code{weight} if fractional allocation is used).}
+
+\item{source_geoid}{Character. Column name for source geography identifiers.
+Must exist in \code{.data} (and in \code{crosswalk} if provided).
+Default is \code{"GEOID"}.}
+
+\item{weight_variable}{Character. Variable name used for population-weighted
+averages of intensive variables (medians, averages, etc.).
+Default is \code{"total_population_universe"}.}
+}
+\value{
+A dataframe aggregated to target geographies with recalculated
+estimates, MOEs, and percentages. A modified codebook is attached as an
+attribute.
+}
+\description{
+Aggregate or interpolate ACS data from source geographies to
+user-defined target geographies. Supports two modes:
+
+\strong{Complete nesting} (\code{weight = NULL}): Each source geography maps entirely
+to one target geography. Count variables are summed, percentages are
+recalculated from summed components, and intensive variables (medians,
+averages) are computed as population-weighted averages.
+
+\strong{Fractional allocation} (\code{weight = "column_name"}): Source geographies
+can be split across multiple targets using crosswalk weights. Count
+variables and MOEs are multiplied by the weight before summing.
+Percentages are recalculated from interpolated components. Intensive
+variables use the allocated population as weights.
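[Editor's note: the population-weighted averaging described for intensive variables reduces to a few lines of base R. Hypothetical numbers only; `alloc_pop` stands in for the allocated `total_population_universe`, and `wavg` is an illustrative name, not package API.]

```r
## Intensive variables (medians, averages) cannot be summed; targets combine
## them as a mean weighted by the allocated population. Note this is itself
## an approximation: the true median of a combined area is not recoverable
## from source-geography medians alone.
target     = c("A", "B", "A")
alloc_pop  = c(60, 40, 50)              # population after weight allocation
med_income = c(60000, 60000, 85000)     # source-geography median income

wavg = tapply(med_income * alloc_pop, target, sum) /
  tapply(alloc_pop, target, sum)        # A ~ 71364, B = 60000
```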
+
+MOE propagation uses Census Bureau approximation formulas throughout.
+Crosswalk weights are treated as constants (no sampling error).
+}
+\examples{
+\dontrun{
+# First, create tract-level data
+tract_data = compile_acs_data(
+  tables = c("race", "snap"),
+  years = 2022,
+  geography = "tract",
+  states = "DC"
+)
+
+# Complete nesting: each tract belongs to exactly one neighborhood
+tract_data$neighborhood = c("Downtown", "Downtown", "Uptown", ...)
+neighborhood_data = interpolate_acs(
+  .data = tract_data,
+  target_geoid = "neighborhood"
+)
+
+# Fractional allocation with a crosswalk
+crosswalk = data.frame(
+  GEOID = c("11001000100", "11001000100", "11001000201"),
+  neighborhood = c("Downtown", "Chinatown", "Downtown"),
+  alloc_weight = c(0.6, 0.4, 1.0)
+)
+
+neighborhood_data = interpolate_acs(
+  .data = tract_data,
+  target_geoid = "neighborhood",
+  weight = "alloc_weight",
+  crosswalk = crosswalk
+)
+}
+}
diff --git a/renv.lock b/renv.lock
index ff937cb..c0d9724 100644
--- a/renv.lock
+++ b/renv.lock
@@ -145,24 +145,6 @@
       "Maintainer": "Winston Chang ",
       "Repository": "CRAN"
     },
-    "RColorBrewer": {
-      "Package": "RColorBrewer",
-      "Version": "1.1-3",
-      "Source": "Repository",
-      "Date": "2022-04-03",
-      "Title": "ColorBrewer Palettes",
-      "Authors@R": "c(person(given = \"Erich\", family = \"Neuwirth\", role = c(\"aut\", \"cre\"), email = \"erich.neuwirth@univie.ac.at\"))",
-      "Author": "Erich Neuwirth [aut, cre]",
-      "Maintainer": "Erich Neuwirth ",
-      "Depends": [
-        "R (>= 2.0.0)"
-      ],
-      "Description": "Provides color schemes for maps (and other graphics) designed by Cynthia Brewer as described at http://colorbrewer2.org.",
-      "License": "Apache License 2.0",
-      "NeedsCompilation": "no",
-      "Repository": "https://packagemanager.posit.co/cran/latest",
-      "Encoding": "UTF-8"
-    },
     "Rcpp": {
       "Package": "Rcpp",
       "Version": "1.1.1",
@@ -194,68 +176,7 @@
       "NeedsCompilation": "yes",
       "Author": "Dirk Eddelbuettel [aut, cre] (ORCID: ), Romain Francois [aut] (ORCID: ), JJ Allaire
[aut] (ORCID: ), Kevin Ushey [aut] (ORCID: ), Qiang Kou [aut] (ORCID: ), Nathan Russell [aut], Iñaki Ucar [aut] (ORCID: ), Doug Bates [aut] (ORCID: ), John Chambers [aut]", "Maintainer": "Dirk Eddelbuettel ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "Rttf2pt1": { - "Package": "Rttf2pt1", - "Version": "1.3.12", - "Source": "GitHub", - "Title": "'ttf2pt1' Program", - "Author": "Winston Chang, Andrew Weeks, Frank M. Siegert, Mark Heath, Thomas Henlick, Sergey Babkin, Turgut Uyar, Rihardas Hepas, Szalay Tamas, Johan Vromans, Petr Titera, Lei Wang, Chen Xiangyang, Zvezdan Petkovic, Rigel, I. Lee Hetherington", - "Maintainer": "Winston Chang ", - "Description": "Contains the program 'ttf2pt1', for use with the 'extrafont' package. This product includes software developed by the 'TTF2PT1' Project and its contributors.", - "Depends": [ - "R (>= 2.15)" - ], - "License": "file LICENSE", - "URL": "https://github.com/wch/Rttf2pt1", - "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", - "RemoteType": "github", - "RemoteHost": "api.github.com", - "RemoteUsername": "wch", - "RemoteRepo": "Rttf2pt1", - "RemoteRef": "main", - "RemoteSha": "f625326af9783f6ae4d42cc5302dd6f2968e008f" - }, - "S7": { - "Package": "S7", - "Version": "0.2.1", - "Source": "Repository", - "Title": "An Object Oriented System Meant to Become a Successor to S3 and S4", - "Authors@R": "c( person(\"Object-Oriented Programming Working Group\", role = \"cph\"), person(\"Davis\", \"Vaughan\", role = \"aut\"), person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Tomasz\", \"Kalinowski\", role = \"aut\"), person(\"Will\", \"Landau\", role = \"aut\"), person(\"Michael\", \"Lawrence\", role = \"aut\"), person(\"Martin\", \"Maechler\", role = \"aut\", comment = c(ORCID = \"0000-0002-8685-9910\")), person(\"Luke\", \"Tierney\", role = \"aut\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = 
\"0000-0003-4757-117X\")) )", - "Description": "A new object oriented programming system designed to be a successor to S3 and S4. It includes formal class, generic, and method specification, and a limited form of multiple dispatch. It has been designed and implemented collaboratively by the R Consortium Object-Oriented Programming Working Group, which includes representatives from R-Core, 'Bioconductor', 'Posit'/'tidyverse', and the wider R community.", - "License": "MIT + file LICENSE", - "URL": "https://rconsortium.github.io/S7/, https://github.com/RConsortium/S7", - "BugReports": "https://github.com/RConsortium/S7/issues", - "Depends": [ - "R (>= 3.5.0)" - ], - "Imports": [ - "utils" - ], - "Suggests": [ - "bench", - "callr", - "covr", - "knitr", - "methods", - "rmarkdown", - "testthat (>= 3.2.0)", - "tibble" - ], - "VignetteBuilder": "knitr", - "Config/build/compilation-database": "true", - "Config/Needs/website": "sloop", - "Config/testthat/edition": "3", - "Config/testthat/parallel": "TRUE", - "Config/testthat/start-first": "external-generic", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "NeedsCompilation": "yes", - "Author": "Object-Oriented Programming Working Group [cph], Davis Vaughan [aut], Jim Hester [aut] (ORCID: ), Tomasz Kalinowski [aut], Will Landau [aut], Michael Lawrence [aut], Martin Maechler [aut] (ORCID: ), Luke Tierney [aut], Hadley Wickham [aut, cre] (ORCID: )", - "Maintainer": "Hadley Wickham ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "askpass": { "Package": "askpass", @@ -282,27 +203,6 @@ "Maintainer": "Jeroen Ooms ", "Repository": "CRAN" }, - "base64enc": { - "Package": "base64enc", - "Version": "0.1-6", - "Source": "Repository", - "Title": "Tools for 'base64' Encoding", - "Author": "Simon Urbanek [aut, cre, cph] (https://urbanek.nz, ORCID: )", - "Authors@R": "person(\"Simon\", \"Urbanek\", role=c(\"aut\",\"cre\",\"cph\"), email=\"Simon.Urbanek@r-project.org\", 
comment=c(\"https://urbanek.nz\", ORCID=\"0000-0003-2297-1732\"))", - "Maintainer": "Simon Urbanek ", - "Depends": [ - "R (>= 2.9.0)" - ], - "Enhances": [ - "png" - ], - "Description": "Tools for handling 'base64' encoding. It is more flexible than the orphaned 'base64' package.", - "License": "GPL-2 | GPL-3", - "URL": "https://www.rforge.net/base64enc", - "BugReports": "https://github.com/s-u/base64enc/issues", - "NeedsCompilation": "yes", - "Repository": "CRAN" - }, "bit": { "Package": "bit", "Version": "4.6.0", @@ -370,93 +270,6 @@ "Maintainer": "Michael Chirico ", "Repository": "CRAN" }, - "bslib": { - "Package": "bslib", - "Version": "0.10.0", - "Source": "Repository", - "Title": "Custom 'Bootstrap' 'Sass' Themes for 'shiny' and 'rmarkdown'", - "Authors@R": "c( person(\"Carson\", \"Sievert\", , \"carson@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Joe\", \"Cheng\", , \"joe@posit.co\", role = \"aut\"), person(\"Garrick\", \"Aden-Buie\", , \"garrick@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0002-7111-0077\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(, \"Bootstrap contributors\", role = \"ctb\", comment = \"Bootstrap library\"), person(, \"Twitter, Inc\", role = \"cph\", comment = \"Bootstrap library\"), person(\"Javi\", \"Aguilar\", role = c(\"ctb\", \"cph\"), comment = \"Bootstrap colorpicker library\"), person(\"Thomas\", \"Park\", role = c(\"ctb\", \"cph\"), comment = \"Bootswatch library\"), person(, \"PayPal\", role = c(\"ctb\", \"cph\"), comment = \"Bootstrap accessibility plugin\") )", - "Description": "Simplifies custom 'CSS' styling of both 'shiny' and 'rmarkdown' via 'Bootstrap' 'Sass'. Supports 'Bootstrap' 3, 4 and 5 as well as their various 'Bootswatch' themes. 
An interactive widget is also provided for previewing themes in real time.", - "License": "MIT + file LICENSE", - "URL": "https://rstudio.github.io/bslib/, https://github.com/rstudio/bslib", - "BugReports": "https://github.com/rstudio/bslib/issues", - "Depends": [ - "R (>= 2.10)" - ], - "Imports": [ - "base64enc", - "cachem", - "fastmap (>= 1.1.1)", - "grDevices", - "htmltools (>= 0.5.8)", - "jquerylib (>= 0.1.3)", - "jsonlite", - "lifecycle", - "memoise (>= 2.0.1)", - "mime", - "rlang", - "sass (>= 0.4.9)" - ], - "Suggests": [ - "brand.yml", - "bsicons", - "curl", - "fontawesome", - "future", - "ggplot2", - "knitr", - "lattice", - "magrittr", - "rappdirs", - "rmarkdown (>= 2.7)", - "shiny (>= 1.11.1)", - "testthat", - "thematic", - "tools", - "utils", - "withr", - "yaml" - ], - "Config/Needs/deploy": "BH, chiflights22, colourpicker, commonmark, cpp11, cpsievert/chiflights22, cpsievert/histoslider, dplyr, DT, ggplot2, ggridges, gt, hexbin, histoslider, htmlwidgets, lattice, leaflet, lubridate, markdown, modelr, plotly, reactable, reshape2, rprojroot, rsconnect, rstudio/shiny, scales, styler, tibble", - "Config/Needs/routine": "chromote, desc, renv", - "Config/Needs/website": "brio, crosstalk, dplyr, DT, ggplot2, glue, htmlwidgets, leaflet, lorem, palmerpenguins, plotly, purrr, rprojroot, rstudio/htmltools, scales, stringr, tidyr, webshot2", - "Config/testthat/edition": "3", - "Config/testthat/parallel": "true", - "Config/testthat/start-first": "zzzz-bs-sass, fonts, zzz-precompile, theme-*, rmd-*", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "Collate": "'accordion.R' 'breakpoints.R' 'bs-current-theme.R' 'bs-dependencies.R' 'bs-global.R' 'bs-remove.R' 'bs-theme-layers.R' 'bs-theme-preset-bootswatch.R' 'bs-theme-preset-brand.R' 'bs-theme-preset-builtin.R' 'bs-theme-preset.R' 'utils.R' 'bs-theme-preview.R' 'bs-theme-update.R' 'bs-theme.R' 'bslib-package.R' 'buttons.R' 'card.R' 'deprecated.R' 'files.R' 'fill.R' 'imports.R' 'input-code-editor.R' 
'input-dark-mode.R' 'input-submit.R' 'input-switch.R' 'layout.R' 'nav-items.R' 'nav-update.R' 'navbar_options.R' 'navs-legacy.R' 'navs.R' 'onLoad.R' 'page.R' 'popover.R' 'precompiled.R' 'print.R' 'shiny-devmode.R' 'sidebar.R' 'staticimports.R' 'toast.R' 'tooltip.R' 'utils-deps.R' 'utils-shiny.R' 'utils-tags.R' 'value-box.R' 'version-default.R' 'versions.R'", - "NeedsCompilation": "no", - "Author": "Carson Sievert [aut, cre] (ORCID: ), Joe Cheng [aut], Garrick Aden-Buie [aut] (ORCID: ), Posit Software, PBC [cph, fnd], Bootstrap contributors [ctb] (Bootstrap library), Twitter, Inc [cph] (Bootstrap library), Javi Aguilar [ctb, cph] (Bootstrap colorpicker library), Thomas Park [ctb, cph] (Bootswatch library), PayPal [ctb, cph] (Bootstrap accessibility plugin)", - "Maintainer": "Carson Sievert ", - "Repository": "CRAN" - }, - "cachem": { - "Package": "cachem", - "Version": "1.1.0", - "Source": "Repository", - "Title": "Cache R Objects with Automatic Pruning", - "Description": "Key-value stores with automatic pruning. 
Caches can limit either their total size or the age of the oldest object (or both), automatically pruning objects to maintain the constraints.", - "Authors@R": "c( person(\"Winston\", \"Chang\", , \"winston@posit.co\", c(\"aut\", \"cre\")), person(family = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")))", - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "ByteCompile": "true", - "URL": "https://cachem.r-lib.org/, https://github.com/r-lib/cachem", - "Imports": [ - "rlang", - "fastmap (>= 1.2.0)" - ], - "Suggests": [ - "testthat" - ], - "RoxygenNote": "7.2.3", - "Config/Needs/routine": "lobstr", - "Config/Needs/website": "pkgdown", - "NeedsCompilation": "yes", - "Author": "Winston Chang [aut, cre], Posit Software, PBC [cph, fnd]", - "Maintainer": "Winston Chang ", - "Repository": "CRAN" - }, "class": { "Package": "class", "Version": "7.3-23", @@ -597,43 +410,6 @@ "Maintainer": "Matthew Lincoln ", "Repository": "CRAN" }, - "conflicted": { - "Package": "conflicted", - "Version": "1.2.0", - "Source": "Repository", - "Title": "An Alternative Conflict Resolution Strategy", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@rstudio.com\", role = c(\"aut\", \"cre\")), person(\"RStudio\", role = c(\"cph\", \"fnd\")) )", - "Description": "R's default conflict management system gives the most recently loaded package precedence. This can make it hard to detect conflicts, particularly when they arise because a package update creates ambiguity that did not previously exist. 
'conflicted' takes a different approach, making every conflict an error and forcing you to choose which function to use.", - "License": "MIT + file LICENSE", - "URL": "https://conflicted.r-lib.org/, https://github.com/r-lib/conflicted", - "BugReports": "https://github.com/r-lib/conflicted/issues", - "Depends": [ - "R (>= 3.2)" - ], - "Imports": [ - "cli (>= 3.4.0)", - "memoise", - "rlang (>= 1.0.0)" - ], - "Suggests": [ - "callr", - "covr", - "dplyr", - "Matrix", - "methods", - "pkgload", - "testthat (>= 3.0.0)", - "withr" - ], - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut, cre], RStudio [cph, fnd]", - "Maintainer": "Hadley Wickham ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, "cpp11": { "Package": "cpp11", "Version": "0.5.3", @@ -679,7 +455,7 @@ "NeedsCompilation": "no", "Author": "Davis Vaughan [aut, cre] (ORCID: ), Jim Hester [aut] (ORCID: ), Romain François [aut] (ORCID: ), Benjamin Kietzman [ctb], Posit Software, PBC [cph, fnd]", "Maintainer": "Davis Vaughan ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "crayon": { "Package": "crayon", @@ -745,78 +521,6 @@ "Maintainer": "Jeroen Ooms ", "Repository": "CRAN" }, - "digest": { - "Package": "digest", - "Version": "0.6.39", - "Source": "Repository", - "Authors@R": "c(person(\"Dirk\", \"Eddelbuettel\", role = c(\"aut\", \"cre\"), email = \"edd@debian.org\", comment = c(ORCID = \"0000-0001-6419-907X\")), person(\"Antoine\", \"Lucas\", role=\"ctb\", comment = c(ORCID = \"0000-0002-8059-9767\")), person(\"Jarek\", \"Tuszynski\", role=\"ctb\"), person(\"Henrik\", \"Bengtsson\", role=\"ctb\", comment = c(ORCID = \"0000-0002-7579-5165\")), person(\"Simon\", \"Urbanek\", role=\"ctb\", comment = c(ORCID = \"0000-0003-2297-1732\")), person(\"Mario\", \"Frasca\", role=\"ctb\"), person(\"Bryan\", \"Lewis\", 
role=\"ctb\"), person(\"Murray\", \"Stokely\", role=\"ctb\"), person(\"Hannes\", \"Muehleisen\", role=\"ctb\", comment = c(ORCID = \"0000-0001-8552-0029\")), person(\"Duncan\", \"Murdoch\", role=\"ctb\"), person(\"Jim\", \"Hester\", role=\"ctb\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Wush\", \"Wu\", role=\"ctb\", comment = c(ORCID = \"0000-0001-5180-0567\")), person(\"Qiang\", \"Kou\", role=\"ctb\", comment = c(ORCID = \"0000-0001-6786-5453\")), person(\"Thierry\", \"Onkelinx\", role=\"ctb\", comment = c(ORCID = \"0000-0001-8804-4216\")), person(\"Michel\", \"Lang\", role=\"ctb\", comment = c(ORCID = \"0000-0001-9754-0393\")), person(\"Viliam\", \"Simko\", role=\"ctb\"), person(\"Kurt\", \"Hornik\", role=\"ctb\", comment = c(ORCID = \"0000-0003-4198-9911\")), person(\"Radford\", \"Neal\", role=\"ctb\", comment = c(ORCID = \"0000-0002-2473-3407\")), person(\"Kendon\", \"Bell\", role=\"ctb\", comment = c(ORCID = \"0000-0002-9093-8312\")), person(\"Matthew\", \"de Queljoe\", role=\"ctb\"), person(\"Dmitry\", \"Selivanov\", role=\"ctb\", comment = c(ORCID = \"0000-0003-0492-6647\")), person(\"Ion\", \"Suruceanu\", role=\"ctb\", comment = c(ORCID = \"0009-0005-6446-4909\")), person(\"Bill\", \"Denney\", role=\"ctb\", comment = c(ORCID = \"0000-0002-5759-428X\")), person(\"Dirk\", \"Schumacher\", role=\"ctb\"), person(\"András\", \"Svraka\", role=\"ctb\", comment = c(ORCID = \"0009-0008-8480-1329\")), person(\"Sergey\", \"Fedorov\", role=\"ctb\", comment = c(ORCID = \"0000-0002-5970-7233\")), person(\"Will\", \"Landau\", role=\"ctb\", comment = c(ORCID = \"0000-0003-1878-3253\")), person(\"Floris\", \"Vanderhaeghe\", role=\"ctb\", comment = c(ORCID = \"0000-0002-6378-6229\")), person(\"Kevin\", \"Tappe\", role=\"ctb\"), person(\"Harris\", \"McGehee\", role=\"ctb\"), person(\"Tim\", \"Mastny\", role=\"ctb\"), person(\"Aaron\", \"Peikert\", role=\"ctb\", comment = c(ORCID = \"0000-0001-7813-818X\")), person(\"Mark\", \"van der Loo\", role=\"ctb\", comment 
= c(ORCID = \"0000-0002-9807-4686\")), person(\"Chris\", \"Muir\", role=\"ctb\", comment = c(ORCID = \"0000-0003-2555-3878\")), person(\"Moritz\", \"Beller\", role=\"ctb\", comment = c(ORCID = \"0000-0003-4852-0526\")), person(\"Sebastian\", \"Campbell\", role=\"ctb\", comment = c(ORCID = \"0009-0000-5948-4503\")), person(\"Winston\", \"Chang\", role=\"ctb\", comment = c(ORCID = \"0000-0002-1576-2126\")), person(\"Dean\", \"Attali\", role=\"ctb\", comment = c(ORCID = \"0000-0002-5645-3493\")), person(\"Michael\", \"Chirico\", role=\"ctb\", comment = c(ORCID = \"0000-0003-0787-087X\")), person(\"Kevin\", \"Ushey\", role=\"ctb\", comment = c(ORCID = \"0000-0003-2880-7407\")), person(\"Carl\", \"Pearson\", role=\"ctb\", comment = c(ORCID = \"0000-0003-0701-7860\")))", - "Date": "2025-11-19", - "Title": "Create Compact Hash Digests of R Objects", - "Description": "Implementation of a function 'digest()' for the creation of hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash', 'murmurhash', 'spookyhash', 'blake3', 'crc32c', 'xxh3_64', and 'xxh3_128' algorithms) permitting easy comparison of R language objects, as well as functions such as 'hmac()' to create hash-based message authentication code. 
Please note that this package is not meant to be deployed for cryptographic purposes for which more comprehensive (and widely tested) libraries such as 'OpenSSL' should be used.", - "URL": "https://github.com/eddelbuettel/digest, https://eddelbuettel.github.io/digest/, https://dirk.eddelbuettel.com/code/digest.html", - "BugReports": "https://github.com/eddelbuettel/digest/issues", - "Depends": [ - "R (>= 3.3.0)" - ], - "Imports": [ - "utils" - ], - "License": "GPL (>= 2)", - "Suggests": [ - "tinytest", - "simplermarkdown", - "rbenchmark" - ], - "VignetteBuilder": "simplermarkdown", - "Encoding": "UTF-8", - "NeedsCompilation": "yes", - "Author": "Dirk Eddelbuettel [aut, cre] (ORCID: ), Antoine Lucas [ctb] (ORCID: ), Jarek Tuszynski [ctb], Henrik Bengtsson [ctb] (ORCID: ), Simon Urbanek [ctb] (ORCID: ), Mario Frasca [ctb], Bryan Lewis [ctb], Murray Stokely [ctb], Hannes Muehleisen [ctb] (ORCID: ), Duncan Murdoch [ctb], Jim Hester [ctb] (ORCID: ), Wush Wu [ctb] (ORCID: ), Qiang Kou [ctb] (ORCID: ), Thierry Onkelinx [ctb] (ORCID: ), Michel Lang [ctb] (ORCID: ), Viliam Simko [ctb], Kurt Hornik [ctb] (ORCID: ), Radford Neal [ctb] (ORCID: ), Kendon Bell [ctb] (ORCID: ), Matthew de Queljoe [ctb], Dmitry Selivanov [ctb] (ORCID: ), Ion Suruceanu [ctb] (ORCID: ), Bill Denney [ctb] (ORCID: ), Dirk Schumacher [ctb], András Svraka [ctb] (ORCID: ), Sergey Fedorov [ctb] (ORCID: ), Will Landau [ctb] (ORCID: ), Floris Vanderhaeghe [ctb] (ORCID: ), Kevin Tappe [ctb], Harris McGehee [ctb], Tim Mastny [ctb], Aaron Peikert [ctb] (ORCID: ), Mark van der Loo [ctb] (ORCID: ), Chris Muir [ctb] (ORCID: ), Moritz Beller [ctb] (ORCID: ), Sebastian Campbell [ctb] (ORCID: ), Winston Chang [ctb] (ORCID: ), Dean Attali [ctb] (ORCID: ), Michael Chirico [ctb] (ORCID: ), Kevin Ushey [ctb] (ORCID: ), Carl Pearson [ctb] (ORCID: )", - "Maintainer": "Dirk Eddelbuettel ", - "Repository": "CRAN" - }, - "distributional": { - "Package": "distributional", - "Version": "0.6.0", - "Source": "Repository", - 
"Title": "Vectorised Probability Distributions", - "Authors@R": "c(person(given = \"Mitchell\", family = \"O'Hara-Wild\", role = c(\"aut\", \"cre\"), email = \"mail@mitchelloharawild.com\", comment = c(ORCID = \"0000-0001-6729-7695\")), person(given = \"Matthew\", family = \"Kay\", role = c(\"aut\"), comment = c(ORCID = \"0000-0001-9446-0419\")), person(given = \"Alex\", family = \"Hayes\", role = c(\"aut\"), comment = c(ORCID = \"0000-0002-4985-5160\")), person(given = \"Rob\", family = \"Hyndman\", role = c(\"aut\"), comment = c(ORCID = \"0000-0002-2140-5352\")), person(given = \"Earo\", family = \"Wang\", role = c(\"ctb\"), comment = c(ORCID = \"0000-0001-6448-5260\")), person(given = \"Vencislav\", family = \"Popov\", role = c(\"ctb\"), comment = c(ORCID = \"0000-0002-8073-4199\")))", - "Description": "Vectorised distribution objects with tools for manipulating, visualising, and using probability distributions. Designed to allow model prediction outputs to return distributions rather than their parameters, allowing users to directly interact with predictive distributions in a data-oriented workflow. 
In addition to providing generic replacements for p/d/q/r functions, other useful statistics can be computed including means, variances, intervals, and highest density regions.", - "License": "GPL-3", - "Depends": [ - "R (>= 4.0.0)" - ], - "Imports": [ - "vctrs (>= 0.3.0)", - "rlang (>= 0.4.5)", - "generics", - "stats", - "numDeriv", - "utils", - "lifecycle", - "pillar" - ], - "Suggests": [ - "testthat (>= 2.1.0)", - "covr", - "mvtnorm", - "actuar (>= 2.0.0)", - "evd", - "ggdist", - "ggplot2", - "gk", - "pkgdown" - ], - "RdMacros": "lifecycle", - "URL": "https://pkg.mitchelloharawild.com/distributional/, https://github.com/mitchelloharawild/distributional", - "BugReports": "https://github.com/mitchelloharawild/distributional/issues", - "Encoding": "UTF-8", - "Language": "en-GB", - "RoxygenNote": "7.3.3", - "NeedsCompilation": "no", - "Author": "Mitchell O'Hara-Wild [aut, cre] (ORCID: ), Matthew Kay [aut] (ORCID: ), Alex Hayes [aut] (ORCID: ), Rob Hyndman [aut] (ORCID: ), Earo Wang [ctb] (ORCID: ), Vencislav Popov [ctb] (ORCID: )", - "Maintainer": "Mitchell O'Hara-Wild ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, "dplyr": { "Package": "dplyr", "Version": "1.2.0", @@ -912,31 +616,27 @@ "Repository": "CRAN", "Encoding": "UTF-8" }, - "evaluate": { - "Package": "evaluate", - "Version": "1.0.5", + "generics": { + "Package": "generics", + "Version": "0.1.4", "Source": "Repository", - "Type": "Package", - "Title": "Parsing and Evaluation Tools that Provide More Details than the Default", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\")), person(\"Yihui\", \"Xie\", role = \"aut\", comment = c(ORCID = \"0000-0003-0645-5666\")), person(\"Michael\", \"Lawrence\", role = \"ctb\"), person(\"Thomas\", \"Kluyver\", role = \"ctb\"), person(\"Jeroen\", \"Ooms\", role = \"ctb\"), person(\"Barret\", \"Schloerke\", role = \"ctb\"), person(\"Adam\", \"Ryczkowski\", role = \"ctb\"), person(\"Hiroaki\", 
\"Yutani\", role = \"ctb\"), person(\"Michel\", \"Lang\", role = \"ctb\"), person(\"Karolis\", \"Koncevičius\", role = \"ctb\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "Parsing and evaluation tools that make it easy to recreate the command line behaviour of R.", + "Title": "Common S3 Generics not Provided by Base R Methods Related to Model Fitting", + "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Max\", \"Kuhn\", , \"max@posit.co\", role = \"aut\"), person(\"Davis\", \"Vaughan\", , \"davis@posit.co\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"https://ror.org/03wc8by49\")) )", + "Description": "In order to reduce potential package dependencies and conflicts, generics provides a number of commonly used S3 generics.", "License": "MIT + file LICENSE", - "URL": "https://evaluate.r-lib.org/, https://github.com/r-lib/evaluate", - "BugReports": "https://github.com/r-lib/evaluate/issues", + "URL": "https://generics.r-lib.org, https://github.com/r-lib/generics", + "BugReports": "https://github.com/r-lib/generics/issues", "Depends": [ - "R (>= 3.6.0)" + "R (>= 3.6)" + ], + "Imports": [ + "methods" ], "Suggests": [ - "callr", "covr", - "ggplot2 (>= 3.3.6)", - "lattice", - "methods", "pkgload", - "ragg (>= 1.4.0)", - "rlang (>= 1.1.5)", - "knitr", "testthat (>= 3.0.0)", + "tibble", "withr" ], "Config/Needs/website": "tidyverse/tidytemplate", @@ -944,582 +644,77 @@ "Encoding": "UTF-8", "RoxygenNote": "7.3.2", "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut, cre], Yihui Xie [aut] (ORCID: ), Michael Lawrence [ctb], Thomas Kluyver [ctb], Jeroen Ooms [ctb], Barret Schloerke [ctb], Adam Ryczkowski [ctb], Hiroaki Yutani [ctb], Michel Lang [ctb], Karolis Koncevičius [ctb], Posit Software, PBC [cph, fnd]", + "Author": "Hadley Wickham [aut, cre] (ORCID: ), Max Kuhn [aut], 
Davis Vaughan [aut], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, - "extrafont": { - "Package": "extrafont", - "Version": "0.19", - "Source": "GitHub", - "Title": "Tools for Using Fonts", - "Author": "Winston Chang ", - "Maintainer": "Winston Chang ", - "Description": "Tools to using fonts other than the standard PostScript fonts. This package makes it easy to use system TrueType fonts and with PDF or PostScript output files, and with bitmap output files in Windows. extrafont can also be used with fonts packaged specifically to be used with, such as the fontcm package, which has Computer Modern PostScript fonts with math symbols.", + "glue": { + "Package": "glue", + "Version": "1.8.0", + "Source": "Repository", + "Title": "Interpreted String Literals", + "Authors@R": "c( person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", + "Description": "An implementation of interpreted string literals, inspired by Python's Literal String Interpolation and Docstrings and Julia's Triple-Quoted String Literals .", + "License": "MIT + file LICENSE", + "URL": "https://glue.tidyverse.org/, https://github.com/tidyverse/glue", + "BugReports": "https://github.com/tidyverse/glue/issues", "Depends": [ - "R (>= 2.15)" + "R (>= 3.6)" ], "Imports": [ - "extrafontdb", - "grDevices", - "utils", - "Rttf2pt1" - ], - "Suggests": [ - "fontcm" - ], - "License": "GPL-2", - "URL": "https://github.com/wch/extrafont", - "RoxygenNote": "7.1.2", - "RemoteType": "github", - "RemoteHost": "api.github.com", - "RemoteUsername": "wch", - "RemoteRepo": "extrafont", - "RemoteRef": "master", - "RemoteSha": "028fc67103b14318410ad84fa182acc3975b54f2", - "Remotes": "Rttf2pt1=github::wch/Rttf2pt1" - }, - "extrafontdb": { - "Package": 
"extrafontdb", - "Version": "1.1", - "Source": "Repository", - "Type": "Package", - "Title": "Holding the Database for the 'extrafont' Package", - "Date": "2025-09-25", - "Depends": [ - "R (>= 2.14)" + "methods" ], "Suggests": [ - "testthat (>= 3.0.0)" + "crayon", + "DBI (>= 1.2.0)", + "dplyr", + "knitr", + "magrittr", + "rlang", + "rmarkdown", + "RSQLite", + "testthat (>= 3.2.0)", + "vctrs (>= 0.3.0)", + "waldo (>= 0.5.3)", + "withr" ], - "Authors@R": "c( person(given = \"Winston\", family= \"Chang\", role = c(\"aut\")), person(given = \"Frederic\", family= \"Bertrand\", role = c(\"cre\"), email = \"frederic.bertrand@lecnam.net\", comment = c(ORCID = \"0000-0002-0837-8281\")) )", - "Author": "Winston Chang [aut], Frederic Bertrand [cre] (ORCID: )", - "Maintainer": "Frederic Bertrand ", - "Description": "It is meant to be used with the 'extrafont' package. The 'extrafont' package contains the code to install and use fonts, while the 'extrafontdb' package contains the font database.", - "License": "GPL-2", - "LazyLoad": "yes", - "NeedsCompilation": "no", - "URL": "https://github.com/fbertran/extrafontdb", - "BugReports": "https://github.com/fbertran/extrafontdb/issues", - "RoxygenNote": "7.3.3", - "Encoding": "UTF-8", + "VignetteBuilder": "knitr", + "ByteCompile": "true", + "Config/Needs/website": "bench, forcats, ggbeeswarm, ggplot2, R.utils, rprintf, tidyr, tidyverse/tidytemplate", "Config/testthat/edition": "3", - "Collate": "'extrafontdb.r'", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Encoding": "UTF-8", + "RoxygenNote": "7.3.2", + "NeedsCompilation": "yes", + "Author": "Jim Hester [aut] (), Jennifer Bryan [aut, cre] (), Posit Software, PBC [cph, fnd]", + "Maintainer": "Jennifer Bryan ", + "Repository": "CRAN" }, - "farver": { - "Package": "farver", - "Version": "2.1.2", + "hms": { + "Package": "hms", + "Version": "1.1.4", "Source": "Repository", - "Type": "Package", - "Title": "High Performance Colour Space Manipulation", - "Authors@R": 
"c( person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Berendea\", \"Nicolae\", role = \"aut\", comment = \"Author of the ColorSpace C++ library\"), person(\"Romain\", \"François\", , \"romain@purrple.cat\", role = \"aut\", comment = c(ORCID = \"0000-0002-2444-4226\")), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "The encoding of colour can be handled in many different ways, using different colour spaces. As different colour spaces have different uses, efficient conversion between these representations are important. The 'farver' package provides a set of functions that gives access to very fast colour space conversion and comparisons implemented in C++, and offers speed improvements over the 'convertColor' function in the 'grDevices' package.", + "Title": "Pretty Time of Day", + "Date": "2025-10-11", + "Authors@R": "c( person(\"Kirill\", \"Müller\", , \"kirill@cynkra.com\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-1416-3412\")), person(\"R Consortium\", role = \"fnd\"), person(\"Posit Software, PBC\", role = \"fnd\", comment = c(ROR = \"03wc8by49\")) )", + "Description": "Implements an S3 class for storing and formatting time-of-day values, based on the 'difftime' class.", "License": "MIT + file LICENSE", - "URL": "https://farver.data-imaginist.com, https://github.com/thomasp85/farver", - "BugReports": "https://github.com/thomasp85/farver/issues", + "URL": "https://hms.tidyverse.org/, https://github.com/tidyverse/hms", + "BugReports": "https://github.com/tidyverse/hms/issues", + "Imports": [ + "cli", + "lifecycle", + "methods", + "pkgconfig", + "rlang (>= 1.0.2)", + "vctrs (>= 0.3.8)" + ], "Suggests": [ - "covr", + "crayon", + "lubridate", + "pillar (>= 1.1.0)", "testthat (>= 3.0.0)" ], - "Config/testthat/edition": "3", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.1", - "NeedsCompilation": "yes", - "Author": "Thomas 
Lin Pedersen [cre, aut] (), Berendea Nicolae [aut] (Author of the ColorSpace C++ library), Romain François [aut] (), Posit, PBC [cph, fnd]", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "fastmap": { - "Package": "fastmap", - "Version": "1.2.0", - "Source": "Repository", - "Title": "Fast Data Structures", - "Authors@R": "c( person(\"Winston\", \"Chang\", email = \"winston@posit.co\", role = c(\"aut\", \"cre\")), person(given = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(given = \"Tessil\", role = \"cph\", comment = \"hopscotch_map library\") )", - "Description": "Fast implementation of data structures, including a key-value store, stack, and queue. Environments are commonly used as key-value stores in R, but every time a new key is used, it is added to R's global symbol table, causing a small amount of memory leakage. This can be problematic in cases where many different keys are used. Fastmap avoids this memory leak issue by implementing the map using data structures in C++.", - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", - "Suggests": [ - "testthat (>= 2.1.1)" - ], - "URL": "https://r-lib.github.io/fastmap/, https://github.com/r-lib/fastmap", - "BugReports": "https://github.com/r-lib/fastmap/issues", - "NeedsCompilation": "yes", - "Author": "Winston Chang [aut, cre], Posit Software, PBC [cph, fnd], Tessil [cph] (hopscotch_map library)", - "Maintainer": "Winston Chang ", - "Repository": "CRAN" - }, - "fontawesome": { - "Package": "fontawesome", - "Version": "0.5.3", - "Source": "Repository", - "Type": "Package", - "Title": "Easily Work with 'Font Awesome' Icons", - "Description": "Easily and flexibly insert 'Font Awesome' icons into 'R Markdown' documents and 'Shiny' apps. These icons can be inserted into HTML content through inline 'SVG' tags or 'i' tags. 
There is also a utility function for exporting 'Font Awesome' icons as 'PNG' images for those situations where raster graphics are needed.", - "Authors@R": "c( person(\"Richard\", \"Iannone\", , \"rich@posit.co\", c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-3925-190X\")), person(\"Christophe\", \"Dervieux\", , \"cderv@posit.co\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4474-2498\")), person(\"Winston\", \"Chang\", , \"winston@posit.co\", role = \"ctb\"), person(\"Dave\", \"Gandy\", role = c(\"ctb\", \"cph\"), comment = \"Font-Awesome font\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "License": "MIT + file LICENSE", - "URL": "https://github.com/rstudio/fontawesome, https://rstudio.github.io/fontawesome/", - "BugReports": "https://github.com/rstudio/fontawesome/issues", - "Encoding": "UTF-8", - "ByteCompile": "true", - "RoxygenNote": "7.3.2", - "Depends": [ - "R (>= 3.3.0)" - ], - "Imports": [ - "rlang (>= 1.0.6)", - "htmltools (>= 0.5.1.1)" - ], - "Suggests": [ - "covr", - "dplyr (>= 1.0.8)", - "gt (>= 0.9.0)", - "knitr (>= 1.31)", - "testthat (>= 3.0.0)", - "rsvg" - ], - "Config/testthat/edition": "3", - "NeedsCompilation": "no", - "Author": "Richard Iannone [aut, cre] (), Christophe Dervieux [ctb] (), Winston Chang [ctb], Dave Gandy [ctb, cph] (Font-Awesome font), Posit Software, PBC [cph, fnd]", - "Maintainer": "Richard Iannone ", - "Repository": "CRAN" - }, - "fs": { - "Package": "fs", - "Version": "1.6.6", - "Source": "Repository", - "Title": "Cross-Platform File System Operations Based on 'libuv'", - "Authors@R": "c( person(\"Jim\", \"Hester\", role = \"aut\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Gábor\", \"Csárdi\", , \"csardi.gabor@gmail.com\", role = c(\"aut\", \"cre\")), person(\"libuv project contributors\", role = \"cph\", comment = \"libuv library\"), person(\"Joyent, Inc. 
and other Node contributors\", role = \"cph\", comment = \"libuv library\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "A cross-platform interface to file system operations, built on top of the 'libuv' C library.", - "License": "MIT + file LICENSE", - "URL": "https://fs.r-lib.org, https://github.com/r-lib/fs", - "BugReports": "https://github.com/r-lib/fs/issues", - "Depends": [ - "R (>= 3.6)" - ], - "Imports": [ - "methods" - ], - "Suggests": [ - "covr", - "crayon", - "knitr", - "pillar (>= 1.0.0)", - "rmarkdown", - "spelling", - "testthat (>= 3.0.0)", - "tibble (>= 1.1.0)", - "vctrs (>= 0.3.0)", - "withr" - ], - "VignetteBuilder": "knitr", - "ByteCompile": "true", - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Copyright": "file COPYRIGHTS", - "Encoding": "UTF-8", - "Language": "en-US", - "RoxygenNote": "7.2.3", - "SystemRequirements": "GNU make", - "NeedsCompilation": "yes", - "Author": "Jim Hester [aut], Hadley Wickham [aut], Gábor Csárdi [aut, cre], libuv project contributors [cph] (libuv library), Joyent, Inc. 
and other Node contributors [cph] (libuv library), Posit Software, PBC [cph, fnd]", - "Maintainer": "Gábor Csárdi ", - "Repository": "CRAN" - }, - "generics": { - "Package": "generics", - "Version": "0.1.4", - "Source": "Repository", - "Title": "Common S3 Generics not Provided by Base R Methods Related to Model Fitting", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Max\", \"Kuhn\", , \"max@posit.co\", role = \"aut\"), person(\"Davis\", \"Vaughan\", , \"davis@posit.co\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"https://ror.org/03wc8by49\")) )", - "Description": "In order to reduce potential package dependencies and conflicts, generics provides a number of commonly used S3 generics.", - "License": "MIT + file LICENSE", - "URL": "https://generics.r-lib.org, https://github.com/r-lib/generics", - "BugReports": "https://github.com/r-lib/generics/issues", - "Depends": [ - "R (>= 3.6)" - ], - "Imports": [ - "methods" - ], - "Suggests": [ - "covr", - "pkgload", - "testthat (>= 3.0.0)", - "tibble", - "withr" - ], - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut, cre] (ORCID: ), Max Kuhn [aut], Davis Vaughan [aut], Posit Software, PBC [cph, fnd] (ROR: )", - "Maintainer": "Hadley Wickham ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "ggdist": { - "Package": "ggdist", - "Version": "3.3.3", - "Source": "Repository", - "Title": "Visualizations of Distributions and Uncertainty", - "Date": "2025-04-20", - "Authors@R": "c( person(\"Matthew\", \"Kay\", role = c(\"aut\", \"cre\"), email = \"mjskay@northwestern.edu\"), person(\"Brenton M.\", \"Wiernik\", role = \"ctb\", email = \"brenton@wiernik.org\") )", - "Maintainer": "Matthew Kay ", - 
"Description": "Provides primitives for visualizing distributions using 'ggplot2' that are particularly tuned for visualizing uncertainty in either a frequentist or Bayesian mode. Both analytical distributions (such as frequentist confidence distributions or Bayesian priors) and distributions represented as samples (such as bootstrap distributions or Bayesian posterior samples) are easily visualized. Visualization primitives include but are not limited to: points with multiple uncertainty intervals, eye plots (Spiegelhalter D., 1999) , density plots, gradient plots, dot plots (Wilkinson L., 1999) , quantile dot plots (Kay M., Kola T., Hullman J., Munson S., 2016) , complementary cumulative distribution function barplots (Fernandes M., Walls L., Munson S., Hullman J., Kay M., 2018) , and fit curves with multiple uncertainty ribbons.", - "Depends": [ - "R (>= 4.0.0)" - ], - "Imports": [ - "grid", - "ggplot2 (>= 3.5.0)", - "scales", - "rlang (>= 0.3.0)", - "cli", - "tibble", - "vctrs", - "withr", - "glue", - "gtable", - "distributional (>= 0.3.2)", - "numDeriv", - "quadprog", - "Rcpp" - ], - "Suggests": [ - "tidyselect", - "dplyr (>= 1.0.0)", - "fda", - "posterior (>= 1.4.0)", - "beeswarm (>= 0.4.0)", - "rmarkdown", - "knitr", - "testthat (>= 3.0.0)", - "vdiffr (>= 1.0.0)", - "svglite (>= 2.1.0)", - "fontquiver", - "sysfonts", - "showtext", - "mvtnorm", - "covr", - "broom (>= 0.5.6)", - "patchwork", - "tidyr (>= 1.0.0)", - "ragg (>= 1.3.0)", - "pkgdown" - ], - "License": "GPL (>= 3)", - "Language": "en-US", - "BugReports": "https://github.com/mjskay/ggdist/issues", - "URL": "https://mjskay.github.io/ggdist/, https://github.com/mjskay/ggdist/", - "VignetteBuilder": "knitr", - "RoxygenNote": "7.3.2", - "LazyData": "true", - "Encoding": "UTF-8", - "Collate": "\"ggdist-package.R\" \"util.R\" \"compat.R\" \"rd.R\" \"RcppExports.R\" \"abstract_geom.R\" \"abstract_stat.R\" \"abstract_stat_slabinterval.R\" \"auto_partial.R\" \"binning_methods.R\" \"bounder.R\" 
\"curve_interval.R\" \"cut_cdf_qi.R\" \"data.R\" \"density.R\" \"distributions.R\" \"draw_key_slabinterval.R\" \"geom.R\" \"geom_slabinterval.R\" \"geom_dotsinterval.R\" \"geom_blur_dots.R\" \"geom_interval.R\" \"geom_lineribbon.R\" \"geom_pointinterval.R\" \"geom_slab.R\" \"geom_spike.R\" \"geom_swarm.R\" \"guide_rampbar.R\" \"interval_widths.R\" \"lkjcorr_marginal.R\" \"parse_dist.R\" \"partial_colour_ramp.R\" \"point_interval.R\" \"position_dodgejust.R\" \"pr.R\" \"rd_density.R\" \"rd_dotsinterval.R\" \"rd_slabinterval.R\" \"rd_spike.R\" \"rd_lineribbon.R\" \"scale_colour_ramp.R\" \"scale_thickness.R\" \"scale_side_mirrored.R\" \"scale_.R\" \"smooth.R\" \"stat.R\" \"stat_slabinterval.R\" \"stat_dotsinterval.R\" \"stat_mcse_dots.R\" \"stat_pointinterval.R\" \"stat_interval.R\" \"stat_lineribbon.R\" \"stat_spike.R\" \"student_t.R\" \"subguide.R\" \"subscale.R\" \"testthat.R\" \"theme_ggdist.R\" \"thickness.R\" \"tidy_format_translators.R\" \"weighted_ecdf.R\" \"weighted_hist.R\" \"weighted_quantile.R\" \"deprecated.R\"", - "Config/testthat/edition": "3", - "LinkingTo": [ - "Rcpp" - ], - "NeedsCompilation": "yes", - "Author": "Matthew Kay [aut, cre], Brenton M. 
Wiernik [ctb]", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "ggplot2": { - "Package": "ggplot2", - "Version": "4.0.2", - "Source": "Repository", - "Title": "Create Elegant Data Visualisations Using the Grammar of Graphics", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Winston\", \"Chang\", role = \"aut\", comment = c(ORCID = \"0000-0002-1576-2126\")), person(\"Lionel\", \"Henry\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Kohske\", \"Takahashi\", role = \"aut\"), person(\"Claus\", \"Wilke\", role = \"aut\", comment = c(ORCID = \"0000-0002-7470-9261\")), person(\"Kara\", \"Woo\", role = \"aut\", comment = c(ORCID = \"0000-0002-5125-4188\")), person(\"Hiroaki\", \"Yutani\", role = \"aut\", comment = c(ORCID = \"0000-0002-3385-7233\")), person(\"Dewey\", \"Dunnington\", role = \"aut\", comment = c(ORCID = \"0000-0002-9415-4582\")), person(\"Teun\", \"van den Brand\", role = \"aut\", comment = c(ORCID = \"0000-0002-9335-7468\")), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", - "Description": "A system for 'declaratively' creating graphics, based on \"The Grammar of Graphics\". 
You provide the data, tell 'ggplot2' how to map variables to aesthetics, what graphical primitives to use, and it takes care of the details.", - "License": "MIT + file LICENSE", - "URL": "https://ggplot2.tidyverse.org, https://github.com/tidyverse/ggplot2", - "BugReports": "https://github.com/tidyverse/ggplot2/issues", - "Depends": [ - "R (>= 4.1)" - ], - "Imports": [ - "cli", - "grDevices", - "grid", - "gtable (>= 0.3.6)", - "isoband", - "lifecycle (> 1.0.1)", - "rlang (>= 1.1.0)", - "S7", - "scales (>= 1.4.0)", - "stats", - "vctrs (>= 0.6.0)", - "withr (>= 2.5.0)" - ], - "Suggests": [ - "broom", - "covr", - "dplyr", - "ggplot2movies", - "hexbin", - "Hmisc", - "hms", - "knitr", - "mapproj", - "maps", - "MASS", - "mgcv", - "multcomp", - "munsell", - "nlme", - "profvis", - "quantreg", - "quarto", - "ragg (>= 1.2.6)", - "RColorBrewer", - "roxygen2", - "rpart", - "sf (>= 0.7-3)", - "svglite (>= 2.1.2)", - "testthat (>= 3.1.5)", - "tibble", - "vdiffr (>= 1.0.6)", - "xml2" - ], - "Enhances": [ - "sp" - ], - "VignetteBuilder": "quarto", - "Config/Needs/website": "ggtext, tidyr, forcats, tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Config/usethis/last-upkeep": "2025-04-23", - "Encoding": "UTF-8", - "LazyData": "true", - "RoxygenNote": "7.3.3", - "Collate": "'ggproto.R' 'ggplot-global.R' 'aaa-.R' 'aes-colour-fill-alpha.R' 'aes-evaluation.R' 'aes-group-order.R' 'aes-linetype-size-shape.R' 'aes-position.R' 'all-classes.R' 'compat-plyr.R' 'utilities.R' 'aes.R' 'annotation-borders.R' 'utilities-checks.R' 'legend-draw.R' 'geom-.R' 'annotation-custom.R' 'annotation-logticks.R' 'scale-type.R' 'layer.R' 'make-constructor.R' 'geom-polygon.R' 'geom-map.R' 'annotation-map.R' 'geom-raster.R' 'annotation-raster.R' 'annotation.R' 'autolayer.R' 'autoplot.R' 'axis-secondary.R' 'backports.R' 'bench.R' 'bin.R' 'coord-.R' 'coord-cartesian-.R' 'coord-fixed.R' 'coord-flip.R' 'coord-map.R' 'coord-munch.R' 'coord-polar.R' 'coord-quickmap.R' 'coord-radial.R' 'coord-sf.R' 
'coord-transform.R' 'data.R' 'docs_layer.R' 'facet-.R' 'facet-grid-.R' 'facet-null.R' 'facet-wrap.R' 'fortify-map.R' 'fortify-models.R' 'fortify-spatial.R' 'fortify.R' 'stat-.R' 'geom-abline.R' 'geom-rect.R' 'geom-bar.R' 'geom-tile.R' 'geom-bin2d.R' 'geom-blank.R' 'geom-boxplot.R' 'geom-col.R' 'geom-path.R' 'geom-contour.R' 'geom-point.R' 'geom-count.R' 'geom-crossbar.R' 'geom-segment.R' 'geom-curve.R' 'geom-defaults.R' 'geom-ribbon.R' 'geom-density.R' 'geom-density2d.R' 'geom-dotplot.R' 'geom-errorbar.R' 'geom-freqpoly.R' 'geom-function.R' 'geom-hex.R' 'geom-histogram.R' 'geom-hline.R' 'geom-jitter.R' 'geom-label.R' 'geom-linerange.R' 'geom-pointrange.R' 'geom-quantile.R' 'geom-rug.R' 'geom-sf.R' 'geom-smooth.R' 'geom-spoke.R' 'geom-text.R' 'geom-violin.R' 'geom-vline.R' 'ggplot2-package.R' 'grob-absolute.R' 'grob-dotstack.R' 'grob-null.R' 'grouping.R' 'properties.R' 'margins.R' 'theme-elements.R' 'guide-.R' 'guide-axis.R' 'guide-axis-logticks.R' 'guide-axis-stack.R' 'guide-axis-theta.R' 'guide-legend.R' 'guide-bins.R' 'guide-colorbar.R' 'guide-colorsteps.R' 'guide-custom.R' 'guide-none.R' 'guide-old.R' 'guides-.R' 'guides-grid.R' 'hexbin.R' 'import-standalone-obj-type.R' 'import-standalone-types-check.R' 'labeller.R' 'labels.R' 'layer-sf.R' 'layout.R' 'limits.R' 'performance.R' 'plot-build.R' 'plot-construction.R' 'plot-last.R' 'plot.R' 'position-.R' 'position-collide.R' 'position-dodge.R' 'position-dodge2.R' 'position-identity.R' 'position-jitter.R' 'position-jitterdodge.R' 'position-nudge.R' 'position-stack.R' 'quick-plot.R' 'reshape-add-margins.R' 'save.R' 'scale-.R' 'scale-alpha.R' 'scale-binned.R' 'scale-brewer.R' 'scale-colour.R' 'scale-continuous.R' 'scale-date.R' 'scale-discrete-.R' 'scale-expansion.R' 'scale-gradient.R' 'scale-grey.R' 'scale-hue.R' 'scale-identity.R' 'scale-linetype.R' 'scale-linewidth.R' 'scale-manual.R' 'scale-shape.R' 'scale-size.R' 'scale-steps.R' 'scale-view.R' 'scale-viridis.R' 'scales-.R' 'stat-align.R' 'stat-bin.R' 
'stat-summary-2d.R' 'stat-bin2d.R' 'stat-bindot.R' 'stat-binhex.R' 'stat-boxplot.R' 'stat-connect.R' 'stat-contour.R' 'stat-count.R' 'stat-density-2d.R' 'stat-density.R' 'stat-ecdf.R' 'stat-ellipse.R' 'stat-function.R' 'stat-identity.R' 'stat-manual.R' 'stat-qq-line.R' 'stat-qq.R' 'stat-quantilemethods.R' 'stat-sf-coordinates.R' 'stat-sf.R' 'stat-smooth-methods.R' 'stat-smooth.R' 'stat-sum.R' 'stat-summary-bin.R' 'stat-summary-hex.R' 'stat-summary.R' 'stat-unique.R' 'stat-ydensity.R' 'summarise-plot.R' 'summary.R' 'theme.R' 'theme-defaults.R' 'theme-current.R' 'theme-sub.R' 'utilities-break.R' 'utilities-grid.R' 'utilities-help.R' 'utilities-patterns.R' 'utilities-resolution.R' 'utilities-tidy-eval.R' 'zxx.R' 'zzz.R'", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut] (ORCID: ), Winston Chang [aut] (ORCID: ), Lionel Henry [aut], Thomas Lin Pedersen [aut, cre] (ORCID: ), Kohske Takahashi [aut], Claus Wilke [aut] (ORCID: ), Kara Woo [aut] (ORCID: ), Hiroaki Yutani [aut] (ORCID: ), Dewey Dunnington [aut] (ORCID: ), Teun van den Brand [aut] (ORCID: ), Posit, PBC [cph, fnd] (ROR: )", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "ggrepel": { - "Package": "ggrepel", - "Version": "0.9.7", - "Source": "Repository", - "Authors@R": "c( person(\"Kamil\", \"Slowikowski\", email = \"kslowikowski@gmail.com\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-2843-6370\")), person(\"Teun\", \"van den Brand\", role = \"ctb\", comment = c(ORCID = \"0000-0002-9335-7468\")), person(\"Alicia\", \"Schep\", role = \"ctb\", comment = c(ORCID = \"0000-0002-3915-0618\")), person(\"Sean\", \"Hughes\", role = \"ctb\", comment = c(ORCID = \"0000-0002-9409-9405\")), person(\"Trung Kien\", \"Dang\", role = \"ctb\", comment = c(ORCID = \"0000-0001-7562-6495\")), person(\"Saulius\", \"Lukauskas\", role = \"ctb\"), person(\"Jean-Olivier\", \"Irisson\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4920-3880\")), 
person(\"Zhian N\", \"Kamvar\", role = \"ctb\", comment = c(ORCID = \"0000-0003-1458-7108\")), person(\"Thompson\", \"Ryan\", role = \"ctb\", comment = c(ORCID = \"0000-0002-0450-8181\")), person(\"Dervieux\", \"Christophe\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4474-2498\")), person(\"Yutani\", \"Hiroaki\", role = \"ctb\"), person(\"Pierre\", \"Gramme\", role = \"ctb\"), person(\"Amir Masoud\", \"Abdol\", role = \"ctb\"), person(\"Malcolm\", \"Barrett\", role = \"ctb\", comment = c(ORCID = \"0000-0003-0299-5825\")), person(\"Robrecht\", \"Cannoodt\", role = \"ctb\", comment = c(ORCID = \"0000-0003-3641-729X\")), person(\"Michał\", \"Krassowski\", role = \"ctb\", comment = c(ORCID = \"0000-0002-9638-7785\")), person(\"Michael\", \"Chirico\", role = \"ctb\", comment = c(ORCID = \"0000-0003-0787-087X\")), person(\"Pedro\", \"Aphalo\", role = \"ctb\", comment = c(ORCID = \"0000-0003-3385-972X\")), person(\"Francis\", \"Barton\", role = \"ctb\") )", - "Title": "Automatically Position Non-Overlapping Text Labels with 'ggplot2'", - "Description": "Provides text and label geoms for 'ggplot2' that help to avoid overlapping text labels. 
Labels repel away from each other and away from the data points.", - "Depends": [ - "R (>= 4.5.0)", - "ggplot2 (>= 3.5.2)" - ], - "Imports": [ - "grid", - "Rcpp", - "rlang (>= 1.1.6)", - "S7", - "scales (>= 1.4.0)", - "withr (>= 3.0.2)" - ], - "Suggests": [ - "knitr", - "rmarkdown", - "testthat", - "svglite", - "vdiffr", - "gridExtra", - "ggpp", - "patchwork", - "devtools", - "prettydoc", - "ggbeeswarm", - "dplyr", - "magrittr", - "readr", - "stringr", - "marquee", - "rsvg", - "sf" - ], - "VignetteBuilder": "knitr", - "License": "GPL-3 | file LICENSE", - "URL": "https://ggrepel.slowkow.com/, https://github.com/slowkow/ggrepel", - "BugReports": "https://github.com/slowkow/ggrepel/issues", - "RoxygenNote": "7.3.3", - "LinkingTo": [ - "Rcpp" - ], - "Encoding": "UTF-8", - "NeedsCompilation": "yes", - "Author": "Kamil Slowikowski [aut, cre] (ORCID: ), Teun van den Brand [ctb] (ORCID: ), Alicia Schep [ctb] (ORCID: ), Sean Hughes [ctb] (ORCID: ), Trung Kien Dang [ctb] (ORCID: ), Saulius Lukauskas [ctb], Jean-Olivier Irisson [ctb] (ORCID: ), Zhian N Kamvar [ctb] (ORCID: ), Thompson Ryan [ctb] (ORCID: ), Dervieux Christophe [ctb] (ORCID: ), Yutani Hiroaki [ctb], Pierre Gramme [ctb], Amir Masoud Abdol [ctb], Malcolm Barrett [ctb] (ORCID: ), Robrecht Cannoodt [ctb] (ORCID: ), Michał Krassowski [ctb] (ORCID: ), Michael Chirico [ctb] (ORCID: ), Pedro Aphalo [ctb] (ORCID: ), Francis Barton [ctb]", - "Maintainer": "Kamil Slowikowski ", - "Repository": "CRAN" - }, - "glue": { - "Package": "glue", - "Version": "1.8.0", - "Source": "Repository", - "Title": "Interpreted String Literals", - "Authors@R": "c( person(\"Jim\", \"Hester\", role = \"aut\", comment = c(ORCID = \"0000-0002-2739-7082\")), person(\"Jennifer\", \"Bryan\", , \"jenny@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-6983-2759\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "An implementation of interpreted string literals, inspired by Python's Literal 
String Interpolation and Docstrings and Julia's Triple-Quoted String Literals .", - "License": "MIT + file LICENSE", - "URL": "https://glue.tidyverse.org/, https://github.com/tidyverse/glue", - "BugReports": "https://github.com/tidyverse/glue/issues", - "Depends": [ - "R (>= 3.6)" - ], - "Imports": [ - "methods" - ], - "Suggests": [ - "crayon", - "DBI (>= 1.2.0)", - "dplyr", - "knitr", - "magrittr", - "rlang", - "rmarkdown", - "RSQLite", - "testthat (>= 3.2.0)", - "vctrs (>= 0.3.0)", - "waldo (>= 0.5.3)", - "withr" - ], - "VignetteBuilder": "knitr", - "ByteCompile": "true", - "Config/Needs/website": "bench, forcats, ggbeeswarm, ggplot2, R.utils, rprintf, tidyr, tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "NeedsCompilation": "yes", - "Author": "Jim Hester [aut] (), Jennifer Bryan [aut, cre] (), Posit Software, PBC [cph, fnd]", - "Maintainer": "Jennifer Bryan ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "gridExtra": { - "Package": "gridExtra", - "Version": "2.3", - "Source": "Repository", - "Authors@R": "c(person(\"Baptiste\", \"Auguie\", email = \"baptiste.auguie@gmail.com\", role = c(\"aut\", \"cre\")), person(\"Anton\", \"Antonov\", email = \"tonytonov@gmail.com\", role = c(\"ctb\")))", - "License": "GPL (>= 2)", - "Title": "Miscellaneous Functions for \"Grid\" Graphics", - "Type": "Package", - "Description": "Provides a number of user-level functions to work with \"grid\" graphics, notably to arrange multiple grid-based plots on a page, and draw tables.", - "VignetteBuilder": "knitr", - "Imports": [ - "gtable", - "grid", - "grDevices", - "graphics", - "utils" - ], - "Suggests": [ - "ggplot2", - "egg", - "lattice", - "knitr", - "testthat" - ], - "RoxygenNote": "6.0.1", - "NeedsCompilation": "no", - "Author": "Baptiste Auguie [aut, cre], Anton Antonov [ctb]", - "Maintainer": "Baptiste Auguie ", - "Repository": "https://packagemanager.posit.co/cran/latest", - "Encoding": 
"UTF-8" - }, - "gtable": { - "Package": "gtable", - "Version": "0.3.6", - "Source": "Repository", - "Title": "Arrange 'Grobs' in Tables", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\")), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "Tools to make it easier to work with \"tables\" of 'grobs'. The 'gtable' package defines a 'gtable' grob class that specifies a grid along with a list of grobs and their placement in the grid. Further the package makes it easy to manipulate and combine 'gtable' objects so that complex compositions can be built up sequentially.", - "License": "MIT + file LICENSE", - "URL": "https://gtable.r-lib.org, https://github.com/r-lib/gtable", - "BugReports": "https://github.com/r-lib/gtable/issues", - "Depends": [ - "R (>= 4.0)" - ], - "Imports": [ - "cli", - "glue", - "grid", - "lifecycle", - "rlang (>= 1.1.0)", - "stats" - ], - "Suggests": [ - "covr", - "ggplot2", - "knitr", - "profvis", - "rmarkdown", - "testthat (>= 3.0.0)" - ], - "VignetteBuilder": "knitr", - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Config/usethis/last-upkeep": "2024-10-25", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut], Thomas Lin Pedersen [aut, cre], Posit Software, PBC [cph, fnd]", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "highr": { - "Package": "highr", - "Version": "0.11", - "Source": "Repository", - "Type": "Package", - "Title": "Syntax Highlighting for R Source Code", - "Authors@R": "c( person(\"Yihui\", \"Xie\", role = c(\"aut\", \"cre\"), email = \"xie@yihui.name\", comment = c(ORCID = \"0000-0003-0645-5666\")), person(\"Yixuan\", \"Qiu\", role = \"aut\"), person(\"Christopher\", \"Gandrud\", role = \"ctb\"), 
person(\"Qiang\", \"Li\", role = \"ctb\") )", - "Description": "Provides syntax highlighting for R source code. Currently it supports LaTeX and HTML output. Source code of other languages is supported via Andre Simon's highlight package ().", - "Depends": [ - "R (>= 3.3.0)" - ], - "Imports": [ - "xfun (>= 0.18)" - ], - "Suggests": [ - "knitr", - "markdown", - "testit" - ], - "License": "GPL", - "URL": "https://github.com/yihui/highr", - "BugReports": "https://github.com/yihui/highr/issues", - "VignetteBuilder": "knitr", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.1", - "NeedsCompilation": "no", - "Author": "Yihui Xie [aut, cre] (), Yixuan Qiu [aut], Christopher Gandrud [ctb], Qiang Li [ctb]", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, - "hms": { - "Package": "hms", - "Version": "1.1.4", - "Source": "Repository", - "Title": "Pretty Time of Day", - "Date": "2025-10-11", - "Authors@R": "c( person(\"Kirill\", \"Müller\", , \"kirill@cynkra.com\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-1416-3412\")), person(\"R Consortium\", role = \"fnd\"), person(\"Posit Software, PBC\", role = \"fnd\", comment = c(ROR = \"03wc8by49\")) )", - "Description": "Implements an S3 class for storing and formatting time-of-day values, based on the 'difftime' class.", - "License": "MIT + file LICENSE", - "URL": "https://hms.tidyverse.org/, https://github.com/tidyverse/hms", - "BugReports": "https://github.com/tidyverse/hms/issues", - "Imports": [ - "cli", - "lifecycle", - "methods", - "pkgconfig", - "rlang (>= 1.0.2)", - "vctrs (>= 0.3.8)" - ], - "Suggests": [ - "crayon", - "lubridate", - "pillar (>= 1.1.0)", - "testthat (>= 3.0.0)" - ], - "Config/Needs/website": "tidyverse/tidytemplate", + "Config/Needs/website": "tidyverse/tidytemplate", "Config/testthat/edition": "3", "Encoding": "UTF-8", "RoxygenNote": "7.3.3.9000", @@ -1528,82 +723,6 @@ "Maintainer": "Kirill Müller ", "Repository": "CRAN" }, - "htmltools": { - "Package": "htmltools", - "Version": 
"0.5.9", - "Source": "Repository", - "Type": "Package", - "Title": "Tools for HTML", - "Authors@R": "c( person(\"Joe\", \"Cheng\", , \"joe@posit.co\", role = \"aut\"), person(\"Carson\", \"Sievert\", , \"carson@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Barret\", \"Schloerke\", , \"barret@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0001-9986-114X\")), person(\"Winston\", \"Chang\", , \"winston@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0002-1576-2126\")), person(\"Yihui\", \"Xie\", , \"yihui@posit.co\", role = \"aut\"), person(\"Jeff\", \"Allen\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "Tools for HTML generation and output.", - "License": "GPL (>= 2)", - "URL": "https://github.com/rstudio/htmltools, https://rstudio.github.io/htmltools/", - "BugReports": "https://github.com/rstudio/htmltools/issues", - "Depends": [ - "R (>= 2.14.1)" - ], - "Imports": [ - "base64enc", - "digest", - "fastmap (>= 1.1.0)", - "grDevices", - "rlang (>= 1.0.0)", - "utils" - ], - "Suggests": [ - "Cairo", - "markdown", - "ragg", - "shiny", - "testthat", - "withr" - ], - "Enhances": [ - "knitr" - ], - "Config/Needs/check": "knitr", - "Config/Needs/website": "rstudio/quillt, bench", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "Collate": "'colors.R' 'fill.R' 'html_dependency.R' 'html_escape.R' 'html_print.R' 'htmltools-package.R' 'images.R' 'known_tags.R' 'selector.R' 'staticimports.R' 'tag_query.R' 'utils.R' 'tags.R' 'template.R'", - "NeedsCompilation": "yes", - "Author": "Joe Cheng [aut], Carson Sievert [aut, cre] (ORCID: ), Barret Schloerke [aut] (ORCID: ), Winston Chang [aut] (ORCID: ), Yihui Xie [aut], Jeff Allen [aut], Posit Software, PBC [cph, fnd]", - "Maintainer": "Carson Sievert ", - "Repository": "CRAN" - }, - "htmlwidgets": { - "Package": "htmlwidgets", - "Version": "1.6.4", - "Source": "Repository", - "Type": "Package", - "Title": "HTML 
Widgets for R", - "Authors@R": "c( person(\"Ramnath\", \"Vaidyanathan\", role = c(\"aut\", \"cph\")), person(\"Yihui\", \"Xie\", role = \"aut\"), person(\"JJ\", \"Allaire\", role = \"aut\"), person(\"Joe\", \"Cheng\", , \"joe@posit.co\", role = \"aut\"), person(\"Carson\", \"Sievert\", , \"carson@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Kenton\", \"Russell\", role = c(\"aut\", \"cph\")), person(\"Ellis\", \"Hughes\", role = \"ctb\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "A framework for creating HTML widgets that render in various contexts including the R console, 'R Markdown' documents, and 'Shiny' web applications.", - "License": "MIT + file LICENSE", - "URL": "https://github.com/ramnathv/htmlwidgets", - "BugReports": "https://github.com/ramnathv/htmlwidgets/issues", - "Imports": [ - "grDevices", - "htmltools (>= 0.5.7)", - "jsonlite (>= 0.9.16)", - "knitr (>= 1.8)", - "rmarkdown", - "yaml" - ], - "Suggests": [ - "testthat" - ], - "Enhances": [ - "shiny (>= 1.1)" - ], - "VignetteBuilder": "knitr", - "Encoding": "UTF-8", - "RoxygenNote": "7.2.3", - "NeedsCompilation": "no", - "Author": "Ramnath Vaidyanathan [aut, cph], Yihui Xie [aut], JJ Allaire [aut], Joe Cheng [aut], Carson Sievert [aut, cre] (), Kenton Russell [aut, cph], Ellis Hughes [ctb], Posit Software, PBC [cph, fnd]", - "Maintainer": "Carson Sievert ", - "Repository": "CRAN" - }, "httr": { "Package": "httr", "Version": "1.4.7", @@ -1644,48 +763,6 @@ "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, - "isoband": { - "Package": "isoband", - "Version": "0.3.0", - "Source": "Repository", - "Title": "Generate Isolines and Isobands from Regularly Spaced Elevation Grids", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Claus O.\", \"Wilke\", , \"wilke@austin.utexas.edu\", role = \"aut\", comment = c(\"Original 
author\", ORCID = \"0000-0002-7470-9261\")), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Posit, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", - "Description": "A fast C++ implementation to generate contour lines (isolines) and contour polygons (isobands) from regularly spaced grids containing elevation data.", - "License": "MIT + file LICENSE", - "URL": "https://isoband.r-lib.org, https://github.com/r-lib/isoband", - "BugReports": "https://github.com/r-lib/isoband/issues", - "Imports": [ - "cli", - "grid", - "rlang", - "utils" - ], - "Suggests": [ - "covr", - "ggplot2", - "knitr", - "magick", - "bench", - "rmarkdown", - "sf", - "testthat (>= 3.0.0)", - "xml2" - ], - "VignetteBuilder": "knitr", - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Config/usethis/last-upkeep": "2025-12-05", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "Config/build/compilation-database": "true", - "LinkingTo": [ - "cpp11" - ], - "NeedsCompilation": "yes", - "Author": "Hadley Wickham [aut] (ORCID: ), Claus O. 
Wilke [aut] (Original author, ORCID: ), Thomas Lin Pedersen [aut, cre] (ORCID: ), Posit, PBC [cph, fnd] (ROR: )", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, "janitor": { "Package": "janitor", "Version": "2.2.1", @@ -1732,28 +809,6 @@ "Maintainer": "Sam Firke ", "Repository": "CRAN" }, - "jquerylib": { - "Package": "jquerylib", - "Version": "0.1.4", - "Source": "Repository", - "Title": "Obtain 'jQuery' as an HTML Dependency Object", - "Authors@R": "c( person(\"Carson\", \"Sievert\", role = c(\"aut\", \"cre\"), email = \"carson@rstudio.com\", comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Joe\", \"Cheng\", role = \"aut\", email = \"joe@rstudio.com\"), person(family = \"RStudio\", role = \"cph\"), person(family = \"jQuery Foundation\", role = \"cph\", comment = \"jQuery library and jQuery UI library\"), person(family = \"jQuery contributors\", role = c(\"ctb\", \"cph\"), comment = \"jQuery library; authors listed in inst/lib/jquery-AUTHORS.txt\") )", - "Description": "Obtain any major version of 'jQuery' () and use it in any webpage generated by 'htmltools' (e.g. 'shiny', 'htmlwidgets', and 'rmarkdown'). Most R users don't need to use this package directly, but other R packages (e.g. 'shiny', 'rmarkdown', etc.) 
depend on this package to avoid bundling redundant copies of 'jQuery'.", - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "Config/testthat/edition": "3", - "RoxygenNote": "7.0.2", - "Imports": [ - "htmltools" - ], - "Suggests": [ - "testthat" - ], - "NeedsCompilation": "no", - "Author": "Carson Sievert [aut, cre] (), Joe Cheng [aut], RStudio [cph], jQuery Foundation [cph] (jQuery library and jQuery UI library), jQuery contributors [ctb, cph] (jQuery library; authors listed in inst/lib/jquery-AUTHORS.txt)", - "Maintainer": "Carson Sievert ", - "Repository": "CRAN" - }, "jsonlite": { "Package": "jsonlite", "Version": "2.0.0", @@ -1784,91 +839,6 @@ "Author": "Jeroen Ooms [aut, cre] (), Duncan Temple Lang [ctb], Lloyd Hilaiel [cph] (author of bundled libyajl)", "Repository": "CRAN" }, - "knitr": { - "Package": "knitr", - "Version": "1.51", - "Source": "Repository", - "Type": "Package", - "Title": "A General-Purpose Package for Dynamic Report Generation in R", - "Authors@R": "c( person(\"Yihui\", \"Xie\", role = c(\"aut\", \"cre\"), email = \"xie@yihui.name\", comment = c(ORCID = \"0000-0003-0645-5666\", URL = \"https://yihui.org\")), person(\"Abhraneel\", \"Sarma\", role = \"ctb\"), person(\"Adam\", \"Vogt\", role = \"ctb\"), person(\"Alastair\", \"Andrew\", role = \"ctb\"), person(\"Alex\", \"Zvoleff\", role = \"ctb\"), person(\"Amar\", \"Al-Zubaidi\", role = \"ctb\"), person(\"Andre\", \"Simon\", role = \"ctb\", comment = \"the CSS files under inst/themes/ were derived from the Highlight package http://www.andre-simon.de\"), person(\"Aron\", \"Atkins\", role = \"ctb\"), person(\"Aaron\", \"Wolen\", role = \"ctb\"), person(\"Ashley\", \"Manton\", role = \"ctb\"), person(\"Atsushi\", \"Yasumoto\", role = \"ctb\", comment = c(ORCID = \"0000-0002-8335-495X\")), person(\"Ben\", \"Baumer\", role = \"ctb\"), person(\"Brian\", \"Diggs\", role = \"ctb\"), person(\"Brian\", \"Zhang\", role = \"ctb\"), person(\"Bulat\", \"Yapparov\", role = \"ctb\"), 
person(\"Cassio\", \"Pereira\", role = \"ctb\"), person(\"Christophe\", \"Dervieux\", role = \"ctb\"), person(\"David\", \"Hall\", role = \"ctb\"), person(\"David\", \"Hugh-Jones\", role = \"ctb\"), person(\"David\", \"Robinson\", role = \"ctb\"), person(\"Doug\", \"Hemken\", role = \"ctb\"), person(\"Duncan\", \"Murdoch\", role = \"ctb\"), person(\"Elio\", \"Campitelli\", role = \"ctb\"), person(\"Ellis\", \"Hughes\", role = \"ctb\"), person(\"Emily\", \"Riederer\", role = \"ctb\"), person(\"Fabian\", \"Hirschmann\", role = \"ctb\"), person(\"Fitch\", \"Simeon\", role = \"ctb\"), person(\"Forest\", \"Fang\", role = \"ctb\"), person(c(\"Frank\", \"E\", \"Harrell\", \"Jr\"), role = \"ctb\", comment = \"the Sweavel package at inst/misc/Sweavel.sty\"), person(\"Garrick\", \"Aden-Buie\", role = \"ctb\"), person(\"Gregoire\", \"Detrez\", role = \"ctb\"), person(\"Hadley\", \"Wickham\", role = \"ctb\"), person(\"Hao\", \"Zhu\", role = \"ctb\"), person(\"Heewon\", \"Jeon\", role = \"ctb\"), person(\"Henrik\", \"Bengtsson\", role = \"ctb\"), person(\"Hiroaki\", \"Yutani\", role = \"ctb\"), person(\"Ian\", \"Lyttle\", role = \"ctb\"), person(\"Hodges\", \"Daniel\", role = \"ctb\"), person(\"Jacob\", \"Bien\", role = \"ctb\"), person(\"Jake\", \"Burkhead\", role = \"ctb\"), person(\"James\", \"Manton\", role = \"ctb\"), person(\"Jared\", \"Lander\", role = \"ctb\"), person(\"Jason\", \"Punyon\", role = \"ctb\"), person(\"Javier\", \"Luraschi\", role = \"ctb\"), person(\"Jeff\", \"Arnold\", role = \"ctb\"), person(\"Jenny\", \"Bryan\", role = \"ctb\"), person(\"Jeremy\", \"Ashkenas\", role = c(\"ctb\", \"cph\"), comment = \"the CSS file at inst/misc/docco-classic.css\"), person(\"Jeremy\", \"Stephens\", role = \"ctb\"), person(\"Jim\", \"Hester\", role = \"ctb\"), person(\"Joe\", \"Cheng\", role = \"ctb\"), person(\"Johannes\", \"Ranke\", role = \"ctb\"), person(\"John\", \"Honaker\", role = \"ctb\"), person(\"John\", \"Muschelli\", role = \"ctb\"), person(\"Jonathan\", 
\"Keane\", role = \"ctb\"), person(\"JJ\", \"Allaire\", role = \"ctb\"), person(\"Johan\", \"Toloe\", role = \"ctb\"), person(\"Jonathan\", \"Sidi\", role = \"ctb\"), person(\"Joseph\", \"Larmarange\", role = \"ctb\"), person(\"Julien\", \"Barnier\", role = \"ctb\"), person(\"Kaiyin\", \"Zhong\", role = \"ctb\"), person(\"Kamil\", \"Slowikowski\", role = \"ctb\"), person(\"Karl\", \"Forner\", role = \"ctb\"), person(c(\"Kevin\", \"K.\"), \"Smith\", role = \"ctb\"), person(\"Kirill\", \"Mueller\", role = \"ctb\"), person(\"Kohske\", \"Takahashi\", role = \"ctb\"), person(\"Lorenz\", \"Walthert\", role = \"ctb\"), person(\"Lucas\", \"Gallindo\", role = \"ctb\"), person(\"Marius\", \"Hofert\", role = \"ctb\"), person(\"Martin\", \"Modrák\", role = \"ctb\"), person(\"Michael\", \"Chirico\", role = \"ctb\"), person(\"Michael\", \"Friendly\", role = \"ctb\"), person(\"Michal\", \"Bojanowski\", role = \"ctb\"), person(\"Michel\", \"Kuhlmann\", role = \"ctb\"), person(\"Miller\", \"Patrick\", role = \"ctb\"), person(\"Nacho\", \"Caballero\", role = \"ctb\"), person(\"Nick\", \"Salkowski\", role = \"ctb\"), person(\"Niels Richard\", \"Hansen\", role = \"ctb\"), person(\"Noam\", \"Ross\", role = \"ctb\"), person(\"Obada\", \"Mahdi\", role = \"ctb\"), person(\"Pavel N.\", \"Krivitsky\", role = \"ctb\", comment=c(ORCID = \"0000-0002-9101-3362\")), person(\"Pedro\", \"Faria\", role = \"ctb\"), person(\"Qiang\", \"Li\", role = \"ctb\"), person(\"Ramnath\", \"Vaidyanathan\", role = \"ctb\"), person(\"Richard\", \"Cotton\", role = \"ctb\"), person(\"Robert\", \"Krzyzanowski\", role = \"ctb\"), person(\"Rodrigo\", \"Copetti\", role = \"ctb\"), person(\"Romain\", \"Francois\", role = \"ctb\"), person(\"Ruaridh\", \"Williamson\", role = \"ctb\"), person(\"Sagiru\", \"Mati\", role = \"ctb\", comment = c(ORCID = \"0000-0003-1413-3974\")), person(\"Scott\", \"Kostyshak\", role = \"ctb\"), person(\"Sebastian\", \"Meyer\", role = \"ctb\"), person(\"Sietse\", \"Brouwer\", role = \"ctb\"), 
person(c(\"Simon\", \"de\"), \"Bernard\", role = \"ctb\"), person(\"Sylvain\", \"Rousseau\", role = \"ctb\"), person(\"Taiyun\", \"Wei\", role = \"ctb\"), person(\"Thibaut\", \"Assus\", role = \"ctb\"), person(\"Thibaut\", \"Lamadon\", role = \"ctb\"), person(\"Thomas\", \"Leeper\", role = \"ctb\"), person(\"Tim\", \"Mastny\", role = \"ctb\"), person(\"Tom\", \"Torsney-Weir\", role = \"ctb\"), person(\"Trevor\", \"Davis\", role = \"ctb\"), person(\"Viktoras\", \"Veitas\", role = \"ctb\"), person(\"Weicheng\", \"Zhu\", role = \"ctb\"), person(\"Wush\", \"Wu\", role = \"ctb\"), person(\"Zachary\", \"Foster\", role = \"ctb\"), person(\"Zhian N.\", \"Kamvar\", role = \"ctb\", comment = c(ORCID = \"0000-0003-1458-7108\")), person(given = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")) )", - "Description": "Provides a general-purpose tool for dynamic report generation in R using Literate Programming techniques.", - "Depends": [ - "R (>= 3.6.0)" - ], - "Imports": [ - "evaluate (>= 0.15)", - "highr (>= 0.11)", - "methods", - "tools", - "xfun (>= 0.52)", - "yaml (>= 2.1.19)" - ], - "Suggests": [ - "bslib", - "DBI (>= 0.4-1)", - "digest", - "formatR", - "gifski", - "gridSVG", - "htmlwidgets (>= 0.7)", - "jpeg", - "JuliaCall (>= 0.11.1)", - "magick", - "litedown", - "markdown (>= 1.3)", - "otel", - "otelsdk", - "png", - "ragg", - "reticulate (>= 1.4)", - "rgl (>= 0.95.1201)", - "rlang", - "rmarkdown", - "sass", - "showtext", - "styler (>= 1.2.0)", - "targets (>= 0.6.0)", - "testit", - "tibble", - "tikzDevice (>= 0.10)", - "tinytex (>= 0.56)", - "webshot", - "rstudioapi", - "svglite" - ], - "License": "GPL", - "URL": "https://yihui.org/knitr/", - "BugReports": "https://github.com/yihui/knitr/issues", - "Encoding": "UTF-8", - "VignetteBuilder": "litedown, knitr", - "SystemRequirements": "Package vignettes based on R Markdown v2 or reStructuredText require Pandoc (http://pandoc.org). 
The function rst2pdf() requires rst2pdf (https://github.com/rst2pdf/rst2pdf).", - "Collate": "'block.R' 'cache.R' 'citation.R' 'hooks-html.R' 'plot.R' 'utils.R' 'defaults.R' 'concordance.R' 'engine.R' 'highlight.R' 'themes.R' 'header.R' 'hooks-asciidoc.R' 'hooks-chunk.R' 'hooks-extra.R' 'hooks-latex.R' 'hooks-md.R' 'hooks-rst.R' 'hooks-textile.R' 'hooks.R' 'otel.R' 'output.R' 'package.R' 'pandoc.R' 'params.R' 'parser.R' 'pattern.R' 'rocco.R' 'spin.R' 'table.R' 'template.R' 'utils-conversion.R' 'utils-rd2html.R' 'utils-string.R' 'utils-sweave.R' 'utils-upload.R' 'utils-vignettes.R' 'zzz.R'", - "RoxygenNote": "7.3.3", - "NeedsCompilation": "no", - "Author": "Yihui Xie [aut, cre] (ORCID: , URL: https://yihui.org), Abhraneel Sarma [ctb], Adam Vogt [ctb], Alastair Andrew [ctb], Alex Zvoleff [ctb], Amar Al-Zubaidi [ctb], Andre Simon [ctb] (the CSS files under inst/themes/ were derived from the Highlight package http://www.andre-simon.de), Aron Atkins [ctb], Aaron Wolen [ctb], Ashley Manton [ctb], Atsushi Yasumoto [ctb] (ORCID: ), Ben Baumer [ctb], Brian Diggs [ctb], Brian Zhang [ctb], Bulat Yapparov [ctb], Cassio Pereira [ctb], Christophe Dervieux [ctb], David Hall [ctb], David Hugh-Jones [ctb], David Robinson [ctb], Doug Hemken [ctb], Duncan Murdoch [ctb], Elio Campitelli [ctb], Ellis Hughes [ctb], Emily Riederer [ctb], Fabian Hirschmann [ctb], Fitch Simeon [ctb], Forest Fang [ctb], Frank E Harrell Jr [ctb] (the Sweavel package at inst/misc/Sweavel.sty), Garrick Aden-Buie [ctb], Gregoire Detrez [ctb], Hadley Wickham [ctb], Hao Zhu [ctb], Heewon Jeon [ctb], Henrik Bengtsson [ctb], Hiroaki Yutani [ctb], Ian Lyttle [ctb], Hodges Daniel [ctb], Jacob Bien [ctb], Jake Burkhead [ctb], James Manton [ctb], Jared Lander [ctb], Jason Punyon [ctb], Javier Luraschi [ctb], Jeff Arnold [ctb], Jenny Bryan [ctb], Jeremy Ashkenas [ctb, cph] (the CSS file at inst/misc/docco-classic.css), Jeremy Stephens [ctb], Jim Hester [ctb], Joe Cheng [ctb], Johannes Ranke [ctb], John Honaker [ctb], 
John Muschelli [ctb], Jonathan Keane [ctb], JJ Allaire [ctb], Johan Toloe [ctb], Jonathan Sidi [ctb], Joseph Larmarange [ctb], Julien Barnier [ctb], Kaiyin Zhong [ctb], Kamil Slowikowski [ctb], Karl Forner [ctb], Kevin K. Smith [ctb], Kirill Mueller [ctb], Kohske Takahashi [ctb], Lorenz Walthert [ctb], Lucas Gallindo [ctb], Marius Hofert [ctb], Martin Modrák [ctb], Michael Chirico [ctb], Michael Friendly [ctb], Michal Bojanowski [ctb], Michel Kuhlmann [ctb], Miller Patrick [ctb], Nacho Caballero [ctb], Nick Salkowski [ctb], Niels Richard Hansen [ctb], Noam Ross [ctb], Obada Mahdi [ctb], Pavel N. Krivitsky [ctb] (ORCID: ), Pedro Faria [ctb], Qiang Li [ctb], Ramnath Vaidyanathan [ctb], Richard Cotton [ctb], Robert Krzyzanowski [ctb], Rodrigo Copetti [ctb], Romain Francois [ctb], Ruaridh Williamson [ctb], Sagiru Mati [ctb] (ORCID: ), Scott Kostyshak [ctb], Sebastian Meyer [ctb], Sietse Brouwer [ctb], Simon de Bernard [ctb], Sylvain Rousseau [ctb], Taiyun Wei [ctb], Thibaut Assus [ctb], Thibaut Lamadon [ctb], Thomas Leeper [ctb], Tim Mastny [ctb], Tom Torsney-Weir [ctb], Trevor Davis [ctb], Viktoras Veitas [ctb], Weicheng Zhu [ctb], Wush Wu [ctb], Zachary Foster [ctb], Zhian N. 
Kamvar [ctb] (ORCID: ), Posit Software, PBC [cph, fnd]", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, - "labeling": { - "Package": "labeling", - "Version": "0.4.3", - "Source": "Repository", - "Type": "Package", - "Title": "Axis Labeling", - "Date": "2023-08-29", - "Author": "Justin Talbot,", - "Maintainer": "Nuno Sempere ", - "Description": "Functions which provide a range of axis labeling algorithms.", - "License": "MIT + file LICENSE | Unlimited", - "Collate": "'labeling.R'", - "NeedsCompilation": "no", - "Imports": [ - "stats", - "graphics" - ], - "Repository": "https://packagemanager.posit.co/cran/latest", - "Encoding": "UTF-8" - }, "lifecycle": { "Package": "lifecycle", "Version": "1.0.5", @@ -1984,36 +954,6 @@ "NeedsCompilation": "yes", "Author": "Stefan Milton Bache [aut, cph] (Original author and creator of magrittr), Hadley Wickham [aut], Lionel Henry [cre], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Lionel Henry ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "memoise": { - "Package": "memoise", - "Version": "2.0.1", - "Source": "Repository", - "Title": "'Memoisation' of Functions", - "Authors@R": "c(person(given = \"Hadley\", family = \"Wickham\", role = \"aut\", email = \"hadley@rstudio.com\"), person(given = \"Jim\", family = \"Hester\", role = \"aut\"), person(given = \"Winston\", family = \"Chang\", role = c(\"aut\", \"cre\"), email = \"winston@rstudio.com\"), person(given = \"Kirill\", family = \"Müller\", role = \"aut\", email = \"krlmlr+r@mailbox.org\"), person(given = \"Daniel\", family = \"Cook\", role = \"aut\", email = \"danielecook@gmail.com\"), person(given = \"Mark\", family = \"Edmondson\", role = \"ctb\", email = \"r@sunholo.com\"))", - "Description": "Cache the results of a function so that when you call it again with the same arguments it returns the previously computed value.", - "License": "MIT + file LICENSE", - "URL": "https://memoise.r-lib.org, https://github.com/r-lib/memoise", - 
"BugReports": "https://github.com/r-lib/memoise/issues", - "Imports": [ - "rlang (>= 0.4.10)", - "cachem" - ], - "Suggests": [ - "digest", - "aws.s3", - "covr", - "googleAuthR", - "googleCloudStorageR", - "httr", - "testthat" - ], - "Encoding": "UTF-8", - "RoxygenNote": "7.1.2", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut], Jim Hester [aut], Winston Chang [aut, cre], Kirill Müller [aut], Daniel Cook [aut], Mark Edmondson [ctb]", - "Maintainer": "Winston Chang ", "Repository": "CRAN" }, "mime": { @@ -2033,29 +973,9 @@ "RoxygenNote": "7.3.2", "Encoding": "UTF-8", "NeedsCompilation": "yes", - "Author": "Yihui Xie [aut, cre] (, https://yihui.org), Jeffrey Horner [ctb], Beilei Bian [ctb]", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, - "numDeriv": { - "Package": "numDeriv", - "Version": "2016.8-1.1", - "Source": "Repository", - "Title": "Accurate Numerical Derivatives", - "Description": "Methods for calculating (usually) accurate numerical first and second order derivatives. Accurate calculations are done using 'Richardson''s' extrapolation or, when applicable, a complex step derivative is available. A simple difference method is also provided. Simple difference is (usually) less accurate but is much quicker than 'Richardson''s' extrapolation and provides a useful cross-check. Methods are provided for real scalar and vector valued functions.", - "Depends": [ - "R (>= 2.11.1)" - ], - "LazyLoad": "yes", - "ByteCompile": "yes", - "License": "GPL-2", - "Copyright": "2006-2011, Bank of Canada. 
2012-2016, Paul Gilbert", - "Author": "Paul Gilbert and Ravi Varadhan", - "Maintainer": "Paul Gilbert ", - "URL": "http://optimizer.r-forge.r-project.org/", - "NeedsCompilation": "no", - "Repository": "https://packagemanager.posit.co/cran/latest", - "Encoding": "UTF-8" + "Author": "Yihui Xie [aut, cre] (, https://yihui.org), Jeffrey Horner [ctb], Beilei Bian [ctb]", + "Maintainer": "Yihui Xie ", + "Repository": "CRAN" }, "openssl": { "Package": "openssl", @@ -2145,7 +1065,7 @@ "NeedsCompilation": "no", "Author": "Kirill Müller [aut, cre] (ORCID: ), Hadley Wickham [aut], RStudio [cph]", "Maintainer": "Kirill Müller ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "pkgconfig": { "Package": "pkgconfig", @@ -2169,7 +1089,7 @@ "BugReports": "https://github.com/r-lib/pkgconfig/issues", "Encoding": "UTF-8", "NeedsCompilation": "no", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "prettyunits": { "Package": "prettyunits", @@ -2302,25 +1222,7 @@ "NeedsCompilation": "yes", "Author": "Hadley Wickham [aut, cre] (ORCID: ), Lionel Henry [aut], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Hadley Wickham ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "quadprog": { - "Package": "quadprog", - "Version": "1.5-8", - "Source": "Repository", - "Type": "Package", - "Title": "Functions to Solve Quadratic Programming Problems", - "Date": "2019-11-20", - "Author": "S original by Berwin A. Turlach R port by Andreas Weingessel Fortran contributions from Cleve Moler (dposl/LINPACK and (a modified version of) dpodi/LINPACK)", - "Maintainer": "Berwin A. 
Turlach ", - "Description": "This package contains routines and documentation for solving quadratic programming problems.", - "Depends": [ - "R (>= 3.1.0)" - ], - "License": "GPL (>= 2)", - "NeedsCompilation": "yes", - "Repository": "https://packagemanager.posit.co/cran/latest", - "Encoding": "UTF-8" + "Repository": "CRAN" }, "rappdirs": { "Package": "rappdirs", @@ -2353,82 +1255,6 @@ "Maintainer": "Hadley Wickham ", "Repository": "CRAN" }, - "reactR": { - "Package": "reactR", - "Version": "0.6.1", - "Source": "Repository", - "Type": "Package", - "Title": "React Helpers", - "Date": "2024-09-14", - "Authors@R": "c( person( \"Facebook\", \"Inc\" , role = c(\"aut\", \"cph\") , comment = \"React library in lib, https://reactjs.org/; see AUTHORS for full list of contributors\" ), person( \"Michel\",\"Weststrate\", , role = c(\"aut\", \"cph\") , comment = \"mobx library in lib, https://github.com/mobxjs\" ), person( \"Kent\", \"Russell\" , role = c(\"aut\", \"cre\") , comment = \"R interface\" , email = \"kent.russell@timelyportfolio.com\" ), person( \"Alan\", \"Dipert\" , role = c(\"aut\") , comment = \"R interface\" , email = \"alan@rstudio.com\" ), person( \"Greg\", \"Lin\" , role = c(\"aut\") , comment = \"R interface\" , email = \"glin@glin.io\" ) )", - "Maintainer": "Kent Russell ", - "Description": "Make it easy to use 'React' in R with 'htmlwidget' scaffolds, helper dependency functions, an embedded 'Babel' 'transpiler', and examples.", - "URL": "https://github.com/react-R/reactR", - "BugReports": "https://github.com/react-R/reactR/issues", - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "Imports": [ - "htmltools" - ], - "Suggests": [ - "htmlwidgets (>= 1.5.3)", - "rmarkdown", - "shiny", - "V8", - "knitr", - "usethis", - "jsonlite" - ], - "RoxygenNote": "7.3.2", - "VignetteBuilder": "knitr", - "NeedsCompilation": "no", - "Author": "Facebook Inc [aut, cph] (React library in lib, https://reactjs.org/; see AUTHORS for full list of contributors), Michel 
Weststrate [aut, cph] (mobx library in lib, https://github.com/mobxjs), Kent Russell [aut, cre] (R interface), Alan Dipert [aut] (R interface), Greg Lin [aut] (R interface)", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "reactable": { - "Package": "reactable", - "Version": "0.4.5", - "Source": "Repository", - "Type": "Package", - "Title": "Interactive Data Tables for R", - "Authors@R": "c( person(\"Greg\", \"Lin\", email = \"glin@glin.io\", role = c(\"aut\", \"cre\")), person(\"Tanner\", \"Linsley\", role = c(\"ctb\", \"cph\"), comment = \"React Table library\"), person(family = \"Emotion team and other contributors\", role = c(\"ctb\", \"cph\"), comment = \"Emotion library\"), person(\"Kent\", \"Russell\", role = c(\"ctb\", \"cph\"), comment = \"reactR package\"), person(\"Ramnath\", \"Vaidyanathan\", role = c(\"ctb\", \"cph\"), comment = \"htmlwidgets package\"), person(\"Joe\", \"Cheng\", role = c(\"ctb\", \"cph\"), comment = \"htmlwidgets package\"), person(\"JJ\", \"Allaire\", role = c(\"ctb\", \"cph\"), comment = \"htmlwidgets package\"), person(\"Yihui\", \"Xie\", role = c(\"ctb\", \"cph\"), comment = \"htmlwidgets package\"), person(\"Kenton\", \"Russell\", role = c(\"ctb\", \"cph\"), comment = \"htmlwidgets package\"), person(family = \"Facebook, Inc. and its affiliates\", role = c(\"ctb\", \"cph\"), comment = \"React library\"), person(family = \"FormatJS\", role = c(\"ctb\", \"cph\"), comment = \"FormatJS libraries\"), person(family = \"Feross Aboukhadijeh, and other contributors\", role = c(\"ctb\", \"cph\"), comment = \"buffer library\"), person(\"Roman\", \"Shtylman\", role = c(\"ctb\", \"cph\"), comment = \"process library\"), person(\"James\", \"Halliday\", role = c(\"ctb\", \"cph\"), comment = \"stream-browserify library\"), person(family = \"Posit Software, PBC\", role = c(\"fnd\", \"cph\")) )", - "Description": "Interactive data tables for R, based on the 'React Table' JavaScript library. 
Provides an HTML widget that can be used in 'R Markdown' or 'Quarto' documents, 'Shiny' applications, or viewed from an R console.", - "License": "MIT + file LICENSE", - "URL": "https://glin.github.io/reactable/, https://github.com/glin/reactable", - "BugReports": "https://github.com/glin/reactable/issues", - "Depends": [ - "R (>= 3.1)" - ], - "Imports": [ - "digest", - "htmltools (>= 0.5.2)", - "htmlwidgets (>= 1.5.3)", - "jsonlite", - "reactR" - ], - "Suggests": [ - "covr", - "crosstalk", - "dplyr", - "fontawesome", - "knitr", - "leaflet", - "MASS", - "rmarkdown", - "shiny", - "sparkline", - "testthat", - "tippy", - "V8" - ], - "Encoding": "UTF-8", - "RoxygenNote": "7.2.1", - "Config/testthat/edition": "3", - "NeedsCompilation": "no", - "Author": "Greg Lin [aut, cre], Tanner Linsley [ctb, cph] (React Table library), Emotion team and other contributors [ctb, cph] (Emotion library), Kent Russell [ctb, cph] (reactR package), Ramnath Vaidyanathan [ctb, cph] (htmlwidgets package), Joe Cheng [ctb, cph] (htmlwidgets package), JJ Allaire [ctb, cph] (htmlwidgets package), Yihui Xie [ctb, cph] (htmlwidgets package), Kenton Russell [ctb, cph] (htmlwidgets package), Facebook, Inc. 
and its affiliates [ctb, cph] (React library), FormatJS [ctb, cph] (FormatJS libraries), Feross Aboukhadijeh, and other contributors [ctb, cph] (buffer library), Roman Shtylman [ctb, cph] (process library), James Halliday [ctb, cph] (stream-browserify library), Posit Software, PBC [fnd, cph]", - "Maintainer": "Greg Lin ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, "readr": { "Package": "readr", "Version": "2.1.6", @@ -2591,62 +1417,6 @@ "Maintainer": "Lionel Henry ", "Repository": "CRAN" }, - "rmarkdown": { - "Package": "rmarkdown", - "Version": "2.30", - "Source": "Repository", - "Type": "Package", - "Title": "Dynamic Documents for R", - "Authors@R": "c( person(\"JJ\", \"Allaire\", , \"jj@posit.co\", role = \"aut\"), person(\"Yihui\", \"Xie\", , \"xie@yihui.name\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0003-0645-5666\")), person(\"Christophe\", \"Dervieux\", , \"cderv@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-4474-2498\")), person(\"Jonathan\", \"McPherson\", , \"jonathan@posit.co\", role = \"aut\"), person(\"Javier\", \"Luraschi\", role = \"aut\"), person(\"Kevin\", \"Ushey\", , \"kevin@posit.co\", role = \"aut\"), person(\"Aron\", \"Atkins\", , \"aron@posit.co\", role = \"aut\"), person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Joe\", \"Cheng\", , \"joe@posit.co\", role = \"aut\"), person(\"Winston\", \"Chang\", , \"winston@posit.co\", role = \"aut\"), person(\"Richard\", \"Iannone\", , \"rich@posit.co\", role = \"aut\", comment = c(ORCID = \"0000-0003-3925-190X\")), person(\"Andrew\", \"Dunning\", role = \"ctb\", comment = c(ORCID = \"0000-0003-0464-5036\")), person(\"Atsushi\", \"Yasumoto\", role = c(\"ctb\", \"cph\"), comment = c(ORCID = \"0000-0002-8335-495X\", cph = \"Number sections Lua filter\")), person(\"Barret\", \"Schloerke\", role = \"ctb\"), person(\"Carson\", \"Sievert\", role = \"ctb\", comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Devon\", 
\"Ryan\", , \"dpryan79@gmail.com\", role = \"ctb\", comment = c(ORCID = \"0000-0002-8549-0971\")), person(\"Frederik\", \"Aust\", , \"frederik.aust@uni-koeln.de\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4900-788X\")), person(\"Jeff\", \"Allen\", , \"jeff@posit.co\", role = \"ctb\"), person(\"JooYoung\", \"Seo\", role = \"ctb\", comment = c(ORCID = \"0000-0002-4064-6012\")), person(\"Malcolm\", \"Barrett\", role = \"ctb\"), person(\"Rob\", \"Hyndman\", , \"Rob.Hyndman@monash.edu\", role = \"ctb\"), person(\"Romain\", \"Lesur\", role = \"ctb\"), person(\"Roy\", \"Storey\", role = \"ctb\"), person(\"Ruben\", \"Arslan\", , \"ruben.arslan@uni-goettingen.de\", role = \"ctb\"), person(\"Sergio\", \"Oller\", role = \"ctb\"), person(given = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(, \"jQuery UI contributors\", role = c(\"ctb\", \"cph\"), comment = \"jQuery UI library; authors listed in inst/rmd/h/jqueryui/AUTHORS.txt\"), person(\"Mark\", \"Otto\", role = \"ctb\", comment = \"Bootstrap library\"), person(\"Jacob\", \"Thornton\", role = \"ctb\", comment = \"Bootstrap library\"), person(, \"Bootstrap contributors\", role = \"ctb\", comment = \"Bootstrap library\"), person(, \"Twitter, Inc\", role = \"cph\", comment = \"Bootstrap library\"), person(\"Alexander\", \"Farkas\", role = c(\"ctb\", \"cph\"), comment = \"html5shiv library\"), person(\"Scott\", \"Jehl\", role = c(\"ctb\", \"cph\"), comment = \"Respond.js library\"), person(\"Ivan\", \"Sagalaev\", role = c(\"ctb\", \"cph\"), comment = \"highlight.js library\"), person(\"Greg\", \"Franko\", role = c(\"ctb\", \"cph\"), comment = \"tocify library\"), person(\"John\", \"MacFarlane\", role = c(\"ctb\", \"cph\"), comment = \"Pandoc templates\"), person(, \"Google, Inc.\", role = c(\"ctb\", \"cph\"), comment = \"ioslides library\"), person(\"Dave\", \"Raggett\", role = \"ctb\", comment = \"slidy library\"), person(, \"W3C\", role = \"cph\", comment = \"slidy library\"), person(\"Dave\", \"Gandy\", 
role = c(\"ctb\", \"cph\"), comment = \"Font-Awesome\"), person(\"Ben\", \"Sperry\", role = \"ctb\", comment = \"Ionicons\"), person(, \"Drifty\", role = \"cph\", comment = \"Ionicons\"), person(\"Aidan\", \"Lister\", role = c(\"ctb\", \"cph\"), comment = \"jQuery StickyTabs\"), person(\"Benct Philip\", \"Jonsson\", role = c(\"ctb\", \"cph\"), comment = \"pagebreak Lua filter\"), person(\"Albert\", \"Krewinkel\", role = c(\"ctb\", \"cph\"), comment = \"pagebreak Lua filter\") )", - "Description": "Convert R Markdown documents into a variety of formats.", - "License": "GPL-3", - "URL": "https://github.com/rstudio/rmarkdown, https://pkgs.rstudio.com/rmarkdown/", - "BugReports": "https://github.com/rstudio/rmarkdown/issues", - "Depends": [ - "R (>= 3.0)" - ], - "Imports": [ - "bslib (>= 0.2.5.1)", - "evaluate (>= 0.13)", - "fontawesome (>= 0.5.0)", - "htmltools (>= 0.5.1)", - "jquerylib", - "jsonlite", - "knitr (>= 1.43)", - "methods", - "tinytex (>= 0.31)", - "tools", - "utils", - "xfun (>= 0.36)", - "yaml (>= 2.1.19)" - ], - "Suggests": [ - "digest", - "dygraphs", - "fs", - "rsconnect", - "downlit (>= 0.4.0)", - "katex (>= 1.4.0)", - "sass (>= 0.4.0)", - "shiny (>= 1.6.0)", - "testthat (>= 3.0.3)", - "tibble", - "vctrs", - "cleanrmd", - "withr (>= 2.4.2)", - "xml2" - ], - "VignetteBuilder": "knitr", - "Config/Needs/website": "rstudio/quillt, pkgdown", - "Config/testthat/edition": "3", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "SystemRequirements": "pandoc (>= 1.14) - http://pandoc.org", - "NeedsCompilation": "no", - "Author": "JJ Allaire [aut], Yihui Xie [aut, cre] (ORCID: ), Christophe Dervieux [aut] (ORCID: ), Jonathan McPherson [aut], Javier Luraschi [aut], Kevin Ushey [aut], Aron Atkins [aut], Hadley Wickham [aut], Joe Cheng [aut], Winston Chang [aut], Richard Iannone [aut] (ORCID: ), Andrew Dunning [ctb] (ORCID: ), Atsushi Yasumoto [ctb, cph] (ORCID: , cph: Number sections Lua filter), Barret Schloerke [ctb], Carson Sievert [ctb] (ORCID: ), Devon Ryan 
[ctb] (ORCID: ), Frederik Aust [ctb] (ORCID: ), Jeff Allen [ctb], JooYoung Seo [ctb] (ORCID: ), Malcolm Barrett [ctb], Rob Hyndman [ctb], Romain Lesur [ctb], Roy Storey [ctb], Ruben Arslan [ctb], Sergio Oller [ctb], Posit Software, PBC [cph, fnd], jQuery UI contributors [ctb, cph] (jQuery UI library; authors listed in inst/rmd/h/jqueryui/AUTHORS.txt), Mark Otto [ctb] (Bootstrap library), Jacob Thornton [ctb] (Bootstrap library), Bootstrap contributors [ctb] (Bootstrap library), Twitter, Inc [cph] (Bootstrap library), Alexander Farkas [ctb, cph] (html5shiv library), Scott Jehl [ctb, cph] (Respond.js library), Ivan Sagalaev [ctb, cph] (highlight.js library), Greg Franko [ctb, cph] (tocify library), John MacFarlane [ctb, cph] (Pandoc templates), Google, Inc. [ctb, cph] (ioslides library), Dave Raggett [ctb] (slidy library), W3C [cph] (slidy library), Dave Gandy [ctb, cph] (Font-Awesome), Ben Sperry [ctb] (Ionicons), Drifty [cph] (Ionicons), Aidan Lister [ctb, cph] (jQuery StickyTabs), Benct Philip Jonsson [ctb, cph] (pagebreak Lua filter), Albert Krewinkel [ctb, cph] (pagebreak Lua filter)", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, "rvest": { "Package": "rvest", "Version": "1.0.5", @@ -2734,86 +1504,6 @@ "Maintainer": "Edzer Pebesma ", "Repository": "CRAN" }, - "sass": { - "Package": "sass", - "Version": "0.4.10", - "Source": "Repository", - "Type": "Package", - "Title": "Syntactically Awesome Style Sheets ('Sass')", - "Description": "An 'SCSS' compiler, powered by the 'LibSass' library. With this, R developers can use variables, inheritance, and functions to generate dynamic style sheets. 
The package uses the 'Sass CSS' extension language, which is stable, powerful, and CSS compatible.", - "Authors@R": "c( person(\"Joe\", \"Cheng\", , \"joe@rstudio.com\", \"aut\"), person(\"Timothy\", \"Mastny\", , \"tim.mastny@gmail.com\", \"aut\"), person(\"Richard\", \"Iannone\", , \"rich@rstudio.com\", \"aut\", comment = c(ORCID = \"0000-0003-3925-190X\")), person(\"Barret\", \"Schloerke\", , \"barret@rstudio.com\", \"aut\", comment = c(ORCID = \"0000-0001-9986-114X\")), person(\"Carson\", \"Sievert\", , \"carson@rstudio.com\", c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-4958-2844\")), person(\"Christophe\", \"Dervieux\", , \"cderv@rstudio.com\", c(\"ctb\"), comment = c(ORCID = \"0000-0003-4474-2498\")), person(family = \"RStudio\", role = c(\"cph\", \"fnd\")), person(family = \"Sass Open Source Foundation\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Greter\", \"Marcel\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Mifsud\", \"Michael\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Hampton\", \"Catlin\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Natalie\", \"Weizenbaum\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Chris\", \"Eppstein\", role = c(\"ctb\", \"cph\"), comment = \"LibSass library\"), person(\"Adams\", \"Joseph\", role = c(\"ctb\", \"cph\"), comment = \"json.cpp\"), person(\"Trifunovic\", \"Nemanja\", role = c(\"ctb\", \"cph\"), comment = \"utf8.h\") )", - "License": "MIT + file LICENSE", - "URL": "https://rstudio.github.io/sass/, https://github.com/rstudio/sass", - "BugReports": "https://github.com/rstudio/sass/issues", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "SystemRequirements": "GNU make", - "Imports": [ - "fs (>= 1.2.4)", - "rlang (>= 0.4.10)", - "htmltools (>= 0.5.1)", - "R6", - "rappdirs" - ], - "Suggests": [ - "testthat", - "knitr", - "rmarkdown", - "withr", - "shiny", - "curl" - ], - 
"VignetteBuilder": "knitr", - "Config/testthat/edition": "3", - "NeedsCompilation": "yes", - "Author": "Joe Cheng [aut], Timothy Mastny [aut], Richard Iannone [aut] (), Barret Schloerke [aut] (), Carson Sievert [aut, cre] (), Christophe Dervieux [ctb] (), RStudio [cph, fnd], Sass Open Source Foundation [ctb, cph] (LibSass library), Greter Marcel [ctb, cph] (LibSass library), Mifsud Michael [ctb, cph] (LibSass library), Hampton Catlin [ctb, cph] (LibSass library), Natalie Weizenbaum [ctb, cph] (LibSass library), Chris Eppstein [ctb, cph] (LibSass library), Adams Joseph [ctb, cph] (json.cpp), Trifunovic Nemanja [ctb, cph] (utf8.h)", - "Maintainer": "Carson Sievert ", - "Repository": "CRAN" - }, - "scales": { - "Package": "scales", - "Version": "1.4.0", - "Source": "Repository", - "Title": "Scale Functions for Visualization", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"aut\"), person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"cre\", \"aut\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Dana\", \"Seidel\", role = \"aut\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", - "Description": "Graphical scales map data to aesthetics, and provide methods for automatically determining breaks and labels for axes and legends.", - "License": "MIT + file LICENSE", - "URL": "https://scales.r-lib.org, https://github.com/r-lib/scales", - "BugReports": "https://github.com/r-lib/scales/issues", - "Depends": [ - "R (>= 4.1)" - ], - "Imports": [ - "cli", - "farver (>= 2.0.3)", - "glue", - "labeling", - "lifecycle", - "R6", - "RColorBrewer", - "rlang (>= 1.1.0)", - "viridisLite" - ], - "Suggests": [ - "bit64", - "covr", - "dichromat", - "ggplot2", - "hms (>= 0.5.0)", - "stringi", - "testthat (>= 3.0.0)" - ], - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/testthat/edition": "3", - "Config/usethis/last-upkeep": "2025-04-23", - "Encoding": "UTF-8", 
- "LazyLoad": "yes", - "RoxygenNote": "7.3.2", - "NeedsCompilation": "no", - "Author": "Hadley Wickham [aut], Thomas Lin Pedersen [cre, aut] (), Dana Seidel [aut], Posit Software, PBC [cph, fnd] (03wc8by49)", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, "selectr": { "Package": "selectr", "Version": "0.5-1", @@ -2987,7 +1677,7 @@ "Author": "Marek Gagolewski [aut, cre, cph] (), Bartek Tartanus [ctb], Unicode, Inc. and others [ctb] (ICU4C source code, Unicode Character Database)", "Maintainer": "Marek Gagolewski ", "License_is_FOSS": "yes", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "stringr": { "Package": "stringr", @@ -3032,7 +1722,7 @@ "NeedsCompilation": "no", "Author": "Hadley Wickham [aut, cre, cph], Posit Software, PBC [cph, fnd]", "Maintainer": "Hadley Wickham ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "sys": { "Package": "sys", @@ -3058,54 +1748,6 @@ "Maintainer": "Jeroen Ooms ", "Repository": "CRAN" }, - "systemfonts": { - "Package": "systemfonts", - "Version": "1.3.1", - "Source": "Repository", - "Type": "Package", - "Title": "System Native Font Finding", - "Authors@R": "c( person(\"Thomas Lin\", \"Pedersen\", , \"thomas.pedersen@posit.co\", role = c(\"aut\", \"cre\"), comment = c(ORCID = \"0000-0002-5147-4711\")), person(\"Jeroen\", \"Ooms\", , \"jeroen@berkeley.edu\", role = \"aut\", comment = c(ORCID = \"0000-0002-4035-0289\")), person(\"Devon\", \"Govett\", role = \"aut\", comment = \"Author of font-manager\"), person(\"Posit Software, PBC\", role = c(\"cph\", \"fnd\"), comment = c(ROR = \"03wc8by49\")) )", - "Description": "Provides system native access to the font catalogue. As font handling varies between systems it is difficult to correctly locate installed fonts across different operating systems. 
The 'systemfonts' package provides bindings to the native libraries on Windows, macOS and Linux for finding font files that can then be used further by e.g. graphic devices. The main use is intended to be from compiled code but 'systemfonts' also provides access from R.", - "License": "MIT + file LICENSE", - "URL": "https://github.com/r-lib/systemfonts, https://systemfonts.r-lib.org", - "BugReports": "https://github.com/r-lib/systemfonts/issues", - "Depends": [ - "R (>= 3.2.0)" - ], - "Imports": [ - "base64enc", - "grid", - "jsonlite", - "lifecycle", - "tools", - "utils" - ], - "Suggests": [ - "covr", - "farver", - "ggplot2", - "graphics", - "knitr", - "ragg", - "rmarkdown", - "svglite", - "testthat (>= 2.1.0)" - ], - "LinkingTo": [ - "cpp11 (>= 0.2.1)" - ], - "VignetteBuilder": "knitr", - "Config/build/compilation-database": "true", - "Config/Needs/website": "tidyverse/tidytemplate", - "Config/usethis/last-upkeep": "2025-04-23", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.2", - "SystemRequirements": "fontconfig, freetype2", - "NeedsCompilation": "yes", - "Author": "Thomas Lin Pedersen [aut, cre] (ORCID: ), Jeroen Ooms [aut] (ORCID: ), Devon Govett [aut] (Author of font-manager), Posit Software, PBC [cph, fnd] (ROR: )", - "Maintainer": "Thomas Lin Pedersen ", - "Repository": "CRAN" - }, "tibble": { "Package": "tibble", "Version": "3.3.1", @@ -3169,7 +1811,7 @@ "NeedsCompilation": "yes", "Author": "Kirill Müller [aut, cre] (ORCID: ), Hadley Wickham [aut], Romain Francois [ctb], Jennifer Bryan [ctb], Posit Software, PBC [cph, fnd] (ROR: )", "Maintainer": "Kirill Müller ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "tidycensus": { "Package": "tidycensus", @@ -3384,31 +2026,6 @@ "Maintainer": "Vitalie Spinu ", "Repository": "CRAN" }, - "tinytex": { - "Package": "tinytex", - "Version": "0.58", - "Source": "Repository", - "Type": "Package", - "Title": "Helper Functions to Install and Maintain TeX Live, and Compile LaTeX 
Documents", - "Authors@R": "c( person(\"Yihui\", \"Xie\", role = c(\"aut\", \"cre\", \"cph\"), email = \"xie@yihui.name\", comment = c(ORCID = \"0000-0003-0645-5666\")), person(given = \"Posit Software, PBC\", role = c(\"cph\", \"fnd\")), person(\"Christophe\", \"Dervieux\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4474-2498\")), person(\"Devon\", \"Ryan\", role = \"ctb\", email = \"dpryan79@gmail.com\", comment = c(ORCID = \"0000-0002-8549-0971\")), person(\"Ethan\", \"Heinzen\", role = \"ctb\"), person(\"Fernando\", \"Cagua\", role = \"ctb\"), person() )", - "Description": "Helper functions to install and maintain the 'LaTeX' distribution named 'TinyTeX' (), a lightweight, cross-platform, portable, and easy-to-maintain version of 'TeX Live'. This package also contains helper functions to compile 'LaTeX' documents, and install missing 'LaTeX' packages automatically.", - "Imports": [ - "xfun (>= 0.48)" - ], - "Suggests": [ - "testit", - "rstudioapi" - ], - "License": "MIT + file LICENSE", - "URL": "https://github.com/rstudio/tinytex", - "BugReports": "https://github.com/rstudio/tinytex/issues", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "NeedsCompilation": "no", - "Author": "Yihui Xie [aut, cre, cph] (ORCID: ), Posit Software, PBC [cph, fnd], Christophe Dervieux [ctb] (ORCID: ), Devon Ryan [ctb] (ORCID: ), Ethan Heinzen [ctb], Fernando Cagua [ctb]", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, "tzdb": { "Package": "tzdb", "Version": "0.5.0", @@ -3483,54 +2100,6 @@ "Maintainer": "Edzer Pebesma ", "Repository": "CRAN" }, - "urbnthemes": { - "Package": "urbnthemes", - "Version": "0.0.3", - "Source": "GitHub", - "Type": "Package", - "Title": "Additional theme and utilities for \"ggplot2\" in the Urban Institute style", - "Authors@R": "c( person(given = \"Aaron\", family = \"Williams\", middle = \"R.\", email = \"awilliams@urban.org\", role = c(\"aut\", \"cre\")), person(given = \"Kyle\", family = \"Ueyama\", email = \"kueyama@urban.org\", 
role = \"aut\"), person(given = \"Ajjit\", family = \"Narayanan\", email = \"anarayanan@urban.org\", role = \"aut\"), person(given = \"Ben\", family = \"Chartoff\", email = \"bchartoff@urban.org\", role = \"aut\") )", - "Description": "Align \"ggplot2\" output more closely with the Urban Institute Data Visualization style guide .", - "Depends": [ - "R (>= 3.1.0)" - ], - "Imports": [ - "extrafont", - "ggplot2 (>= 3.3.0)", - "ggrepel", - "grid", - "gridExtra", - "lifecycle", - "scales", - "conflicted", - "tibble", - "purrr", - "stringr", - "systemfonts" - ], - "License": "GPL-3", - "URL": "https://github.com/UrbanInstitute/urbnthemes", - "BugReports": "https://github.com/UrbanInstitute/urbnthemes/issues", - "Encoding": "UTF-8", - "LazyData": "true", - "RoxygenNote": "7.3.2", - "Suggests": [ - "knitr", - "rmarkdown", - "testthat" - ], - "VignetteBuilder": "knitr", - "Roxygen": "list(markdown = TRUE)", - "Author": "Aaron R. Williams [aut, cre], Kyle Ueyama [aut], Ajjit Narayanan [aut], Ben Chartoff [aut]", - "Maintainer": "Aaron R. Williams ", - "RemoteType": "github", - "RemoteHost": "api.github.com", - "RemoteUsername": "UrbanInstitute", - "RemoteRepo": "urbnthemes", - "RemoteRef": "main", - "RemoteSha": "c7c37dd1ce8d1fee7eb7e1aed7f4eb7dcaf4d5b4", - "Remotes": "extrafont=github::wch/extrafont" - }, "utf8": { "Package": "utf8", "Version": "1.2.6", @@ -3560,7 +2129,7 @@ "NeedsCompilation": "yes", "Author": "Patrick O. Perry [aut, cph], Kirill Müller [cre] (ORCID: ), Unicode, Inc. 
[cph, dtc] (Unicode Character Database)", "Maintainer": "Kirill Müller ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "uuid": { "Package": "uuid", @@ -3628,35 +2197,7 @@ "NeedsCompilation": "yes", "Author": "Hadley Wickham [aut], Lionel Henry [aut], Davis Vaughan [aut, cre], data.table team [cph] (Radix sort based on data.table's forder() and their contribution to R's order()), Posit Software, PBC [cph, fnd]", "Maintainer": "Davis Vaughan ", - "Repository": "https://packagemanager.posit.co/cran/latest" - }, - "viridisLite": { - "Package": "viridisLite", - "Version": "0.4.3", - "Source": "Repository", - "Type": "Package", - "Title": "Colorblind-Friendly Color Maps (Lite Version)", - "Date": "2026-02-03", - "Authors@R": "c( person(\"Simon\", \"Garnier\", email = \"garnier@njit.edu\", role = c(\"aut\", \"cre\")), person(\"Noam\", \"Ross\", email = \"noam.ross@gmail.com\", role = c(\"ctb\", \"cph\")), person(\"Bob\", \"Rudis\", email = \"bob@rud.is\", role = c(\"ctb\", \"cph\")), person(\"Marco\", \"Sciaini\", email = \"sciaini.marco@gmail.com\", role = c(\"ctb\", \"cph\")), person(\"Antônio Pedro\", \"Camargo\", role = c(\"ctb\", \"cph\")), person(\"Cédric\", \"Scherer\", email = \"scherer@izw-berlin.de\", role = c(\"ctb\", \"cph\")) )", - "Maintainer": "Simon Garnier ", - "Description": "Color maps designed to improve graph readability for readers with common forms of color blindness and/or color vision deficiency. The color maps are also perceptually-uniform, both in regular form and also when converted to black-and-white for printing. 
This is the 'lite' version of the 'viridis' package that also contains 'ggplot2' bindings for discrete and continuous color and fill scales and can be found at .", - "License": "MIT + file LICENSE", - "Encoding": "UTF-8", - "Depends": [ - "R (>= 2.10)" - ], - "Suggests": [ - "hexbin (>= 1.27.0)", - "ggplot2 (>= 1.0.1)", - "testthat", - "covr" - ], - "URL": "https://sjmgarnier.github.io/viridisLite/, https://github.com/sjmgarnier/viridisLite/", - "BugReports": "https://github.com/sjmgarnier/viridisLite/issues/", - "RoxygenNote": "7.3.3", - "NeedsCompilation": "no", - "Author": "Simon Garnier [aut, cre], Noam Ross [ctb, cph], Bob Rudis [ctb, cph], Marco Sciaini [ctb, cph], Antônio Pedro Camargo [ctb, cph], Cédric Scherer [ctb, cph]", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "vroom": { "Package": "vroom", @@ -3766,7 +2307,7 @@ "NeedsCompilation": "no", "Author": "Jim Hester [aut], Lionel Henry [aut, cre], Kirill Müller [aut], Kevin Ushey [aut], Hadley Wickham [aut], Winston Chang [aut], Jennifer Bryan [ctb], Richard Cotton [ctb], Posit Software, PBC [cph, fnd]", "Maintainer": "Lionel Henry ", - "Repository": "https://packagemanager.posit.co/cran/latest" + "Repository": "CRAN" }, "wk": { "Package": "wk", @@ -3797,54 +2338,6 @@ "Author": "Dewey Dunnington [aut, cre] (ORCID: ), Edzer Pebesma [aut] (ORCID: ), Anthony North [ctb]", "Repository": "CRAN" }, - "xfun": { - "Package": "xfun", - "Version": "0.56", - "Source": "Repository", - "Type": "Package", - "Title": "Supporting Functions for Packages Maintained by 'Yihui Xie'", - "Authors@R": "c( person(\"Yihui\", \"Xie\", role = c(\"aut\", \"cre\", \"cph\"), email = \"xie@yihui.name\", comment = c(ORCID = \"0000-0003-0645-5666\", URL = \"https://yihui.org\")), person(\"Wush\", \"Wu\", role = \"ctb\"), person(\"Daijiang\", \"Li\", role = \"ctb\"), person(\"Xianying\", \"Tan\", role = \"ctb\"), person(\"Salim\", \"Brüggemann\", role = \"ctb\", email = \"salim-b@pm.me\", comment 
= c(ORCID = \"0000-0002-5329-5987\")), person(\"Christophe\", \"Dervieux\", role = \"ctb\"), person() )", - "Description": "Miscellaneous functions commonly used in other packages maintained by 'Yihui Xie'.", - "Depends": [ - "R (>= 3.2.0)" - ], - "Imports": [ - "grDevices", - "stats", - "tools" - ], - "Suggests": [ - "testit", - "parallel", - "codetools", - "methods", - "rstudioapi", - "tinytex (>= 0.30)", - "mime", - "litedown (>= 0.6)", - "commonmark", - "knitr (>= 1.50)", - "remotes", - "pak", - "curl", - "xml2", - "jsonlite", - "magick", - "yaml", - "data.table", - "qs2" - ], - "License": "MIT + file LICENSE", - "URL": "https://github.com/yihui/xfun", - "BugReports": "https://github.com/yihui/xfun/issues", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "VignetteBuilder": "litedown", - "NeedsCompilation": "yes", - "Author": "Yihui Xie [aut, cre, cph] (ORCID: , URL: https://yihui.org), Wush Wu [ctb], Daijiang Li [ctb], Xianying Tan [ctb], Salim Brüggemann [ctb] (ORCID: ), Christophe Dervieux [ctb]", - "Maintainer": "Yihui Xie ", - "Repository": "CRAN" - }, "xml2": { "Package": "xml2", "Version": "1.5.2", @@ -3884,32 +2377,6 @@ "Author": "Hadley Wickham [aut], Jim Hester [aut], Jeroen Ooms [aut, cre], Posit Software, PBC [cph, fnd], R Foundation [ctb] (Copy of R-project homepage cached as example)", "Maintainer": "Jeroen Ooms ", "Repository": "CRAN" - }, - "yaml": { - "Package": "yaml", - "Version": "2.3.12", - "Source": "Repository", - "Type": "Package", - "Title": "Methods to Convert R Data to YAML and Back", - "Authors@R": "c( person(\"Hadley\", \"Wickham\", , \"hadley@posit.co\", role = \"cre\", comment = c(ORCID = \"0000-0003-4757-117X\")), person(\"Shawn\", \"Garbett\", , \"shawn.garbett@vumc.org\", role = \"ctb\", comment = c(ORCID = \"0000-0003-4079-5621\")), person(\"Jeremy\", \"Stephens\", role = c(\"aut\", \"ctb\")), person(\"Kirill\", \"Simonov\", role = \"aut\"), person(\"Yihui\", \"Xie\", role = \"ctb\", comment = c(ORCID = 
\"0000-0003-0645-5666\")), person(\"Zhuoer\", \"Dong\", role = \"ctb\"), person(\"Jeffrey\", \"Horner\", role = \"ctb\"), person(\"reikoch\", role = \"ctb\"), person(\"Will\", \"Beasley\", role = \"ctb\", comment = c(ORCID = \"0000-0002-5613-5006\")), person(\"Brendan\", \"O'Connor\", role = \"ctb\"), person(\"Michael\", \"Quinn\", role = \"ctb\"), person(\"Charlie\", \"Gao\", role = \"ctb\"), person(c(\"Gregory\", \"R.\"), \"Warnes\", role = \"ctb\"), person(c(\"Zhian\", \"N.\"), \"Kamvar\", role = \"ctb\") )", - "Description": "Implements the 'libyaml' 'YAML' 1.1 parser and emitter () for R.", - "License": "BSD_3_clause + file LICENSE", - "URL": "https://yaml.r-lib.org, https://github.com/r-lib/yaml/", - "BugReports": "https://github.com/r-lib/yaml/issues", - "Suggests": [ - "knitr", - "rmarkdown", - "testthat (>= 3.0.0)" - ], - "Config/testthat/edition": "3", - "Config/Needs/website": "tidyverse/tidytemplate", - "Encoding": "UTF-8", - "RoxygenNote": "7.3.3", - "VignetteBuilder": "knitr", - "NeedsCompilation": "yes", - "Author": "Hadley Wickham [cre] (ORCID: ), Shawn Garbett [ctb] (ORCID: ), Jeremy Stephens [aut, ctb], Kirill Simonov [aut], Yihui Xie [ctb] (ORCID: ), Zhuoer Dong [ctb], Jeffrey Horner [ctb], reikoch [ctb], Will Beasley [ctb] (ORCID: ), Brendan O'Connor [ctb], Michael Quinn [ctb], Charlie Gao [ctb], Gregory R. Warnes [ctb], Zhian N. 
Kamvar [ctb]", - "Maintainer": "Hadley Wickham ", - "Repository": "CRAN" } } } diff --git a/tests/testthat/test-calculate_custom_geographies.R b/tests/testthat/test-calculate_custom_geographies.R deleted file mode 100644 index 7d95591..0000000 --- a/tests/testthat/test-calculate_custom_geographies.R +++ /dev/null @@ -1,430 +0,0 @@ -####----Tests for calculate_custom_geographies()----#### - -testthat::test_that( - "Input validation: requires codebook attribute", - { - # Create a dataframe without codebook attribute - fake_data = data.frame( - GEOID = c("1", "2"), - group_id = c("A", "A"), - total_population_universe = c(100, 200) - ) - - testthat::expect_error( - calculate_custom_geographies(fake_data, group_id = "group_id"), - "codebook attribute" - ) - } -) - -testthat::test_that( - "Input validation: requires group_id column", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - testthat::expect_error( - calculate_custom_geographies(df, group_id = "nonexistent_column"), - "not found" - ) - } -) - -testthat::test_that( - "Input validation: requires weight_variable column", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Add a group column - df$custom_group = "A" - - testthat::expect_error( - calculate_custom_geographies(df, group_id = "custom_group", weight_variable = "nonexistent"), - "not found" - ) - } -) - -testthat::test_that( - "Codebook: aggregation_strategy column has correct values by variable type", - { - # Load actual codebook from disk (with pre-parsed columns) - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "codebook_2025-11-06.rds")), - "Test fixture not 
available") - codebook = readRDS(testthat::test_path("test-data", "codebook_2025-11-06.rds")) - - # Check that codebook has the aggregation_strategy column - testthat::expect_true("aggregation_strategy" %in% colnames(codebook)) - - # Check that each variable type gets the expected aggregation strategy - count_strategy = codebook %>% - dplyr::filter(variable_type == "Count") %>% - dplyr::pull(aggregation_strategy) %>% - unique() - testthat::expect_equal(count_strategy, "sum") - - percent_strategy = codebook %>% - dplyr::filter(variable_type == "Percent") %>% - dplyr::pull(aggregation_strategy) %>% - unique() - testthat::expect_equal(percent_strategy, "recalculate_percent") - - median_strategy = codebook %>% - dplyr::filter(variable_type == "Median ($)") %>% - dplyr::pull(aggregation_strategy) %>% - unique() - testthat::expect_equal(median_strategy, "weighted_average") - - metadata_strategy = codebook %>% - dplyr::filter(variable_type == "Metadata") %>% - dplyr::pull(aggregation_strategy) %>% - unique() - testthat::expect_equal(metadata_strategy, "metadata") - } -) - -testthat::test_that( - "Single-tract custom geography equals original tract values for count variables", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Take first 3 tracts and assign each to its own custom geography - df_subset = df %>% - dplyr::slice(1:3) %>% - dplyr::mutate(custom_group = GEOID) - - # Aggregate (should return same values since each tract is its own group) - result = calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check that count variables match - original_snap = df_subset$snap_received - result_snap = result$snap_received - - testthat::expect_equal(result_snap, original_snap) - } -) - -testthat::test_that( - "Aggregated percentages are within valid range [0, 1]", - { - 
# Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 10 tracts to 2 custom geographies - df_subset = df %>% - dplyr::slice(1:10) %>% - dplyr::mutate(custom_group = rep(c("A", "B"), each = 5)) - - # Aggregate - result = calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check that all percentage variables are in [0, 1] - pct_vars = result %>% - dplyr::select(dplyr::matches("_percent$")) %>% - colnames() - - for (var in pct_vars) { - vals = result[[var]] - testthat::expect_true( - all(is.na(vals) | (vals >= 0 & vals <= 1)), - info = paste0("Variable ", var, " has values outside [0,1]") - ) - } - } -) - -testthat::test_that( - "Summing tracts produces correct total for count variable", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 5 tracts to one group - df_subset = df %>% - dplyr::slice(1:5) %>% - dplyr::mutate(custom_group = "A") - - # Calculate expected sum - expected_snap = sum(df_subset$snap_received, na.rm = TRUE) - - # Aggregate - result = calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check - testthat::expect_equal(result$snap_received, expected_snap) - } -) - -testthat::test_that( - "MOEs are present for aggregated sum variables", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 5 tracts to one group - df_subset = df %>% - dplyr::slice(1:5) %>% - dplyr::mutate(custom_group = "A") - - # Aggregate - result = 
calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check that MOE exists for snap_received - testthat::expect_true("snap_received_M" %in% colnames(result)) - - # MOE should be positive - testthat::expect_true(result$snap_received_M > 0) - } -) - -testthat::test_that( - "GEOID column is renamed from group_id", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 5 tracts to one group - df_subset = df %>% - dplyr::slice(1:5) %>% - dplyr::mutate(neighborhood_id = "Neighborhood_A") - - # Aggregate - result = calculate_custom_geographies(df_subset, group_id = "neighborhood_id") - - # Check that GEOID column exists and contains the custom geography ID - testthat::expect_true("GEOID" %in% colnames(result)) - testthat::expect_equal(result$GEOID, "Neighborhood_A") - } -) - -testthat::test_that( - "Codebook attribute is preserved and updated", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 5 tracts to one group - df_subset = df %>% - dplyr::slice(1:5) %>% - dplyr::mutate(custom_group = "A") - - # Aggregate - result = calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check that codebook attribute exists - codebook = attr(result, "codebook") - testthat::expect_false(is.null(codebook)) - - # Check that aggregation notes were added - testthat::expect_true( - any(stringr::str_detect(codebook$definition, "Aggregated")) - ) - } -) - -testthat::test_that( - "Weighted average variables have reasonable values", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test 
fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 10 tracts to 2 groups - df_subset = df %>% - dplyr::slice(1:10) %>% - dplyr::mutate(custom_group = rep(c("A", "B"), each = 5)) - - # Aggregate - result = calculate_custom_geographies(df_subset, group_id = "custom_group") - - # Check that median household income exists and is positive - if ("median_household_income_universe_allraces" %in% colnames(result)) { - testthat::expect_true( - all(result$median_household_income_universe_allraces > 0, na.rm = TRUE) - ) - } - } -) - -testthat::test_that( - "NA values in group_id are handled with warning", - { - # Load test data - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - - # Assign first 5 tracts with some NAs - df_subset = df %>% - dplyr::slice(1:5) %>% - dplyr::mutate(custom_group = c("A", "A", NA, "B", "B")) - - # Should produce a warning about NA values - testthat::expect_message( - calculate_custom_geographies(df_subset, group_id = "custom_group"), - "NA values" - ) - } -) - -testthat::test_that( - "MOE for count variable matches manual se_sum calculation", - { - # Load test data and sample 100 rows - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - set.seed(42) - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - df_sample = df %>% - dplyr::slice_sample(n = 100) %>% - dplyr::mutate( - row_id = dplyr::row_number(), - group_id = dplyr::if_else(row_id %% 2 == 0, "even", "odd")) - - # Run calculate_custom_geographies - result = calculate_custom_geographies(df_sample, group_id = "group_id") - - # Test variable: snap_received (raw ACS count) - var_name = "snap_received" - moe_name = paste0(var_name, "_M") - - # Manual calculation for 
"odd" group - group_odd = df_sample %>% dplyr::filter(group_id == "odd") - manual_se = se_sum( - as.list(group_odd[[moe_name]]), - as.list(group_odd[[var_name]])) - manual_moe = manual_se * 1.645 - - # Get result from calculate_custom_geographies - auto_moe = result %>% - dplyr::filter(GEOID == "odd") %>% - dplyr::pull(!!moe_name) - - testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) - } -) - -testthat::test_that( - "MOE for sum variable matches manual se_sum calculation", - { - # Load test data and sample 100 rows - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - set.seed(42) - df = readRDS(testthat::test_path("test-data", "test_data_2025-11-06.rds")) - df_sample = df %>% - dplyr::slice_sample(n = 100) %>% - dplyr::mutate( - row_id = dplyr::row_number(), - group_id = dplyr::if_else(row_id %% 2 == 0, "even", "odd")) - - # Run calculate_custom_geographies - result = calculate_custom_geographies(df_sample, group_id = "group_id") - - # Test variable: age_10_14_years (derived sum of male + female) - var_name = "age_10_14_years" - moe_name = paste0(var_name, "_M") - - # Manual calculation for "odd" group - group_odd = df_sample %>% dplyr::filter(group_id == "odd") - manual_se = se_sum( - as.list(group_odd[[moe_name]]), - as.list(group_odd[[var_name]])) - manual_moe = manual_se * 1.645 - - # Get result from calculate_custom_geographies - auto_moe = result %>% - dplyr::filter(GEOID == "odd") %>% - dplyr::pull(!!moe_name) - - testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) - } -) - -testthat::test_that( - "SE/MOE for percent variable matches manual calculation using se_sum and se_proportion_ratio", - { - # Load test data and sample 100 rows - testthat::skip_if_not( - file.exists(testthat::test_path("test-data", "test_data_2025-11-06.rds")), - "Test fixture not available") - set.seed(42) - df = readRDS(testthat::test_path("test-data", 
"test_data_2025-11-06.rds")) - df_sample = df %>% - dplyr::slice_sample(n = 100) %>% - dplyr::mutate( - row_id = dplyr::row_number(), - group_id = dplyr::if_else(row_id %% 2 == 0, "even", "odd")) - - # Run calculate_custom_geographies - result = calculate_custom_geographies(df_sample, group_id = "group_id") - - # Test variable: snap_received_percent - # Definition: Numerator = snap_received. Denominator = snap_universe. - pct_var = "snap_received_percent" - num_var = "snap_received" - denom_var = "snap_universe" - - # Manual calculation for "odd" group - group_odd = df_sample %>% dplyr::filter(group_id == "odd") - - # Step 1: Calculate aggregated estimates - num_est = sum(group_odd[[num_var]], na.rm = TRUE) - denom_est = sum(group_odd[[denom_var]], na.rm = TRUE) - - # Step 2: Calculate SEs for numerator and denominator using se_sum - num_se = se_sum( - as.list(group_odd[[paste0(num_var, "_M")]]), - as.list(group_odd[[num_var]])) - denom_se = se_sum( - as.list(group_odd[[paste0(denom_var, "_M")]]), - as.list(group_odd[[denom_var]])) - - # Step 3: Calculate SE for the proportion using se_proportion_ratio - manual_se = se_proportion_ratio( - estimate_numerator = num_est, - estimate_denominator = denom_est, - se_numerator = num_se, - se_denominator = denom_se) - manual_moe = manual_se * 1.645 - - # Get result from calculate_custom_geographies - auto_moe = result %>% - dplyr::filter(GEOID == "odd") %>% - dplyr::pull(paste0(pct_var, "_M")) - - testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) - } -) diff --git a/tests/testthat/test-interpolate_acs.R b/tests/testthat/test-interpolate_acs.R new file mode 100644 index 0000000..a1fbb55 --- /dev/null +++ b/tests/testthat/test-interpolate_acs.R @@ -0,0 +1,770 @@ +####----Tests for interpolate_acs()----#### + +test_data_path = testthat::test_path("fixtures", "test_data_2026-02-08.rds") +test_codebook_path = testthat::test_path("fixtures", "codebook_2026-02-08.rds") + +## Helper: subset data to a small set of 
tables so tests run in seconds
+## rather than minutes. Subsets both columns and codebook.
+fast_tables = c("total_population", "snap", "race", "median_household_income")
+
+slim_for_test = function(df, tables = fast_tables) {
+  codebook = attr(df, "codebook")
+  all_cb_vars = codebook$calculated_variable
+
+  ## Retain codebook variables whose names start with a requested table prefix
+  keep_vars = all_cb_vars[purrr::map_lgl(all_cb_vars, function(v) {
+    any(purrr::map_lgl(tables, ~ stringr::str_starts(v, .x)))
+  })]
+  keep_vars = c(keep_vars, "data_source_year", "GEOID", "NAME")
+  keep_cols = intersect(c(keep_vars, paste0(keep_vars, "_M")), colnames(df))
+
+  df_slim = df[, keep_cols, drop = FALSE]
+  attr(df_slim, "codebook") = codebook %>%
+    dplyr::filter(calculated_variable %in% keep_vars)
+  attr(df_slim, "resolved_tables") = tables
+  df_slim
+}
+
+####----Input Validation Tests----####
+
+testthat::test_that(
+  "Input validation: requires codebook attribute",
+  {
+    fake_data = data.frame(
+      GEOID = c("1", "2"),
+      target = c("A", "A"),
+      w = c(0.5, 0.5),
+      total_population_universe = c(100, 200)
+    )
+
+    testthat::expect_error(
+      interpolate_acs(fake_data, target_geoid = "target", weight = "w"),
+      "codebook attribute"
+    )
+
+    testthat::expect_error(
+      interpolate_acs(fake_data, target_geoid = "target"),
+      "codebook attribute"
+    )
+  }
+)
+
+testthat::test_that(
+  "Input validation: requires source_geoid column in .data",
+  {
+    testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available")
+    df = readRDS(test_data_path)
+
+    testthat::expect_error(
+      interpolate_acs(df, source_geoid = "nonexistent",
+                      target_geoid = "tgt", weight = "w"),
+      "not found in .data"
+    )
+  }
+)
+
+testthat::test_that(
+  "Input validation: requires target_geoid column when no crosswalk",
+  {
+    testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available")
+    df = readRDS(test_data_path)
+
+    testthat::expect_error(
+      interpolate_acs(df,
target_geoid = "nonexistent", weight = "w"), + "not found in .data" + ) + + testthat::expect_error( + interpolate_acs(df, target_geoid = "nonexistent"), + "not found in .data" + ) + } +) + +testthat::test_that( + "Input validation: requires weight column when no crosswalk (fractional mode)", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) + df$target = "A" + + testthat::expect_error( + interpolate_acs(df, target_geoid = "target", weight = "nonexistent"), + "not found in .data" + ) + } +) + +testthat::test_that( + "Input validation: crosswalk must be a data frame", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) + + testthat::expect_error( + interpolate_acs(df, crosswalk = "not_a_df", + target_geoid = "tgt", weight = "w"), + "must be a data frame" + ) + } +) + +testthat::test_that( + "Input validation: crosswalk must contain required columns", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) + + bad_xwalk = data.frame(GEOID = "1", tgt = "A") + + testthat::expect_error( + interpolate_acs(df, crosswalk = bad_xwalk, + target_geoid = "tgt", weight = "w"), + "not found in crosswalk" + ) + } +) + +testthat::test_that( + "Input validation: negative weights produce error", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:3) + df$target = "A" + df$w = c(-0.5, 0.5, 1.0) + + testthat::expect_error( + interpolate_acs(df, target_geoid = "target", weight = "w"), + "non-negative" + ) + } +) + +testthat::test_that( + "Input validation: non-unity weights produce warning", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:3) %>% slim_for_test() + + ## crosswalk that only sends 50% of each tract to a target 
+ xwalk = data.frame( + GEOID = df$GEOID, + target = "A", + w = 0.5 + ) + + testthat::expect_warning( + interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w"), + "do not sum to 1" + ) + } +) + +####----Fractional Allocation (weight != NULL) Tests----#### + +testthat::test_that( + "Identity crosswalk (weight=1, 1:1 mapping) returns original count values", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + ## Each tract maps to itself with weight 1 + xwalk = data.frame( + GEOID = df$GEOID, + target = df$GEOID, + w = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + ## Count variables should match original values + testthat::expect_equal( + result$snap_received, + df$snap_received + ) + testthat::expect_equal( + result$total_population_universe, + df$total_population_universe + ) + } +) + +testthat::test_that( + "Identity crosswalk preserves MOE values for count variables", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + target = df$GEOID, + w = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + testthat::expect_equal( + result$snap_received_M, + df$snap_received_M, + tolerance = 0.01 + ) + } +) + +testthat::test_that( + "Proportional split: weight=0.5 produces half the count", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1) %>% slim_for_test() + + ## Split one tract evenly into two targets + xwalk = data.frame( + GEOID = rep(df$GEOID, 2), + target = c("A", "B"), + w = c(0.5, 0.5) + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + ## 
Each target should get half the population + testthat::expect_equal( + result$total_population_universe[result$GEOID == "A"], + df$total_population_universe * 0.5 + ) + testthat::expect_equal( + result$total_population_universe[result$GEOID == "B"], + df$total_population_universe * 0.5 + ) + } +) + +testthat::test_that( + "Proportional split: MOE scales by weight", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1) %>% slim_for_test() + + xwalk = data.frame( + GEOID = rep(df$GEOID, 2), + target = c("A", "B"), + w = c(0.6, 0.4) + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + ## With a single source, MOE(w*X) = w * MOE(X), and se_sum of a single + ## element just returns that element's SE * 1.645 + testthat::expect_equal( + result$snap_received_M[result$GEOID == "A"], + df$snap_received_M * 0.6, + tolerance = 0.01 + ) + testthat::expect_equal( + result$snap_received_M[result$GEOID == "B"], + df$snap_received_M * 0.4, + tolerance = 0.01 + ) + } +) + +testthat::test_that( + "Aggregated count MOE matches manual se_sum calculation", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + set.seed(42) + df = readRDS(test_data_path) %>% dplyr::slice_sample(n = 10) %>% slim_for_test() + + ## Assign each tract fully to one of two groups (weight = 1) + xwalk = data.frame( + GEOID = df$GEOID, + target = rep(c("A", "B"), each = 5), + w = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + ## Manual calculation for group "A" + var_name = "snap_received" + moe_name = paste0(var_name, "_M") + + group_a = df %>% dplyr::slice(1:5) + manual_se = se_sum(as.list(group_a[[moe_name]]), + as.list(group_a[[var_name]])) + manual_moe = manual_se * 1.645 + + auto_moe = result %>% + dplyr::filter(GEOID == "A") %>% + dplyr::pull(!!moe_name) + + 
testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) + } +) + +testthat::test_that( + "Aggregated count MOE with fractional weights matches manual calculation", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:3) %>% slim_for_test() + + ## Two tracts go to target "A" with different weights + xwalk = data.frame( + GEOID = df$GEOID[1:2], + target = "A", + w = c(0.7, 0.3) + ) + + result = suppressWarnings( + interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w")) + + ## Manual: allocated MOEs are w * original MOE, then se_sum + var_name = "snap_received" + moe_name = paste0(var_name, "_M") + + allocated_moes = df[[moe_name]][1:2] * c(0.7, 0.3) + allocated_ests = df[[var_name]][1:2] * c(0.7, 0.3) + manual_se = se_sum(as.list(allocated_moes), as.list(allocated_ests)) + manual_moe = manual_se * 1.645 + + auto_moe = result %>% dplyr::pull(!!moe_name) + + testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) + } +) + +testthat::test_that( + "Interpolated percentages are within valid range [0, 1]", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:20) %>% slim_for_test() + + ## Split into target geographies with varying weights + set.seed(123) + geoids = df$GEOID + xwalk = purrr::map(geoids, function(g) { + n_targets = sample(1:3, 1) + weights = runif(n_targets) + weights = weights / sum(weights) + data.frame( + GEOID = g, + target = paste0("T", seq_len(n_targets)), + w = weights + ) + }) %>% purrr::list_rbind() + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + pct_vars = result %>% + dplyr::select(dplyr::matches("_percent$")) %>% + colnames() + + for (var in pct_vars) { + vals = result[[var]] + testthat::expect_true( + all(is.na(vals) | (vals >= 0 & vals <= 1)), + info = paste0("Variable ", var, " has values 
outside [0,1]") + ) + } + } +) + +testthat::test_that( + "Percent MOE matches manual se_proportion_ratio on interpolated components", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + set.seed(42) + df = readRDS(test_data_path) %>% dplyr::slice_sample(n = 10) %>% slim_for_test() + cb = attr(df, "codebook") + testthat::skip_if_not( + all(c("numerator_vars", "denominator_vars") %in% colnames(cb)), + "Test fixture codebook lacks parsed columns for percent MOE") + + ## All tracts go to a single target with weight = 1 (sum all tracts) + xwalk = data.frame( + GEOID = df$GEOID, + target = "ALL", + w = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + ## Test snap_received_percent + num_var = "snap_received" + denom_var = "snap_universe" + pct_var = "snap_received_percent" + + ## Manual: sum the components, then calculate proportion SE + num_est = sum(df[[num_var]], na.rm = TRUE) + denom_est = sum(df[[denom_var]], na.rm = TRUE) + + num_se = se_sum(as.list(df[[paste0(num_var, "_M")]]), + as.list(df[[num_var]])) + denom_se = se_sum(as.list(df[[paste0(denom_var, "_M")]]), + as.list(df[[denom_var]])) + + manual_se = se_proportion_ratio( + estimate_numerator = num_est, + estimate_denominator = denom_est, + se_numerator = num_se, + se_denominator = denom_se) + manual_moe = manual_se * 1.645 + + auto_moe = result %>% + dplyr::pull(paste0(pct_var, "_M")) + + testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) + } +) + +testthat::test_that( + "Crosswalk as separate data frame produces same result as columns in .data", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + target = c("A", "A", "B", "B", "B"), + w = 1.0 + ) + + ## Method 1: separate crosswalk + result1 = interpolate_acs(df, crosswalk = xwalk, + target_geoid = 
"target", weight = "w") + + ## Method 2: columns in .data + df_with_xwalk = df %>% + dplyr::left_join(xwalk, by = "GEOID") + result2 = interpolate_acs(df_with_xwalk, + target_geoid = "target", weight = "w") + + testthat::expect_equal(result1$total_population_universe, + result2$total_population_universe) + testthat::expect_equal(result1$snap_received, + result2$snap_received) + } +) + +testthat::test_that( + "GEOID column is populated from target_geoid (fractional mode)", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + neighborhood = c("Downtown", "Downtown", "Uptown", "Uptown", "Uptown"), + alloc = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "neighborhood", weight = "alloc") + + testthat::expect_true("GEOID" %in% colnames(result)) + testthat::expect_setequal(result$GEOID, c("Downtown", "Uptown")) + } +) + +testthat::test_that( + "Codebook attribute is preserved and updated with interpolation notes", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + target = "A", + w = 1.0 + ) + + result = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + codebook = attr(result, "codebook") + testthat::expect_false(is.null(codebook)) + testthat::expect_true( + any(stringr::str_detect(codebook$definition, "Interpolated")) + ) + } +) + +testthat::test_that( + "NA values in target_geoid are excluded with message", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + target = c("A", "A", NA, "B", "B"), + w = 1.0 + ) + + df_with_xwalk = df %>% + 
dplyr::left_join(xwalk, by = "GEOID") + + testthat::expect_message( + suppressWarnings( + interpolate_acs(df_with_xwalk, + target_geoid = "target", weight = "w")), + "NA values" + ) + } +) + +####----Complete Nesting (weight = NULL) Tests----#### + +testthat::test_that( + "weight = NULL: simple grouping produces correct count sums", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + df$custom_group = "A" + + expected_snap = sum(df$snap_received, na.rm = TRUE) + + result = interpolate_acs(df, target_geoid = "custom_group") + + testthat::expect_equal(result$snap_received, expected_snap) + } +) + +testthat::test_that( + "weight = NULL: single-tract group equals original values", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:3) %>% slim_for_test() + + df$custom_group = df$GEOID + + result = interpolate_acs(df, target_geoid = "custom_group") + + testthat::expect_equal(result$snap_received, df$snap_received) + testthat::expect_equal(result$total_population_universe, df$total_population_universe) + } +) + +testthat::test_that( + "weight = NULL: GEOID column is populated from target_geoid", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + df$neighborhood_id = "Neighborhood_A" + + result = interpolate_acs(df, target_geoid = "neighborhood_id") + + testthat::expect_true("GEOID" %in% colnames(result)) + testthat::expect_equal(result$GEOID, "Neighborhood_A") + } +) + +testthat::test_that( + "weight = NULL: codebook has Aggregated notes", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + df$custom_group = "A" + + result = interpolate_acs(df, target_geoid 
= "custom_group") + + codebook = attr(result, "codebook") + testthat::expect_false(is.null(codebook)) + testthat::expect_true( + any(stringr::str_detect(codebook$definition, "Aggregated")) + ) + } +) + +testthat::test_that( + "weight = NULL: MOEs are present for aggregated sum variables", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + df$custom_group = "A" + + result = interpolate_acs(df, target_geoid = "custom_group") + + testthat::expect_true("snap_received_M" %in% colnames(result)) + testthat::expect_true(result$snap_received_M > 0) + } +) + +testthat::test_that( + "weight = NULL: aggregated percentages are within valid range [0, 1]", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:10) %>% slim_for_test() + + df$custom_group = rep(c("A", "B"), each = 5) + + result = interpolate_acs(df, target_geoid = "custom_group") + + pct_vars = result %>% + dplyr::select(dplyr::matches("_percent$")) %>% + colnames() + + for (var in pct_vars) { + vals = result[[var]] + testthat::expect_true( + all(is.na(vals) | (vals >= 0 & vals <= 1)), + info = paste0("Variable ", var, " has values outside [0,1]") + ) + } + } +) + +testthat::test_that( + "weight = NULL: NA values in target_geoid excluded with message", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + df$custom_group = c("A", "A", NA, "B", "B") + + testthat::expect_message( + interpolate_acs(df, target_geoid = "custom_group"), + "NA values" + ) + } +) + +testthat::test_that( + "weight = NULL: MOE for count matches manual se_sum", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + set.seed(42) + df = readRDS(test_data_path) %>% dplyr::slice_sample(n = 10) %>% slim_for_test() 
+ + df$group_id = rep(c("A", "B"), each = 5) + + result = interpolate_acs(df, target_geoid = "group_id") + + var_name = "snap_received" + moe_name = paste0(var_name, "_M") + + group_a = df %>% dplyr::slice(1:5) + manual_se = se_sum(as.list(group_a[[moe_name]]), + as.list(group_a[[var_name]])) + manual_moe = manual_se * 1.645 + + auto_moe = result %>% + dplyr::filter(GEOID == "A") %>% + dplyr::pull(!!moe_name) + + testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) + } +) + +testthat::test_that( + "weight = NULL: percent MOE matches manual calculation", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + set.seed(42) + df = readRDS(test_data_path) %>% dplyr::slice_sample(n = 10) %>% slim_for_test() + cb = attr(df, "codebook") + testthat::skip_if_not( + all(c("numerator_vars", "denominator_vars") %in% colnames(cb)), + "Test fixture codebook lacks parsed columns for percent MOE") + + df$group_id = rep(c("A", "B"), each = 5) + + result = interpolate_acs(df, target_geoid = "group_id") + + pct_var = "snap_received_percent" + num_var = "snap_received" + denom_var = "snap_universe" + + group_a = df %>% dplyr::slice(1:5) + + num_est = sum(group_a[[num_var]], na.rm = TRUE) + denom_est = sum(group_a[[denom_var]], na.rm = TRUE) + + num_se = se_sum(as.list(group_a[[paste0(num_var, "_M")]]), + as.list(group_a[[num_var]])) + denom_se = se_sum(as.list(group_a[[paste0(denom_var, "_M")]]), + as.list(group_a[[denom_var]])) + + manual_se = se_proportion_ratio( + estimate_numerator = num_est, + estimate_denominator = denom_est, + se_numerator = num_se, + se_denominator = denom_se) + manual_moe = manual_se * 1.645 + + auto_moe = result %>% + dplyr::filter(GEOID == "A") %>% + dplyr::pull(paste0(pct_var, "_M")) + + testthat::expect_equal(manual_moe, auto_moe, tolerance = 0.001) + } +) + +testthat::test_that( + "weight = NULL: crosswalk join works without weight column", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture 
not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:5) %>% slim_for_test() + + xwalk = data.frame( + GEOID = df$GEOID, + target = c("A", "A", "B", "B", "B") + ) + + result = interpolate_acs(df, crosswalk = xwalk, target_geoid = "target") + + testthat::expect_true("GEOID" %in% colnames(result)) + testthat::expect_setequal(result$GEOID, c("A", "B")) + testthat::expect_equal( + sum(result$total_population_universe), + sum(df$total_population_universe) + ) + } +) + +testthat::test_that( + "weight = NULL with weight = 1 crosswalk matches for counts", + { + testthat::skip_if_not(file.exists(test_data_path), "Test fixture not available") + df = readRDS(test_data_path) %>% dplyr::slice(1:10) %>% slim_for_test() + + ## weight = NULL: direct grouping + df_grouped = df + df_grouped$custom_group = rep(c("A", "B"), each = 5) + result_null = interpolate_acs(df_grouped, target_geoid = "custom_group") + + ## weight = "w" with w = 1: fractional mode with identity weights + xwalk = data.frame( + GEOID = df$GEOID, + target = rep(c("A", "B"), each = 5), + w = 1.0 + ) + result_weighted = interpolate_acs(df, crosswalk = xwalk, + target_geoid = "target", weight = "w") + + testthat::expect_equal( + result_null$total_population_universe, + result_weighted$total_population_universe + ) + testthat::expect_equal( + result_null$snap_received, + result_weighted$snap_received + ) + testthat::expect_equal( + result_null$snap_received_M, + result_weighted$snap_received_M, + tolerance = 0.001 + ) + } +) diff --git a/vignettes/custom-geographies.Rmd b/vignettes/custom-geographies.Rmd index e1932d2..e5fb093 100644 --- a/vignettes/custom-geographies.Rmd +++ b/vignettes/custom-geographies.Rmd @@ -1,8 +1,8 @@ --- -title: "Aggregating to Custom Geographies" +title: "Translating ACS Data to Custom Geographies" output: rmarkdown::html_vignette vignette: > - %\VignetteIndexEntry{custom-geographies} + %\VignetteIndexEntry{Translating ACS Data to Custom Geographies} %\VignetteEncoding{UTF-8} 
%\VignetteEngine{knitr::rmarkdown}
 editor_options:
@@ -23,6 +23,7 @@ knitr::opts_chunk$set(
 ```{r setup, echo = FALSE}
 library(dplyr)
 library(ggplot2)
+library(scales)
 library(stringr)
 library(urbnindicators)
 library(sf)
@@ -30,21 +31,20 @@ library(urbnthemes)
 library(tidycensus)
 ```

-Census tracts are useful geographic units, but their small populations
-often produce estimates with large margins of error. When analyzing
-data, these imprecise estimates make it difficult to detect meaningful
-differences between areas---even when real differences exist.
+For many ACS-supported geographies and variables, small sample sizes can lead to problematically large margins of error (MOEs). Census tracts, for example, are useful geographic units because they reveal spatial nuance, but in many cases their MOEs
+are larger than the estimates they accompany. (Similar issues arise with small-population places and counties, among many other geographies.) While it's easy to simply ignore these MOEs, estimates with large MOEs convey a misleading sense of nuance and precision, leading to incorrect inferences and decisionmaking. And for more rigorous analyses that test for statistically significant differences, large MOEs at the tract level make it difficult to detect meaningful differences between areas---even when real differences exist.

-`calculate_custom_geographies()` addresses this by aggregating
-tract-level (or really any level of data) data to user-defined geographies (e.g., neighborhoods,
-planning districts, or school zones). This aggregation increases sample
-sizes, reduces coefficients of variation, and enables more reliable
-statistical inference.
+Another challenge is that much decisionmaking occurs at geographies that are not
+supported directly by the ACS. Think, for example, of school districts, political wards, a city's neighborhoods, or the unincorporated area governed by a county.
Translating (or "interpolating") data to these geographies can require significant work, and the process of accurately interpolating not just estimates but also their MOEs is both error-prone and time-intensive, to the point where very few analysts do so. But MOEs are as fundamental to interpreting ACS data as are ACS estimates themselves. + +`interpolate_acs()` addresses some of these challenges by supporting users in interpolating both ACS estimates and MOEs to user-defined geographies. It requires a crosswalk that specifies the share of source geography values that should be allocated to each target geography, though for the special case where source geographies are perfectly nested within target geographies (e.g., tracts within counties), the function assigns weights of one to every observation (the case when `weight = NULL`). Note that `interpolate_acs()` always returns a non-spatial data frame, so geometry must be managed separately. # Example: DC Quadrants -We'll demonstrate using tract-level data for Washington, DC, comparing -the share of population receiving SNAP benefits across areas "quadrants". +We'll demonstrate `interpolate_acs()` first using tract-level data for Washington, DC, comparing the share of population receiving SNAP benefits across "quadrants". +We pull tract data with `spatial = TRUE` because we need the geometry
+both for assigning tracts to quadrants (via centroids) and for
+dissolving tract boundaries into quadrant boundaries later. Note that this is the simple case of interpolation where every source geography maps to one and only one target geography. We address the more complex case later. ```{r, message = FALSE} dc_tracts = compile_acs_data( @@ -57,11 +57,8 @@ dc_tracts = compile_acs_data( # Creating Custom Geographies -For illustration, we'll create four quadrants of DC by grouping tracts -based on their centroid coordinates.
In practice, you would join tracts -to meaningful boundaries like neighborhoods or planning areas, or you could -group arbitrary numbers of adjacent tracts to form pseudo-neighborhoods with -larger populations than are captured in any single tract. +For illustration, we'll create four quadrants of DC by assigning tracts to a quadrant +based on their centroid coordinates. ```{r} # Calculate tract centroids and assign to quadrants @@ -76,50 +73,97 @@ dc_tracts = dc_tracts %>% longitude < median(longitude) & latitude < median(latitude) ~ "Southwest", longitude >= median(longitude) & latitude < median(latitude) ~ "Southeast")) %>% select(-centroid, -longitude, -latitude) +``` + +Next, dissolve the tract geometries into quadrant boundaries. We do +this *before* calling `interpolate_acs()` because the function drops +geometry from its output. + +```{r} +# Dissolve tract boundaries into quadrant polygons +quadrant_geometry = dc_tracts %>% + group_by(quadrant) %>% + summarise(geometry = st_union(geometry), .groups = "drop") +``` + +Now aggregate the data with `interpolate_acs()`. With `weight = NULL` +(the default), it sums count variables, recalculates percentages from +the summed components, and propagates margins of error. +```{r} # Aggregate to quadrants -dc_quadrants = calculate_custom_geographies( +dc_quadrants = interpolate_acs( .data = dc_tracts, - group_id = "quadrant", - spatial = TRUE) + target_geoid = "quadrant") +``` + +Finally, rejoin the dissolved geometry to the aggregated data for +mapping. + +```{r} +# Rejoin quadrant geometry for mapping +dc_quadrants_sf = quadrant_geometry %>% + left_join(dc_quadrants, by = c("quadrant" = "GEOID")) ``` # Comparing Precision The maps below show the share of households receiving SNAP benefits. -Notice how aggregating to quadrants produces more precise estimates with -smaller margins of error. 
Indeed, the median coefficient of variation -(derived from the MOE) for tract level is greater than 30, a common -upper bound for "reliable" estimates. +Notice how aggregating to quadrants produces more precise estimates +with smaller margins of error. ```{r, fig.height = 4} - bind_rows( - dc_tracts %>% mutate(geography = "Tract"), - dc_quadrants %>% mutate(geography = "Quadrant")) %>% +map_tracts = dc_tracts %>% + ggplot() + + geom_sf(aes(fill = snap_received_percent), color = "white", linewidth = 0.1) + + scale_fill_gradientn( + colours = palette_urbn_cyan[c(3, 5, 7)], + labels = percent) + + theme_urbn_map() + + labs(fill = "SNAP Receipt (%)", subtitle = "Tract-level") + +map_quadrants = dc_quadrants_sf %>% + ggplot() + + geom_sf(aes(fill = snap_received_percent), color = "white", linewidth = 0.3) + + scale_fill_gradientn( + colours = palette_urbn_cyan[c(3, 5, 7)], + labels = percent) + + theme_urbn_map() + + labs(fill = "SNAP Receipt (%)", subtitle = "Quadrant-level") + +gridExtra::grid.arrange(map_tracts, map_quadrants, ncol = 2) +``` + +Indeed, the median Coefficient of Variation (CV; which reflects the +size of the MOE relative to the size of the estimate) at +the tract level exceeds 30---a common upper bound for "reliable" +estimates---while quadrant-level CVs are substantially lower. 
+ +```{r} +cv_tracts = dc_tracts %>% + st_drop_geometry() %>% mutate( - .by = geography, cv = (snap_received_percent_M / 1.645) / snap_received_percent * 100, - cv = if_else(is.infinite(cv), NA_real_, cv), - median_cv = round(median(cv, na.rm = TRUE)), - label = str_c(geography, " - median CV: ", median_cv)) %>% - ggplot() + - geom_sf(aes(fill = snap_received_percent), color = "white", linewidth = 0.1) + - scale_fill_continuous(palette = palette_urbn_cyan[c(3, 5, 7)], labels = scales::percent) + - theme_urbn_map() + - labs(fill = "SNAP Receipt (%)") + - facet_wrap(~ label) + cv = if_else(is.infinite(cv), NA_real_, cv)) %>% + summarise(median_cv = round(median(cv, na.rm = TRUE))) + +cv_quadrants = dc_quadrants %>% + mutate( + cv = (snap_received_percent_M / 1.645) / snap_received_percent * 100, + cv = if_else(is.infinite(cv), NA_real_, cv)) %>% + summarise(median_cv = round(median(cv, na.rm = TRUE))) ``` -The quadrant-level estimates have substantially lower margins of error, -indicating more reliable estimates. +Tract-level median CV: `r cv_tracts$median_cv`. Quadrant-level median +CV: `r cv_quadrants$median_cv`. # Detecting Statistically Significant Differences -By aggregating our tract observations, we can also calculate -statistically significant differences at greater geographic scales. -This enables analysis for more policy-relevant areas and helps mitigate -shortcomings associated with high measures of error for smaller-population -observations, which can lead to findings of no statistically significant differences. +At the tract level, many tracts are not statistically significantly different from
+the mean across all DC tracts. By aggregating our tract observations, we can detect statistically significant differences at greater geographic scales. This enables
+analysis for more policy-relevant areas and helps mitigate shortcomings
+associated with high measures of error for smaller-population
+observations.
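As an aside, the reason aggregation improves reliability can be sketched in a few lines of standalone R (toy values, not package code): when independent ACS estimates are summed, their 90%-level MOEs combine in quadrature, so the aggregate estimate grows faster than its aggregate MOE, and the coefficient of variation falls.

```r
# Toy illustration: aggregating independent estimates shrinks the CV.
# MOE of a sum of independent estimates: MOE_sum = sqrt(sum(MOE_i^2)).
estimates = c(120, 95, 140, 110)  # hypothetical tract-level counts
moes      = c(60, 55, 70, 50)     # hypothetical 90%-level MOEs

cv = function(est, moe) (moe / 1.645) / est * 100

tract_cvs    = cv(estimates, moes)                    # each roughly 28-35
aggregate_cv = cv(sum(estimates), sqrt(sum(moes^2)))  # roughly 15

median(tract_cvs) > aggregate_cv  # TRUE: aggregation roughly halves the CV
```

The estimate sums linearly while the MOE sums in quadrature, which is why the aggregate CV is so much smaller.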
```{r, fig.height = 4} + +# Calculate DC-wide SNAP rate for comparison @@ -141,7 +185,7 @@ tracts_sig = dc_tracts %>% snap_received_percent < dc_snap_rate ~ "Lower than DC average")) # Test significance at quadrant level -quadrants_sig = dc_quadrants %>% +quadrants_sig = dc_quadrants_sf %>% mutate( significant = tidycensus::significance( est1 = snap_received_percent, @@ -185,18 +229,139 @@ gridExtra::grid.arrange( gp = grid::gpar(fontsize = 12, fontface = "bold"))) ``` +# Weighted Interpolation to Imperfectly-Nested Geographies + +The examples above reflect direct aggregation: each tract belongs
+entirely to one quadrant because tracts are perfectly nested in the
+quadrants we define. But target geographies don't always align
+neatly with source geography boundaries. When source geographies partially overlap
+targets, you can use a crosswalk with weights to allocate
+data from source geographies to target geographies proportionally. For example,
+a common use case is aligning 2010-vintage tracts and 2020-vintage tracts. Because
+the Census Bureau redefines tract boundaries as part of the decennial census process,
+a given tract in 2010 frequently does not map 1:1 to a single tract in 2020 (even when there is a 2020 tract with the same GEOID!). This scenario, among many others,
+requires some form of proportional allocation. + +We'll demonstrate using the
+[`crosswalk`](https://github.com/UI-Research/crosswalk) package, which
+provides programmatic access to crosswalks from NHGIS, Geocorr, and
+other sources. We'll pull tract-level SNAP data for both 2019 (which uses
+2010-vintage tract boundaries) and 2024 (which uses 2020-vintage
+boundaries), crosswalk the 2019 data to 2020-vintage tracts, and then
+map the change in SNAP receipt---all expressed in a consistent set of
+tract boundaries.
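Before the full example, here is a minimal base-R sketch of the proportional allocation this scenario requires (toy values; the column names are illustrative, not the package's internals): counts are multiplied by the crosswalk weight and summed by target geography, and the weighted MOEs combine in quadrature.

```r
source_data = data.frame(
  GEOID   = c("t2010_A", "t2010_B"),
  count   = c(1000, 500),
  count_M = c(120, 90))

# 2010 tract A splits 60/40 across two 2020 tracts; tract B maps wholly to X.
xwalk = data.frame(
  GEOID  = c("t2010_A", "t2010_A", "t2010_B"),
  target = c("t2020_X", "t2020_Y", "t2020_X"),
  weight = c(0.6, 0.4, 1.0))

merged = merge(source_data, xwalk, by = "GEOID")

# Weighted counts sum by target; weighted MOEs combine in quadrature.
alloc_count = tapply(merged$count * merged$weight, merged$target, sum)
alloc_moe   = sqrt(tapply((merged$count_M * merged$weight)^2, merged$target, sum))

alloc_count  # t2020_X: 1100, t2020_Y: 400
```

Because each source tract's weights sum to one, the allocated counts preserve the source total (1,500 here), while each target's MOE reflects only the share of each source estimate it received.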
+ +```{r, eval = requireNamespace("crosswalk", quietly = TRUE) && nchar(Sys.getenv("IPUMS_API_KEY")) > 0, fig.height = 5} +# renv::install("UI-Research/crosswalk") + +# Pull 2019 ACS data (2010-vintage tracts) and 2024 ACS data (2020-vintage tracts) +dc_tracts = compile_acs_data( + years = c(2019, 2024), + tables = "snap", + geography = "tract", + states = "DC", + spatial = TRUE) + +# Get 2010→2020 tract crosswalk (population-weighted) +tract_crosswalk = crosswalk::get_crosswalk( + source_geography = "tract", + target_geography = "tract", + source_year = 2010, + target_year = 2020, + weight = "population") + +# Extract the crosswalk tibble and rename columns for interpolate_acs() +tract_xwalk = tract_crosswalk$crosswalks$step_1 %>% + filter(weighting_factor == "weight_population") %>% + select( + GEOID = source_geoid, + target_tract = target_geoid, + weight = allocation_factor_source_to_target) + +# Interpolate 2019 data to 2020-vintage tract boundaries +dc_2019_in_2020_tracts = interpolate_acs( + .data = dc_tracts %>% filter(data_source_year == 2019) %>% st_drop_geometry(), + target_geoid = "target_tract", + weight = "weight", + crosswalk = tract_xwalk) + +# Join crosswalked 2019 estimates to 2024 sf data and calculate change. +# The 2019 (2015-2019) and 2024 (2020-2024) 5-year ACS don't overlap, +# so estimates are independent and MOE_diff = sqrt(MOE_1^2 + MOE_2^2). 
+snap_change = dc_tracts %>%
+  filter(data_source_year == 2024) %>%
+  left_join(
+    dc_2019_in_2020_tracts %>%
+      select(GEOID,
+             snap_received_percent_2019 = snap_received_percent,
+             snap_received_percent_2019_M = snap_received_percent_M),
+    by = "GEOID") %>%
+  mutate(
+    snap_ppt_change = (snap_received_percent - snap_received_percent_2019) * 100,
+    snap_ppt_change_M = sqrt(snap_received_percent_M^2 + snap_received_percent_2019_M^2) * 100,
+    significant = abs(snap_ppt_change) > snap_ppt_change_M,
+    change_category = case_when(
+      !significant ~ "Not significant",
+      snap_ppt_change > 0 ~ "Significant increase",
+      snap_ppt_change < 0 ~ "Significant decrease")) + +# Map 1: percentage-point change in SNAP receipt +map_change = ggplot(snap_change) +
+  geom_sf(aes(fill = snap_ppt_change), color = "white", linewidth = 0.1) +
+  scale_fill_gradient2(
+    low = "#FDBF11",
+    mid = "#D2D2D2",
+    high = "#1696D2",
+    midpoint = 0) +
+  theme_urbn_map() +
+  theme(legend.position = "bottom") +
+  guides(fill = guide_colorbar(direction = "horizontal", title.position = "top")) +
+  labs(fill = "Change (ppt)", subtitle = "Percentage-point change") + +# Map 2: statistical significance of the change +change_colors = c(
+  "Significant increase" = "#1696D2",
+  "Significant decrease" = "#FDBF11",
+  "Not significant" = "#D2D2D2") + +map_sig = ggplot(snap_change) +
+  geom_sf(aes(fill = change_category), color = "white", linewidth = 0.1) +
+  scale_fill_manual(values = change_colors, na.value = "grey80") +
+  theme_urbn_map() +
+  theme(legend.position = "bottom") +
+  guides(fill = guide_legend(direction = "horizontal", title.position = "top")) +
+  labs(fill = "", subtitle = "Statistical significance (relative to zero change)") + +gridExtra::grid.arrange(
+  map_change, map_sig,
+  ncol = 2,
+  top = grid::textGrob(
+    "Change in SNAP receipt, 2019 to 2024",
+    gp = grid::gpar(fontsize = 12, fontface = "bold"))) +``` + +With `weight = NULL` (the default used in the quadrant example),
+`interpolate_acs()` assumes perfect nesting---each source geography's
+values are entirely attributed to the target geography. Providing a
+`weight` column enables proportional allocation for partial-overlap
+crosswalks. Count variables and their MOEs are multiplied by the
+crosswalk weight before summing; percentages are then recalculated
+from the allocated components. # Key Takeaways -1. **Aggregation improves precision**: Combining tracts into larger - geographies reduces CVs and margins of error. +`interpolate_acs()` helps make your analysis more precise: + +1. **Aggregation improves precision**: Combining small geographies into larger
+   ones reduces margins of error. Got some very-small-population
+   tracts or counties? Aggregate these with adjacent geographies to get
+   estimates with smaller (relative) MOEs. -2. **Better inference**: More precise estimates enable detection of - statistically significant differences that would otherwise be - obscured by sampling error. +2. **Better inference**: More precise estimates---those with smaller relative
+   MOEs---enable detection of statistically significant differences that would
+   otherwise be obscured by sampling error. -3. **More relevant units of analysis**: The ACS reports estimates at - many geographies, but there are many others that are not supported. - Think neighborhoods, wards, continuums of care, school districts, and more. - To robustly calculate errors and draw reliable inferences for these - other geographies is critical but challenging. +3. **Flexible target geographies**: The ACS reports estimates at many
+   geographies, but there are many others that are not supported.
+   Provide a crosswalk and you can interpolate both estimates and MOEs
+   to any geography you want---wards, school districts, etc.
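The change analysis above relies on a simple MOE-propagation rule worth keeping on hand. As a standalone sketch with hypothetical values: for two independent (non-overlapping) ACS estimates, the MOE of their difference is the square root of the sum of squared MOEs, and at the ACS's standard 90 percent confidence level a change is statistically significant when its absolute value exceeds that combined MOE.

```r
# Hypothetical SNAP-receipt rates (percent) with 90%-level MOEs
est_2019 = 18.2; moe_2019 = 2.1
est_2024 = 13.5; moe_2024 = 1.8

change     = est_2024 - est_2019            # -4.7 percentage points
moe_change = sqrt(moe_2019^2 + moe_2024^2)  # ~2.77

# Significant at 90% confidence: |change| exceeds the combined MOE
abs(change) > moe_change  # TRUE
```

Note that this rule assumes independence; it does not hold for overlapping 5-year ACS periods, which is why the example above compares the non-overlapping 2015-2019 and 2020-2024 releases.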
diff --git a/vignettes/quantified-survey-error.Rmd b/vignettes/quantified-survey-error.Rmd index a73411e..ce235d6 100644 --- a/vignettes/quantified-survey-error.Rmd +++ b/vignettes/quantified-survey-error.Rmd @@ -115,7 +115,7 @@ As shown below, variables that rely on larger sample sizes tend to have smaller MOEs. Typically, there are two strategies to reduce error: (1) aggregate estimates, either across geographies or across variables, or (2) use larger geographies. For the first strategy, -`calculate_custom_geographies()` can aggregate tract-level data to +`interpolate_acs()` can aggregate tract-level data to user-defined geographies (e.g., neighborhoods or planning districts) while properly propagating margins of error. See `vignette("custom-geographies")` for a worked example. diff --git a/vignettes/urbnindicators.Rmd b/vignettes/urbnindicators.Rmd index 538cbe8..caf83ed 100644 --- a/vignettes/urbnindicators.Rmd +++ b/vignettes/urbnindicators.Rmd @@ -255,7 +255,7 @@ codebook %>% ACS data are available for standard geographies, but many analyses require non-standard areas like neighborhoods or planning districts. -`calculate_custom_geographies()` aggregates tract-level data to any +`interpolate_acs()` aggregates tract-level data to any user-defined geography, properly re-deriving percentages and propagating margins of error. See [Aggregating to Custom Geographies](custom-geographies.html) for a