@Aeilert Aeilert commented Nov 24, 2021

Hi @tonyfujs

I made an attempt to simplify and improve md_compute_lorenz(). I think this version makes it clearer what is going on, but I am not sure it is actually better: computation time is marginally lower, but memory allocation is higher.

E.g.

data("md_ABC_2000_income")
df <- wbpip:::md_clean_data(md_ABC_2000_income,
                            welfare = "welfare",
                            weight  = "weight")$data
bench::mark() results:

# A tibble: 2 x 13
  expression                                            min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr>                                       <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 wbpip:::md_compute_lorenz(df$welfare, df$weight)    782us   1.11ms      712.    18.2KB     2.86   996     4
2 md_compute_lorenz_3(df$welfare, df$weight)          806us  875.7us     1033.    82.5KB     5.19   995     5
microbenchmark results:

Unit: microseconds
                                             expr   min     lq     mean median      uq     max neval cld
 wbpip:::md_compute_lorenz(df$welfare, df$weight) 842.2 929.25 1289.384 1005.5 1348.15 24203.3  1000   a
       md_compute_lorenz_3(df$welfare, df$weight) 821.5 907.35 1297.613  977.3 1232.10 36638.2  1000   a
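For readers outside the wbpip codebase, a minimal sketch of the kind of computation being benchmarked may help. This is not the wbpip implementation (neither md_compute_lorenz() nor md_compute_lorenz_3()); the function name and return shape below are assumptions for illustration only. A Lorenz curve pairs the cumulative population share with the cumulative welfare share after sorting observations by welfare:

```r
# Hypothetical sketch of a weighted Lorenz curve computation.
# Not the wbpip implementation; names and output format are illustrative.
compute_lorenz_sketch <- function(welfare, weight) {
  ord     <- order(welfare)          # sort observations by welfare
  welfare <- welfare[ord]
  weight  <- weight[ord]

  # Cumulative share of population (weights) and of total welfare
  cum_population <- cumsum(weight) / sum(weight)
  cum_welfare    <- cumsum(welfare * weight) / sum(welfare * weight)

  data.frame(cum_population = cum_population,
             cum_welfare    = cum_welfare)
}

# A perfectly equal distribution puts the Lorenz curve on the diagonal:
lz <- compute_lorenz_sketch(welfare = rep(10, 4), weight = rep(1, 4))
# lz$cum_welfare equals lz$cum_population: 0.25 0.50 0.75 1.00
```

Most of the cost in a function like this is the sort plus a handful of vectorized passes, which is consistent with the small timing differences seen above; the extra memory in the rewritten version would come from allocating additional intermediate vectors.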

@Aeilert Aeilert requested a review from tonyfujs November 24, 2021 12:13