Subshell calls are slow. Collect all the changes for each round and do a single call to $(tr ...), instead of doing one per line.

From 5+ minutes to under 10 seconds for both parts, so it's now faster than awk.
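Concretely, the counting trick from the 06.sh notes below packs 62 possible values into one character per counter ('0'-'9', 'A'-'Z', 'a'-'z'), and tr shifts a character one step to increment it. The speedup comes from forking one tr for a whole range instead of one per character; the commit batches further still, down to one call per input line. A minimal sketch of that core reduction, with made-up names (`inc_slow`, `inc_fast`, `row`), not the repo's actual code:

```bash
#!/usr/bin/env bash
# Each counter is one character; tr '0-9A-Za-y' '1-9A-Za-z' maps every
# symbol to its successor ('z' saturates in this sketch, which is fine
# as long as a counter stays below 61).

# Slow: one $(tr ...) subshell fork per character in the range.
inc_slow() {
  local row=$1 from=$2 to=$3 i
  for ((i = from; i <= to; i++)); do
    row=${row:0:i}$(tr '0-9A-Za-y' '1-9A-Za-z' <<<"${row:i:1}")${row:i+1}
  done
  printf '%s\n' "$row"
}

# Fast: slice the whole affected range out, run tr once, splice it back.
inc_fast() {
  local row=$1 from=$2 to=$3
  printf '%s\n' "${row:0:from}$(tr '0-9A-Za-y' '1-9A-Za-z' \
    <<<"${row:from:to-from+1}")${row:to+1}"
}

row=$(printf '0%.0s' {1..20})   # twenty counters, all at zero
inc_fast "$row" 5 14            # prints 00000111111111100000
```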
2015/README.md: 3 additions & 0 deletions
@@ -29,6 +29,9 @@ This is where bash sucks. The best I could do was an array of strings, and subst
 1. Switch to select what to do. Use substring index magic. Fixed strings of 1000 ones and zeros to make life easier. Swap is a bother, and uses string substitution.
 2. A million countable items in bash are always a bother. Using $(tr ...) allows me to store 62 values per character in a string, but it's extremely slow. Awk is around 30 times faster.

+*Update:* By collecting all the changes in an array and doing a single $(tr ...) subshell call per input line, part 1 is over 2x faster.
+Part 2 is 40x faster, and outperforms awk.
+
 ### 07.sh
 1. After setting up, loop through all values, evaluating each one to a number once all of its parameters have been evaluated.
    Takes about 100 iterations through all values. Run in a subshell to keep the namespace clean for part 2.
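Note 1 in the hunk above ("substring index magic") reads as the part-1 trick: each row is a 1000-character string of ones and zeros, ranges are overwritten by splicing slices of fixed all-ones/all-zeros strings, and the "swap" (toggle) falls back to string substitution. A short sketch under those assumptions, shortened to a 20-character row with illustrative names:

```bash
#!/usr/bin/env bash
# Precomputed constant strings let us overwrite any range with pure
# substring expansion -- no loops, no subshells.
ones=$(printf '1%.0s' {1..20})
zeros=$(printf '0%.0s' {1..20})
row=$zeros

turn_on()  { row=${row:0:$1}${ones:0:$2-$1+1}${row:$2+1}; }   # args: from to
turn_off() { row=${row:0:$1}${zeros:0:$2-$1+1}${row:$2+1}; }

# Toggle is the bothersome one: flip via an intermediate character with
# string substitution, since bash has no per-character NOT.
toggle() {
  local seg=${row:$1:$2-$1+1}
  seg=${seg//1/x}; seg=${seg//0/1}; seg=${seg//x/0}
  row=${row:0:$1}${seg}${row:$2+1}
}

turn_on 3 12; toggle 8 15
echo "$row"                     # 00011111000001110000
```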
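The 07.sh note describes sweeping the wire table until every entry has collapsed to a number, with the whole solver in a subshell so part 2 starts from a clean namespace. A rough sketch of that shape (`wire` and `solve` are assumed names, and plain bash arithmetic stands in for the puzzle's AND/OR/NOT/SHIFT gates):

```bash
#!/usr/bin/env bash
# The ( ) body makes the whole solver a subshell: wire[] and every helper
# variable vanish on return, so a second call (part 2) starts clean.
solve() (
  declare -A wire=( [x]=123 [y]=456 [d]='x & y' [a]='d | 72' )
  pending=1
  while (( pending )); do
    pending=0
    for k in "${!wire[@]}"; do
      v=${wire[$k]}
      [[ $v =~ ^[0-9]+$ ]] && continue           # already a plain number
      for name in "${!wire[@]}"; do              # substitute resolved inputs
        [[ ${wire[$name]} =~ ^[0-9]+$ ]] && v=${v//$name/${wire[$name]}}
      done
      if [[ $v == *[a-z]* ]]; then
        pending=1                                # some input still pending
      else
        wire[$k]=$(( v ))                        # all inputs known: reduce
      fi
    done
  done
  echo "${wire[a]}"
)

part1=$(solve)
echo "$part1"      # 72  (d = 123 & 456 = 72, then a = 72 | 72 = 72)
```

The single-letter wire names keep the naive string substitution safe here; the real input needs more careful token matching.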