diff --git a/TODO b/TODO
new file mode 100644
index 0000000000000..92e9d8b516f5a
--- /dev/null
+++ b/TODO
@@ -0,0 +1,20 @@
+questions
+- how does algebraic statistics relate to my post about representations in linear algebra?
+-
+
+
+
+ideas
+- what would it take to fabricate a microchip at home?
+- imagine a person who would look at my life and be very jealous
+- Diagnosing countries and their mental disorders
+- Abolish mathematics. Mathematics was invented to help rich people keep track of their assets.
+- Pair tools and mathematical theorems. The hammer: ???,
+- Open problems for my back pocket. To carry around with me all the time.
+- CAPTCHAs. history / arms race. https://github.com/jpraychev/google-recaptcha
+- Generating pseudorandom numbers. Intuition, make a chaotic system. But how do you ensure it gives a uniform distribution!?
+- how many different types of memory are there?
+- are markets really efficient? What are the alternatives?
+- the importance of good audio design?! A set of 3d scenarios. With the player turning their head / moving. With poor / good audio design
+-
\ No newline at end of file
diff --git a/_bibliography/lm-chem.bib b/_bibliography/lm-chem.bib
new file mode 100644
index 0000000000000..27f5eeefef6d1
--- /dev/null
+++ b/_bibliography/lm-chem.bib
@@ -0,0 +1,100 @@
+---
+---
+References
+==========
+
+@article{Kuhn2015FacilitatingQC,
+ title={Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2 – a free in-house NMR database with integrated LIMS for academic service laboratories},
+ author={Stefan Kuhn and Nils E. Schl{\"o}rer},
+ journal={Magnetic Resonance in Chemistry},
+ year={2015},
+ volume={53},
+ pages={582--589},
+ url={https://api.semanticscholar.org/CorpusID:2571037}
+}
+
+@article{Rae2021ScalingLM,
+ title={Scaling Language Models: Methods, Analysis \& Insights from Training Gopher},
+ author={Jack W. Rae and Sebastian Borgeaud and Trevor Cai and Katie Millican and Jordan Hoffmann and Francis Song and John Aslanides and Sarah Henderson and Roman Ring and Susannah Young and Eliza Rutherford and Tom Hennigan and Jacob Menick and Albin Cassirer and Richard Powell and George van den Driessche and Lisa Anne Hendricks and Maribeth Rauh and Po-Sen Huang and Amelia Glaese and Johannes Welbl and Sumanth Dathathri and Saffron Huang and Jonathan Uesato and John F. J. Mellor and Irina Higgins and Antonia Creswell and Nat McAleese and Amy Wu and Erich Elsen and Siddhant M. Jayakumar and Elena Buchatskaya and David Budden and Esme Sutherland and Karen Simonyan and Michela Paganini and L. Sifre and Lena Martens and Xiang Lorraine Li and Adhiguna Kuncoro and Aida Nematzadeh and Elena Gribovskaya and Domenic Donato and Angeliki Lazaridou and Arthur Mensch and Jean-Baptiste Lespiau and Maria Tsimpoukelli and N. K. Grigorev and Doug Fritz and Thibault Sottiaux and Mantas Pajarskas and Tobias Pohlen and Zhitao Gong and Daniel Toyama and Cyprien de Masson d'Autume and Yujia Li and Tayfun Terzi and Vladimir Mikulik and Igor Babuschkin and Aidan Clark and Diego de Las Casas and Aurelia Guy and Chris Jones and James Bradbury and Matthew G. Johnson and Blake A. Hechtman and Laura Weidinger and Iason Gabriel and William S. Isaac and Edward Lockhart and Simon Osindero and Laura Rimell and Chris Dyer and Oriol Vinyals and Kareem W. Ayoub and Jeff Stanway and L. L. Bennett and Demis Hassabis and Koray Kavukcuoglu and Geoffrey Irving},
+ journal={ArXiv},
+ year={2021},
+ volume={abs/2112.11446},
+ url={https://api.semanticscholar.org/CorpusID:245353475}
+}
+
+@article{Chowdhery2022PaLMSL,
+ title={PaLM: Scaling Language Modeling with Pathways},
+ author={Aakanksha Chowdhery and Sharan Narang and Jacob Devlin and Maarten Bosma and Gaurav Mishra and Adam Roberts and Paul Barham and Hyung Won Chung and Charles Sutton and Sebastian Gehrmann and Parker Schuh and Kensen Shi and Sasha Tsvyashchenko and Joshua Maynez and Abhishek Rao and Parker Barnes and Yi Tay and Noam M. Shazeer and Vinodkumar Prabhakaran and Emily Reif and Nan Du and Ben Hutchinson and Reiner Pope and James Bradbury and Jacob Austin and Michael Isard and Guy Gur-Ari and Pengcheng Yin and Toju Duke and Anselm Levskaya and Sanjay Ghemawat and Sunipa Dev and Henryk Michalewski and Xavier Garc{\'i}a and Vedant Misra and Kevin Robinson and Liam Fedus and Denny Zhou and Daphne Ippolito and David Luan and Hyeontaek Lim and Barret Zoph and Alexander Spiridonov and Ryan Sepassi and David Dohan and Shivani Agrawal and Mark Omernick and Andrew M. Dai and Thanumalayan Sankaranarayana Pillai and Marie Pellat and Aitor Lewkowycz and Erica Moreira and Rewon Child and Oleksandr Polozov and Katherine Lee and Zongwei Zhou and Xuezhi Wang and Brennan Saeta and Mark D{\'i}az and Orhan Firat and Michele Catasta and Jason Wei and Kathleen S. Meier-Hellstern and Douglas Eck and Jeff Dean and Slav Petrov and Noah Fiedel},
+ journal={ArXiv},
+ year={2022},
+ volume={abs/2204.02311},
+ url={https://api.semanticscholar.org/CorpusID:247951931}
+}
+
+@article{Fedus2021SwitchTS,
+ title={Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
+ author={William Fedus and Barret Zoph and Noam M. Shazeer},
+ journal={ArXiv},
+ year={2021},
+ volume={abs/2101.03961},
+ url={https://api.semanticscholar.org/CorpusID:231573431}
+}
+
+@misc{schuhmann2021laion400mopendatasetclipfiltered,
+ title={LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs},
+ author={Christoph Schuhmann and Richard Vencu and Romain Beaumont and Robert Kaczmarczyk and Clayton Mullis and Aarush Katta and Theo Coombes and Jenia Jitsev and Aran Komatsuzaki},
+ year={2021},
+ eprint={2111.02114},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2111.02114},
+}
+
+@misc{reed2022generalistagent,
+ title={A Generalist Agent},
+ author={Scott Reed and Konrad Zolna and Emilio Parisotto and Sergio Gomez Colmenarejo and Alexander Novikov and Gabriel Barth-Maron and Mai Gimenez and Yury Sulsky and Jackie Kay and Jost Tobias Springenberg and Tom Eccles and Jake Bruce and Ali Razavi and Ashley Edwards and Nicolas Heess and Yutian Chen and Raia Hadsell and Oriol Vinyals and Mahyar Bordbar and Nando de Freitas},
+ year={2022},
+ eprint={2205.06175},
+ archivePrefix={arXiv},
+ primaryClass={cs.AI},
+ url={https://arxiv.org/abs/2205.06175},
+}
+
+@misc{rae2022scalinglanguagemodelsmethods,
+ title={Scaling Language Models: Methods, Analysis \& Insights from Training Gopher},
+ author={Jack W. Rae and Sebastian Borgeaud and Trevor Cai and Katie Millican and Jordan Hoffmann and Francis Song and John Aslanides and Sarah Henderson and Roman Ring and Susannah Young and Eliza Rutherford and Tom Hennigan and Jacob Menick and Albin Cassirer and Richard Powell and George van den Driessche and Lisa Anne Hendricks and Maribeth Rauh and Po-Sen Huang and Amelia Glaese and Johannes Welbl and Sumanth Dathathri and Saffron Huang and Jonathan Uesato and John Mellor and Irina Higgins and Antonia Creswell and Nat McAleese and Amy Wu and Erich Elsen and Siddhant Jayakumar and Elena Buchatskaya and David Budden and Esme Sutherland and Karen Simonyan and Michela Paganini and Laurent Sifre and Lena Martens and Xiang Lorraine Li and Adhiguna Kuncoro and Aida Nematzadeh and Elena Gribovskaya and Domenic Donato and Angeliki Lazaridou and Arthur Mensch and Jean-Baptiste Lespiau and Maria Tsimpoukelli and Nikolai Grigorev and Doug Fritz and Thibault Sottiaux and Mantas Pajarskas and Toby Pohlen and Zhitao Gong and Daniel Toyama and Cyprien de Masson d'Autume and Yujia Li and Tayfun Terzi and Vladimir Mikulik and Igor Babuschkin and Aidan Clark and Diego de Las Casas and Aurelia Guy and Chris Jones and James Bradbury and Matthew Johnson and Blake Hechtman and Laura Weidinger and Iason Gabriel and William Isaac and Ed Lockhart and Simon Osindero and Laura Rimell and Chris Dyer and Oriol Vinyals and Kareem Ayoub and Jeff Stanway and Lorrayne Bennett and Demis Hassabis and Koray Kavukcuoglu and Geoffrey Irving},
+ year={2022},
+ eprint={2112.11446},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2112.11446},
+}
+
+@misc{namazifar2020languagemodelneednatural,
+ title={Language Model is All You Need: Natural Language Understanding as Question Answering},
+ author={Mahdi Namazifar and Alexandros Papangelis and Gokhan Tur and Dilek Hakkani-Tür},
+ year={2020},
+ eprint={2011.03023},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2011.03023},
+}
+
+@misc{srivastava2023imitationgamequantifyingextrapolating,
+ title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},
+ author={Aarohi Srivastava and Abhinav Rastogi and Abhishek Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adrià Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ambrose Slone and Ameet Rahane and Anantharaman S. Iyer and Anders Andreassen and Andrea Madotto and Andrea Santilli and Andreas Stuhlmüller and Andrew Dai and Andrew La and Andrew Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakaş and B. Ryan Roberts and Bao Sheng Loe and Barret Zoph and Bartłomiej Bojanowski and Batuhan Özyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Howald and Bryan Orinion and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and César Ferri Ramírez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Ramirez and Clara E. Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Dan Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Moseguí González and Danielle Perszyk and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and David Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Dylan Schrader and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth Donoway and Ellie Pavlick and Emanuele Rodola and Emma Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fanyue Xia and Fatemeh Siar and Fernando Martínez-Plumed and Francesca Happé and Francois Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germán Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-López and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Hannah Kim and Hannah Rashkin and Hannaneh Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Schütze and Hiromu Yakura and Hongming Zhang and Hugh Mee Wong and Ian Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and Jackson Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fernández Fisac and James B. Simon and James Koppel and James Zheng and James Zou and Jan Kocoń and Jana Thompson and Janelle Wingfield and Jared Kaplan and Jarema Radom and Jascha Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jennifer Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Joan Waweru and John Burden and John Miller and John U. Balis and Jonathan Batchelder and Jonathan Berant and Jörg Frohberg and Jos Rozen and Jose Hernandez-Orallo and Joseph Boudeman and Joseph Guerr and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Omondi and Kory Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Lucas Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Colón and Luke Metz and Lütfi Kerem Şenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Maheen Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and Maria Jose Ramírez Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew L. Leavitt and Matthias Hagen and Mátyás Schubert and Medina Orduna Baitemirova and Melody Arnaud and Melvin McElrath and Michael A. Yee and Michael Cohen and Michael Gu and Michael Ivanitskiy and Michael Starritt and Michael Strube and Michał Swędrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Mitch Walker and Mo Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and Mukund Varma T and Nanyun Peng and Nathan A. Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas Roberts and Nick Doiron and Nicole Martinez and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha S. Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter Chang and Peter Eckersley and Phu Mon Htut and Pinyu Hwang and Piotr Miłkowski and Piyush Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and Qing Lyu and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ramon Risco and Raphaël Millière and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan LeBras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Samuel R. Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima and Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-Hwan Lee and Spencer Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Biderman and Stephanie Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq Ali and Tatsu Hashimoto and Te-Lin Wu and Théo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and Timofei Kornev and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and Wout Vossen and Xiang Ren and Xiaoyu Tong and Xinran Zhao and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yangqiu Song and Yasaman Bahri and Yejin Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yufang Hou and Yuntao Bai and Zachary Seid and Zhuoye Zhao and Zijian Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu},
+ year={2023},
+ eprint={2206.04615},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2206.04615},
+}
\ No newline at end of file
diff --git a/_drafts/inbetween-posts/2020-10-12-mechanical-components.md b/_drafts/inbetween-posts/2020-10-12-mechanical-components.md
index d4e5ed9fb0387..10887e98832a5 100644
--- a/_drafts/inbetween-posts/2020-10-12-mechanical-components.md
+++ b/_drafts/inbetween-posts/2020-10-12-mechanical-components.md
@@ -3,6 +3,8 @@ layout: post
title: Mechanical components
---
+https://www.youtube.com/watch?v=M1-YeqGynlw
+
You may already be familiar with electrical components like transistors, capacitors, operational amplifiers, flip-flops, etc. Electrical components are defined as ...?

diff --git a/_drafts/inbetween-posts/2024-03-10-brilliant.md b/_drafts/inbetween-posts/2024-03-10-brilliant.md
new file mode 100644
index 0000000000000..82d54f0e210ad
--- /dev/null
+++ b/_drafts/inbetween-posts/2024-03-10-brilliant.md
@@ -0,0 +1,22 @@
+---
+title: "Brilliant"
+subtitle: ""
+layout: post
+permalink: /brilliant/
+categories:
+ - sci-fi
+---
+
+
+
+The gods have not spoken in a week.
+Abas lies
\ No newline at end of file
diff --git a/_drafts/inbetween-posts/2024-03-10-the-conductor.md b/_drafts/inbetween-posts/2024-03-10-the-conductor.md
new file mode 100644
index 0000000000000..5db9daebadf3e
--- /dev/null
+++ b/_drafts/inbetween-posts/2024-03-10-the-conductor.md
@@ -0,0 +1,19 @@
+---
+title: "The conductor"
+subtitle: "Consciousness is the conductor of the orchestra of the mind"
+layout: post
+permalink: /conductor/
+categories:
+ - philosophy
+---
+
+
+
+When I look at a conductor, I don't see a person in control of the orchestra. I see a delusion.
+
+It seems to me that the music would happen with or without the conductor.
+However, the conductor believes that they are in control of the music. They believe that the music would not happen without them.
+Confusing correlation with causation, they raise their hands, and the crescendo begins.
+
+I believe this is the perfect metaphor for consciousness. Like the conductor, we believe that we are in control of our thoughts and actions.
+That we are making the music that is our life.
\ No newline at end of file
diff --git a/_posts/inbetween-posts/2019-02-22-fusion-pollution.md b/_posts/inbetween-posts/2019-02-22-fusion-pollution.md
index e5032db279a00..dea06c5b0bf74 100644
--- a/_posts/inbetween-posts/2019-02-22-fusion-pollution.md
+++ b/_posts/inbetween-posts/2019-02-22-fusion-pollution.md
@@ -16,8 +16,8 @@ We achieve an astonishing fusion reaction output of 30:1 output to input, named
(20 years later) A report is written on the environmental impact of BARY. Li may pollute our atmosphere. Lithium is highly reactive and, if released in sufficient quantities, it could significantly alter the amount of oxygen in the atmosphere.
-(30 years later) The new norm is cheap energy. Young thrill-seekers swap their automotive rides for miniaturized spacecraft. Movies and video games are rendered in `65536p` (despite the fact we can't tell the difference between `2048p` and `4096p`). People even take 1hr long showers!
+(30 years later) The new norm is cheap energy. Young thrill-seekers swap their automotive rides for miniaturized spacecraft. Movies and video games are rendered in `65536p` (despite the fact we can't tell the difference between `2048p` and `4096p`). People even take 1hr long showers (cos who does that now???)!
(50 years later) Oxygen levels are declining. And we are seeing the effects of it! Some marine biologists have noticed large populations of fish missing. Separately, entomologists are documenting a decline in ant populations! Some scientists link waste lithium from BARY to the reduction in oxygen levels, but according to others there is little consensus on the issue. The conclusion is that the climate is changing. But is it anthropogenic?
-Same idiots, similar story, different physics.
\ No newline at end of file
+Same idiots, different physics.
\ No newline at end of file
diff --git a/_posts/inbetween-posts/2020-07-03-outsourced.md b/_posts/inbetween-posts/2020-07-03-outsourced.md
index 5dd30cde19ff7..02faa79bb8b41 100644
--- a/_posts/inbetween-posts/2020-07-03-outsourced.md
+++ b/_posts/inbetween-posts/2020-07-03-outsourced.md
@@ -18,16 +18,16 @@ People are removing their gastrointestinal organs and buying Gastro which regula
### Physio
-> Exercising (or simply walking) takes attention, and attention is a valuable resource. Don't waste another glance on such unimportant tasks, let us handle it for you.
-> The act of walking, running or simply exercising require attention, a valuable resource. What if you wouldn't have to spend another moment attending to these tasks? 'Physio' has you covered.
+> The act of walking, running, or exercising requires attention, a valuable resource. What if you didn't have to spend another moment attending to these tasks? 'Physio' has you covered.
+When switched on: control of your limbs will be outsourced to our state-of-the-art motor control policies. They have some great features like: auto-pilot, self-preservation, efficiency, workout, ... It can't get lost, it won't injure you, it won't harm others!
### Psydy
-> Learning a new language, sport, ..., can be confusing, intimidaing and hardas easily as selecting the topic and waiting for a week? Well, it is that easy with Psydy!
+> Learning a new language, sport, ..., can be confusing, intimidating and just hard.
+What if it was as easy as selecting the topic and waiting for a week? Well, it is that easy with Psydy!
-While you live your life as usual, we train a copy of your brain on the knowledge or skills you wish to aquire; taekwondo, violin, political science, 1920's art history, ... . We then update your brain to match the trained copy, imparting the learning done.
+While you live your life as usual, we train a copy of your brain on the knowledge or skills you wish to acquire: taekwondo, violin, political science, 1920's art history, ... . We then update your brain to include the trained copy, imparting the new skills.
### Emote
@@ -35,7 +35,7 @@ While you live your life as usual, we train a copy of your brain on the knowledg
'Emote' offers a way to modulate our emotional states like never before.
-Emote is a small device that sits on your brain and modulates the activity of your amygdala. It can be used to increase or decrease the intensity of emotions. It can also be used to induce emotions, like happiness, sadness, anger, ... .
+Emote is an implant that helps modulate the activity of your amygdala, hypothalamus, pituitary gland, hippocampus, and pineal gland. It can be used to induce emotions, like happiness, sadness, anger, ... To make you more productive, more social, more creative, ...
* * *
@@ -43,7 +43,7 @@ These advertisements are supposed to highlight of the dangers of outsourcing our
> Every convenience a piece of our humanity relinquished?
-Some famous examples include;
+Inspired by the following examples of outsourcing:
In the early '80s, IBM outsourced their software needs to a then-small company, Microsoft. Microsoft retained the rights to the software and sold it to other companies, effectively giving them a professional head start. In hindsight, this decision is noted as one of the main factors contributing to Microsoft becoming a global tech giant. [ref](https://spectrum.ieee.org/how-the-ibm-pc-won-then-lost-the-personal-computer-market)
diff --git a/_posts/inbetween-posts/2024-08-10-utilitarianism.md b/_posts/inbetween-posts/2024-08-10-utilitarianism.md
index ceec6900ff402..f51844fef1910 100644
--- a/_posts/inbetween-posts/2024-08-10-utilitarianism.md
+++ b/_posts/inbetween-posts/2024-08-10-utilitarianism.md
@@ -21,14 +21,16 @@ This thought experiment is used to explore the ethical dilemma of whether it is
The correct answer appears to be to pull the lever (or transplant the organs), saving the five people at the expense of the one person, maximizing the number of lives saved.
However, there exist many arguments against this conclusion. Each as nonsensical as the last.
-What is often ignored in this thought experiment is __(un)certainty__. In this case, we are certain that pulling the lever (or transplanting) will save the five people. However, in the real world, we are often uncertain about the consequences of our actions. This uncertainty is the problem with utilitarianism in practice.
+What is often ignored in this thought experiment is __(un)certainty__! In this case, we are certain that pulling the lever (or transplanting) will save the five people. However, in the real world, we are always uncertain about the consequences of our actions.
-We cannot predict the future. We cannot be sure that transplanting a heart will lead to a full recovery.
+Some will argue that we can work with risk: we can evaluate the probability of different outcomes and act in a way that maximizes the expected value. However, this is not possible either, for a similar reason: we can never know the true probabilities of different outcomes given our actions.
-Consider the following scenario:
+Because we cannot know the future with certainty, we can never evaluate one action as better than another.
+
+
\ No newline at end of file
diff --git a/_posts/personal-posts/2015-12-29-future-smarter-telf.md b/_posts/personal-posts/2015-12-29-future-smarter-telf.md
index 24f71ab2b1e29..a8dd8a7b20da5 100644
--- a/_posts/personal-posts/2015-12-29-future-smarter-telf.md
+++ b/_posts/personal-posts/2015-12-29-future-smarter-telf.md
@@ -1,10 +1,5 @@
---
title: "Future Telf, 2016"
-date: "2015-12-29"
-tags:
- - "dreams"
- - "future"
- - "inspiration"
layout: post
subtitle: Dreaming about projects
---
diff --git a/_posts/inbetween-posts/2018-11-24-open-minded-a-game.md b/_posts/personal-posts/2018-11-24-open-minded-a-game.md
similarity index 98%
rename from _posts/inbetween-posts/2018-11-24-open-minded-a-game.md
rename to _posts/personal-posts/2018-11-24-open-minded-a-game.md
index 523c0dff94eda..f38f5afc75f3a 100644
--- a/_posts/inbetween-posts/2018-11-24-open-minded-a-game.md
+++ b/_posts/personal-posts/2018-11-24-open-minded-a-game.md
@@ -3,7 +3,7 @@ title: "Being open minded"
date: "2018-11-24"
coverImage: "screenshot-2018-11-24-at-8-11-41-pm.png"
layout: post
-subtitle: Play a 'game' with me
+subtitle: Let's take turns reading books
categories:
- "interact"
---
diff --git a/_posts/personal-posts/2020-07-03-adversarial-collaboration-contest.md b/_posts/personal-posts/2020-07-03-adversarial-collaboration-contest.md
index 95b28dab6ae42..81dd88ea8a652 100644
--- a/_posts/personal-posts/2020-07-03-adversarial-collaboration-contest.md
+++ b/_posts/personal-posts/2020-07-03-adversarial-collaboration-contest.md
@@ -1,9 +1,10 @@
---
title: "Adversarial collaboration contest"
-date: "2020-07-03"
coverImage: "woman-yelling-at-cat.jpg"
layout: post
-subtitle: Play a game with me.
+subtitle: Let's have a tolerant debate
+categories:
+ - "interact"
---

diff --git a/_drafts/technical-posts/2022-11-10-lm-chem.md b/_posts/technical-posts/2022-11-10-lm-chem.md
similarity index 66%
rename from _drafts/technical-posts/2022-11-10-lm-chem.md
rename to _posts/technical-posts/2022-11-10-lm-chem.md
index 076c7de45532e..2e153d5673b5c 100644
--- a/_drafts/technical-posts/2022-11-10-lm-chem.md
+++ b/_posts/technical-posts/2022-11-10-lm-chem.md
@@ -1,7 +1,12 @@
---
layout: post
-title: Language models are all you need (for SMILES-based chemistry)
+title: Language models are all you need
+subtitle: for SMILES-based chemistry
permalink: lm-chem
+categories:
+ - proposal
+scholar:
+ bibliography: "lm-chem.bib"
---
> Let's train a single large language model on as many different chemistry tasks as possible.
@@ -32,7 +37,7 @@ Or;
- _"Molecular formula: C21H22N2O2. Observed 13C NMR peaks: 1.22, 1.41, 1.83, 1.84, ... Elucidated structure: \_\_\_\_\_\_\_\_\_\_"_
- Completion: _"O=C1CC2OCC=C3C4C2C2N1c1ccccc1C12CCN(C1C4)C3"_
-We could use data from [NMRShiftDB](https://nmrshiftdb.nmr.uni-koeln.de/) for this task.
+We could use data from NMRShiftDB {% cite Kuhn2015FacilitatingQC %} for this task.
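To make the data preparation concrete, here is a minimal sketch of how one such prompt/completion pair might be assembled. The record fields (`formula`, `c13_shifts_ppm`, `smiles`) are hypothetical names used for illustration, not nmrshiftdb2's actual schema.

```python
def make_example(record):
    """Format one (hypothetical) NMR record as a prompt/completion pair."""
    peaks = ", ".join(f"{p:.2f}" for p in sorted(record["c13_shifts_ppm"]))
    prompt = (
        f"Molecular formula: {record['formula']}. "
        f"Observed 13C NMR peaks: {peaks}. "
        "Elucidated structure: __________"
    )
    return prompt, record["smiles"]

# Toy record (ethanol); real data would come from an nmrshiftdb2 export.
record = {
    "formula": "C2H6O",
    "c13_shifts_ppm": [57.8, 18.2],
    "smiles": "CCO",
}
prompt, completion = make_example(record)
print(prompt)
print(completion)
```

The completion (the SMILES string) is what the model would be trained to generate after the blank.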
### Translation from natural language to synthetic 'program'
@@ -45,17 +50,17 @@ A dataset for this could be sourced from [orgsyn](http://www.orgsyn.org/) or [Pi
***
-> All of chemistry in a LM.
-
What are other tasks we could include?
- reactivity?
- drug design?
- retrosynthesis?
-How to include other types of chemical data?
+> All of chemistry in an LLM!
+
+Future work could investigate how to include other types of chemical data. For example:
-- [QM9](http://quantum-machine.org/datasets/) (smiles, 3D positions, electron densities)
+- [QM9](http://quantum-machine.org/datasets/) (3D positions, electron densities)
- [Open catalyst project](https://opencatalystproject.org/) (trajectories, forces and energies)
- Theoretical knowledge. Like the dataset used [here](https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/6393827c836cebbc757aedeb/original/assessment-of-chemistry-knowledge-in-large-language-models-that-generate-code.pdf) or what about past exams?
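One naive option, sketched here purely as an illustration (the `ELEMENT x y z` layout and separator are made up, and QM9's actual file format differs), is to flatten each 3D geometry into a plain string so it can enter the same token stream as SMILES:

```python
def serialize_geometry(symbols, coords, decimals=3):
    """Flatten a 3D molecular geometry into one text string,
    emitting an 'ELEMENT x y z' fragment per atom."""
    atoms = [
        f"{s} " + " ".join(f"{c:.{decimals}f}" for c in xyz)
        for s, xyz in zip(symbols, coords)
    ]
    return " | ".join(atoms)

# Toy water geometry in angstroms; real inputs would come from QM9.
symbols = ["O", "H", "H"]
coords = [(0.0, 0.0, 0.117), (0.0, 0.757, -0.467), (0.0, -0.757, -0.467)]
text = serialize_geometry(symbols, coords)
print(text)
```

Whether a subword tokenizer handles fixed-point coordinates well is an open question, and part of what this future work would need to test.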
@@ -64,34 +69,59 @@ How to include other types of chemical data?
Recently there have been two important observations with AI;
-- Deep learning works when done at scale. The bigger the better.
-- The ability of strings to represent many different kinds of information allows great flexibility which tasks a LM is trained to do.
+1. Deep learning works when done at scale. The bigger the better.
+2. The ability of strings to represent many different kinds of information allows great flexibility in which tasks an LLM can be trained to do.
-This trend of BIGGER deep learning can be seen by;
+### Deep learning at scale
- More parameters
- - [Gopher](https://arxiv.org/abs/2112.11446). 280B parameters
- - [PaLM](https://arxiv.org/abs/2204.02311). 540B parameters
- - [Switch Transformers](https://arxiv.org/abs/2101.03961). 1T parameters
+ - Gopher {% cite Rae2021ScalingLM %}. 280B parameters
+ - PaLM {% cite Chowdhery2022PaLMSL %}. 540B parameters
+ - Switch Transformers {% cite Fedus2021SwitchTS %}. 1T parameters
- More data
- - [MassiveText](https://arxiv.org/abs/2112.11446). 2 trillion tokens
- - [LTIP](https://arxiv.org/abs/2111.02114). 400 million image-caption pairs
+ - MassiveText {% cite Rae2021ScalingLM %}. 2 trillion tokens
+ - LTIP {% cite schuhmann2021laion400mopendatasetclipfiltered %}. 400 million image-caption pairs
- More tasks
- - A generalist agent [GATO](https://www.deepmind.com/publications/a-generalist-agent). Trained on;
+ - A generalist agent GATO {% cite reed2022generalistagent %}. Trained on;
- Simulated control tasks (596 tasks) ([DM Control](https://github.com/deepmind/dm_control), [DM lab](https://www.deepmind.com/open-source/deepmind-lab), [Procgen](https://openai.com/blog/procgen-benchmark/), [Atari ALE](https://github.com/mgbellemare/Arcade-Learning-Environment), [playroom](https://arxiv.org/abs/1707.03300), ... and more)
- vision and language (>204 tasks) ([MassiveText](https://arxiv.org/abs/2112.11446), [MultiModal MassiveWeb](https://arxiv.org/abs/2204.14198), [LTIP](https://arxiv.org/abs/2111.02114), [OKVQA](https://okvqa.allenai.org/), ... and more)
-Language models
+### The flexibility of strings and power of LLMs
-A 'model' of language. Given context predict a distribution over the likely next token.
+A language model is a 'model' of language. It models language as a prediction problem: given a context, predict a distribution over the likely next token.
- "The cat in the _" __->__ "hat" (probably)
- "Monday, Tuesday, Wednesday, _" __->__ Thursday (probably)
- "The most populous city in India is _" __->__ Mumbai (probably)
+Given a language model, we can frame many NLP tasks as predicting the next token, given a prompt.
+This led some to claim _"Language models are all you need"_ (for NLP tasks) {% cite namazifar2020languagemodelneednatural %}.
-## Aside: Few-shot meta learning vs fine-tuning
+For example; narrative understanding, textual entailment, entity resolution, question answering, POS tagging, grammatical parsing... can all be framed as predicting the next token, and thus can be done with an LLM.
+
+- textual entailment: "text: If you help the needy, God will reward you. hypothesis: Giving money to a poor man has good consequences." -> "positive" (text entails hypothesis)
+- POS tagging: "text: Bob made a book collector happy." -> "subject verb object(article adjective noun) verb-modifier"
+- sentiment analysis: "text: I love this movie!" -> "positive"
+- question answering: "text: The capital of France is _" -> "Paris"
+
+## Aside: more fun tasks LMs can do
+
+_Big-Bench_ {% cite srivastava2023imitationgamequantifyingextrapolating %}
+
+Analyses an LM's ability to do 204 different tasks, including;
+
+- auto_debugging
+ - "'\\nfor i in range(10):\\n\\ti' What is the value of i the third time line 2 is executed?" __->__ "2"
+- color matching
+ - "What is the color most closely matching this RGB representation: rgb(128, 2, 198)?" __->__ "purple"
+- chess_state_tracking_legal_moves
+ - "e2e4 g7g6 d2d4 f8g7 c1e3 g8f6 f2f3 d7d6 d1" __->__ "c1, d2, d3, e2"
+
+Others include;
+- ASCII MNIST, solve riddles, play sudoku, translate Hindi proverbs, identify math theorems, vitamin C fact verification, etc...
+
+
-- textual entailment: "text: If you help the needy, God will reward you. hypothesis: Giving money to a poor man has good consequences." -> "positive" (text entails hypothesis)
-- POS tagging: "text: Bob made a book collector happy." -> "subject verb object(article adjective noun) verb-modifier"
+## Conclusion
-## Aside: more fun tasks LMs can do
+Molecules and analytical data can be represented as strings. This allows us to use LLMs to do chemistry tasks.
+But how can we cram enough knowledge into an LLM to make it useful for chemistry?
+Let's train a single large language model on as many different chemistry tasks as possible.
-_Big-Bench_ [@Srivastava2022]
-
-Analyses a LM's ability to do 204 different tasks including.
-
-- auto_debugging
- - "'\\nfor i in range(10):\\n\\ti' What is the value of i the third time line 2 is executed?" __->__ "2"
-- color
- - "What is the color most closely matching this RGB representation: rgb(128, 2, 198)?" __->__ "purple"
-- chess_state_tracking_legal_moves
- - "e2e4 g7g6 d2d4 f8g7 c1e3 g8f6 f2f3 d7d6 d1" __->__ "c1, d2, d3, e2"
-
-others
-- ascii mnist, solve riddles, play sudoku, translate hindi proverbs, idendify math thorems, vitamin C fact verification, etc...
+## Bibliography
+{% bibliography --cited %}
\ No newline at end of file
diff --git a/assets/conductor.jpeg b/assets/conductor.jpeg
new file mode 100644
index 0000000000000..6e1eb514e4d82
Binary files /dev/null and b/assets/conductor.jpeg differ
diff --git a/index.md b/index.md
index c08b744e353ea..8f36068d67854 100644
--- a/index.md
+++ b/index.md
@@ -8,12 +8,11 @@ Here are some links to help you navigate:
- [Personal]({{site.baseurl}}/personal/)
- my [experiences]({{site.baseurl}}/experiences/)
- - (my poor) [mental health]({{site.baseurl}}/mental-health/)
+ - invitations for [interaction]({{site.baseurl}}/interact/)
- and more
- [Opinions and thoughts]({{site.baseurl}}/inbetween/)
- [economics and politics]({{site.baseurl}}/economics-politics/)
- [philosophising about ...]({{site.baseurl}}/philosophy/)
- - [with invitations for interaction]({{site.baseurl}}/interact/)
- [speculation]({{site.baseurl}}/speculation/)
- [sci-fi story ideas]({{site.baseurl}}/sci-fi/)
- and more
diff --git a/pages/interact.md b/pages/interact.md
new file mode 100644
index 0000000000000..051528106834a
--- /dev/null
+++ b/pages/interact.md
@@ -0,0 +1,13 @@
+---
+layout: page
+title: Interact with me
+permalink: /interact/
+---
+
+
+{{ post.title }}
+{{ post.subtitle }}