diff --git a/_drafts/inbetween-posts/2024-10-10-math-origins.md b/_drafts/inbetween-posts/2024-10-10-math-origins.md index 423f0dbd6be39..a5dcf683420aa 100644 --- a/_drafts/inbetween-posts/2024-10-10-math-origins.md +++ b/_drafts/inbetween-posts/2024-10-10-math-origins.md @@ -182,3 +182,8 @@ References - https://en.wikipedia.org/wiki/Al-Jabr - Mathematics in ancient Iraq: a social history. Eleanor Robson. 2008. + + + + \ No newline at end of file diff --git a/_drafts/inbetween-posts/2024-12-01-memory.md b/_drafts/inbetween-posts/2024-12-01-memory.md new file mode 100644 index 0000000000000..74d31b75603e0 --- /dev/null +++ b/_drafts/inbetween-posts/2024-12-01-memory.md @@ -0,0 +1,18 @@ +--- +title: Memory +--- + + \ No newline at end of file diff --git a/_drafts/inbetween-posts/2024-12-01-pols-tools.md b/_drafts/inbetween-posts/2024-12-01-pols-tools.md new file mode 100644 index 0000000000000..ba713c17374bd --- /dev/null +++ b/_drafts/inbetween-posts/2024-12-01-pols-tools.md @@ -0,0 +1,32 @@ +--- +title: Ideas for a better world +subtitle: +layout: post +permalink: better-world +categories: + - "inbetween" +--- + +## Fight back against online radicalisation + +How are people radicalised online? + + + +## Detecting deep fakes + +There do exist tools to detect (some kinds of) deep fakes. +However, they are expensive tools (require a lot of computational power). + +Main question here is; + +- what heuristics can we use to help us detect deep fakes? + + +## Automated corruption detection + +Using public data, can we detect corruption in government? +We could estimate how much money officials are likely to have, based on their job title and history. +The difference between this and their actual wealth (estimated from open sources) could be an indicator of corruption? + + \ No newline at end of file diff --git a/_drafts/pits/2024-08-10-pits-arbitrary-typical-experiments.md b/_drafts/pits/2024-08-10-pits-arbitrary-typical-experiments.md new file mode 100644 index 0000000000000..1384ff3758b66 --- /dev/null +++ b/_drafts/pits/2024-08-10-pits-arbitrary-typical-experiments.md @@ -0,0 +1,12 @@ +--- +title: "Typical set of arbitrary distributions" +subtitle: "Exploring the flow-based typical set approximation" +layout: post +permalink: /pits/arbitrary-typical-experiments +scholar: + bibliography: "pits.bib" +--- + +A line of work explores the 'linearisation' / 'rectification' of flows. + +Experimentally, we find that the rectification procedure allows us to approximately compute the typical set of the target distribution. diff --git a/_drafts/pits/2024-08-10-pits-flow.md b/_drafts/pits/2024-08-10-pits-flow.md deleted file mode 100644 index 335795690838c..0000000000000 --- a/_drafts/pits/2024-08-10-pits-flow.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "PITS plus flows" -subtitle: "A method to apply PITS to arbitrary distributions using neural flows" -layout: post -permalink: /pits/flow -scholar: - bibliography: "pits.bib" ---- - -Constriained search in the typical set is intractable for arbitrary distributions. -We propose a method to apply PITS to arbitrary distributions using neural flows. - - - - - -% TODO in which cases does $x - f^{-1}(\alpha f(x))$ approximate $\nabla_x p_f(x)$?? - -In general, the typical set, $\mathcal T_{p(x)}^{\epsilon}$, is intractable to compute for arbitrary continuous distributions. -However, we assume we have access to a flow that maps from clean data to a Gaussian source distribution, $f_{push}: P(X) \to \mathcal N(Y)$. 
- -% (needs proof) -We conjecture that is it possible to use a flow to sample from the typical set of arbitrary distributions (see future work \ref{futurePOTS}). -This can be achieved by exploiting the structure of the flow-based models Gaussian source distribution. - -For Gaussian distributions, the typical set has a simple closed-form solution, an annulus, with radius and thickness dependent on the dimension and standard deviation of the Gaussian. - -% (needs proof) -Projection into the typical set for a Gaussian can be approximated via a - -Thus, we implement POTS as: - -\begin{align*} -h = f(y) \tag{forward flow}\\ -\hat h = \text{proj}(h) \tag{project onto typical set}\\ -\hat x = f^{-1}(\hat h) \tag{backward flow} -\end{align*} - - -\begin{figure}[H] - \centering - \includegraphics[width=0.75\textwidth]{assets/pots-diagram.png} - \vspace{-1em} - \caption{A diagram of the POTS method. We start with the clean signal $x$, shown as a blue dot. The clean signal is then corrupted to produce the observed signal $y$, shown as a red dot. Next, we project the corrupted signal into the typical set to produce our denoised signal $\hat x$, shown as a green dot. The typical set is shown as a teal annulus. \label{f.A}} -\end{figure} diff --git a/_drafts/pits/2024-08-10-pits-inverse.md b/_drafts/pits/2024-08-10-pits-inverse.md new file mode 100644 index 0000000000000..432596dd34abf --- /dev/null +++ b/_drafts/pits/2024-08-10-pits-inverse.md @@ -0,0 +1,23 @@ +--- +title: "PITS for solving inverse problems" +subtitle: "Using a flow to correct for noise" +layout: post +permalink: /pits/inverse +scholar: + bibliography: "pits.bib" +--- + +Thus, we implement PITS as: + +$$ +\begin{align*} +h = f(y) \tag{forward flow}\\ +\hat h = \text{proj}(h) \tag{project into typical set}\\ +\hat x = f^{-1}(\hat h) \tag{backward flow} +\end{align*} +$$ + +![]({{site.baseurl}}/assets/pits/pots-diagram.png) + + +A diagram of the POTS method. We start with the clean signal $x$, shown as a blue dot. The clean signal is then corrupted to produce the observed signal $y$, shown as a red dot. Next, we project the corrupted signal into the typical set to produce our denoised signal $\hat x$, shown as a green dot. The typical set is shown as a teal annulus. diff --git a/on_hold/2018-9-17-det-lap.md b/_drafts/technical-posts/2018-9-17-det-lap.md similarity index 94% rename from on_hold/2018-9-17-det-lap.md rename to _drafts/technical-posts/2018-9-17-det-lap.md index a6c6e82f43260..90071a5aae37c 100644 --- a/on_hold/2018-9-17-det-lap.md +++ b/_drafts/technical-posts/2018-9-17-det-lap.md @@ -1,8 +1,22 @@ --- layout: post -title: The determinant and the Laplacian +title: Probability chain rule +subtitle: The determinant and the Laplacian +categories: + - "tutorial" --- +The probability chain rule is a useful tool. + +$$ +\begin{align} +y &= f(x) \\ +p(y) &= p(x) \cdot \log \det \Big( \frac{dy}{dx} \Big) \\ +p(y) &= e^{-\Delta(x)} \cdot p(x) \\ +\end{align} +$$ + + The __Determinant__ is a measure of how a linear function scales a vector space (_does it get stretched or contracted?_). The __Laplacian__ is a measure of how a gradient field expands/contracts space. These two notions have similar interpretations, yet their equations look different. What is the connection? (_If you are stuggling to care, skip to Why should we care?_) @@ -152,6 +166,7 @@ $$ potentially useful? $$ tr(A) = log(det(exp(A))) \\ +exp(tr(A)) = det(exp(A)) \\ $$ Nah. that's just using exp to turn multiplication into summation. 
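+
+Still, the identity is easy to check numerically. A minimal sketch (assuming `numpy` and `scipy` are available; `A` is just a random test matrix):
+
+```python
+import numpy as np
+from scipy.linalg import expm  # matrix exponential
+
+rng = np.random.default_rng(0)
+A = rng.normal(size=(4, 4))      # any square matrix works
+
+lhs = np.exp(np.trace(A))        # exp(tr(A))
+rhs = np.linalg.det(expm(A))     # det(exp(A))
+print(np.isclose(lhs, rhs))      # True
+```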
diff --git a/_drafts/technical-posts/2023-02-11-jax.md b/_drafts/technical-posts/2023-02-11-jax.md index ba3914439d5a7..1981b9ceb25ec 100644 --- a/_drafts/technical-posts/2023-02-11-jax.md +++ b/_drafts/technical-posts/2023-02-11-jax.md @@ -60,35 +60,6 @@ https://github.com/google/jax/blob/main/cloud_tpu_colabs/Wave_Equation.ipynb -## PRNG - -(is this really that important to cover here?) - -```python -from jax import random -key = random.PRNGKey(0) -``` -*** - -In Numpy, you are used to errors being thrown when you index an array outside of its bounds, like this: - -np.arange(10)[11] - -IndexError: index 11 is out of bounds for axis 0 with size 10 - -However, raising an error from code running on an accelerator can be difficult or impossible. - - -*** - -``` -make_jaxpr(permissive_sum)(x) -``` - -*** - -To get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels. - Resources - https://jax.readthedocs.io/en/latest/pytrees.html diff --git a/_drafts/technical-posts/2024-12-01-cov-entropy.md b/_drafts/technical-posts/2024-12-01-cov-entropy.md new file mode 100644 index 0000000000000..da47b7dbe9bd5 --- /dev/null +++ b/_drafts/technical-posts/2024-12-01-cov-entropy.md @@ -0,0 +1,134 @@ +--- +title: A change of variables formula for entropy +subtitle: The propagation of uncertainty +layout: post +permalink: cov-entropy +categories: + - "technical" +--- + +One way to quantify the uncertainty in a random variable is through its entropy. +A random variable with more possible outcomes has higher entropy. +For example, a dice with 6 sides (a cube) has higher entropy than a dice with 4 sides (a tetrahedron). + +Ideally we would like a formula for the entropy of $Y$ in terms of the entropy of $X$ and the properties of $f$. + + +Given a function $f: \mathbb R^n \to \mathbb R^m$, let's compute the entropy of the output, $H(f(x))$. + +We assume; + +- n = m +- $f$ is invertible +- $f$ is differentiable +- The Jacobian determinant is non-zero + + +The entropy of a random variable $x$ is defined as + +$$ +H(x) = - \int p(x) \log p(x) dx +$$ + +The change of variables formula for probability distributions is + +$$ +p(y) = p(x) |\det J_{f^{-1}}| = \frac{p(x)}{|\det J_f|} +$$ + +Therefore, the entropy of $y = f(x)$ is + +$$ +\begin{aligned} +H(y) &= - \int p(y) \log p(y) dy \\ +dy &= |\det J_f| dx \\ +&= - \int \frac{p(x)}{|\det J_f|} \log \left(\frac{p(x)}{|\det J_f|}\right) |\det J_f| dx \\ +&= - \int p(x) \log \left(\frac{p(x)}{|\det J_f|}\right) dx \\ +&= - \int p(x) \left(\log p(x) - \log |\det J_f|\right) dx \\ +&= - \int p(x) \log p(x) dx + \int p(x) \log |\det J_f| dx \\ +&= H(x) + \mathbb{E}[\log |\det J_f|] +\end{aligned} +$$ + +If $f$ is linear, then + +$$ +\mathbb{E}[\log |\det J_f|] = \log |\det J_f| \\ +H(y) = H(x) + \log |\det J_f| +$$ + +If $f(x) = e^x$, then + +$$ +\begin{aligned} +\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log e^x] \\ +&= \mathbb{E}[x] \\ +&= \mu_x \\ +H(y) &= H(x) + \mu_x +\end{aligned} +$$ + +If $f(x) = x^a$, then + +$$ +\begin{aligned} +\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log a x^{a-1}] \\ +&= \mathbb{E}[\log a + (a-1) \log x] \\ +&= \log a + (a-1) \mathbb{E}[\log x] \\ +\end{aligned} +$$ + +Cannot simplfy further without knowing the distribution of $x$. +Or evaluating the integral in the expectation (which we don't want to do). 
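+
+As a quick sanity check of the linear case above: for a Gaussian $x$ both entropies have closed forms, so the formula can be verified numerically. A minimal sketch (assumes `numpy`/`scipy`; `A` is an arbitrary invertible matrix standing in for the linear map $f$):
+
+```python
+import numpy as np
+from scipy.stats import multivariate_normal
+
+rng = np.random.default_rng(0)
+n = 3
+A = rng.normal(size=(n, n))   # linear map f(x) = A x, so J_f = A
+cov_x = np.eye(n)             # x ~ N(0, I)
+
+h_x = multivariate_normal(np.zeros(n), cov_x).entropy()
+h_y = multivariate_normal(np.zeros(n), A @ cov_x @ A.T).entropy()  # y = A x is also Gaussian
+
+# H(y) = H(x) + log|det J_f|, with J_f = A for a linear map
+print(np.isclose(h_y, h_x + np.log(abs(np.linalg.det(A)))))  # True
+```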
+ +If $f(x) = \log x$, then + +$$ +\begin{aligned} +\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log \frac{1}{x}] \\ +&= \mathbb{E}[-\log x] \\ +&= -\mathbb{E}[\log x] \\ +\end{aligned} +$$ + +What??? + + + + + + + + +What if f = relu? + +$$ +\begin{aligned} +f(x) &= \begin{cases} + x, \text{if } x > 0 \\ + 0, \text{otherwise} +\end{cases} \\ +J_f &= \begin{cases} + 1, \text{if } x > 0 \\ + 0, \text{otherwise} +\end{cases} \\ +\end{aligned} +$$ + +$$ +\begin{aligned} +\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log J_f] \\ +&= \mathbb{E}[\log 1] \\ +&= ??? +\end{aligned} +$$ + +If we knew the cdf of x, then could calculate p(x>0) = 1 - cdf(0). +E[log 1] = p(x>0) log 1 \ No newline at end of file diff --git a/_drafts/technical-posts/2024-12-01-lit-review.md b/_drafts/technical-posts/2024-12-01-lit-review.md new file mode 100644 index 0000000000000..2f5c1167f8755 --- /dev/null +++ b/_drafts/technical-posts/2024-12-01-lit-review.md @@ -0,0 +1,33 @@ +--- +title: Keeping up to date with the literature +subtitle: Interesting topics in the field +layout: post +categories: + - "technical" +--- + +A few of my pet interests in the field of machine learning and AI. +Which I keep track of loosely, and read papers on when I have time. + +[Teaching]({{ site.baseurl }}/teaching-lit) + + + +https://openreview.net/forum?id=UdxpjKO2F9 + + + + +[Neural wave functions]({{ site.baseurl }}/neural-wave-lit) + + + + + +[Associative memory]({{ site.baseurl }}/memory-lit) + + + + + +[Bio-plausible backprop]({{ site.baseurl }}/bio-backprop-lit) \ No newline at end of file diff --git a/_drafts/technical-posts/2024-12-01-neural-wave.md b/_drafts/technical-posts/2024-12-01-neural-wave.md new file mode 100644 index 0000000000000..652ae50c6fa8e --- /dev/null +++ b/_drafts/technical-posts/2024-12-01-neural-wave.md @@ -0,0 +1,14 @@ +--- +title: Neural wave functions +subtitle: A lit review +layout: post +permalink: neural-wave +categories: + - "technical" +--- + + + + +ferminet, pauli net, and their descendants. +Neural Pfaffians: https://openreview.net/pdf?id=HRkniCWM3E \ No newline at end of file diff --git a/_drafts/technical-posts/2024-12-01-uncert-prop.md b/_drafts/technical-posts/2024-12-01-uncert-prop.md new file mode 100644 index 0000000000000..d62adea44c23b --- /dev/null +++ b/_drafts/technical-posts/2024-12-01-uncert-prop.md @@ -0,0 +1,97 @@ +--- +title: The propagation of uncertainty +subtitle: +layout: post +permalink: uncert-prop +categories: + - "technical" +--- + +What do we mean by uncertainty? + +- uncertainty in the data +- uncertainty in the model +- uncertainty in the parameters + + + +Aleotric + +$$ +y = f(x, \theta) \\ +p(y | x, \theta) = \frac{p(y, x, \theta)}{p(x, \theta)} +$$ + + +*** + +What do we mean by probability? + +For aleoteric uncertainty, the frequentist interpretation makes the most sense. +There is noise in the data, we can sample many times and count the number of times (the frequency) we get a certain value. + +For epistemic uncertainty, the bayesian interpretation makes the most sense. +We must act despite missing information, so we must pick the 'best' option. + + +*** + +In Machine Learning, there are several methods for uncertainty propagation. 
Here are the main approaches:
+
+- **Monte Carlo (MC) methods**: simple MC sampling, Latin hypercube sampling, importance sampling.
+  - Advantages: simple to implement, works with any model.
+  - Disadvantages: computationally expensive, requires many samples.
+
+- **Analytical methods**: the delta method (first-order Taylor expansion), the error propagation formula.
+  - Advantages: fast, exact for linear systems.
+  - Disadvantages: only works for simple functions, assumes Gaussian distributions.
+
+- **Bayesian methods**: Bayesian neural networks, Gaussian processes, variational inference.
+  - Advantages: provides full probability distributions.
+  - Disadvantages: can be computationally intensive.
+
+- **Ensemble methods**: the bootstrap, dropout as uncertainty, multiple models with different initializations.
+  - Advantages: can capture model uncertainty.
+  - Disadvantages: requires training multiple models.
+
+- **Interval arithmetic**: range propagation, affine arithmetic.
+  - Advantages: provides guaranteed bounds.
+  - Disadvantages: can be overly conservative, and suffers from the dependency problem (treating the same variable as independent) and the wrapping effect (overestimation in multiple dimensions).
+
+- **Polynomial chaos expansion**: represents uncertainty using orthogonal polynomials.
+  - Advantages: efficient for certain types of problems.
+  - Disadvantages: complex to implement.
\ No newline at end of file
diff --git a/_posts/personal-posts/2015-07-05-right-beliefs-wrong-reasons.md b/_posts/personal-posts/2015-07-05-right-beliefs-wrong-reasons.md
index 8e09d65ab14d1..5a84e44c4943e 100644
--- a/_posts/personal-posts/2015-07-05-right-beliefs-wrong-reasons.md
+++ b/_posts/personal-posts/2015-07-05-right-beliefs-wrong-reasons.md
@@ -9,12 +9,10 @@ subtitle: It occured to me that my belief in evolution was just as illogical as
 
 ![]({{site.baseurl}}/assets/right-beliefs-wrong-reasons/{{page.coverImage}})
 
-It occurred to me that the reasons many people believe in evolution could be just as illogical, or even more so, than the reasons people believe in god or creationism.
+It occurred to me that the reasons many people believe in evolution could be just as illogical as the reasons people believe in creationism (and/or god).
 
-It would be interesting to do a survey of people believing in evolution and creationism respectively and compare the reasons they cite for their belief.
-
-My main point is that many people may believe in evolution because; they are told that it is logical and scientific, it is the trendy thing to do, it is fun to look down on others and call them ignorant, everyone they know does. Where they believe the theory, without understanding any of the evidence and how that evidence validates the theory. Believing a theory, with out any understanding or evidence, is same as faith, or believing in a god.
+My main point is that many people may believe in evolution because they are told that it is logical and scientific, it is the trendy thing to do, it is fun to look down on others and call them ignorant, and everyone they know believes in it. They believe the theory without understanding any of the evidence or how that evidence validates the theory. Believing a theory, without any understanding or evidence, is the same as faith.
 
 So to those who take a intellectual high-ground against those who believe in creationism, I ask you to rethink your reasons and logic for believing evolution. What is your understanding of the evidence and how does that show that evolution actually occurred? And to those who believe in creationism I ask the same.
 
-This problem is far greater than those who believe in creationism. Peoples beliefs are easily influenced by those around them, their cultural environment. We need to encourage all people to understand their logical biases and to think critically, logically and scientifically. We should not tell people that they are wrong, ignorant and stupid because they believe the wrong thing. We should tell them they are wrong, ignorant and stupid because they don't have a logical and rigorous thought process.
+This problem is far greater than those who believe in creationism, or evolution. People's beliefs are easily influenced by those around them and their cultural environment. We need to encourage all people to understand their logical biases, to think critically, logically and scientifically, and to value evidence. We should not tell people that they are wrong, ignorant and stupid because they believe the wrong thing. We should tell them they are wrong, ignorant and stupid because they don't have a logical and rigorous thought process.
\ No newline at end of file
diff --git a/_drafts/autoint/2022-10-13-autoint.md b/_posts/technical-posts/autoint/2022-10-13-autoint.md
similarity index 100%
rename from _drafts/autoint/2022-10-13-autoint.md
rename to _posts/technical-posts/autoint/2022-10-13-autoint.md
diff --git a/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical-proof.md b/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical-proof.md
new file mode 100644
index 0000000000000..bd3e7d7bd6929
--- /dev/null
+++ b/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical-proof.md
@@ -0,0 +1,71 @@
+---
+title: "Typical set of arbitrary distributions"
+subtitle: "Proving that the image of the typical set is the typical set of the image"
+layout: post
+permalink: /pits/arbitrary-typical-proof
+scholar:
+  bibliography: "pits.bib"
+---
+
+Question:
+Given a flow, is the image of the typical set (in $X$) the same as the typical set (in $Y$)?
+
+More formally:
+
+- We have a random variable $\mathbf X$ with distribution $\rho_X$.
+- We have a flow $f$ that maps from $X$ to $Y$.
+- The typical set of $\mathbf Z$ is defined as $\mathcal T_\epsilon(\mathbf Z) = \{z\in Z: \mid - \frac{1}{N} \log p(z) - H(\mathbf Z) \mid \le \epsilon \}$.
+
+Does the image of the typical set in $\mathbf X$, $\{f(x): x\in \mathcal T(\mathbf X)\}$, equal the typical set of $\mathbf Y$, $\mathcal T(\mathbf Y)$, under the pushforward distribution $\rho_Y = f^{push}\rho_X$?
+
+***
+
+We need to show that $\forall x \in T(\mathbf X)$ we have $f(x) \in T(\mathbf Y)$, and vice versa.
+
+$$
+\begin{align*}
+p_Y(f(x)) &= \frac{p_X(x)}{|\det J(x)|} \tag{change of variables eqn}\\
+\log p_Y(y) &= \log \frac{p_X(x)}{|\det J(x)|} \\
+&= \log p_X(x) - \log |\det J(x)| \\
+\end{align*}
+$$
+
+$$
+\begin{align*}
+H(\mathbf Y) &= -\int p(y) \log p(y) dy \tag{defn of entropy} \\
+&= -\int \frac{p(x)}{|\det J(x)|} \log \frac{p(x)}{|\det J(x)|} |\det J(x)| dx \\
+&= -\int p(x) \left(\log p(x) - \log |\det J(x)|\right) dx\\
+&= H(\mathbf X) + \mathbb E[\log |\det J(x)|]\\
+\end{align*}
+$$
+
+$$
+\begin{align*}
+T_\epsilon(\mathbf Y) &= \{y\in \mathbf Y: \mid - \frac{1}{N} \log p(y) - H(\mathbf Y) \mid \le \epsilon \} \\
+&= \{x\in \mathbf X: \mid - \frac{1}{N} (\log p_X(x) - \log |\det J(x)|) - (H(\mathbf X) + \mathbb E[\log |\det J(x)|]) \mid \le \epsilon \} \\
+\end{align*}
+$$
+
+$T_\epsilon(\mathbf Y)$ is not generally equal to $T_\epsilon(\mathbf X)$ because of the terms involving the Jacobian determinant.
For equality to hold, we would need: + +$$ +\begin{align*} +\mathbb E[\log |\det J(x)|] = \log |\det J(x)| +\end{align*} +$$ + +then the two terms involving the Jacobian determinant would cancel out, leaving + +$$ +\begin{align*} +T_\epsilon(\mathbf Y) &= \{x\in \mathbf X: \mid - \frac{1}{N} \log p_x(x) - H(\mathbf X) \mid \le \epsilon \} \\ +&= T_\epsilon(\mathbf X) \\ +\end{align*} +$$ + +*** + +Therefore, the answer to the original question is: + +- For linear transformations (constant Jacobian determinant): Yes, the image of the typical set equals the typical set of the image. +- For general flows: No, they are not generally equal, due to the varying Jacobian determinant. \ No newline at end of file diff --git a/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical.md b/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical.md new file mode 100644 index 0000000000000..26d6ad7e4f0d0 --- /dev/null +++ b/_posts/technical-posts/pits/2024-08-10-pits-arbitrary-typical.md @@ -0,0 +1,22 @@ +--- +title: "Typical set of arbitrary distributions" +subtitle: "Constructing the typical set for arbitrary distributions" +layout: post +permalink: /pits/arbitrary-typical +scholar: + bibliography: "pits.bib" +--- + + + +In general, the typical set, $\mathcal T_{p(x)}^{\epsilon}$, is intractable to compute for arbitrary continuous distributions. + +However, we can approximate the typical set, using a rectified flow that maps from a target data distribution to a Gaussian source distribution. + +We back up this claim in two parts: + +- a derivation [showing]({{ site.baseurl }}/pits/arbitrary-typical-proof) that; + - for linear transformations the image of the typical set equals the typical set of the image. + - for general flows they are not generally equal. +- experimental evidence [showing]({{ site.baseurl }}/pits/arbitrary-typical-experiments) that; + - (WIP) the closer to flow is to being linear, the more accurate the typical set approximation. \ No newline at end of file diff --git a/_posts/technical-posts/pits/2024-08-10-pits-main.md b/_posts/technical-posts/pits/2024-08-10-pits-main.md index acd36f0b07608..1f8a710daa978 100644 --- a/_posts/technical-posts/pits/2024-08-10-pits-main.md +++ b/_posts/technical-posts/pits/2024-08-10-pits-main.md @@ -9,7 +9,6 @@ scholar: bibliography: "pits.bib" --- - The advances in generative modelling have shown that we can generate high-quality samples from complex distributions. A next step is to use these generative models as priors to help solve inverse problems. @@ -59,10 +58,10 @@ I wrote a few posts to help you understand PITS; [1.]({{ site.baseurl }}/pits/typical) Background on typicality \ [2.]({{ site.baseurl }}/pits/map) A simple worked example showing that MAP produces solutions that are not typical. \ -[4.]({{ site.baseurl }}/pits/flow) (WIP) A method to apply PITS arbitrary distributions (using neural flows). \ -[6.]({{ site.baseurl }}/pits/mnist-demo) (WIP) A demonstration of the PITS approach to inverse problems applied to neural flows. \ -[3.]({{ site.baseurl }}/pits/non-typical) (WIP) Does it matter if solutions are not typical? \ -[5.]({{ site.baseurl }}/pits/flow-theory) (WIP) Theory showing that in the Gaussian case, PITS combined with flows is principled. \ +[3.]({{ site.baseurl }}/pits/arbitrary-typical) Using neural flows we can approximate the typical set for arbitrary distributions. \ +[4.]({{ site.baseurl }}/pits/inverse) (WIP) How to combine typicality with flows to solve inverse problems. 
\ +[5.]({{ site.baseurl }}/pits/mnist-demo) (WIP) A demonstration of the PITS+flow approach to inverse problems. \ +[6.]({{ site.baseurl }}/pits/non-typical) (WIP) Does it matter if solutions are not typical? \ [7.]({{ site.baseurl }}/pits/review-dps) A brief review of methods attempting to solve inverse problems using s.o.t.a generative models.
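+
+For readers skimming this index, the core operation the later posts rely on is the `proj` step: mapping a latent point onto the typical set of the flow's Gaussian source distribution, which for $\mathcal N(0, I_n)$ concentrates on the shell $\|h\| \approx \sqrt{n}$. A minimal sketch of one possible projection (an illustrative assumption; the posts above do not pin down a specific implementation):
+
+```python
+import numpy as np
+
+def proj_typical_gaussian(h: np.ndarray) -> np.ndarray:
+    """Radially rescale h onto the typical shell of a standard Gaussian in R^n."""
+    n = h.shape[-1]
+    radius = np.linalg.norm(h, axis=-1, keepdims=True)
+    return h * np.sqrt(n) / radius
+
+h = np.random.default_rng(0).normal(size=(5, 128))
+print(np.linalg.norm(proj_typical_gaussian(h), axis=-1))  # each norm ~ sqrt(128)
+```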