
Commit

act65 committed Dec 13, 2024
1 parent a8a0c8c commit 3183d3e
Showing 18 changed files with 486 additions and 85 deletions.
5 changes: 5 additions & 0 deletions _drafts/inbetween-posts/2024-10-10-math-origins.md
@@ -182,3 +182,8 @@ References

- https://en.wikipedia.org/wiki/Al-Jabr
- Mathematics in ancient Iraq: a social history. Eleanor Robson. 2008.



<!-- would be cool to explore different math notations?
-->
18 changes: 18 additions & 0 deletions _drafts/inbetween-posts/2024-12-01-memory.md
@@ -0,0 +1,18 @@
---
title: Memory
---

<!--
basic idea: take rohypnol, forget everything.
instead of sedatives and anaesthetics, use memory erasure.
workers in the future are given memory-erasure drugs to forget the work they did.
- classified work
- access to private information (doctors?)
- sex work

the point: did it happen if you don't remember it?
-->
32 changes: 32 additions & 0 deletions _drafts/inbetween-posts/2024-12-01-pols-tools.md
@@ -0,0 +1,32 @@
---
title: Ideas for a better world
subtitle:
layout: post
permalink: better-world
categories:
- "inbetween"
---

## Fight back against online radicalisation

How are people radicalised online?
<!-- from message boards, social media, etc. -->


## Detecting deep fakes

Tools do exist to detect (some kinds of) deep fakes.
However, they are expensive to run (they require a lot of computational power).

The main question here is:

- what cheap heuristics can we use to help us detect deep fakes?


## Automated corruption detection

Using public data, can we detect corruption in government?
We could estimate how much money officials are likely to have, based on their job titles and salary histories.
The difference between this estimate and their actual wealth (estimated from open sources) could be an indicator of corruption?
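
A minimal sketch of the heuristic (the savings model, rates, and numbers below are all hypothetical placeholders; a real version would need actual salary data, asset registers, and a defensible accumulation model):

```python
# Toy wealth-discrepancy score. All parameters are made up for illustration.

def expected_wealth(salary_history, savings_rate=0.2, annual_return=0.03):
    """Rough estimate of wealth accumulated from declared income alone."""
    wealth = 0.0
    for annual_salary in salary_history:
        wealth = wealth * (1 + annual_return) + savings_rate * annual_salary
    return wealth

def discrepancy_score(salary_history, observed_wealth):
    """Ratio of observed wealth to what the salary history can explain.
    A large score might merit a closer look, or might just reflect
    inheritance, a partner's income, or luck."""
    return observed_wealth / max(expected_wealth(salary_history), 1.0)

# e.g. ten years on a 120k salary vs. 2M in declared assets
print(discrepancy_score([120_000] * 10, observed_wealth=2_000_000))
```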

<!-- is this done in NZ? -->
12 changes: 12 additions & 0 deletions _drafts/pits/2024-08-10-pits-arbitrary-typical-experiments.md
@@ -0,0 +1,12 @@
---
title: "Typical set of arbitrary distributions"
subtitle: "Exploring the flow-based typical set approximation"
layout: post
permalink: /pits/arbitrary-typical-experiments
scholar:
bibliography: "pits.bib"
---

A line of work explores the 'linearisation' / 'rectification' of flows.

Experimentally, we find that the rectification procedure allows us to approximately compute the typical set of the target distribution.
45 changes: 0 additions & 45 deletions _drafts/pits/2024-08-10-pits-flow.md

This file was deleted.

23 changes: 23 additions & 0 deletions _drafts/pits/2024-08-10-pits-inverse.md
@@ -0,0 +1,23 @@
---
title: "PITS for solving inverse problems"
subtitle: "Using a flow to correct for noise"
layout: post
permalink: /pits/inverse
scholar:
bibliography: "pits.bib"
---

Thus, we implement PITS as:

$$
\begin{align*}
h = f(y) \tag{forward flow}\\
\hat h = \text{proj}(h) \tag{project into typical set}\\
\hat x = f^{-1}(\hat h) \tag{backward flow}
\end{align*}
$$
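
A minimal numpy sketch of these three steps, assuming the flow maps the data distribution to a standard Gaussian base, so that "projecting into the typical set" amounts to rescaling the latent onto the radius-$\sqrt{n}$ shell. The affine `flow_forward` / `flow_inverse` below are stand-ins for a trained flow $f$:

```python
import numpy as np

def flow_forward(y, scale=2.0, shift=1.0):
    # Stand-in for a trained normalizing flow f: data -> latent.
    return (y - shift) / scale

def flow_inverse(h, scale=2.0, shift=1.0):
    # Inverse of the stand-in flow.
    return h * scale + shift

def project_to_typical_set(h):
    # For a standard Gaussian base, samples concentrate on the shell of
    # radius sqrt(n); rescale the latent onto that shell.
    radius = np.sqrt(h.shape[-1])
    return h * radius / np.linalg.norm(h, axis=-1, keepdims=True)

def pits_denoise(y):
    h = flow_forward(y)                 # forward flow
    h_hat = project_to_typical_set(h)   # project into typical set
    return flow_inverse(h_hat)          # backward flow

y = np.random.randn(64) * 3.0 + 1.0     # a noisy observation
x_hat = pits_denoise(y)
```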

![]({{site.baseurl}}/assets/pits/pots-diagram.png)


A diagram of the PITS method. We start with the clean signal $x$, shown as a blue dot. The clean signal is corrupted to produce the observed signal $y$, shown as a red dot. We then project the corrupted signal into the typical set (shown as a teal annulus) to produce our denoised signal $\hat x$, shown as a green dot.
@@ -1,8 +1,22 @@
---
layout: post
title: The determinant and the Laplacian
title: Probability chain rule
subtitle: The determinant and the Laplacian
categories:
- "tutorial"
---

The probability chain rule (the change of variables formula for densities) is a useful tool.

$$
\begin{align}
y &= f(x) \\
p(y) &= p(x) \cdot \Big| \det \Big( \frac{dy}{dx} \Big) \Big|^{-1} \\
p(y) &= e^{-\Delta(x)} \cdot p(x) \\
\end{align}
$$
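
As a quick sanity check, a sketch comparing the change-of-variables density against scipy's lognormal for $f(x) = e^x$ with $x \sim \mathcal N(0, 1)$:

```python
import numpy as np
from scipy.stats import norm, lognorm

# y = f(x) = exp(x), with x ~ N(0, 1)
x = 0.7
y = np.exp(x)

# change of variables: p(y) = p(x) / |dy/dx| = p(x) / exp(x)
p_y_cov = norm.pdf(x) / np.exp(x)

# scipy's lognorm with s=1 is exactly the distribution of exp(N(0, 1))
p_y_ref = lognorm.pdf(y, s=1.0)

print(p_y_cov, p_y_ref)  # the two agree
```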


The __Determinant__ is a measure of how a linear function scales a vector space (_does it get stretched or contracted?_). The __Laplacian__ is a measure of how a gradient field expands/contracts space. These two notions have similar interpretations, yet their equations look different. What is the connection?

(_If you are struggling to care, skip to <a href="#why-should-we-care">Why should we care?</a>_)
@@ -152,6 +166,7 @@ $$
potentially useful?
$$
\text{tr}(A) = \log(\det(\exp(A))) \\
\exp(\text{tr}(A)) = \det(\exp(A)) \\
$$
Nah. That's just using $\exp$ / $\log$ to move between a sum (the trace) and a product (the determinant).
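
(For what it's worth, the identity itself is easy to check numerically; a quick sketch using scipy:)

```python
import numpy as np
from scipy.linalg import expm

A = np.random.randn(4, 4)
print(np.exp(np.trace(A)))        # exp(tr(A))
print(np.linalg.det(expm(A)))     # det(exp(A)), matches
```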

29 changes: 0 additions & 29 deletions _drafts/technical-posts/2023-02-11-jax.md
@@ -60,35 +60,6 @@ https://github.com/google/jax/blob/main/cloud_tpu_colabs/Wave_Equation.ipynb



## PRNG

(is this really that important to cover here?)

```python
from jax import random
key = random.PRNGKey(0)
```
***

In NumPy, you are used to errors being thrown when you index an array outside of its bounds, like this:

np.arange(10)[11]

IndexError: index 11 is out of bounds for axis 0 with size 10

However, raising an error from code running on an accelerator can be difficult or impossible.


***

```
make_jaxpr(permissive_sum)(x)
```

***

To get a view of your Python code that is valid for many different argument values, JAX traces it on abstract values that represent sets of possible inputs. There are multiple different levels of abstraction, and different transformations use different abstraction levels.

Resources

- https://jax.readthedocs.io/en/latest/pytrees.html
134 changes: 134 additions & 0 deletions _drafts/technical-posts/2024-12-01-cov-entropy.md
@@ -0,0 +1,134 @@
---
title: A change of variables formula for entropy
subtitle: The propagation of uncertainty
layout: post
permalink: cov-entropy
categories:
- "technical"
---

One way to quantify the uncertainty in a random variable is through its entropy.
Roughly, a random variable with more (equally likely) possible outcomes has higher entropy.
For example, a fair die with 6 sides (a cube) has higher entropy than a fair die with 4 sides (a tetrahedron).
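
Concretely, for fair dice (entropy in nats):

$$
H_{\text{cube}} = \log 6 \approx 1.79, \qquad H_{\text{tetrahedron}} = \log 4 \approx 1.39
$$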

Given a transformation $y = f(x)$ of a random variable $x$, ideally we would like a formula for the entropy of $y$ in terms of the entropy of $x$ and the properties of $f$.
<!-- why
- a change of variable formula can be chained together to give the entropy of complex functions without needing to compute any integrals.
- computing integrals is expensive
-->

Given a function $f: \mathbb R^n \to \mathbb R^m$, let's compute the entropy of the output, $H(f(x))$.

We assume:

- $n = m$
- $f$ is invertible
- $f$ is differentiable
- the Jacobian determinant is non-zero


The (differential) entropy of a continuous random variable $x$ is defined as

$$
H(x) = - \int p(x) \log p(x) dx
$$

The change of variables formula for probability distributions is

$$
p(y) = p(x) |\det J_{f^{-1}}| = \frac{p(x)}{|\det J_f|}
$$

Therefore, the entropy of $y = f(x)$ is

$$
\begin{aligned}
H(y) &= - \int p(y) \log p(y) \, dy \\
&= - \int \frac{p(x)}{|\det J_f|} \log \left(\frac{p(x)}{|\det J_f|}\right) |\det J_f| \, dx \qquad \text{(using } dy = |\det J_f| \, dx \text{)} \\
&= - \int p(x) \log \left(\frac{p(x)}{|\det J_f|}\right) dx \\
&= - \int p(x) \left(\log p(x) - \log |\det J_f|\right) dx \\
&= - \int p(x) \log p(x) \, dx + \int p(x) \log |\det J_f| \, dx \\
&= H(x) + \mathbb{E}[\log |\det J_f|]
\end{aligned}
$$

If $f$ is linear, the Jacobian is constant, so

$$
\mathbb{E}[\log |\det J_f|] = \log |\det J_f| \\
H(y) = H(x) + \log |\det J_f|
$$
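
A quick numerical check of the linear case (a sketch using the closed-form entropy of a multivariate Gaussian and a random matrix $A$):

```python
import numpy as np

def gaussian_entropy(cov):
    # differential entropy of N(0, cov): 0.5 * log det(2 pi e cov)
    return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))

n = 3
A = np.random.randn(n, n)       # a (generically invertible) linear map
cov_x = np.eye(n)               # x ~ N(0, I)
cov_y = A @ cov_x @ A.T         # y = A x  =>  cov_y = A cov_x A^T

print(gaussian_entropy(cov_y) - gaussian_entropy(cov_x))   # entropy gained
print(np.log(abs(np.linalg.det(A))))                       # log |det J_f|, matches
```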

The examples below take $x$ to be one-dimensional.

If $f(x) = e^x$, then

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log e^x] \\
&= \mathbb{E}[x] \\
&= \mu_x \\
H(y) &= H(x) + \mu_x
\end{aligned}
$$
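
As a sanity check, this matches the known closed forms for the Gaussian and lognormal entropies:

$$
x \sim \mathcal N(\mu_x, \sigma^2), \quad y = e^x \;\Rightarrow\; H(y) = \mu_x + \tfrac{1}{2}\log(2\pi e \sigma^2) = H(x) + \mu_x
$$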

If $f(x) = x^a$, then

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log a x^{a-1}] \\
&= \mathbb{E}[\log a + (a-1) \log x] \\
&= \log a + (a-1) \mathbb{E}[\log x] \\
\end{aligned}
$$

We cannot simplify this further without knowing the distribution of $x$, or without evaluating the integral in the expectation (which we don't want to do).
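
One way to avoid evaluating the integral analytically is to estimate the correction term by Monte Carlo. A sketch for $f(x) = x^a$ (the choice of $a$ and the positive-valued lognormal input are both placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 3.0
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # positive samples

# E[log |det J_f|] = E[log |a x^(a-1)|] = log a + (a - 1) E[log x]
mc_estimate = np.mean(np.log(np.abs(a * x ** (a - 1))))
closed_form_piece = np.log(a) + (a - 1) * np.mean(np.log(x))
print(mc_estimate, closed_form_piece)   # agree; H(y) = H(x) + this correction
```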

If $f(x) = \log x$, then

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log \frac{1}{x}] \\
&= \mathbb{E}[-\log x] \\
&= -\mathbb{E}[\log x] \\
\end{aligned}
$$

What??? (Actually this is consistent: $\log$ is the inverse of $e^x$, so composing the two gives $H(x) + \mathbb{E}[x] - \mathbb{E}[\log e^x] = H(x)$, as it should.)

<!-- For the $x^a$ case: You could add some special cases:
For $x > 0$ and $a = 2$, if $x$ follows a log-normal distribution, $\mathbb{E}[\log x]$ has a closed form
For positive random variables, Jensen's inequality could provide bounds -->

<!-- If we know x follows a specific distribution (like lognormal), we could evaluate E[log |x|] explicitly. But in the general case, this is as far as we can simplify.
Would you like to explore what happens with specific input distributions, or are you interested in other functional forms? -->


<!-- It might be worth considering what happens when we compose transformations. The chain rule for Jacobians might give interesting results. -->

What if $f$ is a ReLU?

$$
\begin{aligned}
f(x) &= \begin{cases}
x & \text{if } x > 0 \\
0 & \text{otherwise}
\end{cases} \\
J_f &= \begin{cases}
1 & \text{if } x > 0 \\
0 & \text{otherwise}
\end{cases} \\
\end{aligned}
$$

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= P(x > 0) \cdot \log 1 + P(x \le 0) \cdot \log 0 \\
&= -\infty \quad \text{(whenever } P(x \le 0) > 0\text{)}
\end{aligned}
$$

If we knew the cdf of $x$, we could calculate $P(x > 0) = 1 - \text{cdf}(0)$, but the $\log 0$ term dominates regardless.
This makes sense: the ReLU is not invertible (it violates our assumptions above), and $y$ has a point mass at $0$, so its differential entropy is $-\infty$ (strictly, undefined).
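
A small sketch illustrating the breakdown: after a ReLU, a large fraction of the probability mass collapses onto a single point, so $y$ no longer has a density.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = np.maximum(x, 0.0)      # ReLU

print(np.mean(y == 0.0))    # ~0.5: a point mass at zero, so y has no density
```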
33 changes: 33 additions & 0 deletions _drafts/technical-posts/2024-12-01-lit-review.md
@@ -0,0 +1,33 @@
---
title: Keeping up to date with the literature
subtitle: Interesting topics in the field
layout: post
categories:
- "technical"
---

A few of my pet interests in the field of machine learning and AI, which I keep track of loosely and read papers on when I have time.

[Teaching]({{ site.baseurl }}/teaching-lit)



https://openreview.net/forum?id=UdxpjKO2F9




[Neural wave functions]({{ site.baseurl }}/neural-wave-lit)





[Associative memory]({{ site.baseurl }}/memory-lit)





[Bio-plausible backprop]({{ site.baseurl }}/bio-backprop-lit)
