forked from barryclark/jekyll-now
Showing 18 changed files with 486 additions and 85 deletions.
@@ -0,0 +1,18 @@
---
title: Memory
---

<!--
basic idea: take Rohypnol, forget everything.
instead of sedatives and anaesthetics, use memory erasure.
workers in the future are given memory-erasure drugs to forget the work they did.
- classified work
- access to private information (doctors?)
- sex work
-
the point: did it happen if you don't remember it?
-->
@@ -0,0 +1,32 @@
---
title: Ideas for a better world
subtitle:
layout: post
permalink: better-world
categories:
- "inbetween"
---

## Fight back against online radicalisation

How are people radicalised online?
<!-- from message boards, social media, etc. -->

## Detecting deep fakes

Tools exist to detect (some kinds of) deep fakes, but they are expensive, requiring a lot of computational power.

The main question here is:

- What heuristics can we use to help us detect deep fakes?

## Automated corruption detection

Using public data, can we detect corruption in government?
We could estimate how much money officials are likely to have, based on their job title and employment history.
The difference between this estimate and their actual wealth (estimated from open sources) could be an indicator of corruption?

<!-- is this done in NZ? -->
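
A rough, hypothetical sketch of this heuristic (the savings rate, salary figures, and function names below are invented for illustration; real estimates would come from open salary and asset-declaration data):

```python
def expected_wealth(salary_history, savings_rate=0.2):
    # Crude upper bound on wealth accumulated from declared income,
    # assuming a fixed fraction of each year's salary is saved.
    return sum(salary * savings_rate for salary in salary_history)

def wealth_discrepancy(salary_history, observed_wealth):
    # Positive values mean observed wealth exceeds what the salary history explains;
    # large values could flag an official for closer (human) inspection.
    return observed_wealth - expected_wealth(salary_history)

# Ten years at a $90k salary vs. $1.5M of observed assets.
print(wealth_discrepancy([90_000] * 10, observed_wealth=1_500_000))  # 1320000
```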
12 changes: 12 additions & 0 deletions
_drafts/pits/2024-08-10-pits-arbitrary-typical-experiments.md
@@ -0,0 +1,12 @@
---
title: "Typical set of arbitrary distributions"
subtitle: "Exploring the flow-based typical set approximation"
layout: post
permalink: /pits/arbitrary-typical-experiments
scholar:
  bibliography: "pits.bib"
---

A line of work explores the 'linearisation' / 'rectification' of flows.

Experimentally, we find that the rectification procedure allows us to approximately compute the typical set of the target distribution.
This file was deleted.
@@ -0,0 +1,23 @@
---
title: "PITS for solving inverse problems"
subtitle: "Using a flow to correct for noise"
layout: post
permalink: /pits/inverse
scholar:
  bibliography: "pits.bib"
---

We implement PITS as:

$$
\begin{align*}
h &= f(y) \tag{forward flow}\\
\hat h &= \text{proj}(h) \tag{project into typical set}\\
\hat x &= f^{-1}(\hat h) \tag{backward flow}
\end{align*}
$$



A diagram of the PITS method. We start with the clean signal $x$, shown as a blue dot. The clean signal is then corrupted to produce the observed signal $y$, shown as a red dot. Next, we project the corrupted signal into the typical set to produce our denoised signal $\hat x$, shown as a green dot. The typical set is shown as a teal annulus.
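
A minimal sketch of these three steps, assuming the flow's base distribution is a standard Gaussian, so that the typical set is approximately the sphere of radius $\sqrt{n}$ and $\text{proj}$ reduces to rescaling the latent. `forward_flow` and `inverse_flow` are hypothetical callables standing in for $f$ and $f^{-1}$:

```python
import numpy as np

def project_to_typical_set(h):
    # For a standard Gaussian base distribution, the typical set concentrates
    # on the sphere of radius sqrt(n): rescale the latent onto that sphere.
    n = h.shape[-1]
    return h * np.sqrt(n) / np.linalg.norm(h, axis=-1, keepdims=True)

def pits_denoise(y, forward_flow, inverse_flow):
    h = forward_flow(y)                  # forward flow: h = f(y)
    h_hat = project_to_typical_set(h)    # project into the typical set
    return inverse_flow(h_hat)           # backward flow: x_hat = f^{-1}(h_hat)
```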
@@ -0,0 +1,134 @@
---
title: A change of variables formula for entropy
subtitle: The propagation of uncertainty
layout: post
permalink: cov-entropy
categories:
- "technical"
---

One way to quantify the uncertainty in a random variable is through its entropy.
A random variable with more possible outcomes has higher entropy.
For example, a die with 6 sides (a cube) has higher entropy than a die with 4 sides (a tetrahedron).
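
A quick check of the die example, computing the (discrete) Shannon entropy of a uniform distribution over 4 and 6 outcomes:

```python
import numpy as np

def discrete_entropy(p):
    # Shannon entropy in nats: H = -sum_i p_i log p_i
    p = np.asarray(p)
    return -np.sum(p * np.log(p))

print(discrete_entropy(np.ones(4) / 4))  # log 4 ≈ 1.39 nats (d4)
print(discrete_entropy(np.ones(6) / 6))  # log 6 ≈ 1.79 nats (d6)
```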

Ideally, we would like a formula for the entropy of $y = f(x)$ in terms of the entropy of $x$ and the properties of $f$.
<!-- why
- a change of variable formula can be chained together to give the entropy of complex functions without needing to compute any integrals.
- computing integrals is expensive
-->

Given a function $f: \mathbb R^n \to \mathbb R^m$, let's compute the entropy of the output, $H(f(x))$.

We assume:

- $n = m$
- $f$ is invertible
- $f$ is differentiable
- the Jacobian determinant is non-zero

The entropy of a random variable $x$ is defined as

$$
H(x) = - \int p(x) \log p(x) \, dx
$$

The change of variables formula for probability densities is

$$
p(y) = p(x) \, |\det J_{f^{-1}}| = \frac{p(x)}{|\det J_f|}
$$
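
A quick numerical check of this formula, taking $f(x) = e^x$ with a standard normal $x$ (so $y$ is log-normal and $|\det J_f| = e^x$); this uses `scipy.stats` for the densities:

```python
import numpy as np
from scipy import stats

x = 0.7
y = np.exp(x)                          # y = f(x) = e^x, so |det J_f| = e^x
lhs = stats.lognorm.pdf(y, s=1.0)      # p(y): log-normal density when x ~ N(0, 1)
rhs = stats.norm.pdf(x) / np.exp(x)    # p(x) / |det J_f|
print(np.isclose(lhs, rhs))            # True
```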

Therefore, the entropy of $y = f(x)$ is

$$
\begin{aligned}
H(y) &= - \int p(y) \log p(y) \, dy \\
&= - \int \frac{p(x)}{|\det J_f|} \log \left(\frac{p(x)}{|\det J_f|}\right) |\det J_f| \, dx \qquad \text{(using } dy = |\det J_f| \, dx\text{)} \\
&= - \int p(x) \log \left(\frac{p(x)}{|\det J_f|}\right) dx \\
&= - \int p(x) \left(\log p(x) - \log |\det J_f|\right) dx \\
&= - \int p(x) \log p(x) \, dx + \int p(x) \log |\det J_f| \, dx \\
&= H(x) + \mathbb{E}[\log |\det J_f|]
\end{aligned}
$$

If $f$ is linear, the Jacobian is constant, so

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \log |\det J_f| \\
H(y) &= H(x) + \log |\det J_f|
\end{aligned}
$$
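
A sanity check of the linear case, using the closed-form entropy of a multivariate Gaussian (if $x \sim \mathcal N(0, \Sigma)$ then $y = Ax \sim \mathcal N(0, A \Sigma A^\top)$):

```python
import numpy as np

def gaussian_entropy(cov):
    # Differential entropy of N(0, cov): 0.5 * log((2 pi e)^n det(cov))
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))       # linear map y = A x
Sigma = np.eye(n)                 # x ~ N(0, I)

H_x = gaussian_entropy(Sigma)
H_y = gaussian_entropy(A @ Sigma @ A.T)
print(np.isclose(H_y, H_x + np.log(abs(np.linalg.det(A)))))  # True
```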

If $f(x) = e^x$ (scalar $x$), then $|\det J_f| = e^x$ and

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log e^x] \\
&= \mathbb{E}[x] \\
&= \mu_x \\
H(y) &= H(x) + \mu_x
\end{aligned}
$$
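
As a sanity check: if $x \sim \mathcal N(\mu_x, \sigma^2)$, then $y = e^x$ is log-normal, whose differential entropy has a known closed form,

$$
H(y) = \mu_x + \tfrac{1}{2}\log(2\pi e \sigma^2) = H(x) + \mu_x,
$$

which agrees with the formula above.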

If $f(x) = x^a$ (for $x > 0$), then $J_f = a x^{a-1}$ and

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log a x^{a-1}] \\
&= \mathbb{E}[\log a + (a-1) \log x] \\
&= \log a + (a-1) \mathbb{E}[\log x]
\end{aligned}
$$

We cannot simplify further without knowing the distribution of $x$, or without evaluating the integral in the expectation (which is what we are trying to avoid).
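
As the notes below suggest, one tractable special case is a log-normal $x$, where $\mathbb{E}[\log x] = \mu$ in closed form. A Monte Carlo check of the expression above under that assumption (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, a = 0.5, 0.3, 3.0
x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)  # log x ~ N(mu, sigma^2)

mc = np.mean(np.log(a * x ** (a - 1)))   # Monte Carlo estimate of E[log |J_f|]
closed_form = np.log(a) + (a - 1) * mu   # log a + (a - 1) E[log x], with E[log x] = mu
print(mc, closed_form)                   # both ≈ 2.10
```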

If $f(x) = \log x$, then $J_f = \frac{1}{x}$ and

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}\left[\log \frac{1}{x}\right] \\
&= \mathbb{E}[-\log x] \\
&= -\mathbb{E}[\log x]
\end{aligned}
$$

So taking the log *decreases* the entropy by $\mathbb{E}[\log x]$; the mirror image of the $e^x$ case, which makes sense since $\log$ undoes $e^x$.

<!-- For the $x^a$ case, you could add some special cases:
- For $x > 0$ and $a = 2$, if $x$ follows a log-normal distribution, $\mathbb{E}[\log x]$ has a closed form.
- For positive random variables, Jensen's inequality could provide bounds. -->

<!-- If we know x follows a specific distribution (like lognormal), we could evaluate E[log |x|] explicitly. But in the general case, this is as far as we can simplify.
Would you like to explore what happens with specific input distributions, or are you interested in other functional forms? -->

<!-- It might be worth considering what happens when we compose transformations. The chain rule for Jacobians might give interesting results. -->
What if $f$ is the ReLU?

$$
\begin{aligned}
f(x) &= \begin{cases}
x, & \text{if } x > 0 \\
0, & \text{otherwise}
\end{cases} \\
J_f &= \begin{cases}
1, & \text{if } x > 0 \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
$$

$$
\begin{aligned}
\mathbb{E}[\log |\det J_f|] &= \mathbb{E}[\log J_f] \\
&= P(x > 0) \log 1 + P(x \le 0) \log 0 \\
&= -\infty
\end{aligned}
$$

If we knew the cdf of $x$, we could calculate $P(x > 0) = 1 - \text{cdf}(0)$, but it doesn't help here: the $\log 0$ term makes the expectation diverge. The ReLU violates our invertibility / non-zero-Jacobian assumptions; it collapses all of the negative mass onto a single point, so the differential entropy of $y$ is $-\infty$.
@@ -0,0 +1,33 @@
---
title: Keeping up to date with the literature
subtitle: Interesting topics in the field
layout: post
categories:
- "technical"
---

A few of my pet interests in the field of machine learning and AI, which I keep track of loosely and read papers on when I have time.

[Teaching]({{ site.baseurl }}/teaching-lit)

https://openreview.net/forum?id=UdxpjKO2F9

[Neural wave functions]({{ site.baseurl }}/neural-wave-lit)

[Associative memory]({{ site.baseurl }}/memory-lit)

[Bio-plausible backprop]({{ site.baseurl }}/bio-backprop-lit)