dimensionality-reduction.Rmd
When performing dimensionality reduction it suffices to use only the top $k$ singular values and vectors: $Y^{(k)} = AV^{(k)} = U\Sigma V^TV^{(k)} = U^{(k)}\Sigma^{(k)} \in \mathbb{R}^{n \times k}$.
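As a sanity check, the identity $AV^{(k)} = U^{(k)}\Sigma^{(k)}$ can be verified classically in a few lines of NumPy (a minimal sketch on random data; the matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 5, 2
A = rng.standard_normal((n, m))

# Full SVD: A = U @ diag(s) @ Vt, singular values sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Project onto the top-k right singular vectors: Y_k = A @ V_k ...
V_k = Vt[:k].T                # m x k
Y_k = A @ V_k                 # n x k

# ... which equals U_k @ Sigma_k.
Y_k_alt = U[:, :k] * s[:k]    # broadcasting scales column i by s[i]

assert np.allclose(Y_k, Y_k_alt)
```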
##### Quantum algorithms for PCA
Using the procedures from section \@ref(sec-explainedvariance) it is possible to extract the model for principal component analysis.
Theorems \@ref(thm:factor-score-estimation), \@ref(thm:check-explained-variance), and \@ref(thm:explained-variance-binarysearch) allow retrieving information on the factor scores and the factor score ratios, while Theorem \@ref(thm:top-k-sv-extraction) allows extracting the principal components.
The run-time of the model extraction is the sum of the run-times of the theorems: $\widetilde{O}\left(\left( \frac{1}{\gamma^2} + \frac{km}{\theta\delta^2}\right)\frac{\mu(A)}{\epsilon}\right)$.
Applying amplitude estimation together with the powering lemma \@ref(lem:powering-lemma), one can prove the following theorem, valid for algorithm \@ref(quantum-montecarlo-01).
```{theorem, quantumbounded01, name="Quantum Monte Carlo with bounded output"}
Let $|\psi\rangle$ be as in Eq. \@ref(eq:psistate) and set $U=2|\psi\rangle\langle\psi|-I$. The algorithm in figure \@ref(quantum-montecarlo-01) uses $O(\log(1/\delta))$ copies of $|\psi\rangle = A|0^n\rangle$, applies $U$ $O(t\log(1/\delta))$ times, and outputs an estimate $\widetilde{\mu}$ such that
This algorithm has a quadratic speedup over the classical Monte Carlo method.
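For comparison, the classical baseline is easy to sketch: a plain Monte Carlo estimator whose error decays like $1/\sqrt{t}$, against the $1/t$ scaling of the quantum algorithm. This is an illustrative classical sketch, not the quantum procedure; the uniform toy distribution and the number of repetitions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def classical_mc(t):
    """Plain Monte Carlo: average of t samples of nu(x) = x, with x ~ U[0,1]."""
    return rng.random(t).mean()

true_mean = 0.5
errs = {}
for t in (100, 10_000):
    # Mean absolute error over repeated runs shrinks roughly as 1/sqrt(t):
    # multiplying t by 100 divides the error by about 10, not 100.
    errs[t] = np.mean([abs(classical_mc(t) - true_mean) for _ in range(200)])
    print(t, errs[t])
```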
```{theorem, quantum-monte-carlo-variance1, name="Quantum Monte Carlo with bounded variance - additive error"}
Let $|\psi\rangle$ be as in Eq. \@ref(eq:psistate) and $U=2|\psi\rangle\langle\psi|-I$. Algorithm \@ref(quantum-montecarlo-bounded-var) uses $O(\log(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ copies of $\ket{\psi}$, applies $U$ $O((\sigma/\epsilon)\log^{3/2}(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ times, and estimates $\mathbb{E}[\nu(A)]$ up to additive error $\epsilon$ with success probability at least $2/3$.
```
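To appreciate the quadratic gap in $\sigma/\epsilon$, one can compare with a classical median-of-means estimator, which needs on the order of $(\sigma/\epsilon)^2$ samples to reach additive error $\epsilon$ with constant success probability. This is a hedged classical sketch; the Gaussian toy distribution and the constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def median_of_means(samples, groups=10):
    """Split the samples into groups and return the median of the group
    means (failure probability suppressed, as in the powering lemma)."""
    chunks = np.array_split(samples, groups)
    return np.median([c.mean() for c in chunks])

sigma, eps = 2.0, 0.1
# Classically ~(sigma/eps)^2 samples are needed; the quantum algorithm
# above applies U only ~sigma/eps times (up to logarithmic factors).
n = 9 * int((sigma / eps) ** 2)
est = median_of_means(rng.normal(5.0, sigma, size=n))
print(abs(est - 5.0))  # typically well below eps
```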
The additive error for this theorem is $\epsilon$ because we required accuracy $\epsilon/32\sigma$ for both uses of algorithm \@ref(quantum-montecarlo-bounded-norm). Indeed, this implies that both estimates are accurate up to $(\epsilon/32\sigma)(||\nu(B_{\geq 0})/4||_2+1)^2 \leq (\epsilon/32\sigma)(1+1)^2 = \epsilon/8\sigma$. We then multiply the estimates of $\mathbb{E}[\nu(B_{\geq 0})/4]$ and $\mathbb{E}[\nu(B_{< 0})/4]$ by a factor of $4$ to obtain estimates of $\mathbb{E}[\nu(B_{\geq 0})]$ and $\mathbb{E}[\nu(B_{< 0})]$, so the error becomes $4\epsilon/8\sigma = \epsilon/2\sigma$. Combining these two errors gives a total additive error of $\epsilon/\sigma$ for the estimate of $A'$. Since $A=\sigma A'$, the error for $A$ is exactly $\epsilon = \sigma (\epsilon/\sigma)$.
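The arithmetic of this error chain can be checked mechanically (a sanity-check sketch; the numeric values of $\epsilon$ and $\sigma$ are arbitrary):

```python
# Numeric sanity check of the error-propagation chain, for arbitrary
# values of epsilon and sigma.
eps, sigma = 0.01, 3.0

per_use = eps / (32 * sigma)            # accuracy requested per run
bounded = per_use * (1 + 1) ** 2        # (eps/32s)(||nu||+1)^2 <= eps/8s
assert abs(bounded - eps / (8 * sigma)) < 1e-15

rescaled = 4 * bounded                  # undo the /4 rescaling -> eps/2s
combined = 2 * rescaled                 # two estimates combine -> eps/s
assert abs(combined - eps / sigma) < 1e-15

total = sigma * combined                # A = sigma * A'
assert abs(total - eps) < 1e-15
```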
To provide the reader with a clearer view of the algorithms in Sections \@ref(sec-explainedvariance) and \@ref(sec:qpca) and their use in machine learning, we present experiments on quantum PCA for image classification.
We perform PCA on three image classification datasets (MNIST, Fashion MNIST, and CIFAR-10) and classify them with a K-Nearest Neighbors classifier.
First, we simulate the extraction of the singular values and the percentage of variance explained by the principal components (the sum of the top $k$ factor score ratios) using the procedure from Theorem \@ref(thm:factor-score-estimation).
Then, we study the error of the model extraction, using Lemma \@ref(lem:accuracyUSeVS), by introducing errors on the Frobenius norm of the representation to see how this affects the accuracy.
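A classical simulation of this pipeline can be sketched as follows, with synthetic data standing in for the image datasets (the dataset, the relative-noise model, and the value of the error parameter are stand-ins for the experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for an image dataset (n samples, d features).
X = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 20))
X -= X.mean(axis=0)  # center, as PCA requires

# Factor score ratios: lambda_i = sigma_i^2 / sum_j sigma_j^2.
s = np.linalg.svd(X, compute_uv=False)
ratios = s**2 / np.sum(s**2)

k = 5
print(f"variance explained by top {k} PCs: {ratios[:k].sum():.3f}")

# Emulate the estimation error by perturbing each singular value by a
# relative epsilon, then recompute the factor score ratios.
eps = 0.05
s_noisy = s * (1 + rng.uniform(-eps, eps, size=s.shape))
ratios_noisy = s_noisy**2 / np.sum(s_noisy**2)
print("max ratio deviation:", np.abs(ratios - ratios_noisy).max())
```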
toolbox.Rmd
Can we check if setting $t=\frac{2\pi \sqrt{a}}{\epsilon}$ can give an absolute error in $O(\frac{\sqrt{a}}{\epsilon})$ runtime? What is difficult about it?
```
The solution to the previous exercise consists of adding a term $\frac{1}{\sqrt{\epsilon}}$ to the number of iterations $t$. If we set $t = \left\lceil 2\pi\left(\frac{2\sqrt{a}}{\epsilon}\right) + \frac{1}{\sqrt{\epsilon}} \right\rceil$ we obtain an absolute error.
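Concretely, the iteration count can be computed as follows (a small sketch; the sample values of $a$ and $\epsilon$ are arbitrary):

```python
from math import ceil, pi, sqrt

def iterations(a, eps):
    """t = ceil(2*pi*(2*sqrt(a)/eps) + 1/sqrt(eps)), as in the text above."""
    return ceil(2 * pi * (2 * sqrt(a) / eps) + 1 / sqrt(eps))

print(iterations(0.25, 0.01))  # 639
print(iterations(1e-6, 0.01))  # 12: the 1/sqrt(eps) term dominates for tiny a
```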
Perhaps a simpler formulation, which hides the complexity of the low-level implementation of the algorithm and is thus more suitable for use in quantum algorithms for machine learning, is the following:
Recently, various researchers have worked on improving amplitude estimation by removing the part of the original algorithm that performs phase estimation (i.e., the Quantum Fourier Transform [@NC02]) [@grinko2019iterative; @aaronson2020quantum]. As the QFT is not considered a NISQ subroutine, these results bring more hope of applying these algorithms in useful scenarios on the first quantum computers.
A quantum algorithm $\mathcal{A}$ acting on $\mathcal{H}$ that can be written as $m$ quantum sub-algorithms, $\mathcal{A} = \mathcal{A}_m\mathcal{A}_{m-1}\dots \mathcal{A}_1$, is called a variable stopping time algorithm if $\mathcal{H}=\mathcal{H}_C \otimes \mathcal{H}_{A}$, where $\mathcal{H}_C = \otimes_{i=1}^m \mathcal{H}_{C_i}$ with $\mathcal{H}_{C_i} = Span(\ket{0}, \ket{1})$, and each unitary $\mathcal{A}_j$ acts on $\mathcal{H}_{C_j} \otimes \mathcal{H}_A$ controlled on the first $j-1$ qubits $\ket{0}^{\otimes j-1} \in \otimes_{i=1}^{j-1} \mathcal{H}_{C_i}$ being in the all-zero state.
```
<!-- something more is in the chakraborty2022regularized paper after definition 5-->
### Amplitude amplification
### Example: estimating average and variance of a function
### Singular value transformation {#subsec:svt}
The research in quantum linear algebra culminated with the work of [@CGJ18] and [@gilyen2019quantum], with some improvements in [@chakraborty2022regularized]. We now briefly go through the machinery behind these results, as it will be used extensively in this work. Before that, we recall the definition of block-encoding from Chapter \@ref(chap-classical-data-quantum-computers).
```{definition, def-block-encoding, name="Block encoding of a matrix"}
Let $A\in \mathbb{C}^{2^s \times 2^s}$. We say that a unitary $U \in \mathbb{C}^{2^{(s+a)} \times 2^{(s+a)}}$ is an $(\alpha, a, \epsilon)$ block encoding of $A$ if: