Commit e66c4e1 (parent: bf3bd69)
Fix I/O for algpseudocode algorithms, add pictures of team, minor stuff
18 files changed: +163 -85 lines

algpseudocode/TEMPLATE.tex (+2)

@@ -6,6 +6,8 @@
 \usepackage{amsmath, amsfonts, amssymb}
 
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
 
 \makeatletter
 \renewcommand{\fnum@algorithm}{\fname@algorithm}
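The same two `\algrenewcommand` lines recur across the files below: they relabel algpseudocode's `\Require`/`\Ensure` markers so algorithms print "Input:"/"Output:" instead of the default "Require:"/"Ensure:". A minimal compilable sketch of the effect (the algorithm body is illustrative, not taken from the repository):

```latex
\documentclass{article}
\usepackage{algorithm}
\usepackage{algpseudocode}

% Relabel Require/Ensure as Input/Output, as done throughout this commit
\algrenewcommand\algorithmicrequire{\textbf{Input:}}
\algrenewcommand\algorithmicensure{\textbf{Output:}}

\begin{document}
\begin{algorithm}
\caption{Illustrative example: Euclidean norm}
\begin{algorithmic}[1]
\Require a vector $x \in \mathbb{R}^n$   % rendered as "Input:"
\Ensure the value $\lVert x \rVert_2$    % rendered as "Output:"
\State $s \gets 0$
\For{$i \gets 1$ \textbf{to} $n$}
    \State $s \gets s + x_i^2$
\EndFor
\State \Return $\sqrt{s}$
\end{algorithmic}
\end{algorithm}
\end{document}
```

Without the two `\algrenewcommand` lines, the same source renders the first two rows as "Require:" and "Ensure:".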

algpseudocode/cem.tex (+4)

@@ -3,6 +3,10 @@
 \usepackage{algorithm}
 \usepackage{algpseudocode}
 
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
 \begin{document}
 \pagestyle{empty}
 \begin{algorithm}[ht]

algpseudocode/logdet-sve.tex (+4)

@@ -6,6 +6,10 @@
 \usepackage{algorithm}
 \usepackage{algpseudocode}
 
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+
 \begin{document}
 \pagestyle{empty}
 

algpseudocode/online_quantum_perceptron.tex (+3)

@@ -6,6 +6,9 @@
 \usepackage{amsmath, amsfonts, amssymb}
 
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
 
 \makeatletter
 \renewcommand{\fnum@algorithm}{\fname@algorithm}

algpseudocode/perceptronalgo.tex (+3)

@@ -8,6 +8,9 @@
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
 \algdef{SE}[DOWHILE]{Do}{doWhile}{\algorithmicdo}[1]{\algorithmicwhile\ #1}%
 
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
 \makeatletter
 \renewcommand{\fnum@algorithm}{\fname@algorithm}
 \makeatother

algpseudocode/perceptronalgos.tex (+2 -1)

@@ -19,7 +19,8 @@
 \usepackage{amssymb}
 
 
-
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
 
 % \usepackage[noend]{algorithmic}
 % \usepackage{algorithm,caption}

algpseudocode/q-factor-score-ratio-estimation.tex (+4)

@@ -11,6 +11,10 @@
 \renewcommand{\fnum@algorithm}{\fname@algorithm}
 \makeatother
 
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
+
 \begin{document}
 \pagestyle{empty}
 

algpseudocode/qem.tex (+2 -1)

@@ -4,7 +4,8 @@
 \usepackage{algpseudocode}
 
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
-
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
 
 \begin{document}
 \pagestyle{empty}

algpseudocode/qesa.tex (+4)

@@ -3,6 +3,10 @@
 \usepackage{algorithm}
 \usepackage{algpseudocode}
 
+
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
 \begin{document}
 \pagestyle{empty}
 

algpseudocode/quantum-montecarlo-bounded-norm.tex (+3 -1)

@@ -7,6 +7,8 @@
 \usepackage{amssymb}
 
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
 
 \makeatletter
 \renewcommand{\fnum@algorithm}{\fname@algorithm}
@@ -39,4 +41,4 @@
 \end{algorithmic}
 \end{algorithm}
 
-\end{document}
+\end{document}

algpseudocode/version_space_quantum_perceptron.tex (+3)

@@ -6,6 +6,9 @@
 \usepackage{amsmath, amsfonts, amssymb}
 
 \newcommand{\norm}[1]{\left\lVert#1\right\rVert}
+\algrenewcommand\algorithmicrequire{\textbf{Input:}}
+\algrenewcommand\algorithmicensure{\textbf{Output:}}
+
 
 \makeatletter
 \renewcommand{\fnum@algorithm}{\fname@algorithm}

appendix.Rmd (+73 -72)

(Large diff not rendered.)

book.bib (+11 -1)

@@ -1131,7 +1131,17 @@ @article{lahoz2016quantum
 number = 4,
 pages = 24,
 }
-@phdthesis{Prakash:EECS-2014-211,
+@article{partridge1998fast,
+  title={Fast dimensionality reduction and simple PCA},
+  author={Partridge, Matthew and Calvo, Rafael A},
+  journal={Intelligent data analysis},
+  volume={2},
+  number={3},
+  pages={203--214},
+  year={1998},
+  publisher={IOS Press}
+}
+@phdthesis{PrakashPhD,
 title = {Quantum Algorithms for Linear Algebra and Machine Learning.},
 author = {Prakash, Anupam},
 year = 2014,

dimensionality-reduction.Rmd (+1 -1)

@@ -30,7 +30,7 @@ Therefore, the data points in the new subspace can be computed using the left si
 When performing dimensionality reduction it suffice to use only the top $k$ singular values and vectors $Y^{(k)} = AV^{(k)} = U\Sigma V^TV^{(k)} = U^{(k)}\Sigma^{(k)} \in \mathbb{R}^{n \times m}$.
 
 ##### Quantum algorithms for PCA
-Using the procedures from section \@ref(sec:explainedvariance) it is possible to extract the model for principal component analysis.
+Using the procedures from section \@ref(sec-explainedvariance) it is possible to extract the model for principal component analysis.
 Theorems \@ref(thm:factor-score-estimation), \@ref(thm:check-explained-variance), \@ref(thm:explained-variance-binarysearch) allow to retrieve information on the factor scores and on the factor score ratios, while Theorem \@ref(thm:top-k-sv-extraction) allows extracting the principal components.
 The run-time of the model extraction is the sum of the run-times of the theorems: $\widetilde{O}\left(\left( \frac{1}{\gamma^2} + \frac{km}{\theta\delta^2}\right)\frac{\mu(A)}{\epsilon}\right)$.
 The model comes with the following guarantees:

exercises.Rmd (-1)

@@ -13,4 +13,3 @@
 <!-- ## Chapter 4 -->
 
 <!-- ## Chapter 5 -->
-

montecarlo.Rmd (+3 -3)

@@ -198,10 +198,10 @@ knitr::include_graphics("algpseudocode/quantum-montecarlo-01.png")
 ```
 
 
-Applying amplitude estimation together with the powering lemma \@ref(lem:powering-lemma), one can prove the following theorem valid for algorithm \@ref(fig:quantum-montecarlo-01).
+Applying amplitude estimation together with the powering lemma \@ref(lem:powering-lemma), one can prove the following theorem valid for algorithm \@ref(quantum-montecarlo-01).
 
 ```{theorem, quantumbounded01, name="Quantum Monte Carlo with bounded output"}
-Let $|\psi\rangle$ as in Eq.\@ref(eq:psistate) and set $U=2|\psi\rangle\langle\psi|-I$. Algorithm in figure \@ref(fig:quantum-montecarlo-01) uses $O(\log(1/\delta))$ copies of $|\psi\rangle = A|0^n\rangle$, uses $U$ $O(t\log(1/\delta))$ times, and outputs an estimate $\widetilde{\mu}$ such that
+Let $|\psi\rangle$ as in Eq.\@ref(eq:psistate) and set $U=2|\psi\rangle\langle\psi|-I$. Algorithm in figure \@ref(quantum-montecarlo-01) uses $O(\log(1/\delta))$ copies of $|\psi\rangle = A|0^n\rangle$, uses $U$ $O(t\log(1/\delta))$ times, and outputs an estimate $\widetilde{\mu}$ such that
 \begin{equation*}
 |\widetilde{\mu}-\mathbb{E}[\nu(A)]|\leq C\bigg(\frac{\sqrt{\mathbb{E}[\nu(A)]}}{t}+\frac{1}{t^2}\bigg)
 \end{equation*}
@@ -301,7 +301,7 @@ knitr::include_graphics("algpseudocode/quantum-montecarlo-bounded-var.png")
 This algorithm has a quadratic speedup over the classical Monte Carlo method.
 
 ```{theorem, quantum-monte-carlo-variance1, name="Quantum Monte Carlo with bounded variance - additive error"}
-Let $|\psi\rangle$ as in Eq.\@ref(eq:psistate), $U=2|\psi\rangle\langle\psi|-I$. Algorithm \@ref(fig:quantum-montecarlo-bounded-var) uses $O(\log(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ copies of $\ket{\psi}$, uses $U$ $O((\sigma/\epsilon)\log^{3/2}(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ times and estimates $\mathbb{E}[\nu(A)]$ up to additive error $\epsilon$ with success probability at least $2/3$.
+Let $|\psi\rangle$ as in Eq.\@ref(eq:psistate), $U=2|\psi\rangle\langle\psi|-I$. Algorithm \@ref(quantum-montecarlo-bounded-var) uses $O(\log(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ copies of $\ket{\psi}$, uses $U$ $O((\sigma/\epsilon)\log^{3/2}(\sigma/\epsilon)\log\log(\sigma/\epsilon))$ times and estimates $\mathbb{E}[\nu(A)]$ up to additive error $\epsilon$ with success probability at least $2/3$.
 ```
 
 The additive error for this theorem is $\epsilon$ because we required accuracy $\epsilon/32\sigma$ for both uses of algorithm \@ref(fig:quantum-montecarlo-bounded-norm). Indeed, this implies that both the estimates we would get are accurate up to $(\epsilon/32\sigma)(||\nu(B_{\geq 0}/4)||_2+1)^2 \leq (\epsilon/32\sigma)(1+1)^2 = \epsilon/8\sigma$. Now we just multiply by a $4$ factor the estimates of $\mathbb{E}[\nu(B_{\geq 0})/4]$ and $\mathbb{E}[\nu(B_{< 0})/4]$ to get the estimates of $\mathbb{E}[\nu(B_{\geq 0})]$ and $\mathbb{E}[\nu(B_{< 0})]$. The error then gets $4\epsilon/8\sigma =\epsilon/2\sigma$. Combining these two errors, one has a total additive error for the estimate of $A'$ given by $\epsilon/\sigma$. Since $A=\sigma A'$ the error for $A$ is exactly $\epsilon = \sigma (\epsilon/\sigma)$.
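The error bookkeeping in that closing paragraph can be restated as a short chain of equalities (an editorial recap, not part of the commit; assumes amsmath):

```latex
% Each use of the bounded-norm algorithm is run with accuracy \epsilon/32\sigma,
% so each estimate of E[\nu(B_{\geq 0})/4] and E[\nu(B_{<0})/4] has error at most
\begin{align*}
  \frac{\epsilon}{32\sigma}\left(\left\lVert \nu(B_{\geq 0})/4 \right\rVert_2 + 1\right)^2
    &\leq \frac{\epsilon}{32\sigma}\,(1+1)^2 = \frac{\epsilon}{8\sigma}.
  % Rescaling by 4 to recover E[\nu(B_{\geq 0})] and E[\nu(B_{<0})]:
  \\
  4\cdot\frac{\epsilon}{8\sigma} &= \frac{\epsilon}{2\sigma}.
  % Summing the two contributions bounds the error on A', and A = \sigma A':
  \\
  2\cdot\frac{\epsilon}{2\sigma} &= \frac{\epsilon}{\sigma},
  \qquad \sigma\cdot\frac{\epsilon}{\sigma} = \epsilon.
\end{align*}
```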

on-real-data.Rmd (+1 -1)

@@ -338,7 +338,7 @@ knitr::include_graphics("images/experiments/qpca/svdistribution/ResearchPaper_sv
 ```
 
 #### Image classification with quantum PCA
-To provide the reader with a clearer view of the algorithms in Sections \@ref(sec:explainedvariance), \@ref(sec:qpca) and their use in machine learning, we provide experiments on quantum PCA for image classification.
+To provide the reader with a clearer view of the algorithms in Sections \@ref(sec-explainedvariance), \@ref(sec:qpca) and their use in machine learning, we provide experiments on quantum PCA for image classification.
 We perform PCA on the three datasets for image classification (MNIST, Fashion MNIST and CIFAR 10) and classify them with a K-Nearest Neighbors.
 First, we simulate the extraction of the singular values and the percentage of variance explained by the principal components (top $k$ factor score ratios' sum) using the procedure from Theorem \@ref(thm:factor-score-estimation).
 Then, we study the error of the model extraction, using Lemma \@ref(lem:accuracyUSeVS), by introducing errors on the Frobenius norm of the representation to see how this affects the accuracy.

toolbox.Rmd (+40 -3)

@@ -111,6 +111,8 @@ Another idea is to realize that we could run the algorithm returning the relativ
 Can we check if setting $t=\frac{2\pi \sqrt{a}}{\epsilon}$ can give an absolute error in $O(\frac{\sqrt{a}}{\epsilon})$ runtime? What is difficult about it?
 ```
 
+The solution to the previous exercise consist in adding a term $\frac{1}{\sqrt{\epsilon}}$ in the number of iterations $t$. If we set $t = \lceil 2\pi\left(\frac{2\sqrt{a}}{\epsilon}\right) + \frac{1}{\sqrt{\epsilon}} \rceil$ we can get an absolute error.
+
 
 Perhaps a simpler formulation, which hides the complexity of the low-level implementation of the algorithm, and is thus more suitable to be used in quantum algorithms for machine learning is the following:
 
@@ -134,12 +136,37 @@ exact amplitude amplification -->
 
 
 
-
 Recently, various researches worked on improvements of amplitude estimation by getting rid of the part of the original algorithm that performed the phase estimation (i.e. the Quantum Fourier Transform [@NC02]) [@grinko2019iterative], [@aaronson2020quantum]. As the QFT is not considered to be a NISQ subroutine, these results bring more hope to apply these algorithms in useful scenarios in the first quantum computers.
 
 
 
+
+
+<!-- Improved maximum-likelihood quantum amplitude estimation -->
+<!-- Amplitude estimation via maximum likelihood on noisy quantum computer 2021 -->
+<!-- Modified Grover operator for quantum amplitude estimation 2021 -->
+
+<!-- Amplitude estimation without phase estimation jan 2020 --> FIrst proposed MLQAE.
+
+
+
+<!-- Quantum Amplitude Estimation in the Presence of Noise [@brown2020quantum] -->
+
+<!-- Alberto Manzano, Daniele Musso, and Alvaro Leitao. Real quantum amplitude estimation, 2022. -->
+<!-- URL https://arxiv.org/abs/2204.13641. arXiv preprint arXiv:2204.13641 -->
+
+
+<!-- [@giurgica2022low] -->
+<!-- Low-depth amplitude estimation on a trapped-ion quantum computer 2021 -->
+
+QIP2023
+with a particular choice of probability, while before we had results "on average".
+
+
+
 (ref:ambainis2012variable) [@ambainis2012variable]
+(ref:chakraborty2022regularized) [@chakraborty2022regularized]
 
 ```{theorem, variable-time-search, name="Variable Time Search (ref:ambainis2012variable)"}
 Let $\mathcal A_1, \ldots, \mathcal A_n$ be quantum algorithms that return true or false and run in unknown times $T_1, \ldots, T_n$, respectively.
@@ -148,10 +175,20 @@ Then there exists a quantum algorithm with success probability at least $2/3$ th
 $$\widetilde O\left(\sqrt{T_1^2+\ldots+T_n^2}\right).$$
 ```
 
-### Amplitude amplification
 
 
 
+```{definition, variable-topptime-time-algorithm, name="Variabile-stopping-time algorithm (ref:ambainis2012variable) (ref:chakraborty2022regularized)"}
+A quantum algorithms $\mathcal{A}$ acting on $\mathcal{H}$ that can be written as $m$ quantum sub-algorithms $\mathcal{A} = \mathcal{A}_m\mathcal{A}_{m-1}\dots \mathcal{A}_1$ is called a variable stopping time algorithm if $\mathcal{H}=\mathcal{H}_C \otimes \mathcal{H}_{A}$, where $\mathcal{H}_C \otimes_{i=1}^m \mathcal{H}_{C_i}$ with $\mathcal{H}_{C_i} = Span(\ket{0}, \ket{1})$ and each unitary $\mathcal{A}_j$ acts on $\mathcal{H}_{C_i} \otimes \mathcal{H}_A$ controlled on the first $j-1$ qubits $\ket{0}^{\otimes j-1} \in \otimes_{i=1}^{j-1} \mathcal{H}_{C_i}$ being in the all zero state.
+```
+
+
+<!-- something more is in the chakraborty2022regularized paper after definition 5-->
+
+
+
+### Amplitude amplification
+
 
 
 ### Example: estimating average and variance of a function
@@ -355,7 +392,7 @@ for all states $\ket{\psi}$, where $U := \sum_i \ket{i}\bra{i} \otimes U_i$ and
 
 ### Singular value transformation {#subsec:svt}
 
-The research in quantum linear algebra culminated with the work of [@CGJ18], [@gilyen2019quantum] with some improvements in [@chakraborty2022quantum]. We now briefly go through the machinery behind these results, as it will be used extensively this work. Before that, we recall the definition of block-encoding from Chapter \@ref(chap-classical-data-quantum-computers).
+The research in quantum linear algebra culminated with the work of [@CGJ18], [@gilyen2019quantum] with some improvements in [@chakraborty2022regularized]. We now briefly go through the machinery behind these results, as it will be used extensively this work. Before that, we recall the definition of block-encoding from Chapter \@ref(chap-classical-data-quantum-computers).
 
 ```{definition, def-block-encoding, name="Block encoding of a matrix"}
 Let $A\in \mathbb{C}^{2^s \times 2^s}$. We say that a unitary $U \in \mathbb{C}^{(s+a)\times(s+a)}$ is a ($\alpha, a, \epsilon)$ block encoding of $A$ if:
