This repository contains the figures and tex files of "Modern applications of machine learning in quantum sciences".
List of the figures by individual contributors:
- 1.3 Percentage of ML-based articles in the selected fields in the years 2000-2021 - A. Dawid
- 2.1 Plots of the (a) binary cross-entropy and (b) mean-squared error - A. Dawid
- 3.1 Ising model (to reproduce, go to the Notebook A1 from the school GitHub) - R. Koch
- 3.2 IGT (to reproduce, go to the Notebook A1 from the school GitHub) - R. Koch
- 3.3bc PCA (to reproduce, go to the Notebook A1 from the school GitHub) - R. Koch
- 3.6 Learning by confusion (to reproduce, go to the Notebook A3 from the school GitHub) - R. Koch
- 3.7 Prediction-based method (to reproduce, go to the Notebook A3 from the school GitHub) - R. Koch
- 4.1 Toy example of a labeled two-dimensional data set - A. Gresch
- 4.3 The kernel form makes a difference - A. Gresch
- 4.6 Selection of new candidate points via BO using the Upper Confidence Bound acquisition function - K. Nicoli
- 4.7 Search for the optimal kernel - A. Dauphin
- 6.4a Parameter update for a random walker - B. Requena
- 7.4 Inverse Schrödinger problem solved using dP - J. Arnold
- 8.4 Perceptron capacity by Cover - M. Gabrie
- 8.11 Illustration of a quantum circuit (only pdf) - P. Stornati
- 8.16 Variational quantum simulation - P. Stornati
List of the figures by FESIDO Studio Graficzne in the folder graphical_designer:
- 1.1 Traditional programming vs ML
- 1.2 AI vs ML vs DL
- 1.4 Interplay between AI, quantum computing, many-body physics, and quantum chemistry
- 1.5 Contents of these Lecture Notes
- 1.6 Tree of dependencies between chapters (added in v2)
- 2.2 Learning rate as a hyperparameter
- 2.3 Under- and overfitting
- 2.4 The bias-variance trade-off
- 2.5 Geometric construction of SVMs
- 2.6 Neural network (modified in v2)
- 2.7 Convolutional filter
- 2.8 Autoencoder
- 2.9 Recurrent neural network
- 2.10 Backpropagation (added in v2)
- 3.3a Phase classification with PCA
- 3.9b Interpretation of neural networks via bottlenecks
- 4.2 A linear SVM applied to non-linearly separable data
- 4.4 Bayesian neural network
- 4.5 Bayesian optimization
- 4.8 Three main classes of problems tackled with BO and GPRs
- 4.9 BO and GPRs for feedback loops
- 5.1 Scheme of a restricted Boltzmann machine
- 5.2 Autoregressive neural quantum state
- 5.3 Recurrent neural-network architecture as a neural quantum state
- 5.4 Expressive capacity of neural quantum states
- 5.5 Schematic representation of various ansätze
- 6.1 Overview of the basic reinforcement learning setting
- 6.2 Short-term and long-term rewards in reinforcement learning algorithms
- 6.3 Schematic representation of the episodic and compositional memory of various projective simulation agents
- 6.4b Evolution of various walker policies
- 6.5 Performance of AlphaGo and AlphaGo Zero
- 6.6 Reinforcement learning for quantum feedback of an optical cavity
- 6.7 Reinforcement learning for circuit optimization
- 6.8 Reinforcement learning for quantum error correction
- 6.10 Reinforcement learning to find optimal relaxations
- 7.1 Machine learning influences physics
- 7.2 Standard vs differentiable programming
- 7.5 Sketch of a normalizing flow (modified in v2)
- 7.6 Volume transformation (added in v2)
- 7.14 Illustration of the Hamiltonian learning of a one-spin system
- 7.17 Automated design of experiments (added in v2)
- 8.1 Physics influences machine learning
- 8.2 Statistical physics toolbox for understanding machine learning theory
- 8.3 Generalization error in classical and modern regimes
- 8.6 Schemes of a committee machine and random feature model
- 8.11 Illustration of a quantum circuit diagram
- 8.12 Quantum machine learning
- 8.13 Realization of Shor's algorithm on a real quantum computer
- 8.14 Quantum support vector machine enhanced by a quantum device
- 8.15 Variational optimization of quantum circuits
arXiv_v1.zip
- zipped complete set of tex files and associated files used for the arXiv v1 submission (we recommend loading it with Overleaf).
arXiv_v2.zip
- the same for the arXiv v2 submission.
colors_dict.pkl
- pickled dictionary with our RGB-coded five main colors (green, purple, yellow, orange, blue) and their three shades (dark, medium, light).
colors_1D.pkl
- the same colors in a 1D array.
colors_2D.pkl
- the same colors in a 2D array.
- Jupyter notebook that shows how to unpickle them (a minimal loading sketch follows this list).
- set of fonts called New Hero used for text in plots.
- Jupyter notebook that shows how to use them with Python.
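A minimal sketch of loading these assets in Python (the dictionary keys, the font file path, and the registered family name are assumptions; the bundled Jupyter notebooks show the canonical usage):

```python
import pickle

import matplotlib.pyplot as plt
from matplotlib import font_manager

# Unpickle the dictionary of the five main colors and their shades.
# (The exact key layout is not documented here; see the bundled notebook.)
with open("colors_dict.pkl", "rb") as f:
    colors = pickle.load(f)

# The same RGB values stored as 1D and 2D arrays.
with open("colors_1D.pkl", "rb") as f:
    colors_1d = pickle.load(f)
with open("colors_2D.pkl", "rb") as f:
    colors_2d = pickle.load(f)

print(type(colors), type(colors_1d), type(colors_2d))

# Register one of the New Hero font files with matplotlib; the file path
# below is hypothetical -- point it to the fonts shipped in this repository.
font_manager.fontManager.addfont("fonts/NewHero-Regular.otf")
# The family name must match the font's internal name (assumed "New Hero").
plt.rcParams["font.family"] = "New Hero"
```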
Changes in arXiv v2:
- We wrote a new section 2.5 on backpropagation in NNs (with a new fig. 2.10).
- We expanded section 7.2.2 on normalizing flows (with a new fig. 7.6)
- We wrote a new section 7.3.4 on automated design of experiments (with a new fig. 7.17) and expanded the outlook of 7.3 (ML for experiments).
- We added appendix C on kernel methods.
- We added a tree of dependencies between chapters to help the reader choose what to read in a more informed way (fig. 1.6).
- We slightly modified two figures: 2.6 (NN and neuron) and 7.5 (sketch of a normalizing flow).
- We added new references following feedback from the community.