Record of the journal club on nonlinear (& linear) dynamics of artificial/biological networks
Luo, T. Z., Kim, T. D., Gupta, D., Bondy, A. G., Kopec, C. D., Elliot, V. A., DePasquale, B., & Brody, C. D. (2023). Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv. https://doi.org/10.1101/2023.10.15.562427
Liu, Y., & Wang, X.-J. (2024). Flexible gating between subspaces in a neural network model of internally guided task switching. Nature Communications, 15(1), 6497.
Gilpin, W. (2024). Generative learning for nonlinear dynamics. Nature Reviews Physics, 1–13.
Webb, T. W., Frankland, S. M., Altabaa, A., Segert, S., Krishnamurthy, K., Campbell, D., Russin, J., Giallanza, T., O’Reilly, R., Lafferty, J., & Cohen, J. D. (2024). The relational bottleneck as an inductive bias for efficient abstraction. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2024.04.001
Ogawa, S., Fumarola, F., & Mazzucato, L. (2023). Multitasking via baseline control in recurrent neural networks. Proceedings of the National Academy of Sciences, 120(33), e2304394120.
Keup, C., & Helias, M. (2022). Origami in N dimensions: How feed-forward networks manufacture linear separability. arXiv [cs.LG]. http://arxiv.org/abs/2203.11355
Wakhloo, A. J., Slatton, W., & Chung, S. (2024). Neural population geometry and optimal coding of tasks with shared latent structure. arXiv [q-bio.NC]. http://arxiv.org/abs/2402.16770
Riveland, R., & Pouget, A. (2024). Natural language instructions induce compositional generalization in networks of neurons. Nature Neuroscience, 1–12.
Goudar, V., Peysakhovich, B., Freedman, D. J., Buffalo, E. A., & Wang, X.-J. (2023). Schema formation in a neural population subspace underlies learning-to-learn in flexible sensorimotor problem-solving. Nature Neuroscience. https://doi.org/10.1038/s41593-023-01293-9
Tafazoli, S., Bouchacourt, F. M., Ardalan, A., Markov, N. T., Uchimura, M., Mattar, M. G., Daw, N. D., & Buschman, T. J. (2024). Building compositional tasks with shared neural subspaces. bioRxiv. https://doi.org/10.1101/2024.01.31.578263
Whittington, J. C. R., Dorrell, W., Behrens, T. E. J., Ganguli, S., & El-Gaby, M. (2023). On prefrontal working memory and hippocampal episodic memory: Unifying memories stored in weights and activation slots. bioRxiv. https://doi.org/10.1101/2023.11.05.565662
Driscoll, L., Shenoy, K., & Sussillo, D. (2022). Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. bioRxiv. https://doi.org/10.1101/2022.08.15.503870
Miller, K. J., Eckstein, M., Botvinick, M. M., & Kurth-Nelson, Z. (2023). Cognitive model discovery via disentangled RNNs. bioRxiv. https://doi.org/10.1101/2023.06.23.546250
Liu, Z., Gan, E., & Tegmark, M. (n.d.). Seeing is believing: Brain-inspired modular training for mechanistic interpretability. Retrieved May 5, 2023, from https://kindxiaoming.github.io/pdfs/BIMT.pdf
Pals, M., Macke, J. H., & Barak, O. (2023). Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. bioRxiv. https://doi.org/10.1101/2023.04.11.536352
Linsley, D., & Karkada Ashok, A. (2020). Stable and expressive recurrent vision models. Advances in Neural Information Processing Systems. https://proceedings.neurips.cc/paper/2020/hash/766d856ef1a6b02f93d894415e6bfa0e-Abstract.html
Ji-An, L., Benna, M. K., & Mattar, M. G. (2023). Automatic discovery of cognitive strategies with tiny recurrent neural networks. bioRxiv. https://doi.org/10.1101/2023.04.12.536629
Galgali, A. R., Sahani, M., & Mante, V. (2023). Residual dynamics resolves recurrent contributions to neural computation. Nature Neuroscience. https://doi.org/10.1038/s41593-022-01230-2
Beiran, M., Meirhaeghe, N., Sohn, H., Jazayeri, M., & Ostojic, S. (2023). Parametric control of flexible timing through low-dimensional neural manifolds. Neuron. https://doi.org/10.1016/j.neuron.2022.12.016
Tkačik, G., Marre, O., Amodei, D., Schneidman, E., Bialek, W., & Berry, M. J., II. (2014). Searching for collective behavior in a large network of sensory neurons. PLoS Computational Biology, 10(1), e1003408.
Mastrogiuseppe, F., & Ostojic, S. (2018). Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron, 99(3), 609–623.e29.
Smith, J. T. H., Linderman, S. W., & Sussillo, D. (2021). Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems. Advances in Neural Information Processing Systems. https://proceedings.neurips.cc/paper/2021/file/8b77b4b5156dc11dec152c6c71481565-Paper.pdf
Ratzon, A., Derdikman, D., & Barak, O. (2023). Representational drift as a result of implicit regularization. bioRxiv. https://doi.org/10.1101/2023.05.04.539512
Valente, A. (2023). RNNs strike back [blog post]. https://adrian-valente.github.io/2023/10/03/linear-rnns.html
Fortunato, C., Bennasar-Vázquez, J., Park, J., Chang, J. C., Miller, L. E., Dudman, J. T., Perich, M. G., & Gallego, J. A. (2023). Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv. https://doi.org/10.1101/2023.07.18.549575
Durstewitz, D., Koppe, G., & Thurm, M. I. (2023). Reconstructing computational system dynamics from neural data with recurrent neural networks. Nature Reviews Neuroscience. https://doi.org/10.1038/s41583-023-00740-7
Lake, B. M., & Baroni, M. (2023). Human-like systematic generalization through a meta-learning neural network. Nature. https://doi.org/10.1038/s41586-023-06668-3
Sussillo, D., & Barak, O. (2013). Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3), 626–649.
Pollock, E., & Jazayeri, M. (2020). Engineering recurrent neural networks from task-relevant manifolds and dynamics. PLoS Computational Biology, 16(8), e1008128.
Jaffe, P. I., Poldrack, R. A., Schafer, R. J., & Bissett, P. G. (2022). Discovering dynamical models of human behavior. bioRxiv. https://doi.org/10.1101/2022.03.20.484666
Kingma, D. P., & Welling, M. (2019). An introduction to variational autoencoders. arXiv [cs.LG]. http://arxiv.org/abs/1906.02691
Flesch, T., Nagy, D. G., Saxe, A., & Summerfield, C. (2022). Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals. arXiv [q-bio.NC]. http://arxiv.org/abs/2203.11560
Bouchacourt, F., Palminteri, S., Koechlin, E., & Ostojic, S. (2020). Temporal chunking as a mechanism for unsupervised learning of task-sets. eLife, 9. https://doi.org/10.7554/eLife.50469
Disentangling with biological constraints: A theory of functional cell types. arXiv. https://arxiv.org/abs/2210.01768
Superposition: Toy models of superposition. Transformer Circuits. https://transformer-circuits.pub/2022/toy_model/index.html
Rajalingham, R., Piccato, A., & Jazayeri, M. (2022). Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task. Nature Communications, 13(1), 5865.
Pagan, M., Tang, V. D., Aoi, M. C., Pillow, J. W., Mante, V., Sussillo, D., & Brody, C. D. (2022). A new theoretical framework jointly explains behavioral and neural variability across subjects performing flexible decision-making. bioRxiv. https://doi.org/10.1101/2022.11.28.518207