
A Different Kind of Research Lab

An Overview of the Research Vision and High-Level Strategy behind the Manifold Computing Group

Advancing Learning Systems and Their Applications

Manifold Computing is an unusual research lab, formed from the thesis that the biggest scientific problems we face as a civilization will require progress on numerous fronts. One of these is improving the theory and applicability of modern learning systems, which, despite significant progress, do not scale well to multimodal, hierarchical, and data-starved problems. We aren't laser-focused on pie-in-the-sky ideas about AGI; rather, we are focused on building tools that allow anyone to build much more complex learning systems in a modular, interpretable way, advancing theory as we go. This key research theme, dubbed the Infrastructure of Intelligence, constitutes one of the large projects many Manifold Computing researchers are involved in, but it is not our sole focus.

There is no limit or constraint on the direction or goal of any research work done in the group, and we already have several diverse collaborations underway, including: Metalearning approaches to building Sparse Multimodal Neural Networks, Neuroscience-Inspired Machine Learning, Differential Privacy and Federated Learning applied to Metalearning, Deep Learning methods for Financial Market Prediction, Computer Vision techniques for real-time effects and filters, and even Learning-based coordination of Space Infrastructure. There is a strong skew towards computational projects because our members are mostly experienced in this domain, but in the future we hope to facilitate research in an even wider variety of areas.

We're hoping to build a truly distributed, productive, and transparent research organization that tackles big problems in a systematic way. A distributed, remote-friendly culture and an open-source-first mindset are key to this.

Optimize for Open Source and Transparency

Part of the way we want to do research is to overshare and to embrace packaging our breakthroughs into open-source software. We want to share results - good or bad - frequently. Engaging transparently with the world about what we're doing doesn't just mean publishing papers; it means publishing intermediate results, progress updates, journal and conference papers, and other media. In addition, we want to formally support and communicate with the open-source community around our work, building user-centered tools, libraries, and implementations that we maintain with the community over long periods of time. Traditionally, much of ML research has been isolated from industry practice, even when new methods offer real usability and performance advantages over those already adopted. We hope that by blending good software management practices, transparency with the community, and traditional research dissemination, we can maximize our usefulness to the world.

A Distributed Team

Doing this will require the collaboration of as many people as possible. Location, institution, and numerous other details don't really matter in the grand scheme of things if you're trying to do good work; domain knowledge, experience, determination, and vision do. Manifold Computing is built on numerous collaborations, with folks from well-known research institutions like Harvard, UCL, and Georgia Tech, and from industry organizations like Google and Facebook. Some team members come from traditional ML research backgrounds, but many have backgrounds as software engineers, physicists, mathematicians, biologists, musicians, and more. As a team, we work over the internet, meeting much like a traditional colocated research group, and much of our computational work can be done using cloud computing resources. So, if you're anywhere in the world and want to do great research with us, please reach out!