In 2012, VideoLectures filmed the 26th Annual Conference on Neural Information Processing Systems (NIPS) at Lake Tahoe. Along the way we also shot the NIPS 2012 workshops, oral sessions, and spotlight sessions, which were collected for the Video Journal of Machine Learning Abstracts – Volume 3. One of the pearls among those presentations was the invited talk by Scott Aaronson from EECS at MIT, and we simply couldn’t miss the opportunity to have a chat with him.
[highlight]1. How did it feel to be an invited speaker at NIPS? Since you originally come from the machine learning community, it was almost a comeback: you actually started out in grad school at Berkeley as a machine learning person, studying with Michael I. Jordan, one of the most popular authors on VideoLectures.Net.[/highlight]
Going to NIPS was phenomenal. Machine learning is a big, vibrant community that’s extremely close intellectually to my own community (theoretical computer science) – often even addressing exactly the same questions in different language. Yet for some reason, I’d never been to a machine learning conference before. So I was thrilled to have the opportunity to change that. I got to meet legendary figures whom I’d heard lots about but never met. And I also got approached by lots of students who read my blog but had never met me in person — there are not many places, besides NIPS, where I could walk around and feel like some sort of minor celebrity! This was also my first time visiting the beautiful Lake Tahoe, and I loved that too, though I wish the weather had been better.
[highlight]2. The title of your talk is “Quantum Information and the Brain”. You were trying to find an intersection of two fields. What’s the message you were trying to present in your talk?[/highlight]
I’ve given talks that had a simple, snappy “message,” but unfortunately, this wasn’t one of them! Given how many balls I was trying to juggle — explaining the basics of quantum computing and information; discussing how quantum computing could help machine learning and vice versa; pouring cold water on some really bad attempts to connect quantum mechanics to the brain, yet avoiding blanket dismissals of ideas that (even if wrong) are clearly worth investigating; and not least, trying to make the audience laugh, all in the space of 45 minutes — the talk itself is almost the shortest summary I can give.
On the other hand, I did try to state a few conclusions at the end of my talk. A first conclusion is that several common speculations—for example, that “the brain is a quantum computer”—seem profoundly implausible on physical, biological, and computational grounds alike. After two decades of research in quantum computing, a rough picture has emerged of what a quantum computer would and wouldn’t be good for, and also of what it would take to build one. Briefly, it looks like quantum computers could provide enormous speedups, but only for certain specialized tasks — like simulating quantum physics or breaking widely-used cryptosystems — that probably didn’t have much evolutionary survival value! Furthermore, no one has any clue how scalable quantum computing could be performed in the hot, wet environment of the brain, given that people have had so much trouble demonstrating it even in carefully-controlled laboratory conditions. And even if the brain were a quantum computer, no one has any clue how that would help with the “mysteries of consciousness” that usually motivate such suggestions in the first place. If you don’t think it would “feel like anything” to be a classical computer running (say) Dijkstra’s algorithm, then why would it feel like anything to be a quantum computer running Shor’s algorithm?
[frame align="left" bg="grey" type="shadow small_frame" title="Personal view from the NIPS 2012 talk"][/frame]
On the other hand, a second conclusion is that quantum mechanics *really does* change the relationship between the physical world and our observations of the world in a profound, not-yet-fully-understood way. Because of that, barring a revolution in physics, I imagine that a century from now people will *still* be speculating that QM is trying to tell us something about the ‘mysteries’ of consciousness and free will! And that speculation will probably lead them to say many stupid things, just as it does today. But in itself, it will be no stupider than any other speculation about consciousness or free will since the beginning of civilization.
Now, one question that interests me a great deal is whether the human brain will eventually become as predictable as a digital computer — not merely “in principle,” but via actual brain-scanning devices consistent with the laws of physics. Alternatively, could the physical limits of measurement — like the Uncertainty Principle or the No-Cloning Theorem — put some fundamental cap on how well a brain-scanning machine could ever work? If you told me that, by the year 3000 or whatever, quantum mechanics *had* been successfully connected to human cognition in some way, then questions like that would be my best guess for what had led to the connection.
[highlight]3. How could quantum computing help—or not help—with machine learning tasks? And have concepts from machine learning helped quantum computing theory?[/highlight]
There are two quantum algorithms — Grover’s algorithm and the quantum adiabatic algorithm — that could help *somewhat* with all sorts of optimization and model-finding problems, including NP-hard problems, and certainly including problems that arise in machine learning. On the other hand, contrary to common misconception, neither of these algorithms offers a general exponential speedup over the best classical algorithm. Grover’s algorithm offers a quadratic (square-root) speedup — which is great, but far from exponential. Meanwhile, the adiabatic algorithm might offer exponential speedups for certain *instances* of NP-hard problems — but for other instances, it might be just as slow as, or even slower than, classical algorithms like simulated annealing (to which the adiabatic algorithm is closely related). And we probably won’t know what sort of behavior predominates in practice until we have large-scale quantum computers that uncontroversially *behave* as quantum computers, and can run tests on them! So, in summary, Grover’s algorithm and the adiabatic algorithm could indeed push forward the boundaries of what can be done in machine learning, but they wouldn’t have the same sort of revolutionary impact that Shor’s factoring algorithm would have on cryptography.
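The quadratic speedup is easy to quantify with a short back-of-the-envelope calculation (not part of the interview; function names are illustrative): for one marked item among N, Grover's algorithm needs about (π/4)√N oracle queries, versus roughly N/2 classical queries on average, and the standard closed-form success probability after k iterations is sin²((2k+1)θ) with θ = arcsin(1/√N).

```python
import math

def grover_iterations(n_items: int) -> int:
    """Near-optimal number of Grover iterations for one marked item among n_items."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return max(1, round(math.pi / (4 * theta) - 0.5))

def grover_success_probability(n_items: int, iterations: int) -> float:
    """Probability of measuring the marked item after the given number of iterations."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

for n in (1_000, 1_000_000):
    k = grover_iterations(n)
    p = grover_success_probability(n, k)
    print(f"N={n}: classical ~{n // 2} queries on average, "
          f"Grover ~{k} queries (success probability {p:.4f})")
```

For a million items this gives roughly 785 quantum queries versus about 500,000 classical ones — a big win, but a square root, not an exponential.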
[frame align="left" bg="grey" type="shadow small_frame" title="Quantum Mechanics in 1 slide"][/frame]
As for the impact of machine learning on quantum computing — well, that’s a direction I’ve pursued in my own research since 2004 or so. In the talk, I discuss how I used ideas straight from classical machine learning — such as boosting, Valiant’s PAC model, and “fat-shattering dimension” — to solve open problems about the information content and learnability of quantum states. Another example of that sort of “technology transfer” comes from recent work applying compressed-sensing ideas to the reconstruction of low-rank quantum states. I predict that we’ll see many other applications of machine-learning ideas to quantum computing in the future.
[highlight]4. What prompted you to go into this field?[/highlight]
I went into computer science because of a childhood desire to create my own video games. Later, I gravitated to theoretical computer science when I realized I was much better at proving theorems than at writing code for large software projects. Then, as a summer student at Bell Labs and a grad student at Berkeley, I got into quantum computing — because it was new and full of meaty, concrete open problems; because the mathematical techniques were beautiful; and most of all, because I saw the ideas of computer science finally making contact with modern physics, and being used to grapple with profound questions about the nature of the physical world. What wasn’t to like?
[highlight]5. You won last year’s Alan T. Waterman Award with Robert Wood of Harvard. How surreal was that?[/highlight]
Pretty surreal. Hardly anyone ever calls my office phone except telemarketers and the occasional crank. So when Subra Suresh, the NSF director, called to tell me about the Waterman, I almost didn’t pick up the phone!
Of course I’m deeply grateful for the award, which will be a huge help in letting me support my students and postdocs. And I’m honored to share the award with Rob Wood. I got to visit his lab last year, and his work on robot bees is awesome. (And definitely easier than quantum computing to explain to a layperson!)
[highlight]6. Your blog is a great, honest, and highly interesting resource, from both a professional and a humorous perspective. It doesn’t happen often that researchers run blogs; it’s time-consuming. How do you keep up with research, lectures, writing papers, travel, etc.?[/highlight]
I don’t. As my colleagues could tell you, I’ve fallen ridiculously far behind on things I’ve agreed to do. Furthermore, my wife and I just had our first child three days ago. She’s absolutely wonderful, but she’ll probably make the time-crunch problem a hundred times worse!
[highlight]7. Being a member of the MIT Open Access Working Group, how do you feel about the death of Aaron Swartz and the prosecution he faced? Can you comment on this?[/highlight]
As I wrote on my blog, Aaron Swartz’s suicide was a tragedy. While I never met Aaron (we only briefly corresponded by email), he was clearly immensely talented, and his death at age 26 was a loss for the entire world.
[frame align="left" bg="grey" type="shadow small_frame" title="Aaron Swartz committed suicide in January 2013"][/frame]
I wouldn’t have advised Aaron to do what he did. Yet, like many scientists, I share Aaron’s passionate belief that the results of publicly-funded research should be freely available to the public — and I try to support the open science movement, in the hopes of hastening the day when the world will catch up with that belief.
It also seems to me that the prosecution Aaron faced was disproportionate to the crime. Any prison sentence he faced should have been measured in days or weeks, not in years. And he shouldn’t have had to accept a humiliating plea bargain in order to get a proportionate sentence: such a sentence should have been the *outcome* of a trial, not an inducement to forgo one. Finally, his victimless, nonviolent, idealistically-motivated crime should never have been labeled a felony.
As an MIT professor, I’m grieved that decisions taken at MIT might have helped the prosecution to go forward, and thereby contributed indirectly to Aaron’s suicide. But we really don’t understand the details yet, which is why I’m glad that MIT’s President Reif appointed Hal Abelson to lead an investigation into what MIT did or didn’t do and what it could have done differently. In retrospect, I wish that MIT’s “foot soldiers” — the students, faculty, and staff — had been made aware of the issue, so that they could have openly debated how to respond. As it was, I had no idea either that Aaron faced the threat of decades in prison, or that there was anything MIT could do that might mitigate the threat (for example, following JSTOR in announcing that it had no further interest in pursuing charges). The first thing I heard about Aaron Swartz after the news of his arrest was the news of his suicide.
[highlight]8. What is your blue-sky project, if you have one? If you had unlimited financial resources, what would you research? Remember, unlimited resources…[/highlight]
One of my dreams is to write a book called “Physics from the Bottom Up,” which would be a little like Stephen Wolfram’s “A New Kind of Science” — *except* that it would be humbler, and give credit to others, and embrace formal mathematical reasoning, and incorporate quantum mechanics and all the other big things we’ve learned about the laws of physics. The book would *start* by studying simple cellular automata, like Conway’s Game of Life. But that would just be the first chapter. Every subsequent chapter would ask: how can we change our a priori, toy, cellular-automaton model of what the laws of physics “could have been like”, to make it closer to what the laws are *actually* like? For example, how can we incorporate Galilean symmetry? The notions of mass, energy, momentum, particles? Special relativity? Entropy? Gravity? Quantum mechanics? Bosons and fermions? Gauge fields? Symmetry breaking? Dark energy? And crucially, how far can we get in *justifying* each of these choices — in explaining why they’d at least be natural (if not “inevitable”), were we designing the laws of physics ourselves?
Unfortunately, I’ll need to learn *much* more physics before I can attempt this book! And that learning will require a great deal of time that I don’t currently have.
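For readers who haven't seen it, the Game of Life that such a book would start from is fully specified by one local update rule: a live cell with two or three live neighbours survives, a dead cell with exactly three is born, and everything else dies. A minimal sketch (the function name is ours, not from the interview), representing the unbounded grid as a set of live (row, col) cells:

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Conway's Game of Life on an unbounded grid."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Survival: 2 or 3 neighbours and already alive; birth: exactly 3 neighbours.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar with period 2.
blinker = {(0, -1), (0, 0), (0, 1)}
print(sorted(step(blinker)))           # [(-1, 0), (0, 0), (1, 0)]
print(step(step(blinker)) == blinker)  # True
```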