6 Comments

Too much of the modern philosophy of science can be summed up by "shut up and calculate."

We need to break through to a real philosophy (or philosophies) of science, or we will likely be spinning our wheels for some time.

Great article! Thanks!

This is exactly the perspective that I am trying to critique across my work. I find it amazing that "Shut up and calculate" is a real quote; it so perfectly encapsulates everything wrong with modern approaches to science that it almost feels like parody.

It's a physics slogan used when students start wondering which causal model of quantum mechanics is right.

I admit I thought it was a quote by Feynman; however, this appears to be a case of the Matthew effect. It was in fact coined by N. David Mermin as a criticism of the Copenhagen interpretation: https://pubs.aip.org/physicstoday/article/57/5/10/412592/Could-Feynman-Have-Said-This

However, it is definitely something that Feynman *could* have said, and it is evocative of his approach to physics. This is an actual quote from Feynman at the 1948 Pocono conference:

"The only way I knew that one of my formulas worked was when I got the right result from it. (...) I said in my talk: "This is my mathematical formula, and I’ll show you that it produces all the results of quantum electrodynamics." immediately I was asked: "Where does the formula come from?’ I said, "It doesn’t matter where it comes from; it works, it’s the right formula!" "How do you know it’s the right formula?" "Because it works, it gives the right results!" "How do you know it gives the right answers?" ’ (...) They got bored when I tried to go into the details."

I got the quote from this paper, which is well worth reading: https://vixra.org/pdf/2002.0011v1.pdf

Nice article and I generally agree, but why don't you think we'll have AI tools to improve interpretability?

Right now you can paste some data into an LLM and ask it questions about the data. You can imagine scaling this up to the point where the LLM finds patterns that a human couldn't find, then condenses them into a model that a human could understand. I've seen some discussion of building a multi-headed AI where one of the heads can explain to you what's going on inside the neural net: https://www.astralcodexten.com/i/50046004/iii-ipso-facto-ergo-elk
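For what it's worth, here's a minimal sketch of what that multi-headed idea could look like in PyTorch: a shared trunk whose hidden state feeds both an ordinary task head and a second "reporter" head. This is a toy stand-in under my own assumptions, not the ELK proposal's actual design; all the names here (TwoHeadedNet, explain_head, etc.) are illustrative.

```python
# Minimal sketch (not the ELK proposal itself): a network with a shared
# trunk and two heads. One head makes the task prediction; the other maps
# the same hidden state to tokens, standing in for a natural-language
# explanation of what the trunk is representing.
import torch
import torch.nn as nn

class TwoHeadedNet(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int, vocab_size: int):
        super().__init__()
        # Shared trunk: the internal representation both heads read from.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Head 1: the ordinary task prediction.
        self.task_head = nn.Linear(hidden_dim, n_classes)
        # Head 2: a toy "reporter" emitting token logits, a stand-in for
        # an explanation of the trunk's internal state.
        self.explain_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        return self.task_head(h), self.explain_head(h)

model = TwoHeadedNet(in_dim=16, hidden_dim=64, n_classes=2, vocab_size=100)
prediction, explanation_logits = model(torch.randn(4, 16))
print(prediction.shape, explanation_logits.shape)  # torch.Size([4, 2]) torch.Size([4, 100])
```

The hard part, of course, is training the reporter head so its output is faithful to the trunk rather than just plausible-sounding, which is exactly the problem the linked ELK discussion is about.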

Are you just pessimistic about whether we'll actually build superhuman AI interpretability, or is it something else?

We have scarcely any idea of what AI interpretability will look like as a mature field or what results will ultimately be possible. Given the way that AI simply tries to approximate the training data, it is possible that even a maximally interpretable AI is not capable of capturing underlying principles in their full generality. I think a model capable of that would need to be built with an as-yet undiscovered architecture.

I respect the capabilities of existing AI models, but I am highly skeptical that we will see anything but small incremental improvements in performance from the current generation of architectures. I think AI is in a massive hype bubble at the moment.
