Blog Archive for May 2024

Saturday, May 11, 2024

How comfortable are we letting AI find scientific results without having access to the underlying principles?

There's an article, "AlphaFold 3 predicts the structure and interactions of all of life’s molecules" (read here), but imo the real gem is the HN discussion about how to handle the problem of "science done by AI".

For example, in this thread by moconnor:

What happens when the best methods for computational fluid dynamics, molecular dynamics, nuclear physics are all uninterpretable ML models? Does this decouple progress from our current understanding of the scientific process - moving to better and better models of the world without human-interpretable theories and mathematical models / explanations? Is that even iteratively sustainable in the way that scientific progress has proven to be?

and the answer by dekhn:

If you're a scientist who works in protein folding (or one of those other areas) and strongly believe that science's goal is to produce falsifiable hypotheses, these new approaches will be extremely depressing, especially if you aren't proficient enough with ML to reproduce this work in your own hands.

If you're a scientist who accepts that probabilistic models beat interpretable ones (articulated well here: https://norvig.com/chomsky.html), then you'll be quite happy, because this is yet another validation of the value of statistical approaches in moving our ability to predict the universe forward.

If you're the sort of person who believes that human brains are capable of understanding the "why" of how things work in all its true detail, you'll find this an interesting challenge: can we actually interpret these models, or are human brains too feeble to understand complex systems without sophisticated models?

If you're the sort of person who likes simple models with as few parameters as possible, you're probably excited because developing more comprehensible or interpretable models that have equivalent predictive ability is a very attractive research subject.

Or the question by jprete:

The goal of science has always been to discover underlying principles and not merely to predict the outcome of experiments. I don't see any way to classify an opaque ML model as a scientific artifact since by definition it can't reveal the underlying principles. Maybe one could claim the ML model itself is the scientist and everyone else is just feeding it data. I doubt human scientists would be comfortable with that, but if they aren't trying to explain anything, what are they even doing?
