ML models are known to be good at interpolating between points in the training set, but much worse at extrapolating beyond it. Both can produce innovation when applied to science: a lot of innovation, for example, is about transferring existing ideas from one domain to another. We are also starting to see examples of LLMs proving novel math theorems: https://news.ycombinator.com/item?id=48071262. But those could still be "interpolation". Is there any strong evidence for or against LLMs being capable of extrapolation?

Comments URL: https://news.ycombinator.com/item?id=48077122 | Points: 1 | Comments: 0
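The interpolation/extrapolation gap the question alludes to can be made concrete with a toy sketch (my own illustration, not from the post): fit a low-degree polynomial to samples of sin(x) drawn from a fixed interval, then compare its error inside that interval (interpolation) against its error well outside it (extrapolation).

```python
import numpy as np

# Toy illustration: train on sin(x) sampled from [0, pi],
# then test inside vs. outside that range.
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, np.pi, 50))
y_train = np.sin(x_train)

# Least-squares fit of a degree-5 polynomial to the training points.
coeffs = np.polyfit(x_train, y_train, deg=5)

x_interp = np.linspace(0.1, np.pi - 0.1, 100)      # inside the training range
x_extrap = np.linspace(np.pi + 1, np.pi + 3, 100)  # outside the training range

err_interp = np.max(np.abs(np.polyval(coeffs, x_interp) - np.sin(x_interp)))
err_extrap = np.max(np.abs(np.polyval(coeffs, x_extrap) - np.sin(x_extrap)))

print(f"interpolation max error: {err_interp:.4f}")
print(f"extrapolation max error: {err_extrap:.4f}")
```

Inside the training range the fit tracks sin(x) closely, while outside it the polynomial diverges and the error blows up by orders of magnitude. Whether LLMs behave like this in concept space, as opposed to recombining training-set ideas in ways that merely look novel, is exactly the open question.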