Stephen Wolfram explores the potential, and the limitations, of AI in science: cases in which AI will be a useful tool, and others in which it will be a less ideal one.
It’s interesting to get Wolfram’s take on the current state of AI, considering he has led the massive Wolfram Alpha project, which is in a sense the exact opposite of the recent trend of massive black-box neural networks: Alpha is instead a white-box system that explicitly assembles the sum of human computational knowledge.
I wish he had made more of an effort to summarize himself. The main point (I think?) is that the basic scientific and computational limits we know of apply to artificial neural networks as well. If a phenomenon such as the three-body problem truly is computationally irreducible, then an AI won’t do any better at solving it, since it, too, needs to come up with a model in order to predict the physical behaviour of a system. He doesn’t touch on it, but I suspect the same point can be made with NP-completeness.
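To make the irreducibility point concrete, here’s a minimal sketch (my own illustration, not from the essay) using Wolfram’s favorite example, the Rule 30 cellular automaton. As far as anyone knows, the only way to learn the center cell’s value after n steps is to actually run all n steps; a neural network trying to predict it would face the same wall.

```python
# Illustration of computational irreducibility via Rule 30.
# Function names and parameters are my own, for demonstration only.

def rule30_step(cells: list[int]) -> list[int]:
    """One step of Rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def center_cell_after(steps: int, width: int = 101) -> int:
    """Value of the center cell after `steps` iterations, starting from
    a single live cell. No shortcut formula is known: you have to
    simulate every step."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells[width // 2]

# The center column of Rule 30 has no known closed form,
# despite the rule itself being trivially simple.
print([center_cell_after(t) for t in range(16)])
```

That asymmetry, a trivially simple rule whose output can seemingly only be obtained by brute simulation, is the heart of the argument: if nature behaves this way, a learned model has no cheaper route to the answer than physics does.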