Quote:
Originally Posted by palmipede
Speaking of computational complexity, some hard problems like boolean satisfiability (e.g. 3SAT) have been observed to have easy solution times for average random instances, but there is no practical upper bound on the solution times of the worst cases.
I wonder what that means for learning. Are there known cases of phase transitions in learning time?

An example in machine learning is that finding the global minimum of a neural network's loss has been shown to be NP-hard, but simple algorithms like stochastic gradient descent (with gradients computed via backpropagation) plus a few heuristics work fine in practice at finding good local minima.
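To make that concrete, here is a minimal sketch of gradient descent with manual backpropagation on the classic XOR problem, using only NumPy. The network size, learning rate, and step count are illustrative assumptions, not anyone's reference implementation; the point is just that a plain first-order method routinely drives the (non-convex, NP-hard-in-general) loss down to a good minimum.

```python
import numpy as np

# Illustrative sketch: one hidden layer trained on XOR with full-batch
# gradient descent and hand-written backpropagation. Sizes, learning
# rate, and iteration count are arbitrary choices for the demo.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initialization: one of the heuristics that helps gradient
# methods find a good local minimum despite worst-case NP-hardness.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(np.mean((p - y) ** 2))

    # Backward pass (chain rule, i.e. backpropagation)
    dp = 2 * (p - y) / len(X)      # dL/dp for mean squared error
    dz2 = dp * p * (1 - p)         # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)         # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient step (full batch here; "stochastic" GD would sample
    # a minibatch of the data at each step)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In practice the loss drops far below its starting value, even though nothing guarantees the global minimum was found.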