There are things that can be figured out by formal processes, but aren’t readily accessible to immediate human thinking.
- Computational irreducibility means that we can never guarantee that the unexpected won't happen, and it's only by explicitly doing the computation that we can tell what actually happens in any particular case.
- There's an ultimate tradeoff between capability and trainability: the more you want a system to make "true use" of its computational capabilities, the more it's going to show computational irreducibility, and the less trainable it's going to be.
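As a minimal sketch of the first point, here is the Rule 30 cellular automaton, a standard example of computational irreducibility: the update rule is trivially simple, yet no known shortcut predicts the pattern after n steps; the only general way to find out is to run all n steps explicitly. (The helper names below are illustrative, not from the source.)

```python
def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (zero boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        # Rule 30: new cell = left XOR (center OR right)
        out.append(left ^ (center | right))
    return tuple(out)

def evolve(cells, steps):
    """Run the automaton for the given number of steps."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# Start from a single black cell and print the first few rows: despite
# the simplicity of the rule, the pattern is not predictable in advance.
width = 21
state = tuple(1 if i == width // 2 else 0 for i in range(width))
for _ in range(10):
    print("".join("#" if c else "." for c in state))
    state = rule30_step(state)
```

Even with the full rule in hand, there is no closed-form way to jump ahead to step 1000; that is exactly the irreducibility the bullet points describe.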