Biologically-inspired computational frameworks for pattern recognition
Hierarchical Temporal Memory (HTM) attempts to mimic the feed-forward and feedback projections thought to be crucial for cortical computation. Bayesian Belief Propagation is used within a hierarchical network to learn invariant spatio-temporal features of the input data, and theories exist that explain how this mathematical model could be mapped onto cortico-thalamic anatomy.
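The feed-forward/feedback interplay described above can be illustrated with Pearl-style belief propagation on a small tree. The sketch below is purely illustrative (a two-child tree with random conditional tables; all names such as `lams`, `pi`, and `cpts` are our own, not from the paper): children send upward likelihood (lambda) messages, the parent fuses them into a belief, and downward (pi) messages carry top-down predictions back to each child.

```python
import numpy as np

def normalize(v):
    return v / v.sum()

rng = np.random.default_rng(0)
n_parent, n_child = 3, 4

# One conditional table per child: rows indexed by parent state,
# columns by child state, each row normalized to a distribution.
cpts = []
for _ in range(2):
    t = rng.random((n_parent, n_child))
    cpts.append(t / t.sum(axis=1, keepdims=True))

# Soft evidence at each child (e.g. likelihood of the raw input).
evidence = [normalize(rng.random(n_child)) for _ in range(2)]

# Feed-forward: lambda message from child i to the parent is
#   sum_c P(c | parent) * evidence_i(c)
lams = [cpt @ ev for cpt, ev in zip(cpts, evidence)]

# Parent belief combines a uniform prior with all upward messages.
prior = np.full(n_parent, 1.0 / n_parent)
belief_parent = normalize(prior * lams[0] * lams[1])

# Feedback: the pi message to child i excludes that child's own lambda,
# then is pushed through the CPT to give a top-down prediction.
for i, (cpt, ev) in enumerate(zip(cpts, evidence)):
    pi = normalize(prior * lams[1 - i])
    top_down = pi @ cpt                  # predicted distribution over child states
    belief_child = normalize(top_down * ev)
```

The upward pass plays the role of the feed-forward projections and the downward pass the feedback projections; HTM's actual message equations differ in detail (e.g. its coincidence/group structure) but follow this same pattern.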
A comprehensive description of the HTM architecture and learning algorithms is provided in , where HTM was also shown to perform well on several pattern recognition tasks, even though further studies and validations are necessary. In  we develop a novel technique for (incremental) supervised learning in HTM based on error minimization. We prove that error backpropagation can be implemented naturally and elegantly through native HTM message passing based on Belief Propagation, and that a two-stage training procedure, unsupervised pre-training followed by supervised refinement, is very effective (both accurate and efficient).
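The two-stage scheme can be sketched generically. The toy example below is a stand-in, not the paper's method: PCA plays the role of HTM's unsupervised feature learning, and gradient descent on a logistic output layer plays the role of the supervised, error-minimizing refinement. All data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with low-dimensional structure: 3 latent factors, 10 observed dims.
Z = rng.random((200, 3))
A = rng.random((3, 10))
X = Z @ A
y = (Z[:, 0] > 0.5).astype(float)      # label depends on one latent factor

# Stage 1 (unsupervised pre-training): learn a feature projection from the
# inputs alone -- here PCA stands in for HTM's unsupervised feature learning.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
H = Xc @ Vt[:3].T                      # project onto the top-3 components

# Stage 2 (supervised refinement): minimize classification error by gradient
# descent on a logistic output layer over the pre-trained features.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # predicted P(y = 1)
    g = p - y                                 # cross-entropy gradient signal
    w -= 0.1 * (H.T @ g) / len(y)
    b -= 0.1 * g.mean()

p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
accuracy = float(((p > 0.5) == (y > 0.5)).mean())
```

The point of the split is that stage 1 needs no labels, so the bulk of the representation is learned cheaply, and stage 2 only has to tune the output mapping; in the paper's setting the refinement is carried by the same Belief Propagation messages the network already uses.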
- Davide Maltoni