In a recent talk (see the video above), Prof. Yoshua Bengio expands on his work on the consciousness prior. Essentially, current machine learning models are good at fast, unconscious processing (system 1 tasks) but lack the slow, sequential, conscious reasoning (system 2 tasks) that our brains perform. His hypothesis is that the brain has a large substrate of unconscious learning, and an attention mechanism picks out a very low-dimensional representation (a sparse factor graph) from that high-dimensional space, which then also feeds back into subsequent system 1 computation. Language acts as a bridge between system 1 and system 2: it is itself a low-dimensional representation and is used to verbalize unconscious thoughts.
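The attention bottleneck Bengio describes can be made concrete with a toy sketch: attention scores are computed over a high-dimensional hidden state, and only a handful of dimensions survive into the low-dimensional "conscious" summary. The scoring scheme, the `conscious_bottleneck` function, `W_q`, and `k` below are all illustrative assumptions for this sketch, not the architecture from the talk.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conscious_bottleneck(h, W_q, k=4):
    """Select a tiny subset of a high-dimensional state via attention.

    h   : (d,) high-dimensional "unconscious" state (system 1 substrate)
    W_q : (d,) attention weights (hypothetical; stands in for learned
          parameters the talk does not specify)
    k   : number of dimensions admitted to the "conscious" state
    """
    scores = softmax(W_q * h)        # attention weights over all d dims
    top = np.argsort(scores)[-k:]    # keep only the k most attended dims
    c = np.zeros_like(h)
    c[top] = scores[top] * h[top]    # sparse, low-dimensional summary
    return c, top

rng = np.random.default_rng(0)
d = 512
h = rng.normal(size=d)               # rich system 1 representation
W_q = rng.normal(size=d)
c, picked = conscious_bottleneck(h, W_q, k=4)
print(picked, c[picked])             # only 4 of 512 dimensions survive
```

The sparse output `c` is the kind of low-dimensional object that, in Bengio's framing, both supports system 2 reasoning and conditions subsequent system 1 computation.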