Computational cognition (sometimes referred to as computational cognitive science or computational psychology) is the study of the computational basis of learning and inference by mathematical modeling, computer simulation, and behavioral experiments. In psychology, it is an approach which develops computational models based on experimental results. It seeks to understand the basis of the human method of processing information. Early on, computational cognitive scientists sought to bring back and create a scientific form of Brentano's psychology.[1]
Artificial intelligence
There are two main purposes for producing artificial intelligence: to produce intelligent behaviors regardless of the quality of the results, and to model intelligent behaviors found in nature.[2] At the beginning of its existence, there was no need for artificial intelligence to emulate the same behavior as human cognition. In the 1950s and 1960s, economist Herbert Simon and Allen Newell attempted to formalize human problem-solving skills by using the results of psychological studies to develop programs that implement the same problem-solving techniques as people would. Their work laid the foundation for symbolic AI and computational cognition, and even some advancements for cognitive science and cognitive psychology.[3] The field of symbolic AI is based on the physical symbol system hypothesis by Simon and Newell, which states that aspects of cognitive intelligence can be expressed through the manipulation of symbols.[4] John McCarthy, however, focused more on the initial purpose of artificial intelligence, which is to break down the essence of logical and abstract reasoning regardless of whether or not humans employ the same mechanisms.[2]
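As a loose illustration (not a reconstruction of Simon and Newell's actual programs), the following minimal Python sketch treats problem solving in the spirit of the physical symbol system hypothesis: a symbolic state is repeatedly rewritten by matching rules until a goal symbol is reached. Every state and rule name here is invented for the example.

```python
# A minimal, hypothetical sketch of symbol manipulation: intelligence as the
# rule-governed transformation of symbol structures. States and rules are
# invented for illustration only.

# Each rule rewrites one symbolic state into another.
RULES = {
    ("hungry", "no-food"): ("hungry", "has-food"),   # "obtain food"
    ("hungry", "has-food"): ("sated", "has-food"),   # "eat"
}

def solve(state, goal, max_steps=10):
    """Repeatedly apply matching rules until the goal symbol appears."""
    trace = [state]
    for _ in range(max_steps):
        if state[0] == goal:
            return trace
        if state not in RULES:
            return None            # no applicable rule: the search fails
        state = RULES[state]
        trace.append(state)
    return None

print(solve(("hungry", "no-food"), "sated"))
# [('hungry', 'no-food'), ('hungry', 'has-food'), ('sated', 'has-food')]
```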
Over the next decades, the progress made in artificial intelligence became focused more on developing logic-based and knowledge-based programs, veering away from the original purpose of symbolic AI. Researchers started to believe that artificial intelligence might never be able to imitate some intricate processes of human cognition like perception or learning. A chief failing of AI has been its inability to achieve a complete likeness to human cognition, due to the lack of emotion and the impossibility of implementing it into an AI.[5] Researchers began to take a "sub-symbolic" approach to creating intelligence without specifically representing that knowledge. This movement led to the emerging disciplines of computational modeling, connectionism, and computational intelligence.[4]
Computational modeling
Computational cognitive modeling, which contributes more to the understanding of human cognition than artificial intelligence does, emerged from the need to define various cognitive functionalities (like motivation, emotion, or perception) by representing them in computational models of mechanisms and processes.[6] Computational models study complex systems through the use of specific algorithms and extensive computational resources, or variables, to produce computer simulations.[7] Simulation is achieved by adjusting the variables, changing one alone or combining them, and observing the effect on the outcomes. The results help experimenters make predictions about what would happen in the real system if similar changes were to occur.[8] When computational models attempt to mimic human cognitive functioning, all the details of the function must be known for them to transfer and display properly through the models; this allows researchers to thoroughly understand and test an existing theory, because no variables are vague and all variables are modifiable. Consider the model of memory built by Atkinson and Shiffrin in 1968, which showed how rehearsal leads to long-term memory, where the rehearsed information is stored. Despite the advances it made in revealing the function of memory, this model fails to answer crucial questions such as: how much information can be rehearsed at a time? How long does it take for information to transfer from rehearsal to long-term memory? Similarly, other computational models raise more questions about cognition than they answer, making their contributions much less significant for the understanding of human cognition than other cognitive approaches.[9] An additional shortcoming of computational modeling is its reported lack of objectivity.[10]
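A hypothetical Python sketch of what such a simulation might look like for the Atkinson–Shiffrin example is given below. The buffer capacity and transfer probability are exactly the quantities the original model leaves unspecified, so the values here are illustrative assumptions rather than parts of the theory.

```python
import random

# A minimal sketch of a multi-store memory simulation in the spirit of
# Atkinson and Shiffrin (1968). BUFFER_SIZE and P_TRANSFER are illustrative
# free parameters, not values taken from the theory.

BUFFER_SIZE = 4      # assumed capacity of the rehearsal buffer
P_TRANSFER = 0.15    # assumed chance a rehearsed item enters long-term memory

def simulate(items, rehearsal_cycles=20, seed=0):
    rng = random.Random(seed)
    buffer, long_term = [], set()
    for item in items:
        buffer.append(item)
        if len(buffer) > BUFFER_SIZE:   # new input displaces the oldest item
            buffer.pop(0)
    for _ in range(rehearsal_cycles):   # each cycle rehearses the whole buffer
        for item in buffer:
            if rng.random() < P_TRANSFER:
                long_term.add(item)
    return long_term

print(simulate(["cat", "pen", "sky", "map", "rose", "door"]))
```

Changing one variable at a time (for example, the buffer size) and rerunning the simulation is the kind of manipulation the paragraph above describes.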
John Anderson's ACT-R model uses the functions of computational models together with the findings of cognitive science. ACT-R (Adaptive Control of Thought-Rational) is based on the theory that the brain consists of several modules which perform specialized functions separately from one another.[9] The ACT-R model is classified as a symbolic approach to cognitive science.[11]
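The toy Python sketch below is only meant to convey the modular, symbolic flavor of this approach (specialized modules, buffers, and a production rule that fires when its conditions match); it is not the ACT-R software or its API, and every name in it is invented for the example.

```python
# A toy, hypothetical illustration of a modular, production-rule style of
# cognitive modeling. Not the actual ACT-R architecture.

goal_buffer = {"task": "add", "a": 2, "b": 3, "answer": None}
declarative_memory = {("add", 2, 3): 5}      # a stored arithmetic fact

def retrieval_module(task, a, b):
    """Separate module: look up a fact in declarative memory."""
    return declarative_memory.get((task, a, b))

def production_retrieve_sum(goal):
    """Production: if the goal has no answer yet, request a retrieval."""
    if goal["task"] == "add" and goal["answer"] is None:
        goal["answer"] = retrieval_module(goal["task"], goal["a"], goal["b"])
        return True
    return False

production_retrieve_sum(goal_buffer)
print(goal_buffer["answer"])   # 5
```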
Connectionist network
Another approach which deals more with the semantic content of cognitive science is connectionism, or neural network modeling. Connectionism relies on the idea that the brain consists of simple units, or nodes, and that the behavioral response comes primarily from the layers of connections between the nodes and not from the environmental stimulus itself.[9] Connectionist networks differ from computational modeling specifically because of two functions: neural back-propagation and parallel processing. Neural back-propagation is a method utilized by connectionist networks to show evidence of learning. After a connectionist network produces a response, the simulated results are compared to real-life situational results. The feedback provided by the backward propagation of errors is then used to improve the accuracy of the network's subsequent responses.[12] The second function, parallel processing, stemmed from the belief that knowledge and perception are not limited to specific modules but rather are distributed throughout the cognitive networks. The presence of parallel distributed processing has been shown in psychological demonstrations like the Stroop effect, where the brain seems to analyze the perception of color and the meaning of language at the same time.[13] However, this theoretical approach has been continually disproved, because the two cognitive functions for color perception and word forming operate separately and simultaneously, not in parallel with each other.[14]
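A minimal Python sketch of such a network trained with error back-propagation follows; the task (XOR), the layer sizes, and the learning rate are illustrative choices rather than parts of any particular cognitive model.

```python
import numpy as np

# A minimal sketch of a connectionist network trained with back-propagation.
# Task, sizes, and learning rate are illustrative assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # target responses

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))          # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))          # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: activation spreads through layers of simple units in parallel.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the
    # connections, and each weight is nudged to reduce that error.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after training, outputs should approach [0, 1, 1, 0]
```

The comparison of the network's responses with the target responses, and the backward flow of that error to adjust the connections, is the learning mechanism described in the paragraph above.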
The field of cognition may have benefitted from the use of connectionist networks, but setting up the neural network models can be quite a tedious task because of the complexity of such systems, and the results may be less interpretable than the system they are trying to model. Therefore, the results can be used as evidence for a broad theory of cognition without explaining the particular process happening within the cognitive function. Other disadvantages of connectionism lie in the research methods it employs and the hypotheses it tests, which have often been proven inaccurate or ineffective, taking connectionist models further from an accurate representation of how the brain functions. These issues cause neural network models to be ineffective for studying higher forms of information processing, and hinder connectionism from advancing the general understanding of human cognition.[15]