Semantic memory is a component of long-term memory that stores general information and concepts about the world around us, ranging from broad general knowledge to the meanings of words and concepts.
Types of Memory
Memory can be divided in many ways; the major dichotomies of long-term memory are described below.
Another way of subdividing long-term memory is into memory for specific events we have experienced and memory for more general information, referred to as episodic memory and semantic memory respectively, a distinction proposed by Tulving (1972).
Semantic memory refers to the memory of meanings, understandings, and other concept-based knowledge unrelated to personal experiences. In essence, semantic memory is the knowledge of facts, such as knowing what 'x' is. Semantic memory is retrospective in nature and is best described as "what we already understand", while episodic memory is best described as "what has happened to us". Semantic memory consists of networks of associations between concepts. Links between these concepts (nodes) are labelled semantically; a semantic network contains at least two types of links, class membership and attributes. A set of rules defines a group within the memory network.
As opposed to episodic memory, semantic memory is not affected by amnesia, the loss of memory. However, semantic memory is affected by agnosia, the loss of knowledge: the inability to recognise objects, persons, sounds, shapes, or smells. Furthermore, semantic memory is unrelated to context and personal relevance.
Teachable Language Comprehender (TLC), Collins and Quillian (1969)
The Hierarchical Network Model (HNM) of Collins and Quillian (1969) was the first systematic model of semantic memory. From it, TLC (a computer program) was created to model human language comprehension. Its goal is to comprehend text input by relating it to a pre-existing large semantic network (SN) representing rules already known about the world. The model suggests that semantic memory is organised into a series of hierarchical networks consisting of nodes and properties. A node is a major concept, such as 'animal', 'bird' or 'canary'. A property (also called an attribute or feature) is, as expected, a property of that concept, for example 'has wings' or 'is yellow'. The model is arranged as a hierarchy, with the more widely encompassing nodes stored at the higher levels. The model can be said to 'comprehend' a sentence if it successfully relates the input to the knowledge base. Learning is accomplished by incorporating any successfully comprehended rules into the SN.
Memory structures consist of concepts connected by links (in a generalisation hierarchy) to other concepts. Similar concepts are stored closer together than unrelated concepts.
- Concepts are stored as local representations – each concept is stored as a single node, and the nodes of related concepts are linked together in a hierarchical fashion. When a concept is 'activated' in semantic memory, linked nodes are also 'activated'. For example, people were faster to verify "a canary is yellow" than "a canary has wings". This illustrates that the closer together two nodes are in the hierarchy, the faster someone can identify concepts and their properties. The concept (canary) and the property (yellow) are stored at the same level and are thus activated quickly, but 'canary' and 'has wings' are separated by one level ('has wings' is stored with 'bird'), so reaction time is longer.
- Links represent relationships between nodes – ‘is a’ or ‘has’.
- The model assumes cognitive economy: it should achieve its goal using the least cognitive resources possible. It does this by minimising the number of representations of a piece of information. A property is stored at the highest possible node in the hierarchy, so that information can be deduced via inheritance for lower nodes, e.g. 'has wings'. This means that 'canary' would not have the link 'has wings'; instead it is linked to 'bird', which has 'wings' linked to it. This stops us having to add 'wings' to every specific bird we encounter, which would take a lot of space: we can simply assume that anything connected to 'bird' also has all the links that 'bird' has.
- Links are all-or-none, i.e. no strength values are attached to links such as class membership.
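The hierarchy and cognitive economy described above can be sketched in a few lines of code. This is a hypothetical illustration, not Collins and Quillian's actual implementation; the node names, properties, and the idea of returning the number of levels traversed (as a stand-in for verification time) are assumptions made for clarity.

```python
# Hypothetical sketch of a Collins & Quillian-style hierarchical network.
# Properties are stored once, at the highest applicable node (cognitive
# economy), and inherited downward via 'is a' links.

class Node:
    def __init__(self, name, properties=None, parent=None):
        self.name = name
        self.properties = set(properties or [])  # stored only at this level
        self.parent = parent                     # 'is a' link up the hierarchy

    def has_property(self, prop):
        """Walk up 'is a' links; the number of levels crossed stands in
        for verification time in the model."""
        node, distance = self, 0
        while node is not None:
            if prop in node.properties:
                return True, distance
            node, distance = node.parent, distance + 1
        return False, distance

animal = Node("animal", {"has skin", "can move"})
bird = Node("bird", {"has wings", "can fly"}, parent=animal)
canary = Node("canary", {"is yellow", "can sing"}, parent=bird)

print(canary.has_property("is yellow"))  # (True, 0): stored with 'canary'
print(canary.has_property("has wings"))  # (True, 1): inherited from 'bird'
print(canary.has_property("has skin"))   # (True, 2): inherited from 'animal'
```

Note that 'has wings' is never stored on 'canary' itself; it is deduced through the 'bird' node, exactly as the cognitive economy principle requires.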
How does it understand language?
- Intersection search: as soon as a word is parsed (broken down and analysed), activation spreads out from it like a 'plague', activating all its links, which activate their links, and so on, until it reaches a node that has already been 'touched' by the spread from another word. This links the two nodes semantically (the length of the path is the semantic distance). Each node records where its activation came from, so the path can be traced back to where it started.
- The semantic interpretation of the input corresponds to the set of linked words. E.g. for the phrase 'the canary the shark bit had wings', the semantic network is used to infer that the canary is the owner of the wings and the shark is the one that bit, because 'canary' has 'wings' as a semantically linked property, and 'shark' has 'biting' as a semantically linked property.
- Syntax is only used to check the validity of interpretation. Any input that is not syntactically correct is rejected.
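The intersection search described above can be sketched as a breadth-first spread from two concepts at once. This is a simplified, hypothetical reconstruction: the toy graph, function name, and the use of plain breadth-first search are assumptions, not TLC's actual mechanism.

```python
from collections import deque

def intersection_search(graph, a, b):
    """Spread activation outward from two concepts in alternating waves
    until the two fronts touch; the combined path length approximates
    semantic distance. `graph` maps each node to its linked nodes."""
    dist = {a: {a: 0}, b: {b: 0}}          # distance from each origin
    frontiers = {a: deque([a]), b: deque([b])}
    while frontiers[a] or frontiers[b]:
        for origin, other in ((a, b), (b, a)):
            if not frontiers[origin]:
                continue
            node = frontiers[origin].popleft()
            if node in dist[other]:        # fronts have intersected
                return dist[a][node] + dist[b][node]
            for nxt in graph.get(node, []):
                if nxt not in dist[origin]:
                    dist[origin][nxt] = dist[origin][node] + 1
                    frontiers[origin].append(nxt)
    return None  # no semantic connection found

# Toy network: 'canary' and 'shark' meet at 'animal'
graph = {
    "canary": ["bird"], "bird": ["animal"],
    "shark": ["fish"], "fish": ["animal"],
    "animal": [],
}
print(intersection_search(graph, "canary", "shark"))  # prints 4
```

Here the activation fronts from 'canary' and 'shark' meet at 'animal' after two links each, so the semantic distance between them is four links.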
Classes of failure
- Because of its generalisation hierarchy structure, there was nowhere to put any abstract information that didn’t fit into the hierarchy.
- Often made false connections between subjects if they were too general and the syntax was too vague. E.g. ‘he hated the landlord so much that he moved into the house on Brunswick Street’ – TLC would incorrectly associate the landlord and the house.
- Although it is possible to comprehend episodic input using the semantic network, it doesn't incorporate/add these episodes into the semantic network itself, i.e. it can only learn semantic relationships
Studies on sentence verification times (Collins and Quillian 1969/72) show good support for the notion that reaction time (RT) increases as the semantic distance increases.
Problems for the TLC:
- The model's own data are inconsistent with it: RTs are faster for falsification, and faster the greater the semantic distance. Collins and Quillian found that 'a canary is a tulip' was rejected faster than 'a canary is a robin', whereas the model predicts that, for the tulip sentence, the participant would have to search through the whole network before rejecting it and would thus be slower.
- Typicality effects: the model attaches no associative strength to links, yet Rosch (1973) showed that 'a robin is a bird' is verified faster than 'a chicken is a bird' because the two differ in typicality; the typicality ratings were 1.1 for robin-bird and 3.8 for chicken-bird on a 1-7 rating scale. Rosch and Mervis (1975) investigated the typicality ratings of fruits and found that oranges, apples, bananas and pears were rated as much more typical fruits than olives, tomatoes, coconuts and dates. Rips, Shoben and Smith (1973) found that verification times were faster for more typical or representative members of a category than for more atypical members; this is called the typicality gradient.
- An alternative explanation for the sentence verification results is typicality/representativeness. Conrad (1972) found no evidence for cognitive economy when typicality was controlled.
It is unlikely that the precise representation chosen bears much resemblance to human semantic memory.
However, TLC was hugely influential:
- It demonstrated that it is possible to model SM.
- It influenced the development of subsequent, better models.
The Spreading Activation Model
Collins and Loftus (1975) developed the spreading activation model of semantic memory in response to the criticisms of the HNM/TLC. It suggests that concept nodes are linked together with different levels of conductivity: the more often two concepts are activated together, the greater the conductivity of the link between them.
Collins and Loftus assumed that semantic memory is organised on the basis of semantic relatedness, or semantic distance. Nodes that are consistently activated together form stronger connections and so excite each other more easily; in diagrams of the network this is shown by drawing shorter links between such nodes. The shorter the link, the closer the semantic relation, and the faster the connection between the nodes is made. Furthermore, the longer a concept is accessed, the larger the spread of activation.
Spreading activation is the idea that when a concept is accessed, activation spreads out from that node in all directions as the node attempts to excite all the nodes around it. The higher the conductivity of a link, the faster activation spreads down it. Whenever a person thinks of, hears, or sees a concept, the appropriate node is activated.
The model uses the analogy of neurons: each node has an activation threshold that must be exceeded before the node fires.
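A minimal sketch of spreading activation with link strengths ("conductivity") and a firing threshold might look like the following; all numeric parameters (threshold, decay, link strengths) are invented for illustration and carry no empirical weight.

```python
# Illustrative sketch of spreading activation: stronger links pass on
# more activation, and nodes that fall below the threshold never fire.

def spread(links, source, threshold=0.2, decay=0.5, steps=3):
    """links[node] -> list of (neighbour, strength in 0..1).
    Activation fans out from `source`, weakening with each hop;
    nodes that receive less than `threshold` do not fire."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(steps):
        next_frontier = []
        for node in frontier:
            for neighbour, strength in links.get(node, []):
                a = activation[node] * strength * decay
                if a > activation.get(neighbour, 0.0) and a >= threshold:
                    activation[neighbour] = a   # neighbour fires
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

links = {
    "bread": [("butter", 0.9), ("coat", 0.1)],   # strong vs weak link
    "butter": [("knife", 0.8)],
}
act = spread(links, "bread")
print(act)  # 'butter' fires; 'coat' and 'knife' stay below threshold
```

With these invented parameters, the strongly linked 'butter' is activated from 'bread', while the weakly linked 'coat' never reaches threshold, mirroring the conductivity idea above.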
The principle of weak cognitive economy is a revised version of Collins and Quillian's cognitive economy principle. Information is allowed to be stored at a lower node in the hierarchy if the link has been encountered explicitly, even if it is already stored at a higher level. Relations that are not stored explicitly can still be inferred from the hierarchical information.
Collins and Loftus claim that there are different types of links, including:
- Class membership (a cat is a mammal)
- Subordinate (a cat has fur)
- Prediction (game-play-people)
- Exclusion (a whale is not a fish)
The more properties two concepts have in common, the greater the number of links between them; for example, more typical birds will be heavily linked. Semantic relatedness is defined as the aggregate of the criterialities of the links between two concepts.
Collins and Loftus suggest that connections made are not necessarily logical, rather based on personal experience.
The model can explain the familiarity effect, the typicality effect, and direct concept-property associations. It explains how a semantic network is built up in the first place.
The spreading activation model is supported by studies of priming, in which responses to a stimulus are faster or more accurate when it is preceded by a semantically related concept. Mackay (1973), for example, demonstrated how prior context can remove the ambiguity of a phrase (e.g. 'he walked towards the bank') because of these interconnected units of information. Meyer and Schvaneveldt (1971) found that reaction times are quicker when words are related. They asked participants to decide whether both items in a pair were words or non-words. Participants answered "yes" much more quickly when the words were related (e.g. bread and butter) than when they were not related (e.g. bread and coat). If the words are related, activation from the first word spreads to the second word, making the association much faster than if they are unrelated.
However, a disadvantage is that the theory predicts very little, since the network is based on the individual. It handles everything but makes very few predictions that are open to empirical testing, making it very difficult to falsify.
The model also fails to consider how episodic knowledge or non-propositional knowledge could be stored. There are so many possible parameters in the system that it can be made to fit almost any empirical data.
Despite its neurological plausibility, it is not sufficiently constrained to allow it to be implemented reliably.
The Fan Effect
The fan effect causes interference in semantic memory. The more facts that are associated with a node, the more slowly activation spreads from it, because a node has a fixed capacity for emitting activation. If there are more links from a node, more time is needed to activate all of them, although this can be sped up if the links are frequently used and the association is therefore more immediate.
Anderson (1974) asked participants to learn sentences comprising a subject and a location with a relation between them. For example:
1. The doctor is in the bank
2. The fireman is in the park
3. The lawyer is in the church
4. The lawyer is in the park
Participants were then given a speeded recognition task: they were asked to indicate when they recognised a learnt sentence (the target) amongst other sentences of a similar nature (the distractors). An example of a distractor might be "the doctor is in the park". Anderson found that participants' reaction times were faster when there were fewer shared facts. Reaction time for unique sentences, e.g. "the doctor is in the bank", was 1.11 seconds, compared with 1.22 seconds when the location and person each appeared in two sentences, e.g. "the lawyer is in the park". Thus the more facts that are associated with a node, the more slowly activation spreads from it: this is the fan effect.
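The fan effect can be illustrated with a toy model in which recognition time grows linearly with the number of facts attached to each concept in the probe. The timing constants here are invented for illustration and only loosely echo Anderson's figures.

```python
# Toy model of the fan effect: a concept emits a fixed amount of
# activation, so every extra fact sharing that concept slows retrieval.
# The constants `base` and `per_fan` are invented, not Anderson's.

def recognition_time(facts, person, location, base=0.9, per_fan=0.1):
    """Retrieval time grows with the 'fan': the number of learnt facts
    attached to the person and to the location in the probe sentence."""
    fan_person = sum(1 for p, l in facts if p == person)
    fan_location = sum(1 for p, l in facts if l == location)
    return round(base + per_fan * (fan_person + fan_location), 3)

facts = [("doctor", "bank"), ("fireman", "park"),
         ("lawyer", "church"), ("lawyer", "park")]

print(recognition_time(facts, "doctor", "bank"))  # fan 1+1 -> prints 1.1
print(recognition_time(facts, "lawyer", "park"))  # fan 2+2 -> prints 1.3
```

The unique "doctor in the bank" sentence is verified fastest because both of its concepts appear in only one learnt fact, while "lawyer in the park" is slowed because both 'lawyer' and 'park' each fan out to two facts.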
This has a counterintuitive implication: the more you know, the slower you get. This may be the case, but retrieval can be sped up consciously using procedural memory.
Semantic nodes are very subjective, as people's schemas differ significantly. For example, the word 'apple' might elicit the colour 'green' for one person but 'red' for another, so the size of the fan, and hence the speed of activation, will vary from person to person.
Adaptive Control of Thought (ACT*): Declarative Memory (Anderson, 1983)
ACT* was built upon the TLC and SAM models of semantic memory. It retained the idea of semantic networks but suggested that "activation" is the key to semantic knowledge and memory. The ACT* model was the first attempt at a complete model of human cognition, a challenging task, and it therefore has a highly complex architecture that allows it to learn. Knowledge of facts and information is called declarative; ACT* also suggests that human knowledge can be procedural, that is, knowledge we hold in order to perform automatic actions such as driving.
ACT* therefore suggests that memories must be activated from source nodes in working memory. Studies have highlighted that activation takes place automatically, that is, it requires no conscious awareness. The ACT* theory of fact recognition proposed that items in LTM remain there permanently but cannot be accessed directly unless they are 'activated'.
This model viewed working memory differently from how it had previously been viewed (e.g. by Baddeley and Hitch): 'source nodes' can be located anywhere throughout the brain, and all those that are activated at any one time make up working memory. Being a complex system, ACT* is most easily represented by the "lightbulb analogy". Think of a floor of interconnected lightbulbs, most of which are off, some of which are dim (partially activated), and some of which are lit brightly (fully activated). At different times, different sections of lightbulbs will be turned off and on. This represents the idea that activation is a continuous function rather than an all-or-none action.
Conclusions on Semantic Memory
There is no doubt that we have some sort of SN in our brains, developing through experience. Semantic memory plays a crucial role in almost any cognitive activity, and it is very likely that some sort of spreading activation is involved in accessing this system. However, the system is not constrained enough to allow us to decide which complex model is closest to the truth, and we do not have a good understanding of the way other parts of the system work. Semantic memories are learned through experience and so are idiosyncratic, that is, particular to the individual. Finally, cognitive neuroscience may provide further insight into regional brain activity during SM tasks.