Trace theory is the theory that if we are correctly said to remember some fact or event (as opposed to relearning it, guessing it, and so on), there must be some physiologically identifiable trace in the brain which carried the information in question right through from the time when we first learnt it.
The trace need not be a physical object; it could be an electrical circuit or such like.
Source: H. A. Bursen, Dismantling the Memory Machine (1978); critical.
Attributes
The attributes an item possesses form its trace and can fall into many categories. When an item is committed to memory, information from each of these attributional categories is encoded into the item’s trace. There may be a kind of semantic categorization at play, whereby an individual trace is incorporated into overarching concepts of an object. For example, when a person sees a pigeon, a trace is added to the “pigeon” cluster of traces within his or her mind. This new “pigeon” trace, while distinguishable and divisible from other instances of pigeons that the person may have seen within his or her life, serves to support the more general and overarching concept of a pigeon.
Physical
Physical attributes of an item encode information about physical properties of a presented item. For a word, this could include color, font, spelling, and size, while for a picture, the equivalent aspects could be shapes and colors of objects. It has been shown experimentally that people who are unable to recall an individual word can sometimes recall the first or last letter or even rhyming words,^{[4]} all aspects encoded in the physical orthography of a word’s trace. Even when an item is not presented visually, when encoded, it may have some physical aspects based on a visual representation of the item.
Contextual
Contextual attributes are a broad class of attributes that define the internal and external features that are simultaneous with presentation of the item. Internal context is a sense of the internal network that a trace evokes.^{[5]} This may range from aspects of an individual’s mood to other semantic associations that the presentation of the word evokes. External context, on the other hand, encodes information about the spatial and temporal aspects of the situation in which the item is presented. This may reflect time of day or weather, for example. Spatial attributes can refer both to a physical environment and an imagined one. The method of loci, a mnemonic strategy that relies on imagined spatial positions, assigns relative spatial positions to the items to be memorized; the learner then “walks through” these positions to recall the items.
Modal
Modality attributes encode information about the method by which an item was presented. The most common modalities in experimental settings are auditory and visual, though in principle any sensory modality may be used.
Classifying
These attributes refer to the categorization of items presented. Items that fit into the same categories will have the same class attributes. For example, if the item “touchdown” were presented, it would evoke the overarching concept of “football” or perhaps, more generally, “sports”, and it would likely share class attributes with “endzone” and other elements that fit into the same concept. A single item may fit into different concepts at the time it is presented depending on other attributes of the item, like context. For example, the word “star” might fall into the class of astronomy after visiting a space museum or a class with words like “celebrity” or “famous” after seeing a movie.
Mathematical formulation
The mathematical formulation of traces allows for a model of memory as an ever-growing matrix that continuously receives and incorporates information in the form of vectors of attributes. Multiple trace theory states that every item ever encoded, from birth to death, will exist in this matrix as multiple traces. This is done by giving every possible attribute some numerical value to classify it as it is encoded, so each encoded memory will have a unique set of numerical attributes.
Matrix definition of traces
By assigning numerical values to all possible attributes, it is convenient to construct a column vector representation of each encoded item. This vector representation can also be fed into computational models of the brain like neural networks, which take as inputs vectorial “memories” and simulate their biological encoding through neurons.
Formally, one can denote an encoded memory by numerical assignments to all of its possible attributes. If two items are perceived to have the same color or experienced in the same context, the numbers denoting their color and contextual attributes, respectively, will be relatively close. Suppose we encode a total of L attributes anytime we see an object. Then, when a memory is encoded, it can be written as m_{1} with L total numerical entries in a column vector:

\[
\mathbf{m}_1 = \begin{bmatrix} m_1(1) \\ m_1(2) \\ m_1(3) \\ \vdots \\ m_1(L) \end{bmatrix}.
\]
A subset of the L attributes will be devoted to contextual attributes, a subset to physical attributes, and so on. One underlying assumption of multiple trace theory is that, when we construct multiple memories, we organize the attributes in the same order. Thus, we can similarly define vectors m_{2}, m_{3}, …, m_{n} to account for n total encoded memories. Multiple trace theory states that these memories come together in our brain to form a memory matrix from the simple concatenation of the individual memories:

\[
\mathbf{M} = \begin{bmatrix} \mathbf{m}_1 & \mathbf{m}_2 & \mathbf{m}_3 & \cdots & \mathbf{m}_n \end{bmatrix}
= \begin{bmatrix}
m_1(1) & m_2(1) & m_3(1) & \cdots & m_n(1) \\
m_1(2) & m_2(2) & m_3(2) & \cdots & m_n(2) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
m_1(L) & m_2(L) & m_3(L) & \cdots & m_n(L)
\end{bmatrix}.
\]
For L total attributes and n total memories, M will have L rows and n columns. Note that, although the n traces are combined into a large memory matrix, each trace is individually accessible as a column in this matrix.
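The concatenation of traces into a memory matrix can be sketched in a few lines of NumPy. The attribute values below are made up purely for illustration; the point is only the shape of the construction (L rows, n columns) and that each trace remains accessible as a column.

```python
import numpy as np

# A hypothetical example with L = 4 attributes and n = 3 encoded memories.
# Each memory is a column vector of numerical attribute values (invented here).
m1 = np.array([0.9, 0.1, 0.5, 0.3])
m2 = np.array([0.8, 0.2, 0.4, 0.7])
m3 = np.array([0.1, 0.9, 0.6, 0.2])

# Concatenate the individual traces column-wise into the memory matrix M.
M = np.column_stack([m1, m2, m3])

print(M.shape)   # (L, n) = (4, 3)
print(M[:, 0])   # the first trace is still individually accessible
```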
In this formulation, the n different memories are made to be more or less independent of each other. However, items presented together in some setting will become tangentially associated through the similarity of their context vectors. If multiple items are deliberately associated with each other and encoded in that manner, say an item a and an item b, then the memory for the pair can be constructed, with each item having k attributes, as follows:

\[
\mathbf{m}_{ab} = \begin{bmatrix} a(1) \\ a(2) \\ \vdots \\ a(k) \\ b(1) \\ b(2) \\ \vdots \\ b(k) \end{bmatrix}
= \begin{bmatrix} \mathbf{a} \\ \mathbf{b} \end{bmatrix}.
\]
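Encoding an associated pair is then just stacking the two attribute vectors into one trace of length 2k. A minimal sketch, with invented values for a and b:

```python
import numpy as np

# Hypothetical items a and b, each with k = 3 attributes (values invented).
a = np.array([0.2, 0.7, 0.1])
b = np.array([0.5, 0.5, 0.9])

# Encoding the pair as a single trace: stack a on top of b (2k entries).
m_ab = np.concatenate([a, b])
print(m_ab.shape)   # (2k,) = (6,)
```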
Context as a stochastic vector
When items are learned one after another, it is tempting to say that they are learned in the same temporal context. However, in reality, there are subtle variations in context. Hence, contextual attributes are often considered to change over time, as modeled by a stochastic process.^{[6]} Considering a vector of only r total context attributes t_{i} that represents the context of memory m_{i}, the context of the next-encoded memory is given by t_{i+1}:

\[
t_{i+1}(j) = t_i(j) + \varepsilon(j)
\]
so,

\[
\mathbf{t}_{i+1} = \begin{bmatrix} t_i(1) + \varepsilon(1) \\ t_i(2) + \varepsilon(2) \\ \vdots \\ t_i(r) + \varepsilon(r) \end{bmatrix}
\]
Here, ε(j) is a random number sampled from a Gaussian distribution.
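This kind of context drift is a Gaussian random walk, and can be sketched as follows. The number of context attributes and the noise scale are arbitrary choices for illustration, not values from the theory:

```python
import numpy as np

rng = np.random.default_rng(0)

r = 5          # number of context attributes (illustrative)
sigma = 0.05   # standard deviation of the Gaussian increment (assumed)

t = np.zeros(r)           # context vector at the first encoding
contexts = [t.copy()]
for _ in range(10):       # drift across ten successive encodings
    t = t + rng.normal(0.0, sigma, size=r)   # t_{i+1}(j) = t_i(j) + eps(j)
    contexts.append(t.copy())

# Successive contexts differ only slightly, so nearby items in time
# end up with similar (but never identical) context attributes.
```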
Summed similarity
As explained in the subsequent section, the hallmark of multiple trace theory is an ability to compare some probe item to the preexisting matrix of encoded memories. This simulates the memory search process, whereby we can determine whether we have ever seen the probe before as in recognition tasks or whether the probe gives rise to another previously encoded memory as in cued recall.
First, the probe p is encoded as an attribute vector. Continuing with the preceding example of the memory matrix M, the probe will have L entries:

\[
\mathbf{p} = \begin{bmatrix} p(1) \\ p(2) \\ \vdots \\ p(L) \end{bmatrix}.
\]
This p is then compared one by one to all preexisting memories (traces) in M by determining the Euclidean distance between p and each m_{i}:

\[
\left\lVert \mathbf{p} - \mathbf{m}_i \right\rVert = \sqrt{\sum_{j=1}^{L} \left(p(j) - m_i(j)\right)^2}.
\]
Due to the stochastic nature of context, it is almost never the case in multiple trace theory that a probe item exactly matches an encoded memory. Still, high similarity between p and m_{i} is indicated by a small Euclidean distance. Hence, another operation must be performed on the distance, one that yields very low similarity for a great distance and very high similarity for a small distance. A linear operation does not eliminate low-similarity items harshly enough. Intuitively, an exponential decay model seems most suitable:

\[
\mathrm{similarity}(\mathbf{p}, \mathbf{m}_i) = e^{-\tau \left\lVert \mathbf{p} - \mathbf{m}_i \right\rVert}
\]
where τ is a decay parameter that can be experimentally assigned. We can go on to then define similarity to the entire memory matrix by a summed similarity SS(p,M) between the probe p and the memory matrix M:

\[
SS(\mathbf{p}, \mathbf{M}) = \sum_{i=1}^{n} e^{-\tau \left\lVert \mathbf{p} - \mathbf{m}_i \right\rVert}
= \sum_{i=1}^{n} e^{-\tau \sqrt{\sum_{j=1}^{L} \left(p(j) - m_i(j)\right)^2}}.
\]
If the probe item is very similar to even one of the encoded memories, SS receives a large boost. For example, given m_{1} as a probe item, we will get a near-0 distance (not exactly 0, due to context) for i=1, which will add nearly the maximal boost possible to SS. To differentiate from background similarity (there will always be some low similarity, due to shared context or a few common attributes, for example), SS is often compared to some arbitrary criterion. If it is higher than the criterion, then the probe is considered among those encoded. The criterion can be varied based on the nature of the task and the desire to prevent false alarms. Thus, multiple trace theory predicts that, given some cue, the brain can compare that cue to a criterion to answer questions like “has this cue been experienced before?” (recognition) or “what memory does this cue elicit?” (cued recall), which are applications of summed similarity described below.
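The whole recognition decision, computing summed similarity and comparing it to a criterion, can be sketched directly from the formula. The matrix, probes, decay parameter, and criterion below are all invented for illustration:

```python
import numpy as np

def summed_similarity(p, M, tau=1.0):
    """SS(p, M): sum of exponentially decayed Euclidean distances
    between the probe p and each trace (column) of M."""
    dists = np.linalg.norm(M - p[:, None], axis=0)   # ||p - m_i|| per column
    return np.sum(np.exp(-tau * dists))

# Hypothetical memory matrix with L = 4 attributes and n = 3 traces.
M = np.array([[0.9, 0.8, 0.1],
              [0.1, 0.2, 0.9],
              [0.5, 0.4, 0.6],
              [0.3, 0.7, 0.2]])

probe_old = np.array([0.9, 0.1, 0.5, 0.3])   # matches the first trace closely
probe_new = np.array([5.0, 5.0, 5.0, 5.0])   # far from every trace

criterion = 1.0  # arbitrary criterion, tuned per task
print(summed_similarity(probe_old, M) > criterion)   # judged "old" (recognized)
print(summed_similarity(probe_new, M) > criterion)   # judged "new"
```

Note the design choice the exponential makes explicit: one very close trace contributes nearly 1 on its own, while many distant traces contribute almost nothing, so a single strong match can push SS over the criterion.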