Saturday, May 30, 2015

Decision-making 401

In the previous post, Decision-making 101, I presented evidence that selective attention to items retrieved into working memory is a major factor in making good decisions. This has generally unrecognized educational significance: instructional material is rarely designed with an eye toward reducing the load it places on working memory. New research from a cognitive neuroscience group in the U.K. demonstrates how important this is for learning to correctly categorize new material. The researchers show that learning is more effective when the instruction is optimized ("idealized" in their terminology).

Decisions often require categorizing novel stimuli, such as normal/abnormal, friend/foe, helpful/harmful, right/wrong or even assignment to one of multiple category options. Teaching students how to make correct category assignments is typically based on showing them examples for each category. Categorization issues routinely arise when learning is tested. For example, the common multiple-choice testing in schools requires that a decision be made on each potential answer as right or wrong.

In reviewing the literature on optimizing training, these investigators found reports that presentation order matters. For example, in teaching students how to classify by category, people perform better when a number of examples from one category are presented together, followed by a number of contrasting examples from the other category. Another effective ordering manipulation is to present the simple, unambiguous cases from either category together early in training, with the harder, more confusing cases afterwards. Such training strengthens the contrast between the two categories.

The British group has focused on the role of working memory in learning. Their idea is that ambiguity during learning is a problem. In real-world situations that require correct category identification, naturally occurring ambiguities make correct decisions difficult. Think of these ambiguities as cognitive "noise" that interferes with the training material recalled into working memory. This noise clutters encoding during learning and later clutters thinking, impairing the rigorous thought processes needed to make a correct distinction. For youngsters in school, another major source of cognitive noise is the task-irrelevant stimulation that comes from the multi-tasking habits so common among today's students.

The theory is that when performing a learned task, the student recalls what has been taught into working memory. Working memory has very limited capacity, so any "noise" encoded along with the initial learning competes for that capacity, and the remembered noise also complicates the thinking required to perform correctly. Thus, simplifying learning material should reduce remembered ambiguities, lower the working memory load, and enable better reasoning and test performance.


One example of optimizing learning is the study by Hornsby and Love (2014), who applied the concept to training people with no prior medical background to decide whether a given mammogram was normal or cancerous. They hypothesized that learning would be more efficient if students were trained on mammograms that were easily identified as normal or cancerous, excluding examples where the distinction was less obvious. The underlying premise is that decision-making involves recalling remembered past examples into working memory and accumulating the evidence for the appropriate category. If the remembered items are noisy (i.e., ambiguous), the noise also accumulates and makes the decision more difficult. Thus, learners should have more difficulty if they are trained on examples spanning the whole range from clear-cut to obscure than if they are trained only on examples that clearly belong to one category or the other.
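The premise can be made concrete with a toy simulation. This is my own illustrative sketch, not the authors' model: it assumes a single made-up "abnormality" feature on a 0-to-1 scale, exemplars recalled into a capacity-limited working memory, and evidence accumulated by summed similarity. "Idealized" training stores only clear-cut exemplars near each category's center; "full-range" training also stores ambiguous exemplars near the boundary.

```python
import random

def make_exemplars(n, idealized, rng):
    # One feature in [0, 1]: category A (normal) centers at 0.3,
    # category B (cancerous) at 0.7.  Idealized training keeps
    # examples near the centers; full-range training has a wide
    # spread that includes ambiguous cases near the 0.5 boundary.
    spread = 0.08 if idealized else 0.25
    clamp = lambda x: min(max(x, 0.0), 1.0)
    a = [clamp(rng.gauss(0.3, spread)) for _ in range(n)]
    b = [clamp(rng.gauss(0.7, spread)) for _ in range(n)]
    return a, b

def classify(item, a, b, rng, capacity=7):
    # Recall only a working-memory-sized sample of stored exemplars,
    # then accumulate similarity-based evidence for each category.
    sample_a = rng.sample(a, capacity)
    sample_b = rng.sample(b, capacity)
    ev_a = sum(1.0 - abs(item - x) for x in sample_a)
    ev_b = sum(1.0 - abs(item - x) for x in sample_b)
    return "A" if ev_a > ev_b else "B"

def accuracy(idealized, trials=2000, seed=1):
    rng = random.Random(seed)
    a, b = make_exemplars(50, idealized, rng)
    correct = 0
    for _ in range(trials):
        truth = rng.choice("AB")
        item = rng.gauss(0.3 if truth == "A" else 0.7, 0.12)
        if classify(item, a, b, rng) == truth:
            correct += 1
    return correct / trials

print("idealized training accuracy: ", accuracy(True))
print("full-range training accuracy:", accuracy(False))
```

Because ambiguous exemplars both shrink the evidence gap between categories and add variance to each recalled sample, the full-range learner tends to make more errors on the same test items, which is the intuition behind the noise-accumulation account.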

Initially, a group of learners was trained on a full-range mixture of mammograms so that the images could be classified by diagnostic difficulty as easy, medium, or hard. On each trial, three mammograms were shown: the left image was normal, the right was cancerous, and the middle was the test item requiring a diagnosis of normal or cancerous.

In the actual experiment, one student group was trained to classify a representative set of easy, medium, and hard images, while the other group was trained only on easy samples. During training trials, learners looked at the three mammograms, stated their diagnosis for the middle image, and were then given feedback as to whether they were right or wrong. After completing all 324 training trials, participants completed 18 test trials, which consisted of three previously unseen easy, medium and hard items from each category displayed in a random order. Test trials followed the same procedure as training trials.

When both groups were then tested on samples across the full range, the optimized group was better able to distinguish normal from cancerous mammograms among both the easy and the medium images, even though the optimized group had never been trained on medium images. However, the optimized group showed no advantage on hard test items: both groups made many errors on hard cases, and there optimized training actually yielded poorer results than regular training.

We need to explain why this strategy does not seem to work on hard cases. I suspect that in easy and medium cases not much understanding is required: it is just a matter of pattern recognition, made easier because the training was more straightforward and less ambiguous. The learner is just making casual visual associations. For hard cases, a learner must know and understand the criteria needed to make distinctions. The subtle differences go unrecognized if diagnostic criteria are not made explicit in the training. In actual medical practice, many mammograms cannot be distinguished by visual inspection at all; they really are hard, and other diagnostic tests are needed.

The basic premise of such research is that learning objects or tasks should be pared down to the basics, eliminating extraneous and ambiguous information, which constitutes "noise" that confounds the ability to make correct categorizations.

In common learning situations, a major source of noise is extraneous information, such as marginally relevant detail. Reducing this noise is achieved by focusing on the underlying principle. Actually, I stumbled on this basic premise of simplification over 50 years ago as a student trying to optimize my own learning. What I realized was the importance of homing in on the basic principle of whatever I was trying to learn from instructional material. If I understood a principle, I could use that understanding to think through many of its implications and applications.

In other words, the principle is: "don't memorize any more than you have to." Use principles as a way to figure out what was not memorized. Once core principles are understood, much of the supporting information can be deduced or easily learned. This is akin to the standard practice of moving from the general to the specific, with the proviso that the general ideas should emphasize principles.

Textbooks are sometimes quite poor in this regard. Too many texts contain so much ancillary information that they are better thought of as reference books. That is why I have found a good market for my college-level electronic textbook, "Core Ideas in Neuroscience," in which each 2-3 page chapter is built around one of 75 core principles spanning the range from membrane biochemistry to human cognition. A typical neuroscience textbook by other authors can run to 1,500 pages.



Source:

Hornsby, A. N., and Love, B. C. (2014). Improved classification of mammograms following idealized training. Journal of Applied Research in Memory and Cognition, 3(2), 72-76.


Dr. Klemm is a Senior Professor of Neuroscience at Texas A&M. His latest books are Memory Power 101 (Skyhorse) and Mental Biology (Prometheus). He also writes learning and memory blogs for Psychology Today magazine and at his own site, thankyoubrain.blogspot.com. His posts have nearly 1.5 million reader views.

Thursday, May 21, 2015

Decision-making 101

Teenagers are notorious for poor decision-making. Of course that is inevitable, given that their brains are still developing, and they have had relatively little life experience to show them how to predict what works and what doesn’t. Unfortunately, what doesn’t work may have more emotional appeal, and most of us at any age are more susceptible to our emotions than cold, hard logic.
Seniors are also prone to poor decision-making if senility has set in. Unscrupulous people take advantage of such seniors because a deteriorating brain has a hard time making wise decisions.
Between the teenage years and senility is when the brain is at its peak for good decision-making. Wisdom comes with age, up to a point. Some Eastern cultures venerate their old people as being especially wise. After all, if you live long enough and are still mentally healthy, you ought to make good decisions, because you have a lifetime of experience to teach you which choices are likely to work and which are not.
Much of that knowledge comes from learning from one's mistakes. On the other hand, some people, regardless of age, can't seem to learn from their mistakes. Most of the time the problem is not stupidity but a flawed habitual process for evaluating options and making decisions. Best of all is learning from somebody else's mistakes, so you don't have to make them yourself.
Learning from your mistakes can be negative if you fret about it. Learning what you can do to avoid repeating a mistake is one thing, but dwelling on it erodes one's confidence and sense of self-worth. I can never forget the good advice I read from, of all people, T. Boone Pickens. He was quoted in an interview as saying that he was able to re-make his fortune on multiple occasions because he didn't dwell on losing the fortunes. He credited that attitude to his college basketball coach, who told the team after each defeat, "Learn from your mistakes, but don't dwell on them. Learn from what you did right and do more of that."
It would help if we knew how the brain made decisions, so we could train it to operate better. "Decision neuroscience" is an emerging field of study aimed at learning how brains make decisions and how to optimize the process. Neuroscientists seem to have homed in on two theories, both of which deal with how the brain processes alternate options to arrive at a decision.
One theory is that each option is processed in its own pool of neurons, with the pools competing for dominance. As processing evolves, activity in each pool waxes and wanes. At some point, activity in one pool reaches a threshold and, in winner-take-all fashion, that pool dominates and issues the appropriate decision commands to the parts of the brain needed for execution. In one possible arrangement, two or more pools of neurons separately receive input representing the different options. Each pool sends output to another set of neurons that feeds back excitatory or inhibitory influence, providing a mechanism for competition: the pool that builds up more impulse activity than the others eventually dominates.

The other theory is based on guided gating wherein input to pools of decision-making neurons is gated to regulate how much excitatory influence can accumulate in each given pool. [i] The specific routing paths involve inhibitory neurons that shut down certain routes, thus preferentially routing input to a preferred accumulating circuit. The route is biased by estimated salience of each option, current emotional state, memories of past learning, and the expected reward value for the outcome of each option.
These decision-making models involve what is called "integrate and fire": input to all relevant pools of neurons accumulates and drives various levels of firing in each pool. The pool firing the most is the most likely to dominate the output, that is, the decision.
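The competing-pools idea can be sketched as a toy "race" simulation. This is my own illustrative cartoon, not a biophysical model: two pools integrate noisy evidence, each inhibits the other in proportion to its rival's activity (the drift rates, threshold, and inhibition strength are arbitrary assumptions), and the first pool to reach the firing threshold wins the decision.

```python
import random

def race_to_decision(drift_a, drift_b, threshold=1.0, inhibition=0.2,
                     noise=0.05, seed=None, max_steps=10_000):
    """Winner-take-all race between two pools of neurons.

    drift_a / drift_b: mean evidence per step favoring each option.
    Returns (winner, steps_to_decide)."""
    rng = random.Random(seed)
    act_a = act_b = 0.0
    for step in range(1, max_steps + 1):
        # Each pool integrates its own input plus Gaussian noise,
        # minus inhibitory feedback driven by the rival pool.
        new_a = act_a + drift_a + rng.gauss(0, noise) - inhibition * act_b
        new_b = act_b + drift_b + rng.gauss(0, noise) - inhibition * act_a
        # Firing rates cannot go negative.
        act_a, act_b = max(new_a, 0.0), max(new_b, 0.0)
        if act_a >= threshold:
            return "A", step
        if act_b >= threshold:
            return "B", step
    return "undecided", max_steps

winner, steps = race_to_decision(drift_a=0.03, drift_b=0.01, seed=42)
print(winner, steps)
```

Two features of the sketch mirror the text: evidence integrates over time until a threshold fires ("integrate and fire"), and the mutual inhibition means that once one pool pulls ahead it actively suppresses its rival, producing the winner-take-all outcome.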
However circuits make decisions, there is considerable evidence that nerve impulse representations for each given choice option simultaneously code for expected outcome and reward value. These value estimates update on the fly.[ii] Networks containing these representations compete to arrive at a decision.
Any choice among alternative options is affected by how much information about each option the brain has to work with. When the brain is consciously trying to make a decision, this often means how much relevant information it can hold in working memory. Working memory is notoriously limited in capacity, so the key becomes remembering the subsets of information most relevant to each option. Humans think with what is in their working memory. Experiments have shown that older people are more likely to hold the most useful information in working memory, and therefore they can think more effectively. The National Institute on Aging began funding decision-making research in 2010 at Stanford University's Center on Longevity. Results of that research are showing that older people often make better decisions than younger people.
As one example, older people are more likely to make rational cost-benefit analyses. Older people are more likely to recognize when they have made a bad investment and walk away rather than throwing more good money after bad.
A key factor seems to be that older people are more selective about what they remember. For example, one study from the Stanford Center compared the ability of young and old people to remember a list of words. Not surprisingly, younger people remembered more words, but when words were assigned a number value, with some words being more valuable than others, older people were better at remembering high-value words and ignoring low-value words. It seems that older people selectively remember what is important, which should make it easier to make better decisions.
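The arithmetic of that selectivity advantage is easy to see in a toy calculation. This is my own illustration with made-up words and point values, not the Stanford study's materials: with the same limited memory capacity, a learner who keeps the highest-value words accumulates far more total value than one who keeps words indiscriminately.

```python
def remembered_value(words, capacity, selective):
    """words: list of (word, point_value) pairs from the study list.
    With limited memory capacity, a selective learner keeps the
    highest-value words; an unselective learner just keeps the
    first ones encountered."""
    if selective:
        kept = sorted(words, key=lambda w: w[1], reverse=True)[:capacity]
    else:
        kept = words[:capacity]
    return sum(value for _, value in kept)

study_list = [("pear", 1), ("desk", 10), ("rain", 3), ("lamp", 8),
              ("coin", 2), ("vine", 9), ("sand", 4), ("door", 7)]

# Selective keeps desk (10), vine (9), lamp (8); unselective keeps
# pear (1), desk (10), rain (3).
print(remembered_value(study_list, capacity=3, selective=True))   # 27
print(remembered_value(study_list, capacity=3, selective=False))  # 14
```

The point is that total words recalled (where the young excel) matters less than value recalled per memory slot, which is the measure on which the older participants did better.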
Decision-making skills are important for learning achievement in school. Students need to know how to focus in general, and how to focus on what is most relevant in particular. They are not learning that skill, and their multi-tasking culture is teaching them many bad habits.
Those of us who care deeply about educational development of youngsters need to push our schools to address the thinking and learning skills of students. "Teaching to the test" detracts from time spent in teaching what matters most. Today's culture of multi-tasking is making matters worse. Children don't learn how to attend selectively and intensely to the most relevant information, because they are distracted by superficial attention to everything. Despite their daily use of Apple computers and smart phones, only one college student out of 85 could draw the Apple logo correctly.[iii]
Memory training is generally absent from teacher training programs. Despite my locally well-publicized experience in memory training, no one in the College of Education at my university has ever asked me to share my knowledge with their faculty or with pre-service teachers. The paradox is that teachers are trained to help students remember curricular answers for high-stakes tests. What could be more important than learning how to decide what to remember and how to remember it? And we wonder why student performance is so poor?


"Memory Medic" is author of Memory Power 101 (Skyhorse) and Better Grades, Less Effort (Benecton).



[i] Purcell, B. A., Heitz, R. P., Cohen, J. Y., Schall, J. D., Logan, G. D., and Palmeri, T. J. (2010). Neurally constrained modeling of perceptual decision making. Psychological Review, 117(4), 1113-1143.

[ii] McCoy, A. N., and Platt, M. L. (2005). Expectations and outcomes: decision-making in the primate brain. J. Comp. Physiol A 191, 201-211.

[iii] Blake, A. B., Nazarian, M., and Castel, A. D. (2015). The Apple of the mind's eye: Everyday attention, metamemory, and reconstructive memory for the Apple logo. The Quarterly Journal of Experimental Psychology. DOI: 10.1080/17470218.2014.1002798