
Friday, April 26, 2019

Learning and Memory Bias Choices But Don't Preclude Free Will


One common definition of "free will" is that a person can decide or choose among multiple alternatives without being forced by physical laws, luck, fate, or divine will. Most of us feel that there are situations where we are in charge of our choices and no outside force compels us to make a particular choice. But it is fashionable these days for scientists to insist that free will is an illusion. In fact, they claim, without evidence, that consciousness cannot do anything. It just observes a little of what the magnificent unconscious mind does. The possibility that conscious thought programs neural circuitry escapes their biased thinking.

People who believe that humans have no free will are hard-pressed to explain why no one is responsible for their choices and actions. What is it that compels foolish or deviant behavior? Who or what compels us to accept one moral code over any other? Who or what compels us to believe in God or to be an atheist? Who or what compels us to become a certain kind of person, with no option to "improve" ourselves in any self-determined way? Learning experiences may bias our choices, but we are free to reject learning that does not serve us well. Wise people do that.

Human brains make choices consciously and unconsciously by real-time evaluation of alternatives in terms of the anticipated usefulness of previous learning from other situations. This learning occurs in the context of the learned sense of self, which begins unconsciously in the womb, as neural connections construct a map of body parts. The conscious brain is aware that it is aware of choice processing and makes decisions in light of such understanding. When a given alternative choice is not forced, the conscious mind is aware that it is not obliged to accept any one choice but is "free" to select any one of the available options. We may be creative by consciously constructing other alternatives than the ones presented. Such realization might even guide many decisions at the subconscious level. In any case, neural networks weigh the probable value of each alternative and collectively reach a "decision" by inhibiting networks that lead to less-favored alternatives. Thus, network activity underlying the preferred choice prevails and leads to a selective willed action. What governs the network activity causing the final choice is the activity in other networks, which in turn is governed by stored memories and real-time processing of the current choice contingencies.
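
To make the inhibition mechanism concrete, here is a toy simulation of the winner-take-all idea in the paragraph above. It is only an illustrative sketch: the equations, the inhibition strength, and the "evidence" values for three options are my own assumptions, not a model from the sources listed below.

    import numpy as np

    def winner_take_all(values, inhibition=1.5, dt=0.1, steps=400):
        # Each unit is excited by its own evidence and inhibited by the
        # summed activity of its rivals; activity is kept between 0 and 1.
        a = np.zeros(len(values))
        for _ in range(steps):
            net = values - inhibition * (a.sum() - a)  # evidence minus rivals' inhibition
            a += dt * (-a + np.maximum(net, 0.0))      # leaky, rectified update
            a = np.clip(a, 0.0, 1.0)
        return a

    evidence = np.array([0.9, 1.0, 0.7])  # learned value estimates for three options
    activity = winner_take_all(evidence)
    print(activity, "-> option", int(activity.argmax()), "prevails")

The middle option starts with only a small advantage, yet its network ends up with the highest activity while its rivals are suppressed, which is the sense in which a "decision" can emerge from mutual inhibition.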

What usually gets left out of free-will discussions is the question of how a brain establishes stored-memory preferences and how it evaluates current contingencies. These functions surely cause things to happen, but what is the cause of the cause? Any given brain can choose within certain limits of its learning experiences and stored memory. Those choices are governed by what the brain has learned about the self-interest value associated with given contingencies. Brain circuitry assigns value, and the values chosen are largely optional choices. The conscious brain directs the choices that govern value formation, reinforcement, and preservation in memory.

Now we are confronted with explaining how neural circuit impulse patterns (CIPs) representing the sense of self can have a free will. First, I reason that each person's brain has a conscious Avatar that acts as an agent in the world on the embodied brain's behalf, as explained more completely in my recent book. This is reminiscent of the third-century idea of a homunculus, a "little person" inside the brain. A modern view is that this homunculus exists in the form of mapped circuitry within a more global workspace.

Certain maps are created under genetic control. These include the topographic map of the body in the sensory and motor cortices. Then there is the capacity for real-time construction of maps of the body's location in space, which resides in circuitry of the hippocampus and entorhinal cortex. Other maps are created from learning experience, drawing on the near-infinite circuit capacity of association cortex. What these maps learn is stored in memory as facilitated circuit synapses and deployed "on-line" in the form of CIP representations of what was originally learned. New learning likewise exists as CIP representations in network populations. Thus, what has been learned is stored as memories that can be accessed later in decision- and choice-making.

The conscious Avatar itself is a constellation of certain CIPs representing the conscious-agent sense of self. Certainly, by definition, the Avatar can make choices and decisions. Avatar choices can be implemented unconsciously, because Avatar circuitry is embedded in the global workspace of unconscious mind. Wakefulness releases consciousness to make its own choices and decisions. Avatar processing is neither random nor inevitable, and presumably can occur with more degrees of freedom than found in unconscious mind. Avatar processing more likely progresses via non-linear chaotic dynamics than by linear deterministic processing.

If the conscious Avatar exists as a set of CIPs, how can something as "impersonal" and physiological as that have any kind of "will," much less free will? Consider that the "virtual you" is your Avatar. Let us recall that "will" is little more than an intent coupled to the bodily actions that achieve it. This kind of thinking also occurs in the circuitry that controls the unconscious mind. These circuits automatically generate actions in response to conditions that call for a response. Such actions are stereotyped and inflexible, except when there is conscious regulation.

Each choice alternative is represented as CIPs within a group of neurons. Each group's activity interacts with the others―and with the CIP representation of the conscious Avatar. The Avatar CIP is poised to influence activity in the alternative sub-populations and thus can help direct the final processing result.

The Avatar must have some criteria to bias a given option. Those criteria have been learned and remembered. The Avatar CIP activity can modulate the alternative-choice representations in the context of self-awareness according to past learning and value assessments of current contingencies. The existence of bias does argue for determinism at this stage of choice making, but the bias could have been created earlier by conscious free-will reasoning and value assessments.

While it is true that genetics and experience help program the Avatar circuitry, the Avatar does its own non-linear processing and makes choices about whom to interact with and what experiences to value, promote, and allow. The Avatar can decide that it needs to remember certain lessons of experience and make it a point to remember them. In short, the Avatar gets to help shape what it becomes.

It seems to this Avatar that current debates about determinism and free will tend to obscure the important matters of our humanness. Free will debates distract us from a proper framing of the issues about human choices and personal responsibility.

Sources:

Klemm, W. R. (2014). Mental Biology: The New Science of How the Brain and Mind Relate. New York: Prometheus.
Klemm, W. R. (2016). Making a Scientific Case for Conscious Agency and Free Will. New York: Academic Press.

Sunday, April 14, 2019

Cursive Is Not Dead Yet


The national education standards, Common Core, aimed to kill the teaching of cursive. But cursive is not dead, just wounded.

Yesterday, I did a radio interview on WHO in Des Moines. WHO bills itself as “America’s #1 Audio Company.” I remember fondly listening to WHO over the three years when I lived in Iowa many years ago. The Justin Brady Radio Show people had read one of my articles on why teaching cursive to children is valuable, and they wanted to explore things further. As many people know, the Common Core standards did away with the teaching of cursive, presumably because it is not relevant in a digital age where children write by tapping a screen or keyboard.

My state of Texas, notable for doing its own thing, has refused to endorse Common Core, but still the state did not require the teaching of cursive. Now Texas mandates the teaching of cursive. In accordance with the state's new school guidelines, second graders will be taught how to write cursive letters before advancing to third grade, where they'll be expected to "write complete words, thoughts and answers legibly in cursive writing leaving appropriate spaces between words." When students get to fourth grade, they'll be required to write all of their assignments in cursive.

Justin Brady wanted to know what I thought about all this. My first reaction was this: “If we don’t need to teach cursive, why do we need to teach printing by hand?” Cursive is just a refinement of printing letters. Why don’t we just show them pictures of the letters and teach them to punch a key for the letters? In fact, that may well be the next educational “reform.”

We teach printing so kids can more easily learn their ABCs. We could teach ABCs by showing children which letters to tap on a screen. Maybe in some states that think they are so progressive, the teaching of printing letters will be on the way out. However, the reason learning to print letters by hand matters is that it demands mental engagement. A child has to think about the structure of each letter, and in the process of thinking about how to draw it, learns and remembers what the letters look like. Hand printing is an example of the “production effect” principle that benefits memory. We remember things better if we reproduce the learning, either by drawing, writing, or telling. One of the fundamental but unheralded principles of learning is that the best way to remember anything is to think about it.

Learning cursive builds on this principle and provides additional benefits. Cursive has two special advantages over printing: it promotes a higher-level mental development, and it can nurture a child’s emotions and motivation for learning and achievement.

Brain Development

Cursive should be easy to learn once one knows how to print letters; many good books explain the slight modifications needed to turn printed letters into script. But cursive demands more hand-eye coordination, a change in brain wiring that creates the mental infrastructure for many later uses in real life. Hand-finger dexterity becomes crucial in later life if a child wants to play a musical instrument, excel in sports, manipulate tools, or even master a computer keyboard. In the blog post that Justin had read, I described how writing in cursive activates many more areas of the brain than mere printing does. Cursive trains the brain to recruit neural resources to solve problems.

Excelling at cursive does another important thing. The learner has to pay more attention and focus on what needs to be done to make each letter attractive. To do a good job at cursive requires self-discipline. Who can argue that kids don’t need to learn focus and self-discipline? Our multi-tasking culture is teaching kids to be scatterbrained. All kids have some level of attention deficit.

Learning cursive successfully also incidentally programs the brain for the habit of deliberate practice. Deliberate practice is a mental heuristic that enables a person to pay attention to the details of what is needed to improve a skill. If an adult wants to improve her golf game, she has to do more than just repeat a swing of the club. She has to think about the best way to improve the swing with each attempt.

Motivational Benefit

Learning to write cursive well has enormous motivational and emotional benefits. First, writing cursive is a form of drawing, and children naturally love to draw. Children happily take ownership of their cursive creations, proud of having a skill that generates such elegant writing. They can even develop a personal style, which is gratifying in their limited world that demands so much conformity. They discover that they have powers of mastery, which motivates them to do better in other school work. Of course, they also discover the practical benefit of cursive: they can write much faster than they can print, which helps them greatly in taking notes on schoolwork.

In recalling my own childhood, I remember that I did not like school until the seventh grade. Before then I hated school and made poor grades. It may have been no accident that I started to like school and make all As in that year when I also had a couple months of penmanship class. I knew how to write cursive earlier, but penmanship taught me how to write cursive that was attractive, not perhaps as elegant as the script in the Declaration of Independence, but still something I created that I could be proud of. I still have attractive cursive today.

So, I say “hats off” to states like Texas that are restoring the hallowed place of cursive in elementary education. My only criticism is that second graders are not likely to have the brain development and hand-eye coordination required to create attractive cursive. Children need refresher instruction when they are older, as I was lucky enough to get in a couple months of the seventh grade. If a child does not learn to do cursive well, many of the emotional and motivational benefits do not occur. In fact, if their cursive is ugly and unreadable, the emotions are negative.


Sunday, April 07, 2019

Making Note-taking Work for You


Despite a recurring stream of educational fads, lectures still dominate teaching approaches. In spite of such teaching reforms as "hands-on" learning, small-group collaborations, project-based learning, and others, teachers generally can't resist the temptation to be a "sage on the stage" instead of a "guide on the side." And when they are not lecturing, teachers may assign instructional videos. Maybe that's a good thing, because many students are not temperamentally equipped to be active learners. Rather, they have been conditioned by television and movies to function as a passive audience. Even the way we test learning with multiple-choice questions conditions students to be passive by recognizing a provided correct answer among three or four incorrect ones.

Then there is the problem of alternatives, such as learning from reading. Too many students don't like to read academic material. They want somebody to spoon feed the information to them. Most lectures are just that—spoon feeding.

Given that the dominance of lecturing is not likely to change any time soon, shouldn't teachers focus more on showing students how to learn from lectures or from videos? It seems there is an implicit assumption that passive listening will suffice to understand and remember what is presented in lecture or video presentations. The problem is, however, that deep learning requires active, not passive, engagement. Students need to parse content to identify what they don't understand, don't know already, and can't figure out from what they do already know. This has to happen in real time, as different ideas and factoids come and go.

So how should students engage with presentations? Traditionally, this means taking notes. But I wonder if note-taking is a dying art. I don't see many students taking notes from lectures or web pages or YouTube videos. Or textbooks (highlighting is a poor substitute). My concern was reinforced the other day when I gave a lecture on improving learning and memory to college students. The lecture was jam-packed with more information than anyone could remember without being actively engaged. Yet, I did not see a single one of the 58 students taking notes. Notably, the class's regular professor, who had invited me to give the lecture, was vigorously taking notes throughout.

Why don’t students take notes? Are they too conditioned for passive learning? Is it because they can’t write legibly in cursive, and printing is too slow and cumbersome? Whatever the cause, it can be traced to faulty teaching by previous teachers.

Just what is it that I think is valuable about note taking? First and foremost is the requirement for engagement. Students have to pay attention well enough to make decisions about the portion of the presentation that will need to be studied later. Paying attention is essential for encoding information. Nobody can remember anything that never registered in the first place.

Next, note taking requires thinking about the material to decide what needs to be captured for later study. This hopefully generates questions that can be raised and answered during the presentation. In the college class I just mentioned, not one student asked a question, even though I interrupted the lecture four times to try and pry out questions. Notably, after the lecture, about a dozen students came to me to ask questions.

Notes should be taken by hand. This is a good place to mention note-taking with a laptop computer. Students are being encouraged to bring laptops to take notes. Two important consequences of typing notes should be recognized. One problem is that for touch typists, taking notes on a laptop is a relatively brain-dead process in which letters are banged out more or less on autopilot. A good typist does not have to think. And if you have not mastered the keyboard, paying attention to which keys to hit is a distraction from the content the learner should be thinking about. Hand-written notes inevitably engage thinking and decisions about what to write down, how to represent the information, and where on the page to put specific items. A formal experiment has been published showing that students remembered more when they took notes by hand than when they took notes by laptop typing.

A special benefit of hand-written note-taking is that students create a spatial layout of the information they think they will need to study. A well-established principle of learning is that where information is located provides important cues about what the information is. The spatial layout of script and diagrams on a page allows the information to be visualized, creating an opportunity for a rudimentary form of photographic memory, in which a student can imagine in the mind's eye just where on the page certain information is, and that alone makes it easier to memorize and recall what the information is.

This brings me to the important point of visualization. Pictures are much easier to remember than words. Hand-written notes allow the student to represent verbalized ideas as drawings or diagrams. If you have ever had to learn the Krebs cycle of cellular energy production, for example, you know how much easier it is to remember the cycle if it is drawn rather than described in paragraph form.

All learners take in information differently. There are at least five common types of note-taking. Learners should select the type that works best for them, and the type selected may vary with the nature of the information source. After reading the descriptions of note-taking styles below, it will be up to you to decide which style of notes you prefer to use.

Styles of Note-Taking:

1. Outline
2. Charting Notes
3. Cornell Notes
4. Mind Mapping
5. Matrix Notes

Outline Notes


These notes are arranged in terms of topic, sub-topic, sub-sub topic, and so on. Each item is on a separate line and is indented. Each topic or sub-topic can be numbered and lettered. Here is an example for information on cell biology:

1. History
   A. Initial discoveries
      1) Robert Hooke
      2) Early microscopes
      3) Etc.
2. Cells
   A. Definition/cell theory
   B. Organelles
      1) Mitochondria
      2) ER
      3) Etc.

The numbering and lettering can become distracting. I prefer to use headings, sub-headings, and sub-sub-headings. This is readily automated in a word processor by using a styles menu (Heading 1, Heading 2, Heading 3, and so on). Here is an example:

History (main heading)

    Initial discoveries (subheading)

        Robert Hooke (sub-sub-heading)

        Early microscopes

Cells

    Organelles

        Mitochondria

        ER

        Etc.


Outline notes are most useful when you have to capture information quickly. If you don’t have much time to think, outlines are usually easy to construct because that is the way most information is presented in lectures, videos, and textbooks. A presenter typically states a main thought, explains it with some detail, and then moves on to the next main idea.

For more understanding and to promote memory, it is important to think about the words that appear in an outline. Other note-taking methods require reconstructing the initial information in a different format, and this requires some thinking. Thinking is the best way to improve understanding, and it also automatically promotes memory formation.

Charting Notes

These notes are put in a table with column headings. Here is an illustration based on cell biology information:

Main Topic: Cell Biology

Learning Objective: understand that all organisms are composed of one or more cells and explain the three parts of cell theory.

Topic 1: History
Key Info/Ideas:
- Robert Hooke conceived the idea of cells.
- Hooke used a microscope to look at slices of cork.
- Etc.

Topic 2: Cells
Key Info/Ideas:
- Cells are individual units that are alive.
- Cells contain organelles that perform specific functions.
- Etc.

Topic 3: Cell Theory
Key Info/Ideas:
- All living things are made of cells.
- Cells are organized into tissues of similar cells.

Topic 4:
Key Info/Ideas:


Cornell Notes


There are 5 components of the Cornell notes: topic, learning objective/outcome, keywords/questions, notes, and summary.

Main Topic: Cell Biology

Learning Objective/Outcome: understand that all organisms are composed of one or more cells and explain the three parts of cell theory.

Keywords/Issues/Questions        Notes

History                          Hooke’s study of cork under a microscope
Cell definition                  Cells as membrane-bound units
Organelles                       Organelles: nucleus, mitochondria, ER, etc.
Etc.

Summary:


Mind Mapping


Ideas can be mapped in ways that show how they relate to each other. The map drawing should begin with outlined notes, because few people can think fast enough to construct a map in real time during a lecture or video. In simple mind mapping, basic ideas are stated within circles, and arrows are drawn from “parent” to “daughter” circles. A useful addition is to write brief text along each arrow explaining what the relationship is. When this addition is included, the map is called a concept map. Here is an example:

[Concept map image: a central “Cell Biology” circle connected by labeled arrows to subtopic circles such as “History” and “Cells.”]
Each circle object in the map can be expanded to whatever level of detail is required. In the map above, for example, from “History” you could add a circle for “Hooke” with a labeled connecting arrow saying “the first pioneer was.” Maps like this are easily made with paper and pencil. If you want more formal maps, they can be made in a computer drawing program like PowerPoint or with more automated concept-mapping software that is available from multiple vendors.
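
For readers who like to tinker, a concept map is essentially a small labeled graph, which is why such software is straightforward to build. Here is a minimal sketch in Python; the topics echo the cell-biology example above, and the link labels are invented for illustration.

    # A concept map stored as (parent, link label, child) triples
    concept_map = [
        ("Cell Biology", "has a", "History"),
        ("Cell Biology", "is the study of", "Cells"),
        ("History", "the first pioneer was", "Hooke"),
        ("Cells", "contain", "Organelles"),
        ("Organelles", "include", "Mitochondria"),
    ]

    # Print each labeled relationship as parent --[label]--> child
    for parent, label, child in concept_map:
        print(f"{parent} --[{label}]--> {child}")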

Matrix Notes

Matrix notes place information in a table: the columns represent one category of information (such as topics) and the rows another (such as items within each category). Here is the basic idea:

Core Ideas           | History | Cells | Cell Theory | Organelles
Structure            |         |       |             |
Relation to tissues  |         |       |             |
Energy production    |         |       |             |
Proteins             |         |       |             |
Etc.                 |         |       |             |

(The cells are left blank here; the learner fills them in while studying.)

As with concept maps, the process should begin with outlined notes, because few people can think fast enough to construct a matrix in real time during a lecture or video. Also as with concept maps, the main advantage is that the learner has to think about the content. The best way to remember anything is to think about it. Such thinking may also provide insights that would otherwise not occur.

Matrix notes can be more comprehensive and force thinking about content in a wide range of contexts. Matrix notes are most useful when cross-cutting relationships need to be clarified.

The advantages for learning are that the learner conceptualizes the ideas in the process of constructing the matrix. Because ideas are presented in one view, preferably in units of one page at a time, it is easy to see cross-cutting relationships that otherwise are not so apparent. Such organization is an aid to stimulating insight. In addition, the fixed spatial layout is a memory aid, because knowing where a given piece of information is located makes it easier to remember the information.

To conclude, learners will remember more if they take notes on the learning material. The reason is that note taking requires attentiveness, engagement with the information, and thinking about relationships and applications of the information. Notes also provide a condensed personal copy that can be filed for later reference.

For more on memory strategies, see my books described at WRKlemm.com (author tab).

Friday, March 08, 2019

Brave New World of Gene Manipulation in Human Brain


In recent years scientists have discovered mobile pieces of genetic material that move around inside cells. These pieces are called retrotransposons. They can copy themselves and insert near and into DNA and thus induce mutations. Australian research reported in the journal Nature reveals that retrotransposons can alter the genome of human brain cells. In fact, retrotransposons penetrated neuron genes more effectively than the genes of blood cells used for comparison. Thousands of retrotransposon mutations were seen in two of the five areas examined from the brains of human post-mortem donors. Indeed, retrotransposon activity may explain the very recent discovery that every brain cell seems to have a unique genome. I explained this finding in an earlier post.

Though these pieces of DNA are not genes, they interact with genes as they hop around to different sites within a chromosome (perhaps you have heard of Barbara McClintock’s 1983 Nobel Prize-winning discovery of “jumping genes”). All cells have enzymes that cut transposons out of a string of DNA; the excised pieces then insert back in at other locations in the DNA. Sometimes the cut includes an adjacent gene along with the transposon, and thus when reinsertion occurs the gene hitchhikes along to the new location. The jumping around is not random; it occurs preferentially in active protein-coding regions, sometimes even in a different chromosome. The potential for changing function is enormous, yet we don’t know just what functional consequences occur. We do know that the process is most common in humans and higher primates. And these are not "random" mutations.

We have known for some time that all cells are influenced by epigenetic effects; that is, events in the environment can alter the genome. The mechanism may well involve retrotransposons. Gene manipulation may be especially robust for altering learning and memory. It may be no accident that retrotransposon mutations were seen in human hippocampus, the brain region most directly involved in forming memories and the one part of the brain where new cells are continuously born in adults. Memory of learned events results from more or less lasting changes in the junctions (synapses) among cells in the circuits that processed the learning. These lasting changes are enabled by new protein production in those synapses. That protein is under genetic control (thus a memory can be sustained because the genes can replace any protein that degrades over time). 

The implications of this discovery for learning and memory―and brain function in general―are inestimable. More importantly, and here is where the “Brave New World” comes in, there should be the potential for manipulating gene functions in predictable and lasting ways by using synthetic transposons (which should be easy to manufacture). Transport of synthetic retrotransposons into neurons might be accomplished by packaging them with a harmless virus; the basics of “transfection” technology are already well established. The hard part will be discovering which transposons produce desired changes in brain function. But it seems reasonable to test various retrotransposons in the hope that some can be found that will help to cement or magnify memories, and perhaps others that erase unwanted memories, as occur in post-traumatic stress disorder. There is a potential downside, however. Some retrotransposons may be a cause of cancer.

Source:
Baillie, J. K., et al. (2011). Somatic retrotransposition alters the genetic landscape of the human brain. Nature, 479, 534-537.

Wednesday, February 13, 2019

The Practical Meaning of Free Will


Philosophers and scientists have debated the issue of free will for centuries. In general, the consensus seems to be that there is no such thing as free will. The problem is the premise of the debate. Those who have already decided against free will frame the issue so that no other conclusion can be drawn. Proper definition of terms is crucial to stay out of rhetorical weeds and traps.
For example, people will say that every action or event has a cause. Therefore, the event was determined and did not occur “freely.” To occur freely, an action or event would have to occur randomly. I have had professional statisticians tell me that in the real world almost nothing is truly random. Too many things are inter-dependent; that is, what happens to one thing creates a bias of action on something else.

Another argument is that every action or event has a certain probability of occurrence, ranging from a zero to a 100% chance that it will occur. Thus, the argument goes, anything that can occur will occur, eventually. If it has a low probability, happening may just take a long time. It does not require being willed into existence.

Before we can go much further in this examination, we have to understand the word “will.” This word implies an intent from an active, living agent that chooses to do a certain thing or to avoid doing it. So, I suppose you could say that an ant has a will to go search for food, for example. But no one would suggest that an ant can freely do that. It is compelled by a biological need for food and by sensory detection of odor cues that propel it to move in the direction of the food. This technicality aside, common use of the word “will” refers to a goal or intent that higher animals have, and they may be constrained from complete freedom. In fact, a key part of the common definitions of will is that it requires consciousness. But free-will opponents promote their foreordained conclusion that people can’t have free will by claiming that consciousness itself has no agency. It is just an observer. Space prevents me from challenging this specious argument here, but I have defended conscious agency in other publications.

The most obvious constraint is lack of freedom of action. I cannot will to fly by flapping my arms, because that is not within my biological repertoire. I am not free to crack a safe, because I do not know how. So let us not confuse freedom of action with free will. Free will can only be exercised if there is freedom of action for what one wills.

As for “free” will or “free” won’t, the premise is that one has two or more available choices and that nothing compels selection of one over the other. You may well have different probabilities for a given choice, each biased by certain contingencies associated with that choice. For example, the probability that I will have breakfast tomorrow morning is high, assuming I have the freedom of action by still being alive and that there are things in my kitchen to eat. But the probability is not 100%. I may get nauseous and not want to eat. I may have to fast because I am getting a medical blood test. But I can overrule the forbidding factors. I can choose to eat, knowing that it may cause me to vomit (but maybe it won’t, and in fact it might settle my stomach if I pick something really digestible). I can choose to risk creating bad test numbers, or to skip the blood test and do it on another day that seems more convenient.
Here is how a free-will argument might proceed:
Determinist: “Whatever choice is made, it will be influenced by some factor that your reasoning develops. You used reasoning to change the probabilities and thus biased your choice. You simply redefine free will in a way that allows us to have it.”
Free-will Believer: “Well, you defined free will in a way that does not allow us to have it. It is specious logic to define things out of existence. The problem is that you have tried to foreordain your conclusion by saying that reason is not an acceptable basis for freely making a choice. This is a rhetorical trick. I am free to think this out, whatever way my knowledge and thinking skills allow. Remember, the reasoning only affects the probabilities. Reason does not compel a given choice. It merely alters the probabilities. People do make illogical or dumb choices from time to time.”
Determinist: “But you are constrained by the limits of your knowledge and brain. People make dumb choices when they are being dumb.”
Free-will Believer: “Yes, but within those limits, I have free choice. I may even make a choice that my reasoning concludes to be a bad choice, just for the hell of it, or just to counter your argument.”
Determinist: “Do you not see that just for the hell of it is an emotion that has biased your decision, and thus it is not free?”
Free-will Believer: “Note that I said may, not I will. I still reserve the possibility to choose. Do you not see that we have fallen into an infinite regress trap? Your line of argument cannot be pursued to a definitive conclusion.”

Thus, it seems to me that philosophical logic is not useful for this kind of debate. Here is a case where common sense makes more sense. In any choice that is not forced, we are free to change the probabilities or to confound them, for whatever reason or emotion.

Sources:
Klemm, W. R. (2016). Making a Scientific Case for Conscious Agency and Free Will. New York: Elsevier.

Klemm, W. R. (2018). Reason and Creativity May Require Free Will, Chapter 2, In  . Hauppauge, New York: Nova.

Klemm, W. R. (2015). Neurobiology perspectives on agency: 10 axioms and 10 propositions. Chapter 4 in Constraints of Agency: Explorations of Theory in Everyday Life, edited by Craig W. Gruber et al. Annals of Theoretical Psychology, Vol. 12, pp. 51-88.

Klemm, W. R. (2010). Free will debates: Simple experiments are not so simple. Advances in Cognitive Psychology, 6, 47-65.


Monday, January 28, 2019

Learning to Learn Emotional Stability


Educated people know about Pavlov's classical conditioning studies. But few people realize the pervasive implications that apply even today.

The key initial observation made by Pavlov was that when dogs saw objects that looked like (and probably smelled like) food, they salivated. He immediately seized on the concept of ASSOCIATION that somehow caused nervous systems to learn. He had no way to know if dogs actually "thought" about the association. It did not matter whether the dogs did or not. The biological adaptiveness of such a learning system was obvious. Pavlov realized he needed to pursue this, instead of digestive physiology, as it was something new and fundamental. He went on to perform experiments that led to the ideas of UCS/CS and UCR/CR.

The idea he missed was positive reinforcement. In fact, it took some 50 years for others to realize that reinforcement was an underlying mechanism in classical conditioning. This led of course to the idea that you could produce learning by manipulating reinforcement (i.e., operant conditioning).

Pavlov's work, old as it is, is still finding applications today. A couple of years ago I got an update on PTSD research at a seminar by Gregory Quirk from the Department of Psychiatry at the University of Puerto Rico. As Pavlov showed, memory extinction is a basic phenomenon, even in simple animals. If you repeatedly flash a light and then stress a rat, it soon learns to become distressed the next time it sees the flash, even after you stop the stress. In the lab, this is manifested by the rat showing freeze behavior. But if you repeat the flash cue enough times without the stress, the conditioned response (CR), freeze behavior, eventually becomes extinguished.

At first, scientists thought that extinction erases the memory of the CR. But extinction really creates a new memory that competes with memory of the original CR. Both memories co-exist. Over time the extinction memory may be lost, and the CR can return. The implication is that, just as ordinary learning needs rehearsal, so does extinction learning.
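
A toy model makes the co-existence point concrete. In this sketch (my own illustration, not a model from this research), conditioning builds an excitatory fear trace, extinction builds a separate inhibitory trace that merely opposes it, and the extinction trace is assumed to fade faster; the learning rates and decay factor are arbitrary.

    def freeze(v_fear, v_ext):
        # Expressed fear = fear trace opposed by the co-existing extinction trace
        return max(v_fear - v_ext, 0.0)

    lr = 0.3
    v_fear = v_ext = 0.0

    for _ in range(10):                # conditioning: flash paired with stress
        v_fear += lr * (1.0 - v_fear)  # fear trace approaches full strength

    for _ in range(15):                # extinction: flash presented alone
        v_ext += lr * freeze(v_fear, v_ext)  # inhibition grows; fear trace is untouched

    print(f"right after extinction: freeze = {freeze(v_fear, v_ext):.2f}")

    for _ in range(30):                # time passes; extinction memory fades faster
        v_ext *= 0.93

    print(f"weeks later:            freeze = {freeze(v_fear, v_ext):.2f}  (the CR returns)")

Because the fear trace was never erased, decay of the extinction trace alone is enough to bring the freeze response back, which is why extinction learning, like any learning, needs rehearsal.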

Therapy for emotional trauma and PTSD might be more effective if therapy were approached like a conventional learning experience whose memory is affected in all the usual ways. Recall what was said about extinction being a case of new learning. Re-learning of an extinguished response occurs much more readily than it does for initial extinction learning. This is an example of priming. It’s like re-learning a foreign language. It goes easier the second time and the memory might be even more dependable. 

Since memory of an emotional CR learning experience and its extinction can co-exist, these two memories compete for which one is strong enough to survive long-term. Sadly, the CR memory that causes the PTSD is often stronger. Cues are extremely important to both forming and retrieving all kinds of memory. It seems likely there are many more explicit cues for CR memories than for extinction memories. Therapy should be aimed at enriching the number and variety of cues associated with extinction learning. Rehearsal is likewise important. So far, nobody seems to have given that much thought.

There is another aspect to emotional learning: learning to learn. If you have multiple anxieties, they may generalize and "spread" to facilitate learning new anxieties. In other words, the brain is learning to become emotionally dysfunctional. The corollary would be that learning how to promote extinction could also generalize and thus increase the general ability to cope with emotional trauma. Obviously, for one's brain to learn how to do that, one would need to begin with a single relatively easy extinction learning task, and then apply that learning-to-extinguish experience to other situations. Extinction learning needs to be repeated in order to become firmly established.

Sunday, January 13, 2019

Why Lucid Dreams Matter


Lucid dreams are often defined as the ones you know you are having in real time. These are the dreams where you seem to be conscious. You are aware of the story line, and you are often a central character in the story. Sometimes, you may even consciously manipulate the dream content toward a more acceptable outcome.

Scientists have recorded physiological changes during sleep. There are multiple episodes during sleep, especially early in the morning, that display brain waves similar to those of waking, accompanied by rapid, jerky eye movements (REM). When people were awakened every time these signs appeared, they invariably said a dream was interrupted.
[Image omitted. Source, with permission: Carroll Jones III, Nathaniel Graphics, 2013]

Incidentally, I have studied this in animals. It appears that REM sleep is an innate property of the brains of mammals. I discovered REM sleep in ruminants, which at the time were assumed to rest without true sleep. I also discovered a rudimentary form of REM sleep in armadillos, which I studied because they are among the most primitive mammals. However, only people show numerous REM episodes lasting for significant periods. I have even published a theoretical paper suggesting why people need so much REM sleep.

Some people claim that they don't have lucid dreams, but there are physiological indicators that everybody does dream. It is possible that lucid dreams can occur but are not consolidated in memory. What is the first thing you do when you wake up? You start thinking about something other than what you were dreaming about, such as going to the bathroom, your aching joints, having breakfast, upcoming day's events, and so on. Such distractions interfere with memory consolidation of recent thought.

A sleep-lab study in which the EEG was recorded revealed certain physiological signs that are unique to lucid dreams, as opposed to non-lucid dreams. Subjects were trained to generate, recognize, and remember lucid dreams. Subjects who commonly reported having lucid dreams were selected for specific training, which included reminding themselves before going to sleep that they were to recognize when they were having lucid dreams and to signal that to sleep monitors by a specific pattern of eye movements (in dream sleep, only the eyes can move, because a descending motor-inhibition circuit in the brainstem is activated). During early-morning sleep, when lucid dreams were more prevalent, EEG recordings during lucid dreaming revealed REM-like activity in the δ and θ frequency bands and higher-than-usual REM activity in the γ band, with the between-states difference peaking around 40 Hz.

Voltage power in the 40-Hz band is strongest in the frontal and frontolateral regions. Moreover, the 40-Hz activity during REM is more coherent with similar activity in other regions of the cortex. The specific increase in gamma activity and the increase in 40-Hz-band coherence during lucid dreaming suggest that this may be a physiological basis of consciousness.
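
For the technically curious, here is a generic sketch of how power in a band around 40 Hz can be estimated from a digitized EEG trace. This is not the study's analysis pipeline; the sampling rate, band edges, and synthetic test signal are all assumptions for illustration.

    import numpy as np
    from scipy.signal import welch

    fs = 256                          # assumed sampling rate, in Hz
    t = np.arange(0, 10, 1 / fs)      # ten seconds of signal
    rng = np.random.default_rng(0)

    # Synthetic "EEG": broadband noise plus a small 40-Hz gamma component
    eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)

    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # Welch power spectrum, 0.5-Hz bins
    band = (f >= 36) & (f <= 44)                # a band bracketing 40 Hz
    gamma_power = np.trapz(psd[band], f[band])  # integrate the PSD over the band

    print(f"Power in the 36-44 Hz band: {gamma_power:.4f}")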

This study is important because the EEG changes are not like those in regular, non-dream sleep but are similar to what occurs in conscious wakefulness. Thus, REM sleep seems to be a form of consciousness. Lucid dreams are special because the content means something, though it is usually expressed symbolically or in metaphors. Your brain has escaped the editing shackles of wakefulness and is free to reveal things you might not know about. Sometimes it is things you don't want to know about. However, your brain is trying to tell you something. You don't have to be a Sigmund Freud to figure out some of the meaning.

With my own lucid dreams, when I reflect on the content, I often find they help me to recognize and deal with deeply personal issues. They can point the way to personal insight. If you reflect on the dream content right after awakening, you are likely to remember it. Lucid dream content can change your life, one small step at a time.

Sources:
Klemm, W. R. (2011). Why does REM sleep occur? A wake-up hypothesis. Frontiers in Neuroscience, 5(73), 1-12. doi: 10.3389/fnsys.2011.00073

Voss, U., Holzmann, R., Tuin, I., & Hobson, A. J. (2009). Lucid dreaming: A state of consciousness with features of both waking and non-lucid dreaming. Sleep, 32(9), 1191-1200. https://academic.oup.com/sleep/article/32/9/1191/2454513


Friday, January 04, 2019

Consciousness as Afterthought


I get a lot of questions on Quora about neuroscience, because neuroscience is what I do. A recent question prompts this post. The question was: "Does all thinking originate in subconscious thinking?" This is a provocative question. It gets to the heart of the matter: What is the default mode of brain operation, conscious or subconscious?

Semantic Confusion

Much of the confusion about consciousness arises because words fail us. We have poor definitions for the usual words: conscious, unconscious, subconscious, non-conscious. Before I attempt an answer to my Quora question, let me establish some background about terminology. First, the currency of thought is patterns of nerve impulse activity, constrained by flowing in and through defined circuits of linked neurons. The impulse thought patterns that occur in primitive circuitry, like spinal segments and neuroendocrine circuits, are considered nonconscious thoughts because we can never be consciously aware of what those circuits are doing. We can, for example, use instruments to measure our blood pressure, but on its own, the brain can never detect that consciously.

Perhaps the most common kind of thought is that which occurs all the time, even during sleep, without our awareness. These days, scholars like to call this "unconscious" thinking. But coma is clearly an unconscious state, and in coma there is often little electrical activity that reflects thought. That is why a more useful term in this context is "subconscious," a term popularized by Freud. That association is probably why the term has fallen into disuse; too many of Freud's ideas have been discredited. But not his idea of subconsciousness.

Consciousness Is Not the Same as Being Awake

Reflect on your own perceptual experiences. Every time you are consciously aware of something, you were attending to it. True, you can be awake without being conscious of a given stimulus (see Selective Attention below). This means that we have to make a careful distinction between wakefulness and consciousness. They are not synonymous. You can't be conscious if you are not awake, but being awake does not assure consciousness of non-attended objects. Wakefulness is generated out of excitatory activity of the brainstem reticular formation acting on the neocortex, as I explain in my book, Mental Biology. The mechanisms of consciousness have not been established, but they likely involve coherent nerve impulse activity in distributed circuitry.

The phylogenetic perspective argues for unconsciousness as the default mode of thinking, inasmuch as lower animals are not likely to have conscious thought, yet their behavior clearly indicates that they are awake and their brains are "thinking." Also, we know from studies of infants that behavioral signs of consciousness are rare and only emerge as the brain matures. It is clear that much human thinking occurs below the level of conscious awareness.

The many scholars who claim that humans have no free will use the assumption of subconscious thinking to defend their stance against free will. They came to this conclusion from experiments that, they say, indicate that all willed action is generated subconsciously and only recognized later in consciousness. The experiments and the interpretation are flawed, as I explain in my book on free will. To help defend the stance that free will is an illusion, the proponents go further and argue that consciousness is just an observer, like a movie patron in a theatre. You can watch what is happening but can't do anything about it. Thus, they construct the specious circular argument that you can’t have free will because free will requires consciousness by definition, and consciousness can’t do anything. How convenient! This absurd notion, held by academics who are not as smart as they think they are, assumes that all our conscious thinking is basically irrelevant. They assume that the neural activity of conscious thought cannot influence neural activity in other parts of the brain, even though they have to admit that the neurons that mediate conscious thought are functionally connected with the other parts of the brain. By these connections, conscious thought can, for example, explicitly evaluate the meaning of stimuli, or order certain muscles to contract, or force mental effort to memorize, or change our emotional state and visceral functions in light of reason or mindfulness meditation, and so on. The circuitry of consciousness is not in a pickle jar outside the brain. It is inextricably bound to other brain circuits.

I certainly don't mean to dignify the anti-free-will position by describing it. However, debunking that position opens the door to reconsider the possible relationship between subconscious and conscious thought. Suppose conscious thought is an afterthought, but not in the restricted sense prescribed by the anti-free-will crowd. Just because subconscious thought can lead to conscious thought does not mean that conscious thought has no action of its own. When we consciously think about what we have recognized in consciousness, all that thinking is, by definition, conscious.  Conscious thought can consider options explicitly. It can reason. It can set goals, plan, command action, evaluate consequences of action, and adjust programming as needed. Subconscious thinking can do that too. Most likely the two modes of thinking work in potentially synergistic ways, though it seems clear that conscious thought can veto subconscious impulses and bad ideas.


Consciousness as Selective Attention

Have you seen the YouTube video of a pickup basketball game? The video instructs viewers to count how many times one of the teams passes the ball. Viewers are so focused on the task that many of them fail to see a man in a gorilla suit walk into the game, do a little chest pounding, and then walk off the court. The point is that the eyes and subconscious mind saw the gorilla, but not the conscious mind. The same phenomenon has been confirmed in another context. The phenomenon is labeled by psychologists as "inattentional blindness." In other words, we are only conscious of the targets of our attention.

Like all biological systems, brains are stimulus-response systems. Humans have unique ways to respond to stimuli and experience, in that their brains selectively identify the information content, evaluate it in terms of available optional responses, and then determine an appropriate response. Both subconscious and conscious thought can be involved, but conscious thought only operates on attended targets.


Scanning for Meaningful Impulse Patterns

While it is clear that conscious brains think, it may be useful to consider that consciousness is also a scanning mechanism. We don't know how such scanning is enabled by wakefulness, but we do know that the awake brain generates more regular oscillations of impulse activity. These oscillations arise in many localized subnetworks throughout the cortex, occurring at varying frequencies and degrees of synchrony with other generators. Intracellular recordings of neurons reveal that one or a few spikes are generated each time the membrane depolarizes. Oscillation is a built-in feature of neural circuits, which commonly oscillate because impulse output re-enters the circuitry that generated it. Increasing the frequency of oscillation increases the total impulse discharge because there are more depolarizations per unit of time. This increases the informational throughput in the network. Likewise, the degree to which multiple oscillators synchronize to share data modulates impulse throughput in linked circuits.

Perhaps the oscillation itself is the scanning mechanism. As novel or particularly relevant input enters an oscillating circuit, that circuit’s own impulse firing pattern may be disrupted, re-set, change frequency, or alter its time locking to other subnetworks. Enhanced time-locking among circuits could have the intensity-magnifying effect that seems to be required in selective conscious attention. The carrying capacity for information is limited, because only subsets of networks in the global workspace synchronously engage at any one moment. This is one way to improve the signal-to-noise ratio of neural circuit processing.
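
As a generic illustration of how coupling strength controls synchrony (a textbook toy, not a model of cortical circuits), the Kuramoto model shows oscillators with slightly different natural frequencies locking together once their mutual coupling passes a threshold:

    import numpy as np

    def order_parameter(n=50, coupling=0.5, dt=0.05, steps=1000, seed=1):
        # Kuramoto model: d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
        rng = np.random.default_rng(seed)
        omega = rng.normal(0.0, 1.0, n)       # frequency offsets (co-rotating frame)
        theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
        for _ in range(steps):
            diff = theta[None, :] - theta[:, None]  # theta_j - theta_i for all pairs
            theta += dt * (omega + coupling * np.sin(diff).mean(axis=1))
        # Order parameter r: near 0 = incoherent, near 1 = fully phase-locked
        return abs(np.exp(1j * theta).mean())

    print("weak coupling:   r =", round(order_parameter(coupling=0.5), 2))
    print("strong coupling: r =", round(order_parameter(coupling=4.0), 2))

With weak coupling the phases drift independently; past a critical coupling they time-lock, which is the kind of transition that could gate how much signal a given subnetwork contributes to the global workspace.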

Perhaps conscious thought is the afterthought of this scanning once it latches onto a subconscious thought that compels attention. Such a mechanism has great biological advantage in that it is a way for brain to scan through a noisy stimulus- and thought-world to identify signals that are salient for appropriate and selective processing and response. Once the target is captured in consciousness, conscious neural activity evaluates the salient signals and determines what to do about it and directs useful action. Taken in this light, I answer a tentative yes to my Quora questioner who wanted to know if all thinking originates in subconscious thinking.

Sources

Klemm, W. R. (2014). Mental Biology: The New Science of How the Brain and Mind Relate. New York: Prometheus.

Klemm, W. R. (2016). Making a Scientific Case for Conscious Agency and Free Will. New York: Academic Press.

https://www.youtube.com/watch?v=vJG698U2Mvo   The original basketball game example of the invisible gorilla.

https://www.youtube.com/watch?v=UtKt8YF7dgQ  A confirmation of the invisible gorilla in another context.