The Situationist

Posts Tagged ‘Neuroscience’

Rebecca Saxe on how we read each other’s minds

Posted by The Situationist Staff on February 28, 2010

From TEDTalks:  Sensing the motives and feelings of others is a natural talent for humans. But how do we do it? Here, Rebecca Saxe shares fascinating lab work that uncovers how the brain thinks about other people’s thoughts — and judges their actions.

* * *

* * *

For a sample of related Situationist posts, see “Dan Gilbert on Why the Brain Scares Itself,” “Nancy Kanwisher on the Situation of our Brain,” “Smart People Thinking about People Thinking about People Thinking,” and “‘The Grand Illusion’ — Believing We See the Situation.”  To review a collection of Situationist posts on neuroscience, click here.

Posted in Life, Neuroscience, Video | Tagged: , | Leave a Comment »

The Neuro-Situation of Responsibility

Posted by The Situationist Staff on February 27, 2010

Nicole Vincent recently posted her interesting paper, “Neuroimaging and Responsibility Assessments” on SSRN.  Here’s the abstract.

* * *

Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime which we know that they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection — an objection which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.

* * *

Download the paper for free here.  To read a sample of related Situationist posts, see “Your Brain and Morality,” “Law & the Brain,” “The Science of Morality,” “Attributing Blame — from the Baseball Diamond to the War on Terror,” “David Vitter, Eliot Spitzer, John Edwards, John Ensign, and Now Mark Sanford: The Disposition Is Weaker than the Situation,” and “The Need for a Situationist Morality.”

Posted in Abstracts, Law, Neuroscience | Tagged: , , | Leave a Comment »

Neuroscience and Illusion

Posted by The Situationist Staff on December 7, 2009

Laura Sanders wrote an interesting article, titled “SPECIALIS REVELIO!  It’s not magic, it’s neuroscience,” in ScienceNews. Here are some excerpts.

* * *

Skill in manipulating people’s perceptions has earned magicians a new group of spellbound fans: Scientists seeking to learn how the eyes and brain perceive — or don’t perceive — reality.

“The interest for magic has been there for a long time,” says Gustav Kuhn, a neuroscientist at Durham University in England and former performing magician. “What is new is that we have all these techniques to get a better idea of the inner workings of these principles.”

A recent brain imaging study by Kuhn and his colleagues revealed which regions of the brain are active when people watch a magician do something impossible, such as make a coin disappear. Another research group’s work on monkeys suggests that two separate kinds of brain cells are critical to visual attention. One group of cells enhances focus on what a person is paying attention to, and the other actively represses interest in everything else. A magician’s real trick, then, may lie in coaxing the suppressing brain cells so that a spectator ignores the performer’s actions precisely when and where required.

Using magic to understand attention and consciousness could have applications in education and medicine, including work on attention impairments.

Imaging the impossible

Kuhn and his collaborators performed brain scans while subjects watched videos of real magicians performing tricks, including coins that disappear and cigarettes that are torn and miraculously put back together.  Volunteers in a control group watched videos in which no magic happened (the cigarette remained torn), or in which something surprising, but not magical, took place (the magician used the cigarette to comb his hair). Including the surprise condition allows researchers to separate the effects of witnessing a magic trick from those of the unexpected.

In terms of brain activity patterns, watching a magic trick was clearly different from watching a surprising event. Researchers saw a “striking” level of activity in the left hemisphere only when participants watched a magic trick, Kuhn says. Such a clear hemisphere separation is unusual, he adds, and may represent the brain’s attempt to reconcile the conflict between what is witnessed and what is thought possible. The two brain regions activated in the left hemisphere — the dorsolateral prefrontal cortex and the anterior cingulate cortex — are thought to be important for both detecting and resolving these types of conflicts.

Masters of suppression

Exactly how the brain attends to one thing and ignores another has been mysterious.  Jose-Manuel Alonso of the SUNY State College of Optometry in New York City thinks that the answer may lie in brain cells that actively suppress information deemed irrelevant by the brain. These cells are just as important, if not more so, than cells that enhance attention on a particular thing, says Alonso. “And that is a very new idea . . . . When you focus your attention very hard at a certain point to detect something, two things happen: Your attention to that thing increases, and your attention to everything else decreases.”

Alonso and his colleagues recently identified a select group of brain cells in monkeys that cause the brain to “freeze the world” by blocking out all irrelevant signals and allowing the brain to focus on one paramount task. Counter to what others had predicted, the team found that the brain cells that enhance attention are distinct from those that suppress attention. Published in the August 2008 Nature Neuroscience, the study showed that these brain cells can’t switch jobs depending on where the focus is — a finding Alonso calls “a total surprise.”

The work also shows that as a task gets more difficult, both the enhancement of essential information and suppression of nonessential information intensify. As a monkey tried to detect quicker, more subtle changes in the color of an object, both types of cells grew more active.

Alonso says magicians can “attract your attention with something very powerful, and create a huge suppression in regions to make you blind.” In the magic world, “the more interest [magicians] manage to draw, the stronger the suppression that they will get.”

Looking but not seeing

In the French Drop trick [see video below], a magician holds a coin in the left hand and pretends to pass the coin to the right hand, which remains empty. “What’s critical is that the magician looks at the empty hand. He pays riveted attention to the hand that is empty,” researcher Stephen Macknik says.

Several experiments have now shown that people can stare directly at something and not see it.  For a study published in Current Biology in 2006, Kuhn and his colleagues tracked where people gazed as they watched a magician throw a ball into the air several times. On the last throw, the magician only pretended to toss the ball. Still, spectators claimed to have seen the ball launch and then miraculously disappear in midair. But here’s the trick: In most cases, subjects kept their eyes on the magician’s face. Only when the ball was actually at the top part of the screen did participants look there. Yet the brain perceived the ball in the air, overriding the actual visual information.

Daniel Simons of the University of Illinois at Urbana-Champaign and his colleagues asked whether more perceptive people succumb less easily to inattentional blindness, which occurs when a person fails to perceive something because the mind, not the eyes, wanders. In a paper in the April Psychonomic Bulletin & Review, the researchers report that people who are very good at paying attention had no advantage in performing a visual task that required noticing something unexpected. Task difficulty was what mattered. Few participants could spot a more subtle change, while most could spot an easy one. The results suggest that magicians may be tapping into some universal property of the human brain.

“We’re good at focusing attention,” says Simons. “It’s what the visual system was built to do.” Inattentional blindness, he says, is a by-product, a necessary consequence, of our visual system allowing us to focus intently on a scene.

Magical experiments

Martinez-Conde and Macknik plan to study the effects of laughter on attention. Magicians have the audience in stitches throughout a performance.  When the audience is laughing, the magician has the opportunity to act unnoticed.  Understanding how emotional states can affect perception and attention may lead to more effective ways to treat people who have attention problems.  “Scientifically, that can tell us a lot about the interaction between emotion and attention, of both the normally functioning brain and what happens in a diseased state,” says Martinez-Conde.

Macknik expects that the study of consciousness and the mind will benefit enormously from teaming up with magicians. “We’re just at the beginning,” he says. “It’s been very gratifying so far, but it’s only going to get better.”

* * *

You can read the entire article here.  For some related Situationist posts, see “Brain Magic,” “Magic is in the Mind,” and “The Situation of Illusion,” or click here for a collection of posts on illusion.

Posted in Entertainment, Illusions, Neuroscience, Video | Tagged: , , | Leave a Comment »

The Situation of Negotiation

Posted by The Situationist Staff on December 1, 2009

John F. McCarthy, Carl A. Scheraga, and Donald E. Gibson, recently posted their interesting paper, titled “Culture, Cognition and Conflict: How Neuroscience Can Help to Explain Cultural Differences in Negotiation and Conflict Management” on SSRN.  Here’s the abstract.

* * *

In negotiation and conflict management situations, understanding cultural patterns and tendencies is critical to whether a negotiation will accomplish the goals of the involved parties. While differences in cultural norms have been identified in the current literature, what is needed is a more fine-grained approach that examines differences below the level of behavioral norms. Drawing on recent social neuroscience approaches, we argue that differing negotiating styles may not only be related to differing cultural norms, but to differences in underlying language processing strategies in the brain, suggesting that cultural difference may influence neuropsychological processes. If this is the case, we expect that individuals from different cultures will exhibit different neuropsychological tendencies. Consistent with our hypothesis, using EEG measured responses, native German-speaking German participants took significantly more time to indicate when they understood a sentence than did native English-speaking American participants. This result is consistent with the theory that individuals from different cultures develop unique language processing strategies that affect behavior. A deliberative cognitive style used by Germans could account for this difference in comprehension reaction time. This study demonstrates that social neuroscience may provide a new way of understanding micro-processes in cross-cultural negotiations and conflict resolution.

* * *
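The abstract above reports that German speakers took “significantly more time” to indicate comprehension than American participants. As a rough illustration of what such a between-groups claim involves, here is a sketch of a one-sided permutation test. The reaction times below are invented for the example, not data from the McCarthy, Scheraga, and Gibson study.

```python
import random
import statistics

# Hypothetical comprehension reaction times in seconds (illustrative only;
# these are NOT the study's data).
german = [2.91, 3.10, 2.85, 3.22, 3.05, 2.98, 3.30, 2.88]
american = [2.41, 2.55, 2.60, 2.38, 2.72, 2.49, 2.58, 2.44]

observed = statistics.mean(german) - statistics.mean(american)

# Permutation test: how often does a random relabelling of the pooled
# samples produce a group difference at least as large as the observed one?
pooled = german + american
rng = random.Random(0)
n_extreme = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = statistics.mean(pooled[:len(german)]) - statistics.mean(pooled[len(german):])
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / n_perm
print(f"mean difference: {observed:.3f} s, one-sided p = {p_value:.4f}")
```

A small p-value here would mean a group difference this large is very unlikely to arise from random relabelling alone — the sense in which the EEG result counts as “significant.”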

You can download the paper for free here.  For a sample of related Situationist posts, see “Social Neuroscience and the Study of Racial Biases,” “Law & the Brain,” “The Situation of Risk Perceptions – Abstract,” and to review previous Situationist posts on cultural cognition, click here.

Posted in Abstracts, Conflict, Cultural Cognition, Neuroscience | Tagged: , , , , | 1 Comment »

The Situation of Emotional Distress Claims

Posted by The Situationist Staff on November 20, 2009

Betsy Grey has recently posted her intriguing paper, “Neuroscience and Emotional Harm in Tort Law: Rethinking the American Approach to Free-Standing Emotional Distress Claims” on SSRN.  Here’s the abstract.

* * *

American tort law traditionally distinguishes between “physical” and “emotional” harm for purposes of liability, with emotional harm treated as a second-class citizen. The customary view is that physical injury is more entitled to compensation because it is considered more objectively verifiable and perhaps more important. The current draft of the Restatement of the Law (Third) of Torts maintains this view. Even the name of the Restatement project itself – “Liability for Physical and Emotional Harm” – emphasizes this distinction. Advances in neuroscience suggest that the concern over verification may no longer be valid, and that the phenomenon we call “emotional” harm has a physiological basis. Because of these early scientific advances, this may be an appropriate time to re-examine our assumptions about tort recovery for emotional harm.

Using studies of Post Traumatic Stress Disorder as an example, this paper explores advances in neuroscience that have begun to shed light on the biological basis of the harm suffered when an individual is exposed to extreme stress. These advances underline the shrinking scientific distinction between physical and emotional harm. Drawing on these scientific developments, as well as on the British approach to emotional injury claims, the paper concludes that we should rethink the American treatment of emotional distress claims. In general, it proposes that we change our approach to account for advances in neuroscience, moving toward a more unified view of bodily and emotional injury. Two potential legal applications are advanced in this paper: (1) that science can provide empirical evidence of what it means to suffer emotional distress, thus helping to validate a claim that has always been subject to greater scrutiny; and (2) that this evidence may allow us to move away from the sharp distinction between how physical and emotional injuries are conceptualized, viewing both as valid types of harm with physiological origins.

* * *

To download the paper for free, click here.  To read a sample of related Situationist posts, see “New Study Looks at the Roots of Empathy,” “Placebo and the Situation of Healing,” “The Situation of Time and Mind,” “The Rubber Hand Illusion,” “The Body Has a Mind of its Own,” “A (Situationist) Body of Thought,” and “A Closer Look at the Interior Situation.”

Posted in Abstracts, Emotions, Law, Neuroscience | Tagged: , , , | 1 Comment »

Greely on Law and Neuroscience

Posted by The Situationist Staff on July 28, 2009

From LBNstudio: “The degree to which brain scans will be admissible in court remains unclear, but experts already are pointing to precedent-setting cases and warning that neuroscience could alter the law, creating new methods and new visual evidence to determine criminal intent and criminal responsibility. Scott Drake talks with Stanford law Professor Hank Greely.”

* * *

To read a sample of related Situationist posts, see “Jurors, Brain Imaging, and the Allure of Pretty Pictures,” “Neurolaw Sampler,” “Law & the Brain,” and “Your Brain and Morality.”

Posted in Law, Legal Theory, Neuroscience, Video | Tagged: , , | Leave a Comment »

Neuroscience and Illusion

Posted by The Situationist Staff on May 4, 2009

Laura Sanders recently wrote an interesting article, titled “SPECIALIS REVELIO!  It’s not magic, it’s neuroscience,” in ScienceNews. Here are some excerpts.

* * *

Skill in manipulating people’s perceptions has earned magicians a new group of spellbound fans: Scientists seeking to learn how the eyes and brain perceive — or don’t perceive — reality.

“The interest for magic has been there for a long time,” says Gustav Kuhn, a neuroscientist at Durham University in England and former performing magician. “What is new is that we have all these techniques to get a better idea of the inner workings of these principles.”

A recent brain imaging study by Kuhn and his colleagues revealed which regions of the brain are active when people watch a magician do something impossible, such as make a coin disappear. Another research group’s work on monkeys suggests that two separate kinds of brain cells are critical to visual attention. One group of cells enhances focus on what a person is paying attention to, and the other actively represses interest in everything else. A magician’s real trick, then, may lie in coaxing the suppressing brain cells so that a spectator ignores the performer’s actions precisely when and where required.

Using magic to understand attention and consciousness could have applications in education and medicine, including work on attention impairments.

Imaging the impossible

Kuhn and his collaborators performed brain scans while subjects watched videos of real magicians performing tricks, including coins that disappear and cigarettes that are torn and miraculously put back together.  Volunteers in a control group watched videos in which no magic happened (the cigarette remained torn), or in which something surprising, but not magical, took place (the magician used the cigarette to comb his hair). Including the surprise condition allows researchers to separate the effects of witnessing a magic trick from those of the unexpected.

In terms of brain activity patterns, watching a magic trick was clearly different from watching a surprising event. Researchers saw a “striking” level of activity in the left hemisphere only when participants watched a magic trick, Kuhn says. Such a clear hemisphere separation is unusual, he adds, and may represent the brain’s attempt to reconcile the conflict between what is witnessed and what is thought possible. The two brain regions activated in the left hemisphere — the dorsolateral prefrontal cortex and the anterior cingulate cortex — are thought to be important for both detecting and resolving these types of conflicts.

Masters of suppression

Exactly how the brain attends to one thing and ignores another has been mysterious.  Jose-Manuel Alonso of the SUNY State College of Optometry in New York City thinks that the answer may lie in brain cells that actively suppress information deemed irrelevant by the brain. These cells are just as important, if not more so, than cells that enhance attention on a particular thing, says Alonso. “And that is a very new idea . . . . When you focus your attention very hard at a certain point to detect something, two things happen: Your attention to that thing increases, and your attention to everything else decreases.”

Alonso and his colleagues recently identified a select group of brain cells in monkeys that cause the brain to “freeze the world” by blocking out all irrelevant signals and allowing the brain to focus on one paramount task. Counter to what others had predicted, the team found that the brain cells that enhance attention are distinct from those that suppress attention. Published in the August 2008 Nature Neuroscience, the study showed that these brain cells can’t switch jobs depending on where the focus is — a finding Alonso calls “a total surprise.”

The work also shows that as a task gets more difficult, both the enhancement of essential information and suppression of nonessential information intensify. As a monkey tried to detect quicker, more subtle changes in the color of an object, both types of cells grew more active.

Alonso says magicians can “attract your attention with something very powerful, and create a huge suppression in regions to make you blind.” In the magic world, “the more interest [magicians] manage to draw, the stronger the suppression that they will get.”

Looking but not seeing

In the French Drop trick [see video below], a magician holds a coin in the left hand and pretends to pass the coin to the right hand, which remains empty. “What’s critical is that the magician looks at the empty hand. He pays riveted attention to the hand that is empty,” researcher Stephen Macknik says.

Several experiments have now shown that people can stare directly at something and not see it.  For a study published in Current Biology in 2006, Kuhn and his colleagues tracked where people gazed as they watched a magician throw a ball into the air several times. On the last throw, the magician only pretended to toss the ball. Still, spectators claimed to have seen the ball launch and then miraculously disappear in midair. But here’s the trick: In most cases, subjects kept their eyes on the magician’s face. Only when the ball was actually at the top part of the screen did participants look there. Yet the brain perceived the ball in the air, overriding the actual visual information.

Daniel Simons of the University of Illinois at Urbana-Champaign and his colleagues asked whether more perceptive people succumb less easily to inattentional blindness, which occurs when a person fails to perceive something because the mind, not the eyes, wanders. In a paper in the April Psychonomic Bulletin & Review, the researchers report that people who are very good at paying attention had no advantage in performing a visual task that required noticing something unexpected. Task difficulty was what mattered. Few participants could spot a more subtle change, while most could spot an easy one. The results suggest that magicians may be tapping into some universal property of the human brain.

“We’re good at focusing attention,” says Simons. “It’s what the visual system was built to do.” Inattentional blindness, he says, is a by-product, a necessary consequence, of our visual system allowing us to focus intently on a scene.

Magical experiments

Martinez-Conde and Macknik plan to study the effects of laughter on attention. Magicians have the audience in stitches throughout a performance.  When the audience is laughing, the magician has the opportunity to act unnoticed.  Understanding how emotional states can affect perception and attention may lead to more effective ways to treat people who have attention problems.  “Scientifically, that can tell us a lot about the interaction between emotion and attention, of both the normally functioning brain and what happens in a diseased state,” says Martinez-Conde.

Macknik expects that the study of consciousness and the mind will benefit enormously from teaming up with magicians. “We’re just at the beginning,” he says. “It’s been very gratifying so far, but it’s only going to get better.”

* * *

You can read the entire article here.  For some related Situationist posts, see “Brain Magic,” “Magic is in the Mind,” and “The Situation of Illusion,” or click here for a collection of posts on illusion.

Posted in Entertainment, Illusions, Neuroscience, Video | Tagged: , , | 5 Comments »

The Situation of Confabulation

Posted by The Situationist Staff on April 13, 2009

Helen Philips had a nice article titled “Mind fiction: Why your brain tells tall tales,” in the October 2006 issue of New Scientist.  Here are some excerpts.

* * *

The kind of storytelling my grandmother did after a series of strokes . . . [n]eurologists call . . . confabulation. It isn’t fibbing, as there is no intent to deceive and people seem to believe what they are saying. Until fairly recently it was seen simply as a neurological deficiency – a sign of something gone wrong. Now, however, it has become apparent that healthy people confabulate too.

Confabulation is clearly far more than a result of a deficit in our memory, says William Hirstein, a neurologist and philosopher at Elmhurst College in Chicago and author of a book on the subject entitled Brain Fiction . . . . Children and many adults confabulate when pressed to talk about something they have no knowledge of, and people do it during and after hypnosis. . . . In fact, we may all confabulate routinely as we try to rationalise decisions or justify opinions. Why do you love me? Why did you buy that outfit? Why did you choose that career? At the extreme, some experts argue that we can never be sure about what is actually real and so must confabulate all the time to try to make sense of the world around us.

Confabulation was first mentioned in the medical literature in the late 1880s, applied to patients of the Russian psychiatrist Sergei Korsakoff. He described a distinctive type of memory deficit in people who had abused alcohol for many years. These people had no recollection of recent events, yet filled in the blanks spontaneously with sometimes fantastical and impossible stories.

Neurologist Oliver Sacks of the Albert Einstein College of Medicine in New York wrote about a man with Korsakoff’s syndrome in his 1985 book The Man Who Mistook His Wife for a Hat. Mr Thompson had no memory from moment to moment about where he was or why, or to whom he was speaking, but would invent elaborate explanations for the situations he found himself in. If someone entered the room, he might greet them as a customer of the shop he used to own. A doctor wearing a white coat might become the local butcher. To Mr Thompson, these fictions seemed plausible and he never seemed to notice that they kept changing. He behaved as though his improvised world was a perfectly normal and stable place.

* * *

So confabulation can result from an inability to recognise whether or not memories are relevant, real and current. But that’s not the only time people make up stories, says Hirstein. He has found that those with delusions or false beliefs about their illnesses are among the most common confabulators. He thinks these cases reveal how we build up and interpret knowledge about ourselves and other people.

It is surprisingly common for stroke patients with paralysed limbs or even blindness to deny they have anything wrong with them, even if only for a couple of days after the event. They often make up elaborate tales to explain away their problems. One of Hirstein’s patients, for example, had a paralysed arm, but believed it was normal, telling him that the dead arm lying in the bed beside her was not in fact her own. When he pointed out her wedding ring, she said with horror that someone had taken it. When asked to prove her arm was fine, by moving it, she made up an excuse about her arthritis being painful. It seems amazing that she could believe such an impossible story. Yet when Vilayanur Ramachandran of the University of California, San Diego, offered cash to patients with this kind of delusion, promising higher rewards for tasks they couldn’t possibly do – such as clapping or changing a light bulb – and lower rewards for tasks they could, they would always attempt the high pay-off task, as if they genuinely had no idea they would fail.

* * *

What all these conditions have in common is an apparent discrepancy between the patient’s internal knowledge or feelings and the external information they are getting from what they see. In all these cases “confabulation is a knowledge problem”, says Hirstein. Whether it is a lost memory, emotional response or body image, if the knowledge isn’t there, something fills the gap.

Helping to plug that gap may well be a part of the brain called the orbitofrontal cortex, which lies in the frontal lobes behind the eye sockets. The OFC is best known as part of the brain’s reward system, which guides us to do pleasurable things or seek what we need, but Hirstein . . . suggests that the system has an even more basic role. It and other frontal brain regions are busy monitoring all the information generated by our senses, memory and imagination, suppressing what is not needed and sorting out what is real and relevant. According to Morten Kringelbach, a neuroscientist at the University of Oxford who studies pleasure, reward and the role of the OFC, this tracking of ongoing reality allows us to rate everything subjectively to help us work out our priorities and preferences.

* * *

Kringelbach goes even further. He suspects that confabulation is not just something people do when the system goes wrong. We may all do it routinely. Children need little encouragement to make up stories when asked to talk about something they know little about. Adults, too, can be persuaded to confabulate, as [Situationist contributor] Timothy Wilson of the University of Virginia in Charlottesville and his colleague Richard Nisbett have shown. They laid out a display of four identical items of clothing and asked people to pick which they thought was the best quality. It is known that people tend to subconsciously prefer the rightmost object in a sequence if given no other choice criteria, and sure enough about four out of five participants did favour the garment on the right. Yet when asked why they made the choice they did, nobody gave position as a reason. It was always about the fineness of the weave, richer colour or superior texture. This suggests that while we may make our decisions subconsciously, we rationalise them in our consciousness, and the way we do so may be pure fiction, or confabulation.
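The Wilson and Nisbett result above ("about four out of five participants did favour the garment on the right") can be put in perspective with a quick back-of-the-envelope significance check. The counts below are hypothetical, chosen only to match that proportion; "chance" here means each of the four identical garments is equally likely to be picked.

```python
from math import comb

# Hypothetical counts for illustration (the article reports only the
# proportion): 40 of 50 shoppers pick the rightmost of 4 identical items.
n, k, p_chance = 50, 40, 0.25

# Exact one-sided binomial tail: probability of at least k rightmost picks
# if every position were equally likely to be chosen.
p_value = sum(comb(n, i) * p_chance**i * (1 - p_chance)**(n - i)
              for i in range(k, n + 1))
print(f"P(at least {k}/{n} rightmost picks by chance) = {p_value:.2e}")
```

The tail probability is vanishingly small, which is why a four-in-five preference for one of four identical items points to a systematic position bias rather than random choice.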

More recent experiments by philosopher Lars Hall of Lund University in Sweden develop this idea further. People were shown pairs of cards with pictures of faces on them and asked to choose the most attractive. Unbeknown to the subject, the person showing the cards was a magician and routinely swapped the chosen card for the rejected one. The subject was then asked why they picked this face. Often the swap went completely unnoticed, and the subjects came up with elaborate explanations about hair colour, the look of the eyes or the assumed personality of the substituted face. Clearly people routinely confabulate under conditions where they cannot know why they made a particular choice. Might confabulation be as routine in justifying our everyday choices?

* * *

Even when we think we are making rational choices and decisions, this may be illusory too. The intriguing possibility is that we simply do not have access to all of the unconscious information on which we base our decisions, so we create fictions upon which to rationalise them, says Kringelbach. That may well be a good thing, he adds. If we were aware of how we made every choice we would never get anything done – we cannot hold that much information in our consciousness. Wilson backs up this idea with some numbers: he says our senses may take in more than 11 million pieces of information each second, whereas even the most liberal estimates suggest that we are conscious of just 40 of these.

Nevertheless it is an unsettling thought that perhaps all our conscious mind ever does is dream up stories in an attempt to make sense of our world. “The possibility is left open that in the most extreme case all of the people may confabulate all of the time,” says Hall.

* * *

To read the entire article, including a discussion of the problem of relying on eyewitnesses, click here. To read some related Situationist posts, see “The Interior Situation of Complex Human Feelings,” “Magic is in the Mind,” “John Darley on ‘Justice as Intuitions’ – Video,” “The Split Brain and the Interior Situation of Theories of the Self,” “Jonathan Haidt on the Situation of Moral Reasoning,” and “Vilayanur Ramachandran On Your Mind.”

Posted in Book, Choice Myth, Deep Capture, Illusions, Neuroscience | Tagged: , , , , , , , , , , | 1 Comment »

The Split Brain and the Interior Situation of Theories of the Self

Posted by The Situationist Staff on August 26, 2008

The following (5 minute) video demonstrates the effects of split-brain surgery, in which the corpus callosum is severed. The effects are explained by Dr. Michael Gazzaniga.

From Youtube: “To reduce the severity of his seizures, Joe had the bridge between his left and right cerebral hemispheres (the corpus callosum) severed. As a result, his left and right brains no longer communicate through that pathway. Here’s what happens as a result.”

* * *

You can also watch a (3.5 minute) clip from Situationist contributor Phil Zimbardo’s program, Discovering Psychology, in which Michael Gazzaniga discusses the essential role of the “interpreter” in creating in each of us a unique sense of self.

* * *

Below you can watch a vintage (11 minute) video in which a very young Dr. Gazzaniga goes into detail regarding his early split-brain research on animals and humans (it includes a fascinating example of the right and left hands of a split-brain patient squabbling with one another as if they belonged to two different individuals).

* * *

For a sample of related Situationist posts, see “Our Interior Situations – The Human Brain,” “Learning to Influence Our Interior Situation,” “It’s All In Your (Theory of the) Mind,” “Smart People Thinking about People Thinking about People Thinking,” “Vilayanur Ramachandran On Your Mind,” “Jonathan Haidt on the Situation of Moral Reasoning,” “Unconscious Situation of Choice,” “The Situation of Reason,” and Part I, Part II, Part III, and Part IV of “The Unconscious Situation of our Consciousness.”

Posted in Choice Myth, Classic Experiments, Neuroscience, Video | Tagged: , , , , , , , | 3 Comments »

The Military Meets the Mind Sciences

Posted by The Situationist Staff on August 14, 2008

Yesterday, Brandon Keim published a disturbing article, “Uncle Sam Wants Your Brain,” in Wired Science. We’ve excerpted his introduction below, and recommend the entire article, which is here.

* * *

Drugs that make soldiers want to fight. Robots linked directly to their controllers’ brains. Lie-detecting scans administered to terrorist suspects as they cross U.S. borders.

These are just a few of the military uses imagined for cognitive science — and if it’s not yet certain whether the technologies will work, the military is certainly taking them very seriously.

“It’s way too early to know which — if any — of these technologies is going to be practical,” said Jonathan Moreno, a Center for American Progress bioethicist and author of Mind Wars: Brain Research and National Defense. “But it’s important for us to get ahead of the curve. Soldiers are always on the cutting edge of new technologies.”

Moreno is part of a National Research Council committee convened by the Department of Defense to evaluate the military potential of brain science. Their report, “Emerging Cognitive Neuroscience and Related Technologies,” was released today. It charts a range of cognitive technologies that are potentially powerful — and, perhaps, powerfully troubling.

* * *

To read Keim’s summary and analysis, click here. For some related Situationist posts, see “The Situation of Soldiers,” “The Disturbing Mental Health Situation of Returning Soldiers,” “Our Soldiers, Their Children: The Lasting Impact of the War in Iraq,” and “The Situation of a ‘Volunteer’ Army.”

Posted in Conflict, Deep Capture, Neuroscience, Public Policy | Tagged: , , , | Leave a Comment »

Learning to Influence Our Interior Situation

Posted by The Situationist Staff on July 16, 2008

From TED: Neuroscientist and inventor Christopher deCharms demonstrates a new way to use fMRI to show brain activity — thoughts, emotions, pain — while it is happening. In other words, you can actually see how you feel.

* * *

Posted in Neuroscience, Video | Tagged: , , , , , | Leave a Comment »

It’s All In Your (Theory of the) Mind

Posted by The Situationist Staff on July 13, 2008

Story by Anne-Marie Tobin, from Canadian Press.

* * *

Can robots and computers take the place of a human being? Two new studies involving research on brain activity in humans provide some food for thought in the evolving debate about interactions between man and machine – and in both cases, people seem to prefer people.

German scientists used an MRI scanner to see how the brain reacted when subjects thought they were playing a game against four different opponents – a laptop computer, a functional robot with no human shape except for artificial hands, a robot with a humanlike shape and another person.

The 20 participants were also asked about their enjoyment levels after playing the Prisoner’s Dilemma game, in which each player must choose whether to cooperate with or betray the other.
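For readers unfamiliar with the game: in the Prisoner’s Dilemma each player independently chooses to cooperate or defect, defection pays better individually whatever the other player does, and yet mutual cooperation beats mutual defection. A minimal sketch with the conventional, purely illustrative payoff values (not the stakes used in the study):

```python
# Standard Prisoner's Dilemma payoffs, satisfying T > R > P > S.
# These numbers are textbook conventions, not the study's actual stakes.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (reward, R)
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff S vs temptation T
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection (punishment, P)
}

def play(my_move, opponent_move):
    """Return (my_payoff, opponent_payoff) for one round."""
    return PAYOFFS[(my_move, opponent_move)]

print(play("cooperate", "defect"))  # -> (0, 5)
```

Because predicting the opponent’s choice pays off, the game rewards exactly the kind of mentalizing the researchers were trying to evoke.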

“We were interested in what’s going on in the brain when you play an interaction game when you need to think what your opponent is thinking,” said Soren Krach, a psychologist in the department of psychiatry at RWTH Aachen University.

In social cognitive neuroscience, the ability to attribute intentions and desires to others is referred to as having a Theory of Mind, according to the study.

“We found out that the activity in the cortical network related to Theory of Mind … was increasingly engaged the more the opponents exhibited humanlike features,” Krach explained.

Before going into the MRI scanner, the subjects played against the laptop, the two robots and the human. Once inside the scanner, they played again, using special video glasses, and they were told which opponent they were playing against at any given time.

Later, they were asked about the interaction.

“They indicated that the more humanlike the opponent was, the more they had perceived fun during the game and they more attributed intelligence to their opponent,” Krach said.

The behaviour of the four opponents was randomized.

The study was published Tuesday in the online open-access journal PLoS ONE, along with another study in which neuroscientists looked at the brain’s response to piano sonatas played either by a computer or musician.

* * *

To read the rest of the article and about that second experiment, click here.

The article is: Krach S, Hegel F, Wrede B, Sagerer G, Binkofski F, et al. (2008) Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE 3(7): e2597. doi:10.1371/journal.pone.0002597

Posted in Neuroscience, Situationist Sports | Tagged: , , , | Leave a Comment »

Moral Psychology Primer

Posted by The Situationist Staff on May 27, 2008

Dan Jones has a terrific article in the April issue of Prospect, titled “The Emerging Moral Psychology.” We’ve included some excerpts from the article below.
* * *

Long thought to be a topic of enquiry within the humanities, the nature of human morality is increasingly being scrutinised by the natural sciences. This shift is now beginning to provide impressive intellectual returns on investment. Philosophers, psychologists, neuroscientists, economists, primatologists and anthropologists, all borrowing liberally from each other’s insights, are putting together a novel picture of morality—a trend that University of Virginia psychologist Jonathan Haidt has described as the “new synthesis in moral psychology.” The picture emerging shows the moral sense to be the product of biologically evolved and culturally sensitive brain systems that together make up the human “moral faculty.”

A pillar of the new synthesis is a renewed appreciation of the powerful role played by intuitions in producing our ethical judgements. Our moral intuitions, argue Haidt and other psychologists, derive not from our powers of reasoning, but from an evolved and innate suite of “affective” systems that generate “hot” flashes of feelings when we are confronted with a putative moral violation.

This intuitionist perspective marks a sharp break from traditional “rationalist” approaches in moral psychology, which gained a large following in the second half of the 20th century under the stewardship of the late Harvard psychologist Lawrence Kohlberg. In the Kohlbergian tradition, moral verdicts derive from the application of conscious reasoning, and moral development throughout our lives reflects our improved ability to articulate sound reasons for the verdicts . . . .

But experimental studies give cause to question the primacy of rationality in morality. In one experiment, Jonathan Haidt presented people with a range of peculiar stories, each of which depicted behaviour that was harmless (in that no sentient being was hurt) but which also felt “bad” or “wrong.” One involved a son who promised his mother, while she was on her deathbed, that he would visit her grave every week, and then reneged on his commitment because he was busy. Another scenario told of a man buying a dead chicken at the supermarket and then having sex with it before cooking and eating it. These weird but essentially harmless acts were, nonetheless, by and large deemed to be immoral.

Further evidence that emotions are in the driving seat of morality surfaces when people are probed on why they take their particular moral positions. In a separate study which asked subjects for their ethical views on consensual incest, most people intuitively felt that incestuous sex is wrong, but when asked why, many gave up, saying, “I just know it’s wrong!”—a phenomenon Haidt calls “moral dumbfounding.”

It’s hard to argue that people are rationally working their way to moral judgements when they can’t come up with any compelling reasons—or sometimes any reasons at all—for their moral verdicts. Haidt suggests that the judgements are based on intuitive, emotional responses, and that conscious reasoning comes into its own in creating post hoc justifications for our moral stances. Our powers of reason, in this view, operate more like a lawyer hired to defend a client than a disinterested scientist searching for the truth.

Our rational and rhetorical skill is also recruited from time to time as a lobbyist. Haidt points out that the reasons—whether good or bad—that we offer for our moral views often function to press the emotional buttons of those we wish to bring around to our way of thinking. So even when explicit reasons appear to have the effect of changing people’s moral opinions, the effect may have less to do with the logic of the arguments than their power to elicit the right emotional responses. We may win hearts without necessarily converting minds. . . .

Even if you recognise the tendency to base moral judgements on how moral violations make you feel, you probably would also like to think that you have some capacity to think through moral issues, to weigh up alternative outcomes and make a call on what is right and wrong.

Thankfully, neuroscience gives some cause for optimism. Philosopher-cum-cognitive scientist Joshua Greene of Harvard University and his colleagues have used functional magnetic resonance imaging to map the brain as it churns over moral problems, inspired by a classic pair of dilemmas from the annals of moral philosophy called the Trolley Problem and the Footbridge Problem. [For a review of Greene’s research, click here.]

* * *

What is going on in the brain when people mull over these different scenarios? Thinking through cases like the Trolley Problem—what Greene calls an impersonal moral dilemma as it involves no direct violence against another person—increases activity in brain regions located in the prefrontal cortex that are associated with deliberative reasoning and cognitive control (so-called executive functions). This pattern of activity suggests that impersonal moral dilemmas such as the Trolley Problem are treated as straightforward rational problems: how to maximise the number of lives saved. By contrast, brain imaging of the Footbridge Problem—a personal dilemma that invokes up-close and personal violence—tells a rather different story. Along with the brain regions activated in the Trolley Problem, areas known to process negative emotional responses also crank up their activity. In these more difficult dilemmas, people take much longer to make a decision and their brains show patterns of activity indicating increased emotional and cognitive conflict within the brain as the two appalling options are weighed up.

Greene interprets these different activation patterns, and the relative difficulty of making a choice in the Footbridge Problem, as the sign of conflict within the brain. On the one hand is a negative emotional response elicited by the prospect of pushing a man to his death saying “Don’t do it!”; on the other, cognitive elements saying “Save as many people as possible and push the man!” For most people thinking about the Footbridge Problem, emotion wins out; for a minority, the utilitarian conclusion of maximising the number of lives saved prevails.

* * *

While there is a growing consensus that the moral intuitions revealed by moral dilemmas such as the Trolley and Footbridge problems draw on unconscious psychological processes, there is an emerging debate about how best to characterise these unconscious elements.

On the one hand is the dual-processing view, in which “hot” affectively-laden intuitions that militate against personal violence are sometimes pitted against the ethical conclusions of deliberative, rational systems. An alternative perspective that is gaining increased attention sees our moral intuitions as driven by “cooler,” non-affective general “principles” that are innately built into the human moral faculty and that we unconsciously follow when assessing social behaviour.

In order to find out whether such principles drive moral judgements, scientists need to know how people actually judge a range of moral dilemmas. In recent years, Marc Hauser, a biologist and psychologist at Harvard, has been heading up the Moral Sense Test (MST) project to gather just this sort of data from around the globe and across cultures.

The project is casting its net as wide as possible: the MST can be taken by anyone with access to the internet. Visitors to the “online lab” are presented with a series of short moral scenarios—subtle variations of the original Footbridge and Trolley dilemmas, as well as a variety of other moral dilemmas. The scenarios are designed to explore whether, and how, specific factors influence moral judgements. Data from 5,000 MST participants showed that people appear to follow a moral code prescribed by three principles:

• The action principle: harm caused by action is morally worse than equivalent harm caused by omission.

• The intention principle: harm intended as the means to a goal is morally worse than equivalent harm foreseen as the side-effect of a goal.

• The contact principle: using physical contact to cause harm to a victim is morally worse than causing equivalent harm to a victim without using physical contact.

Crucially, the researchers also asked participants to justify their decisions. Most people appealed to the action and contact principles; only a small minority explicitly referred to the intention principle. Hauser and colleagues interpret this as evidence that some principles that guide our moral judgments are simply not available to, and certainly not the product of, conscious reasoning. These principles, it is proposed, are an innate and universal part of the human moral faculty, guiding us in ways we are unaware of. In a (less elegant) reformulation of Pascal’s famous claim that “The heart has reasons that reason does not know,” we might say “The moral faculty has principles that reason does not know.”
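The three principles lend themselves to a toy rule-based sketch. The function and weights below are invented purely for illustration: the MST data supports only the ordinal claims (“morally worse than”), not any particular numbers.

```python
def perceived_wrongness(harm, by_action=True, intended=False, contact=False):
    """Toy severity score for a harmful act, following the action,
    intention and contact principles. Weights are illustrative only;
    the data supports only rank-order comparisons."""
    if harm <= 0:
        return 0.0
    score = float(harm)
    if by_action:
        score *= 1.5  # action principle: acts are worse than omissions
    if intended:
        score *= 1.5  # intention principle: means are worse than side-effects
    if contact:
        score *= 1.5  # contact principle: physical contact makes harm worse
    return score

# A footbridge-style case (action, intended harm, physical contact) comes
# out worse than a trolley-style case (action, side-effect, no contact):
footbridge = perceived_wrongness(1, by_action=True, intended=True, contact=True)
trolley = perceived_wrongness(1, by_action=True, intended=False, contact=False)
assert footbridge > trolley
```

The point of the sketch is only that a small set of unconscious rules, applied to a structural description of an act, can reproduce the graded judgments participants actually give.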

The notion that our judgements of moral situations are driven by principles of which we are not cognisant will no doubt strike many as implausible. Proponents of the “innate principles” perspective, however, can draw succour from the influential Chomskyan idea that humans are equipped with an innate and universal grammar for language as part of their basic design spec. In everyday conversation, we effortlessly decode a stream of noise into meaningful sentences according to rules that most of us are unaware of, and use these same rules to produce meaningful phrases of our own. Any adult with normal linguistic competence can rapidly decide whether an utterance or sentence is grammatically valid or not without conscious recourse to the specific rules that determine grammaticality. Just as we intuitively know what we can and cannot say, so too might we have an intuitive appreciation of what is morally permissible and what is forbidden.

Marc Hauser and legal theorist John Mikhail of Georgetown University have started to develop detailed models of what such an “innate moral grammar” might look like. Such models usually posit a number of key components, or psychological systems. One system uses “conversion rules” to break down observed (or imagined) behaviour into a meaningful set of actions, which is then used to create a “structural description” of the events. This structural description captures not only the causal and temporal sequence of events (what happened and when), but also intentional aspects of action (was the outcome intended as a means or a side effect? What was the intention behind the action?).

With the structural description in place, the causal and intentional aspects of events can be compared with a database of unconscious rules, such as “harm intended as a means to an end is morally worse than equivalent harm foreseen as the side-effect of a goal.” If the events involve harm caused as a means to the greater good (and particularly if caused by the action and direct contact of another person), then a judgement of impermissibility is more likely to be generated by the moral faculty. In the most radical models of the moral grammar, judgements of permissibility and impermissibility occur prior to any emotional response. Rather than driving moral judgements, emotions in this view arise as a by-product of unconsciously reached judgements as to what is morally right and wrong.

Hauser argues that a similar “principles and parameters” model of moral judgement could help make sense of universal themes in human morality as well as differences across cultures (see below). There is little evidence about how innate principles are affected by culture, but Hauser has some expectations as to what might be found. If the intention principle is really an innate part of the moral faculty, then its operation should be seen in all cultures. However, cultures might vary in how much harm as a means to a goal they typically tolerate, which in turn could reflect how extensively that culture sanctions means-based harm such as infanticide (deliberately killing one child so that others may flourish, for example). These intriguing though speculative ideas await a thorough empirical test.

* * *

Although current studies have only begun to scratch the surface, the take-home message is clear: intuitions that function below the radar of consciousness are most often the wellsprings of our moral judgements. . . .

Despite the knocking it has received, reason is clearly not entirely impotent in the moral domain. We can reflect on our moral positions and, with a bit of effort, potentially revise them. An understanding of our moral intuitions, and the unconscious forces that fuel them, perhaps gives us the greatest hope of overcoming them.

* * *

To read the entire article, click here. To read some related Situationist posts, see “Quick Introduction to Experimental (Situationist?) Philosophy,” and “Pinker on the Situation of Morality.”

Posted in Ideology, Morality, Neuroscience, Philosophy | Tagged: , , , , , , , , , , , , | 5 Comments »

Mapping the Social Brain

Posted by The Situationist Staff on May 16, 2008

What goes through your head when you hear that you have a good reputation or find out that your social status is slipping? Researchers are starting to find out. By examining brain activity through functional magnetic resonance imaging (fMRI), two groups of researchers report how our brains respond to information about reputation and social status in the journal Neuron this week.

Caroline Zink, a neuroscientist from the National Institute of Mental Health, and colleagues developed a simple game in which participants played for money. The participants were competing against themselves only. The researchers told the players, however, that other people happened to be playing the same game simultaneously and then gave the participants information about how well they were doing compared to these other players.

* * *

When participants in Zink’s study viewed a superior player, regions of the brain associated with social-emotional processing–like the amygdala–were activated. “If you think about it, it makes sense that you wouldn’t have the same kind of emotional response if it was a computer,” Zink says.

* * *

Researchers also recently mapped the neural response to reputation. “Although we all intuitively know that a good reputation makes us feel good, the idea that good reputation is a reward has long been just an assumption in social sciences, and there has been no scientific proof,” says Norihiro Sadato, a researcher at the National Institute for Physiological Sciences in Aichi, Japan and author of another study in Neuron this week.

Sadato and colleagues report that when people are told that they have a good reputation, regions of the brain associated with reward are activated. A good reputation prompts a neural response similar to that of a monetary reward. “We found that these seemingly different kinds of rewards (good reputation vs. money) are biologically coded by the same neural structure, the striatum,” Sadato writes in an email.

Posted in Emotions, Neuroscience, Uncategorized | Tagged: , , , , , , | Leave a Comment »

Unconscious Situation of Choice

Posted by The Situationist Staff on April 16, 2008

From a Science Daily release:

Contrary to what most of us would like to believe, decision-making may be a process handled to a large extent by unconscious mental activity. A team of scientists has unraveled how the brain actually unconsciously prepares our decisions. Even several seconds before we consciously make a decision its outcome can be predicted from unconscious activity in the brain.

This is shown in a study by scientists from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, in collaboration with the Charité University Hospital and the Bernstein Center for Computational Neuroscience in Berlin. The researchers from the group of Professor John-Dylan Haynes used a brain scanner to investigate what happens in the human brain just before a decision is made. “Many processes in the brain occur automatically and without involvement of our consciousness. This prevents our mind from being overloaded by simple routine tasks. But when it comes to decisions we tend to assume they are made by our conscious mind. This is questioned by our current findings.”

In the study, published in Nature Neuroscience [Chun Siong Soon, Marcel Brass, Hans-Jochen Heinze & John-Dylan Haynes, Unconscious Determinants of Free Decisions in the Human Brain, Nature Neuroscience, April 13, 2008], participants could freely decide whether they wanted to press a button with their left or right hand. They were free to make this decision whenever they wanted, but had to remember at which time they felt they had made up their mind. The aim of the experiment was to find out what happens in the brain in the period just before the person felt the decision was made. The researchers found that it was possible to predict from brain signals which option participants would take up to seven seconds before they consciously made their decision. Normally researchers look at what happens when the decision is made, but not at what happens several seconds before. The fact that decisions can be predicted so long before they are made is an astonishing finding.

This unprecedented prediction of a free decision was made possible by sophisticated computer programs that were trained to recognize typical brain activity patterns preceding each of the two choices. Micropatterns of activity in the frontopolar cortex were predictive of the choices even before participants knew which option they were going to choose. The decision could not be predicted perfectly, but prediction was clearly above chance. This suggests that the decision is unconsciously prepared ahead of time but the final decision might still be reversible.
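The pattern-recognition step can be illustrated with a toy decoder. Everything below is a stand-in for illustration: the simulated “voxel” patterns and the nearest-centroid classifier are not the study’s data or methods, which used far more sophisticated multivariate analyses of real fMRI recordings. The sketch only shows the general idea of training on labelled activity patterns and then predicting choices from pattern alone, above chance but imperfectly.

```python
import random

random.seed(0)

def make_trial(choice, n_voxels=20, signal=0.8):
    # Toy activity pattern: a few voxels carry a weak choice-related
    # signal, the rest are noise. (Illustrative only, not real fMRI data.)
    pattern = [random.gauss(0.0, 1.0) for _ in range(n_voxels)]
    informative = range(5) if choice == "left" else range(5, 10)
    for i in informative:
        pattern[i] += signal
    return pattern

def centroid(patterns):
    n = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(n)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Train" a nearest-centroid decoder on labelled trials...
train = [("left", make_trial("left")) for _ in range(40)] + \
        [("right", make_trial("right")) for _ in range(40)]
left_c = centroid([p for c, p in train if c == "left"])
right_c = centroid([p for c, p in train if c == "right"])

# ...then predict held-out trials from the activity pattern alone.
test_trials = [("left", make_trial("left")) for _ in range(50)] + \
              [("right", make_trial("right")) for _ in range(50)]
correct = sum(
    1 for c, p in test_trials
    if ("left" if dist(p, left_c) < dist(p, right_c) else "right") == c
)
accuracy = correct / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f}")  # above chance (0.5), not perfect
```

As in the study, the decoder succeeds only probabilistically, which is consistent with the authors’ caveat that the decision is prepared, not fully fixed, ahead of awareness.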

“Most researchers investigate what happens when people have to decide immediately, typically as a rapid response to an event in our environment. Here we were focusing on the more interesting decisions that are made in a more natural, self-paced manner,” Haynes explains.

More than 20 years ago the American brain scientist Benjamin Libet found a brain signal, the so-called “readiness-potential” that occurred a fraction of a second before a conscious decision. Libet’s experiments were highly controversial and sparked a huge debate. Many scientists argued that if our decisions are prepared unconsciously by the brain, then our feeling of “free will” must be an illusion. In this view, it is the brain that makes the decision, not a person’s conscious mind. Libet’s experiments were particularly controversial because he found only a brief time delay between brain activity and the conscious decision.

In contrast, Haynes and colleagues now show that brain activity predicts — even up to 7 seconds ahead of time — how a person is going to decide. But they also warn that the study does not finally rule out free will: “Our study shows that decisions are unconsciously prepared much longer ahead than previously thought. But we do not know yet where the final decision is made. We need to investigate whether a decision prepared by these brain areas can still be reversed.”

* * *

For a sample of previous, related Situationist posts, see “The Situation of Reason,” and Part I, Part II, Part III, and Part IV of “The Unconscious Situation of our Consciousness.”

Posted in Choice Myth | Tagged: , , | 2 Comments »

 