The Situationist

Archive for the ‘Neuroscience’ Category

The Situation of Competition

Posted by The Situationist Staff on February 12, 2012

From the University of Illinois News Bureau:

Researchers have found a way to study how our brains assess the behavior – and likely future actions – of others during competitive social interactions. Their study, described in a paper in the Proceedings of the National Academy of Sciences, is the first to use a computational approach to tease out differing patterns of brain activity during these interactions, the researchers report.

“When players compete against each other in a game, they try to make a mental model of the other person’s intentions, what they’re going to do and how they’re going to play, so they can play strategically against them,” said University of Illinois postdoctoral researcher Kyle Mathewson, who conducted the study as a doctoral student in the Beckman Institute with graduate student Lusha Zhu and economics professor and Beckman affiliate Ming Hsu, who now is at the University of California, Berkeley. “We were interested in how this process happens in the brain.”

Previous studies have tended to consider only how one learns from the consequences of one’s own actions, called reinforcement learning, Mathewson said. These studies have found heightened activity in the basal ganglia, a set of brain structures known to be involved in the control of muscle movements, goals and learning. Many of these structures signal via the neurotransmitter dopamine.

“That’s been pretty well studied and it’s been figured out that dopamine seems to carry the signal for learning about the outcome of our own actions,” Mathewson said. “But how we learn from the actions of other people wasn’t very well characterized.”

Researchers call this type of learning “belief learning.”

To better understand how the brain processes information in a competitive setting, the researchers used functional magnetic resonance imaging (fMRI) to track activity in the brains of participants while they played a competitive game, called a Patent Race, against other players. The goal of the game was to invest more than one’s opponent in each round to win a prize (a patent worth considerably more than the amount wagered), while minimizing one’s own losses (the amount wagered in each trial was lost). The fMRI tracked activity at the moment the player learned the outcome of the trial and how much his or her opponent had wagered.

A computational model evaluated the players’ strategies and the outcomes of the trials to map the brain regions involved in each type of learning.
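The distinction between the two kinds of learning can be made concrete with a small sketch. The snippet below is illustrative only (it does not reproduce the study's actual computational model, and the function names and learning rate are invented), but it captures the core difference: a reinforcement learner updates an action's value from one's own payoff, while a belief learner updates an estimate of what the opponent will do.

```python
# Illustrative sketch: two learning rules the study distinguishes.
# Reinforcement learning updates an action's value from one's OWN payoff;
# belief learning updates a model of the OPPONENT's likely action.

def reinforcement_update(value, payoff, alpha=0.3):
    """Update the value of the action just taken from its realized payoff."""
    prediction_error = payoff - value          # reinforcement prediction error
    return value + alpha * prediction_error

def belief_update(belief, observed, alpha=0.3):
    """Update the believed probability that the opponent plays a given action."""
    prediction_error = observed - belief       # belief prediction error
    return belief + alpha * prediction_error

# A player who wagered and lost (payoff -1) lowers that action's value,
# while also shifting beliefs toward the wager the opponent actually made.
v = reinforcement_update(0.5, -1.0)   # value falls below 0.5
b = belief_update(0.4, 1.0)           # belief rises toward 1
```

Both rules compute a prediction error and nudge an estimate toward it; what differs is whether the error is defined over one's own outcome or over the opponent's observed behavior, which is why finding both signals in the ventral striatum was notable.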

“Both types of learning were tracked by activity in the ventral striatum, which is part of the basal ganglia,” Mathewson said. “That’s traditionally known to be involved in reinforcement learning, so we were a little bit surprised to see that belief learning also was represented in that area.”

Belief learning also spurred activity in the rostral anterior cingulate, a structure deep in the front of the brain. This region is known to be involved in error processing, regret and “learning with a more social and emotional flavor,” Mathewson said.

The findings offer new insight into the workings of the brain as it is engaged in strategic thinking, Hsu said, and may aid the understanding of neuropsychiatric illnesses that undermine those processes.

“There are a number of mental disorders that affect the brain circuits implicated in our study,” Hsu said. “These include schizophrenia, depression and Parkinson’s disease. They all affect these dopaminergic regions in the frontal and striatal brain areas. So to the degree that we can better understand these ubiquitous social functions in strategic settings, it may help us understand how to characterize and, eventually, treat the social deficits that are symptoms of these diseases.”

More.

The paper, “Dissociable Neural Representations of Reinforcement and Belief Prediction Errors Underlie Strategic Learning,” is available online or from the U. of I. News Bureau.

Posted in Abstracts, Altruism, Conflict, Neuroscience, Uncategorized

The Situation of Choice

Posted by The Situationist Staff on January 23, 2012

From the APS Observer (excerpts from a terrific primer on “The Mechanics of Choice”):

* * *

The prediction of social behavior significantly involves the way people make decisions about resources and wealth, so the science of decision making historically was the province of economists. And the basic assumption of economists was always that, when it comes to money, people are essentially rational. It was largely inconceivable that people would make decisions that go against their own interests. Although successive refinements of expected-utility theory made room for individual differences in how probabilities were estimated, the on-the-surface irrational economic behavior of groups and individuals could always be forced to fit some rigid, rational calculation.

The problem is — and everything from fluctuations in the stock market to decisions between saving for retirement or purchasing a lottery ticket or a shirt on the sale rack shows it — people just aren’t rational. They systematically make choices that go against what an economist would predict or advocate.

Enter a pair of psychological scientists — Daniel Kahneman (currently a professor emeritus at Princeton) and Amos Tversky — who in the 1970s turned the economists’ rational theories on their heads. Kahneman and Tversky’s research on heuristics and biases and their Nobel Prize-winning contribution, prospect theory, poured real, irrational, only-human behavior into the calculations, enabling much more powerful prediction of how individuals really choose between risky options.
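The asymmetry prospect theory introduced shows up most clearly in its value function. The sketch below uses the standard two-piece form with Tversky and Kahneman's published 1992 median parameter estimates (curvature of 0.88 for both gains and losses, loss-aversion coefficient of 2.25); the numerical example is purely illustrative.

```python
# A minimal sketch of the prospect-theory value function, using Tversky and
# Kahneman's 1992 median estimates: alpha = beta = 0.88, lambda = 2.25.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha                 # gains: concave (risk-averse)
    return -lam * ((-x) ** beta)          # losses: convex and steeper (loss-averse)

# Losing $100 hurts more than winning $100 feels good:
print(value(100))    # about 57.5
print(value(-100))   # about -129.5
```

The steeper loss branch is what makes people gamble to avoid a sure loss while refusing the mirror-image gamble over gains, one of the signature departures from expected-utility theory that the excerpt describes.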

* * *

Univ. of Toronto psychologist Keith E. Stanovich and James Madison Univ. psychologist Richard F. West refer to these experiential and analytical modes as “System 1” and “System 2,” respectively. Both systems may be involved in making any particular choice — the second system may monitor the quality of the snap, System-1 judgment and adjust a decision accordingly.7 But System 1 will win out when the decider is under time pressure or when his or her System-2 processes are already taxed.

This is not to entirely disparage System-1 thinking, however. Rules of thumb are handy, after all, and for experts in high-stakes domains, it may be the quicker form of risk processing that leads to better real-world choices. In a study by Cornell University psychologist Valerie Reyna and Mayo Clinic physician Farrell J. Lloyd, expert cardiologists took less relevant information into account than younger doctors and medical students did when making decisions to admit or not admit patients with chest pain to the hospital. Experts also tended to process that information in an all-or-none fashion (a patient was either at risk of a heart attack or not) rather than expending time and effort dealing with shades of gray. In other words, the more expertise a doctor has, the more his or her intuitive sense of the gist of a situation serves as a guide.8

In Reyna’s variant of the dual-system account, fuzzy-trace theory, the quick-decision system focuses on the gist or overall meaning of a problem instead of rationally deliberating on facts and odds of alternative outcomes.9 Because it relies on the late-developing ventromedial and dorsolateral parts of the frontal lobe, this intuitive (but informed) system is the more mature of the two systems used to make decisions involving risks.

A 2004 study by Vassar biopsychologist Abigail A. Baird and Univ. of Waterloo cognitive psychologist Jonathan A. Fugelsang showed that this gist-based system matures later than do other systems. People of different ages were asked to respond quickly to easy, risk-related questions such as “Is it a good idea to set your hair on fire?”, “Is it a good idea to drink Drano?”, and “Is it a good idea to swim with sharks?” They found that young people took about a sixth of a second longer than adults to arrive at the obvious answers (it’s “no” in all three cases, in case you were having trouble deciding).10 The fact that our gist-processing centers don’t fully mature until the 20s in most people may help explain the poor, risky choices younger, less experienced decision makers commonly make.

Adolescents decide to drive fast, have unprotected sex, use drugs, drink, or smoke not simply on impulse but also because their young brains get bogged down in calculating odds. Youth are bombarded by warning statistics intended to set them straight, yet risks of undesirable outcomes from risky activities remain objectively small — smaller than teens may have initially estimated, even — and this may actually encourage young people to take those risks rather than avoid them. Adults, in contrast, make their choices more like expert doctors: going with their guts and making an immediate black/white judgment. They just say no to risky activities because, however objectively unlikely the risks are, there’s too much at stake to warrant even considering them.11

Making Better Choices

The gist of the matter is, though, that none of us, no matter how grown up our frontal lobes, make optimal decisions; if we did, the world would be a better place. So the future of decision science is to take what we’ve learned about heuristics, biases, and System-1 versus System-2 thinking and apply it to the problem of actually improving people’s real-world choices.

One obvious approach is to get people to increase their use of System 2 to temper their emotional, snap judgments. Giving people more time to make decisions and reducing taxing demands on deliberative processing are obvious ways of bringing System 2 more into the act. Katherine L. Milkman (U. Penn.), Dolly Chugh (NYU), and Max H. Bazerman (Harvard) identify several other ways of facilitating System-2 thinking.12 One example is encouraging decision makers to replace their intuitions with formal analysis — taking into account data on all known variables, providing weights to variables, and quantifying the different choices. This method has been shown to significantly improve decisions in contexts like school admissions and hiring.
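As a concrete (and entirely hypothetical) illustration of that formal-analysis method, a hiring intuition can be replaced by an explicit weighted sum over the known variables. The variables, weights, and candidate scores below are invented for the example.

```python
# Hypothetical sketch of "formal analysis": replace an intuitive judgment with
# an explicit weighted sum over known, normalized (0-1) variables.
WEIGHTS = {"experience": 0.4, "test_score": 0.4, "interview": 0.2}

def score(candidate):
    """Weighted linear score over the agreed-upon variables."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

a = score({"experience": 0.9, "test_score": 0.7, "interview": 0.4})  # 0.72
b = score({"experience": 0.5, "test_score": 0.6, "interview": 0.9})  # 0.62

# The formal score can rank candidates differently than a snap judgment
# driven by one vivid variable, such as a charming interview.
```

The gain comes from consistency: the same weights are applied to every candidate, which is the property the admissions and hiring studies cited above exploit.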

Having decision makers take an outsider’s perspective on a decision can reduce overconfidence in their knowledge, in their odds of success, and in their time to complete tasks. Encouraging decision makers to consider the opposite of their preferred choice can reduce judgment errors and biases, as can training them in statistical reasoning. Considering multiple options simultaneously rather than separately can optimize outcomes and increase an individual’s willpower in carrying out a choice. Analogical reasoning can reduce System-1 errors by highlighting how a particular task shares underlying principles with another unrelated one, thereby helping people to see past distracting surface details to more fully understand a problem. And decision making by committee rather than individually can improve decisions in group contexts, as can making individuals more accountable for their decisions.13

In some domains, however, a better approach may be to work with, rather than against, our tendency to make decisions based on visceral reactions. In the health arena, this may involve appealing to people’s gist-based thinking. Doctors and the media bombard health consumers with numerical facts and data, yet according to Reyna, patients — like teenagers — tend initially to overestimate their risks; when they learn their risk for a particular disease is actually objectively lower than they thought, they become more complacent — for instance by forgoing screening. Instead, communicating the gist, “You’re at (some) risk, you should get screened because it detects disease early” may be a more powerful motivator to make the right decision than the raw numbers. And when statistics are presented, doing so in easy-to-grasp graphic formats rather than numerically can help patients (as well as physicians, who can be as statistically challenged as most laypeople) extract their own gists from the facts.14

Complacency is a problem when decisions involve issues that feel more remote from our daily lives — problems like global warming. The biggest obstacle to changing people’s individual behavior and collectively changing environmental policy, according to Columbia University decision scientist Elke Weber, is that people just aren’t scared of climate change. Being bombarded by facts and data about perils to come is not the same as having it affect us directly and immediately; in the absence of direct personal experience, our visceral decision system does not kick in to spur us to make better environmental choices such as buying more fuel-efficient vehicles.15

How should scientists and policymakers make climate change more immediate to people? Partly, it involves shifting from facts and data to experiential button-pressing. Powerful images of global warming and its effects can help. Unfortunately, according to research conducted by Yale environmental scientist Anthony A. Leiserowitz, the dominant images of global warming in Americans’ current consciousness are of melting ice and effects on nonhuman nature, not consequences that hit closer to home; as a result, people still think of global warming as only a moderate concern.16

Reframing options in terms that connect tangibly with people’s more immediate priorities, such as the social rules and norms they want to follow, is a way to encourage environmentally sound choices even in the absence of fear.17 For example, a study by Noah J. Goldstein (Univ. of Chicago), Robert B. Cialdini (Arizona State), and Vladas Griskevicius (Univ. of Minnesota) compared the effectiveness of different types of messages in getting hotel guests to reuse their towels rather than send them to the laundry. Messages framed in terms of social norms — “the majority of guests in this room reuse their towels” — were more effective than messages simply emphasizing the environmental benefits of reuse.18

Yet another approach to getting us to make the most beneficial decisions is to appeal to our natural laziness. If there is a default option, most people will accept it because it is easiest to do so — and because they may assume that the default is the best. University of Chicago economist Richard H. Thaler suggests using policy changes to shift default choices in areas like retirement planning. Because it is presented as the norm, most people begin claiming their Social Security benefits as soon as they are eligible, in their early to mid-60s — a symbolic retirement age but not the age at which most people these days actually retire. Moving the “normal” retirement age up to 70 — a higher anchor — would encourage people to let their money grow longer untouched.19

* * *

Making Decisions About the Environment

APS Fellow Elke Weber recently had the opportunity to discuss her research with others who share her concern about climate change, including scientists, activists, and the Dalai Lama. Weber . . . shared her research on why people fail to act on environmental problems. According to her, both cognitive and emotional barriers prevent us from acting on environmental problems. Cognitively, for example, a person’s attention is naturally focused on the present to allow for their immediate survival in dangerous surroundings. This present-focused attitude can discourage someone from taking action on long-term challenges such as climate change. Similarly, emotions such as fear can motivate people to act, but fear is more effective for responding to immediate threats. In spite of these challenges, Weber said that there are ways to encourage people to change their behavior. Because people often fail to act when they feel powerless, it’s important to share good as well as bad environmental news and to set measurable goals for the public to pursue. Also, said Weber, simply portraying reduced consumption as a gain rather than a loss in pleasure could inspire people to act.

References and Further Reading:

  • 7. Stanovich, K.E., & West, R.F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral & Brain Sciences, 23, 645–665.
  • 8. Reyna, V.F., & Lloyd, F.J. (2006). Physician decision making and cardiac risk: Effects of knowledge, risk perception, risk tolerance, and fuzzy processing. Journal of Experimental Psychology: Applied, 12, 179–195.
  • 9. Reyna, V.F. (2004). How people make decisions that involve risk: A dual-processes approach. Current Directions in Psychological Science, 13, 60–66.
  • 10. Baird, A.A., & Fugelsang, J.A. (2004). The emergence of consequential thought: Evidence from neuroscience. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 359, 1797–1804.
  • 11. Reyna, V.F., & Farley, F. (2006). Risk and rationality in adolescent decision making. Psychological Science in the Public Interest, 7, 1–44.
  • 12. Milkman, K.L., Chugh, D., & Bazerman, M.H. (2009). How can decision making be improved? Perspectives on Psychological Science, 4, 379–383.
  • 13. Ibid.
  • 14. See Wargo, E. (2007). More than just the facts: Helping patients make informed choices. Cornell University Department of Human Development: Outreach & Extension. Downloaded from http://www.human.cornell.edu/hd/outreach-extension/loader.cfm?csModule=security/getfile&PageID=43508
  • 15. Weber, E.U. (2006). Experience-based and description-based perceptions of long-term risk: Why global warming does not scare us (yet). Climatic Change, 77, 103–120.
  • 16. Leiserowitz, A. (2006). Climate change risk perception and policy preferences: The role of affect, imagery, and values. Climatic Change, 77, 45–72.
  • 17. Weber, E.U. (2010). What shapes perceptions of climate change? Wiley Interdisciplinary Reviews: Climate Change, 1, 332–342.
  • 18. Goldstein, N.J., Cialdini, R.B., & Griskevicius, V. (2008). A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research, 35. Downloaded from http://www.csom.umn.edu/assets/118359.pdf
  • 19. Thaler, R.H. (2011, July 16). Getting the most out of Social Security. The New York Times. Downloaded from http://www.nytimes.com/2011/07/17/business/economy/when-the-wait-for-social-security-checks-is-worth-it.html?_r=1&adxnnl=1&adxnnlx=1322835490-9f6qOJ9Sp2jSw4LKDjmYgw

More.

Related Situationist posts:

You can review hundreds of Situationist posts related to the topic of “choice myth” here.

Posted in Behavioral Economics, Choice Myth, History, Ideology, Neuroscience, Public Policy

The Interior Situation of Atrocities

Posted by The Situationist Staff on January 10, 2012

From People’s World (an article summarizing recent research by Situationist Contributor Susan Fiske):

Why do people commit atrocities? What is responsible for brutality and the cold-blooded murder of innocents carried out by the Nazis, the Hutu in Rwanda, or the United States against the Vietnamese people and, more recently, much of the civilian population of Iraq? Some scientists believe they have found the answer.

ScienceDaily reports (“Brain’s Failure to Appreciate Others May Permit Human Atrocities,” 12-14-2011) that the part of the brain responsible for social interaction with others may malfunction, resulting in a callousness that leads to inhumane actions toward others. Scientists at Duke and Princeton have hypothesized, in a recent study, that this brain area can “disengage” when people encounter others they find “disgusting,” and that the resulting violence perpetrated against them stems from the belief that these objectified others have no “thoughts and feelings.”

The study, according to ScienceDaily, considers this a “shortcoming” which could account for the genocide and torture of other peoples. Examples of this kind of objectification can be seen in the calling of Jews “vermin” by the Nazis, the Tutsi “cockroaches” by the Hutu, and the American habit of calling others “gooks” (as well as other unflattering terms).

Lasana Harris (Duke) says, “When we encounter a person, we usually infer something about their minds [do they have more than one?] Sometimes, we fail to do this, opening up the possibility that we do not perceive the person as fully human.” I wonder about this. What is meant by “fully human”? Surely the Hutu, for example, who had lived with the Tutsi for centuries, did not really fail to infer that they had “minds.”

Practicing something called “social neuroscience,” which seems to consist of showing people pictures while they undergo an MRI and then drawing conclusions from which areas of the brain do or do not “light up” when questions are asked about those pictures, the scientists conducting this study discovered that an area of the brain dealing with “social cognition” (i.e., feelings, thoughts, empathy, etc.) “failed to engage” when pictures of homeless people, drug addicts, and others “low on the social ladder” were shown.

Susan Fiske (Princeton) remarked, “We need to think about other people’s experience. It’s what makes them fully human to us.” ScienceDaily adds that the researchers were struck by the fact that “people will easily ascribe social cognition – a belief in an internal life such as emotions – to animals and cars, but will avoid making eye contact with the homeless panhandler in the subway.”

More.

Posted in Altruism, Conflict, Ideology, Implicit Associations, Neuroscience, Social Psychology

Antonio Damasio on Mystery of Consciousness

Posted by The Situationist Staff on December 26, 2011

From TedTalks:

Every morning we wake up and regain consciousness — that is a marvelous fact — but what exactly is it that we regain? Neuroscientist Antonio Damasio uses this simple question to give us a glimpse into how our brains create our sense of self.

Posted in Neuroscience, Social Psychology

Mapping the Brain

Posted by The Situationist Staff on November 17, 2011

From Ted Talks:

How can we begin to understand the way the brain works? The same way we begin to understand a city: by making a map. In this visually stunning talk, Allan Jones shows how his team is mapping which genes are turned on in each tiny region, and how it all connects up.

Posted in Neuroscience, Video

The Situation of Michael S. Gazzaniga

Posted by The Situationist Staff on November 15, 2011

From The New York Times, a terrific article about Michael Gazzaniga:

The scientists exchanged one last look and held their breath.

Everything was ready. The electrode was in place, threaded between the two hemispheres of a living cat’s brain; the instruments were tuned to pick up the chatter passing from one half to the other. The only thing left was to listen for that electronic whisper, the brain’s own internal code.

The amplifier hissed — the three scientists expectantly leaning closer — and out it came, loud and clear.

“We all live in a yellow submarine, yellow submarine, yellow submarine ….”

“The Beatles’ song! We somehow picked up the frequency of a radio station,” recalled Michael S. Gazzaniga, chuckling at the 45-year-old memory. “The brain’s secret code. Yeah, right!”

Dr. Gazzaniga, 71, now a professor of psychology at the University of California, Santa Barbara, is best known for a dazzling series of studies that revealed the brain’s split personality, the division of labor between its left and right hemispheres. But he is perhaps next best known for telling stories, many of them about blown experiments, dumb questions and other blunders during his nearly half-century career at the top of his field.

Now, in lectures and a new book, he is spelling out another kind of cautionary tale — a serious one, about the uses of neuroscience in society, particularly in the courtroom.

Brain science “will eventually begin to influence how the public views justice and responsibility,” Dr. Gazzaniga said at a recent conference here sponsored by the Edge Foundation.

And there is no guarantee, he added, that its influence will be a good one.

For one thing, brain-scanning technology is not ready for prime time in the legal system; it provides less information than people presume.

For another, new knowledge about neural processes is raising important questions about human responsibility. Scientists now know that the brain runs largely on autopilot; it acts first and asks questions later, often explaining behavior after the fact. So if much of behavior is automatic, then how responsible are people for their actions?

Who’s driving this submarine, anyway?

In his new book, “Who’s in Charge? Free Will and the Science of the Brain,” being published this month by Ecco/HarperCollins, Dr. Gazzaniga (pronounced ga-ZAHN-a-ga) argues that the answer is hidden in plain sight. It’s a matter of knowing where to look.

* * *

He began thinking seriously about the nature of responsibility only after many years of goofing off.

Mike Gazzaniga grew up in Glendale, Calif., exploring the open country east of Los Angeles and running occasional experiments in his garage, often with the help of his father, a prominent surgeon. It was fun; the experiments were real attempts to understand biochemistry; and even after joining the Alpha Delta Phi fraternity at Dartmouth (inspiration for the movie “Animal House”), he made time between parties and pranks to track who was doing what in his chosen field, brain science.

In particular, he began to follow studies at the California Institute of Technology suggesting that in animals, developing nerve cells are coded to congregate in specific areas in the brain. This work was captivating for two reasons.

First, it seemed to contradict common wisdom at the time, which held that specific brain functions like memory were widely — and uniformly — distributed in the brain, not concentrated in discrete regions.

Second, his girlfriend was due to take a summer job right there near Caltech.

He decided to write a letter to the director of the program, the eminent neurobiologist Roger Wolcott Sperry (emphasizing reason No. 1). Could Dr. Sperry use a summer intern? “He said sure,” Dr. Gazzaniga said. “I always tell students, ‘Go ahead and write directly to the person you want to study with; you just never know.’ ”

At Caltech that summer after his junior year, he glimpsed his future. He learned about so-called split-brain patients, people with severe epilepsy who had surgery cutting the connections between their left and right hemispheres. The surgery drastically reduced seizures but seemed to leave people otherwise unaffected.

Read the article here.

Related Situationist posts:

Mike Gazzaniga on the Split Brain

Posted in Classic Experiments, Neuroscience, Video

A Neuroscience Perspective on the Financial Crises

Posted by The Situationist Staff on October 28, 2011

Andrew Lo recently posted his paper “Fear, Greed, and Financial Crises: A Cognitive Neurosciences Perspective” on SSRN.  Here’s the abstract.

* * *

Historical accounts of financial crises suggest that fear and greed are the common denominators of these disruptive events: periods of unchecked greed eventually lead to excessive leverage and unsustainable asset-price levels, and the inevitable collapse results in unbridled fear, which must subside before any recovery is possible. The cognitive neurosciences may provide some new insights into this boom/bust pattern through a deeper understanding of the dynamics of emotion and human behavior. In this chapter, I describe some recent research from the neurosciences literature on fear and reward learning, mirror neurons, theory of mind, and the link between emotion and rational behavior. By exploring the neuroscientific basis of cognition and behavior, we may be able to identify more fundamental drivers of financial crises, and improve our models and methods for dealing with them.

* * *

Download the paper for free here.

Posted in Abstracts, Behavioral Economics, Emotions, Neuroscience

Mike Gazzaniga on the Split Brain

Posted by The Situationist Staff on October 22, 2011


Posted in Neuroscience, Video

Steven Hyman on Neuroethics

Posted by The Situationist Staff on October 16, 2011

From The Science Network:

Steven Hyman is Professor of Neurobiology at Harvard Medical School. Hyman is a former Provost of Harvard University and Director of the National Institute of Mental Health. He is also a member of the Institute of Medicine of the National Academy of Sciences and of the American Academy of Arts and Sciences. Hyman also serves as Editor of the Annual Review of Neuroscience.

Posted in Neuroscience, Video

The Neuro-Situation of Wins and Losses

Posted by The Situationist Staff on October 10, 2011

From Montreal Gazette:

A new National Hockey League season is upon us, Major League Baseball playoffs are in full swing and the National Football League’s regular season has been in session for about a month.

As you fixate on your television, watching every move of your favourite athletes and longing for that great play or crucial win that can serve up a rush that can approach orgasm, consider this: New research from Yale University shows even more of your brain than previously thought physically reacts to something perceived as a win or a loss.

A new study, published in the journal Neuron, outlines experiments showing how most of the brain has heightened activity if one wins or loses a competition such as rock-paper-scissors.

The effect is broader than the previously known reaction, in which the central part of the brain releases dopamine when something good happens, creating a positive feeling in the individual. Conversely, past evidence has shown that this neurotransmitter is suppressed when an unwanted outcome occurs.

The study’s lead author, Timothy Vickery, a post-doctoral fellow at Yale’s psychology department, said it’s possible that the brain has a similar kind of engagement when its owner is watching sports.

“We didn’t look at that directly in this study, but it wouldn’t be very surprising to me if those sorts of second-hand experiences had the same influence, because you’re sort of identifying with your team, and a win for your team is a win for you,” he said.

Vickery said the high engagement sports fans feel when watching a competition likely comes from the previously known function of the basal ganglia, in the middle of the brain, sending out dopamine when a positive outcome is perceived.

It has its roots, he said, in evolutionary tendencies that favour people and animals that are able to make the right choices to improve chances for survival and create results — such as finding food — that induce dopamine-fuelled feelings of joy.

Vickery said the effect can be vicarious when watching other people participate in sports.

“I think it’s fair to say that, to the extent that you experience those wins and losses as your own, it would have a similar effect on your brain as taking your own actions,” he said.

By conducting MRIs on people while they competed against a computer in games such as rock-paper-scissors, the Yale study found that most parts of subjects’ brains, even beyond the basal ganglia, had physical reactions to both wins and losses.

By analyzing the brain as a whole, Vickery said the researchers could determine whether the individual was experiencing a win or a loss, based on subtle differences in the nature of the patterns. He said it is likely this broadly based brain reaction is somehow related to established theories concerning the reward-punishment function at the brain’s centre. The study, however, could not conclude that.

“My suspicion is that it’s not unrelated, that basically that signal gets sent out from the basal ganglia . . . and sort of filters out through the brain, but we don’t know for sure where it’s coming from. There’s still a lot of work to be done.”

More.

Related Situationist posts:

Posted in Abstracts, Neuroscience, Situationist Sports | Leave a Comment »

Brain and Blame

Posted by The Situationist Staff on August 11, 2011

From The Atlantic (by David Eagleman):

On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

The evening before, Whitman had sat at his typewriter and composed a suicide note:

I don’t really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I can’t recall when it started) I have been a victim of many unusual and irrational thoughts.

By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep.

It was after much thought that I decided to kill my wife, Kathy, tonight … I love her dearly, and she has been as fine a wife to me as any man could ever hope to have. I cannot rationa[l]ly pinpoint any specific reason for doing this …

Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had.

I talked with a Doctor once for about two hours and tried to convey to him my fears that I felt [overcome by] overwhelming violent impulses. After one session I never saw the Doctor again, and since then I have been fighting my mental turmoil alone, and seemingly to no avail.

Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.
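Dopamine's role in "weighing the costs and benefits of decisions" is usually formalized as a reward-prediction-error signal. The sketch below is a textbook Rescorla-Wagner-style update, offered only as an illustration of that idea; it is not a model drawn from the Parkinson's research described above.

```python
# Minimal sketch of the reward-prediction-error account of dopamine.
# The prediction error (reward - value) plays the role attributed to phasic
# dopamine: positive errors nudge the value estimate up, negative errors
# nudge it down. A drug that inflates this signal would make outcomes seem
# persistently "better than expected" -- one proposed route to compulsion.
def update_value(value, reward, learning_rate=0.1):
    """One learning step toward the observed reward."""
    prediction_error = reward - value
    return value + learning_rate * prediction_error

value = 0.0
for _ in range(50):                  # repeated rewarded trials
    value = update_value(value, reward=1.0)
print(round(value, 3))               # approaches 1.0 as the reward is learned
```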

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

More.

Related Situationist Posts:

Posted in Emotions, Law, Morality, Neuroscience | Leave a Comment »

David Eagleman on the Brain and the Law

Posted by The Situationist Staff on May 31, 2011

From :

Dr David Eagleman considers some questions relating to law and neuroscience, challenging long-held assumptions in criminality and punishment and predicting a radical new future for the legal system.

[Eagleman's examples in the first 15 minutes will strike long-term readers of The Situationist as non-novel. For others, that portion of the video may be a useful primer to neurolaw.]

Related Situationist Posts:

Posted in Implicit Associations, Law, Neuroscience, Video | 1 Comment »

The Neuro-Situation of Shopping Choices

Posted by The Situationist Staff on May 16, 2011

From ScienceDaily:

Researchers at Oxford University are to study ‘neuromarketing’, a relatively new field of consumer and market research, which uses brain imaging and measurement technology to study the neural processes underlying an individual’s choice.

Neuromarketing claims to reveal how consumers assess, deliberate and choose in a variety of contexts.

According to neuromarketers, this growing industry has the potential to significantly increase the effectiveness of advertising and marketing campaigns. They claim that neuromarketing will provide detailed knowledge about customer preferences and about which marketing activities will stimulate buying behaviour, making promotional campaigns more effective. It should provide cues for the best placement and pricing in advertisements, and should cut the risk of marketing products that are doomed to fail. In the experts’ view, instead of relying on focus groups, neuromarketing offers the promise of ‘objective neurological evidence’ to inform organisations’ marketing campaigns.

But if neuromarketing is set to revolutionise marketing, what are the implications of this development? The study will cast light on the ‘neuro-turn’ in marketing by conducting fieldwork, interviews and documentary analysis. In addition a critical, historical assessment will consider and compare how different market research techniques can affect consumers and consumer behaviour.

The project is led by Professor Steve Woolgar, of the Saïd Business School, and is located within a larger collaborative study of the “Neuro-turn in European Social Sciences and the Humanities: Impacts of neurosciences on economics, marketing and philosophy” (acronym: NESSHI) with researchers from other parts of Europe.

Professor Woolgar said: ‘This three-year project will be the first large-scale study of how emerging neurological knowledge about human decision-making is transforming the techniques of marketers and others who seek to influence the behaviour of consumers. It has far reaching implications for what we know about how humans make their choices, the role of the brain and the factors at play in everyday decisions we all take.’

Dr Tanja Schneider, researcher on the project, said: ‘For a number of years, research has been focussed on brain imaging centres. This is now moving out of the laboratory and into practice. The research we are doing will cast light on what is already happening in this area, and will explore what is likely to develop in the near future. We know this will impact society in a major way, so it is critical to understand these developments better’.

More.

Related Situationist posts:

Posted in Marketing, Neuroscience | 1 Comment »

Shocking for Money

Posted by The Situationist Staff on April 8, 2011

From Science News:

When faced with a thorny moral dilemma, what people say they would do and what people actually do are two very different things, a new study finds. In a hypothetical scenario, most people said they would never subject another person to a painful electric shock, just to make a little bit of money. But for people given a real-world choice, the sparks flew.

The results . . . serve as a reminder that hypothetical scenarios don’t capture the complexities of real decisions.

Morality studies in the lab almost always rely on asking participants to imagine how they’d behave in a certain situation, study coauthor Oriel FeldmanHall of Cambridge University said in her presentation. But these imagined situations are missing teeth: “Whatever you choose, it’s not going to happen,” she said.

But in FeldmanHall’s study, things actually happened. “There are real shocks and real money on the table,” she said. Subjects lying in an MRI scanner were given a choice: Either administer a painful electric shock to a person in another room and make one British pound (a little over a dollar and a half), or spare the other person the shock and forgo the money. Shocks were priced in a graded manner, so that the subject would earn less money for a light shock, and earn the whole pound for a severe shock. This same choice was given 20 times, and the person in the brain scanner could see a video of either the shockee’s hand jerk or both the hand jerk and the face grimace. (Although these shocks were real, they were pre-recorded.)
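The graded pricing can be made concrete with a small sketch. The article gives only the outline of the rule (refusing forfeits the pound; a light shock earns less; a severe shock earns the full pound, over 20 trials), so the linear intensity-to-money mapping below is an assumption for illustration, not the study's actual schedule.

```python
# Illustrative payoff schedule for the shock-for-money task. The linear
# mapping from shock intensity to money is an assumed stand-in; the study
# itself only specifies that milder shocks earned less than the full pound.
N_TRIALS = 20
POUND = 1.0

def trial_payoff(intensity):
    """Money earned on one trial; intensity runs from 0 (no shock) to 1 (severe)."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return POUND * intensity

def total_earnings(intensities):
    assert len(intensities) == N_TRIALS
    return sum(trial_payoff(i) for i in intensities)

# A participant who always refuses earns nothing; one who always delivers
# the severe shock walks away with the full 20-pound windfall.
print(total_earnings([0.0] * N_TRIALS))  # 0.0
print(total_earnings([1.0] * N_TRIALS))  # 20.0
```

Under this assumed schedule, the reported averages (15.77 pounds seeing only the hand, 11.55 pounds seeing hand and face) correspond to participants choosing fairly intense shocks on most trials.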

When researchers gave a separate group of people a purely hypothetical choice, about 64 percent said they wouldn’t ever deliver a shock — even a mild one — for money. Overall, people who judged hypothetically what their actions would be netted only about four pounds on average.

But when there was cold, hard money involved, the data changed. A lot. A whopping 96 percent of people in the scanner chose to administer shocks for cash. “Three times as much money was kept in the real task,” FeldmanHall said. When participants saw only the hand of the person jerk as it got shocked, they chose to walk away with an “astonishing” 15.77 pounds on average out of a possible 20-pound windfall. The number dipped when participants saw both the hand and the face of the person receiving the shock: in these cases, people made off with an average of 11.55 pounds.

People grappling with the real moral dilemma — as opposed to people who had to choose in a hypothetical situation — had heightened activity in parts of the insula, a brain center thought to be involved in emotion, the study shows. FeldmanHall said that insula activity might represent a sort of visceral tension that’s going on in the body as a person pits the desire for money against the desire to not hurt someone. These visceral conflicts within a person seem to be missing in experiments with no real stakes, she said.

More.

Related Situationist posts:

Posted in Classic Experiments, Neuroeconomics, Neuroscience, Social Psychology | Leave a Comment »

Can Meditation Make Us More Compassionate?

Posted by Adam Benforado on February 1, 2011

Last Friday, Sindya Bhanoo had an interesting little post on one of the New York Times blogs concerning recent research on the impact of meditation on the brain.

As is often the case in these mainstream media reports, I was left wanting more about the studies and less about the personal interest hook (in this case, the story of Sindya’s husband’s experiences meditating), but that was remedied easily enough by utilizing the wonders of the internet.

To me, the most interesting referenced article was a 2008 study by Antoine Lutz, Julie Brefczynski-Lewis, Tom Johnstone, and Richard Davidson on the regulation of our emotional neural circuitry through compassion meditation.

Here is the abstract:

Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated insula and anterior cingulate cortices in the empathic response to another’s pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, activation in the insula was greater during presentation of negative sounds than positive or neutral sounds in expert than in novice meditators. The strength of activation in the insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation versus rest states between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds, and enhanced mentation in response to emotional human vocalizations, for experts than novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.

To download a free copy of the entire article, click here.

Related Situationist posts:

Posted in Abstracts, Emotions, Neuroscience, Positive Psychology | Leave a Comment »

Secondhand Smoking

Posted by The Situationist Staff on January 20, 2011

From EurekaAlert:

Seeing actors smoke in a movie activated the brain areas of smokers that are known to interpret and plan hand movements, as though they too were about to light a cigarette, according to a new study in the Jan. 19 issue of The Journal of Neuroscience.

Habitual smokers repeat the same hand motions, sometimes dozens of times a day. In this study, researchers led by senior investigator Todd Heatherton, PhD, and graduate student Dylan Wagner of Dartmouth College set out to determine whether the parts of the brain that control that routine gesture could be triggered by simply seeing someone else smoke.

The authors found that seeing this familiar action — even when embedded in a Hollywood movie — evoked the same brain responses as planning to actually make that movement. These results may provide additional insight for people trying to overcome nicotine addiction, a condition that leads to one in five U.S. deaths each year.

“Our findings support prior studies that show smokers who exit a movie that had images of smoking are more likely to crave a cigarette, compared with ones who watched a movie without them,” Wagner said. “More work is needed to show whether brain activity in response to movie smoking predicts relapse for a smoker trying to quit.”

During the study, 17 smokers and 17 non-smokers watched the first 30 minutes of the movie “Matchstick Men” while undergoing functional magnetic resonance imaging (fMRI). The researchers chose the movie because it prominently features smoking scenes but otherwise lacks alcohol use, violence, and sexual content.

The volunteers were unaware that the study was about smoking. When they viewed smoking scenes, smokers showed greater brain activity in a part of the parietal lobe called the intraparietal sulcus, as well as other areas involved in the perception and coordination of actions. In the smokers’ brains specifically, the activity corresponded to the hand they use to smoke.

“Smokers trying to quit are frequently advised to avoid other smokers and remove smoking paraphernalia from their homes, but they might not think to avoid a movie with smoking content,” Wagner said. The U.S. Centers for Disease Control and Prevention has warned that exposure to onscreen smoking in movies makes adolescents more likely to smoke. According to their 2010 report, tobacco use in films has decreased in recent years, but about half of popular movies still contained tobacco imagery in 2009, including 54 percent of those rated PG-13.

Scott Huettel, PhD, of Duke University, an expert in the neuroscience of decision-making who was unaffiliated with the study, said scientists have long known that visual cues often induce drug cravings. “This finding builds upon the growing body of evidence that addiction may be reinforced not just by drugs themselves, but by images and other experiences associated with those drugs,” Huettel said.

* * *

For a sample of related Situationist posts, see

Posted in Choice Myth, Deep Capture, Entertainment, Food and Drug Law, Marketing, Neuroscience | Leave a Comment »

Susan Fiske Discusses her Work on Different Types of Prejudices

Posted by The Situationist Staff on November 4, 2010

Situationist Contributor Susan Fiske discusses her research on stereotypes and prejudice and the systematic principles that influence how groups are treated in society.

* * *

* * *

For a sample of related Situationist posts, see “The Situation of Objectification,” “Women’s Situational Bind,” “Hey Dove! Talk to YOUR parent!,” and “You Shouldn’t Stereotype Stereotypes.”

Posted in Conflict, Distribution, Ideology, Implicit Associations, Neuroscience, Situationist Contributors, Video | 3 Comments »

Joseph LeDoux on the Neural Situation of Emotion and Memory

Posted by The Situationist Staff on October 19, 2010

Joseph LeDoux is a professor and a member of the Center for Neural Science and Department of Psychology at NYU. His work is focused on the brain mechanisms of emotion and memory. In addition to articles in scholarly journals, he is author of “The Emotional Brain: The Mysterious Underpinnings of Emotional Life” and “Synaptic Self: How Our Brains Become Who We Are.” He is a fellow of the American Association for the Advancement of Science, a fellow of the New York Academy of Science, a fellow of the American Academy of Arts and Science, and the recipient of the 2005 Fyssen International Prize in Cognitive Science. LeDoux is also a singer and songwriter in the rock band, The Amygdaloids.

* * *

For a sample of related Situationist posts, see “The Situation of Neuroeconomics and Situationist Economics,” “The Interior Situation of Complex Human Feelings,” “The Situation of Memory,” “Accidentally Us,” “The Affective Situation of Ethics and Mediation,” and “Situating Emotion.”

Posted in Emotions, Neuroscience, Video | Leave a Comment »

Daniel Dennett To Speak at Harvard Law School

Posted by The Situationist Staff on September 27, 2010

On Tuesday, September 28th, the HLS Student Association for Law and Mind Sciences (SALMS) is hosting a talk by Tufts professor Daniel Dennett entitled “Free Will, Responsibility, and the Brain.”

Professor Dennett is the Austin B. Fletcher Professor of Philosophy at Tufts University, as well as the co-director for the school’s Center for Cognitive Studies.  His work examines the intersection of philosophy and cognitive science in relation to religion, biology, science, and the human mind.  Professor Dennett has also contributed greatly to the fields of evolutionary theory and psychology.

Professor Dennett will turn a critical eye on the recent influx of work regarding the impact of neuroscience on scholarly concepts of moral and legal responsibility.

He will be speaking in Pound 101 from 12:00 – 1:00 p.m. Free burritos will be provided!

* * *

For more information, e-mail salms@law.harvard.edu.

For a sample of related Situationist posts, see “Daniel Dennett on the Situation of our Brain,” “Dan Dennett on our Interior Situation,” “Bargh and Baumeister and the Free Will Debate,” “Bargh and Baumeister and the Free Will Debate – Part II,” “The Death of Free Will and the Rise of Cheating,” “Clarence Darrow on the Situation of Crime and Criminals,” “Person X Situation X System Dynamics,” “‘Situation’ Trumps ‘Disposition’ – Part I & Part II,” “The (Unconscious) Situation of our Consciousness – Part I, Part II, Part III, & Part IV,” and “Coalition of the Will-less.”

Posted in Choice Myth, Evolutionary Psychology, Law, Legal Theory, Morality, Neuroscience, Philosophy | Leave a Comment »

Interview with Professor Joshua Greene

Posted by The Situationist Staff on September 26, 2010

From The Project on Law & Mind Sciences at Harvard Law School (PLMS):

Here is an outstanding interview with Joshua Greene by Harvard law student Jeff Pote. The interview, titled “On Moral Judgment and Normative Questions,” lasts just over 58 minutes. It was conducted as part of the Law and Mind Science Seminar at Harvard.

Bio:

Joshua D. Greene is an Assistant Professor of Psychology at Harvard University. He received his A.B. at Harvard University in 1997 where he was advised by Derek Parfit. He received his PhD in Philosophy at Princeton University in 2002 having written a dissertation on the foundation of ethics advised by David Lewis and Gilbert Harman. From 2002 to 2006, when he began at Harvard, he studied as a postdoctoral fellow at Princeton in the Neuroscience of Cognitive Control Laboratory under Jonathan Cohen. He is currently the Director of the Moral Cognition Lab.

* * *

* * *

Table of contents:

  • 00:00 — Title Frame
  • 00:23 — Introduction
  • 00:54 — How did your professional interests develop?
  • 04:58 — What are the questions that interest you?
  • 06:07 — What research projects are you currently working on?
  • 08:32 — Could you describe the original experiment that supported a dual-process view of moral judgment?
  • 13:13 — Has further research supported the dual-process view of moral judgment?
  • 16:43 — Could you explain how this, or any, psychological understanding could bear on normative questions of law and policy?
  • 24:39 — Could you provide an example of a situation where we should not rely on “blunt intuition?”
  • 30:42 — Can you see other places where psychological research illuminates normative questions of law or policy?
  • 37:40 — Do any of our moral judgments represent an objective moral reality (or moral facts)?
  • 44:38 — Could you provide an example of a “moral objectivist” solution that you find unpersuasive?
  • 49:33 — What is the problem of “free will” and what is its relevance for legal responsibility and punishment?
  • 56:26 — How will this emerging scientific understanding of the human animal affect law and moral philosophy?

Duration: 58:04

* * *

For a sample of related Situationist posts, see “Joshua Greene To Speak at Harvard Law School,” “2010 Law and Mind Sciences Conference,” “The Interior Situation of Honesty (and Dishonesty),” “Moral Psychology Primer,” “Law & the Brain,” “Pinker on the Situation of Morality,” “The Science of Morality,” and “Your Brain and Morality.”

Posted in Experimental Philosophy, Morality, Neuroscience, Video | 2 Comments »
