When Greg Dunn finished his Ph.D. in neuroscience at Penn in 2011, he bought himself a sensory deprivation tank as a graduation present. The gift marked a major life transition, from the world of science to a life of meditation and art.
Now a full-time artist living in Philadelphia, Dunn says he was inspired in his grad-student days by the spare beauty of neurons treated with certain stains. The Golgi stain, for example, will turn one or two neurons black against a golden background. “It has this Zen quality to it that really appealed to me,” Dunn said.
Our world is full of cyclic phenomena: many people, for example, experience their attention span changing over the course of a day. Perhaps you are more alert in the morning, while others peak in the afternoon. Bodily functions change cyclically, or “oscillate”, with environmental rhythms such as light and dark, and this in turn seems to govern our perception and behaviours. One might conclude that we are slaves to our own circadian rhythms, which in turn are slaves to environmental light-dark cycles.
A hard-to-prove idea in neuroscience is that such couplings between rhythms in the environment, rhythms in the brain, and our behaviours are also present at much finer time scales. Molly Henry and Jonas Obleser from the Max Planck Research Group “Auditory Cognition” now followed up on this recurrent idea by investigating the listening brain.
This idea holds fascinating implications for the way humans process speech and music: Imagine the melodic contour of a human voice or your favourite piece of music going up and down. If your brain becomes coupled to, or “entrained” by, these melodic changes, Henry and Obleser reasoned, then you might also be better prepared to expect fleeting but important sounds occurring in what the voice is saying, for example, a “d” versus a “t.”
The simple “fleeting sound” in the scientists’ experiment was a very short and very hard-to-detect silent gap (about one one-hundredth of a second) embedded in a simplified version of a melodic contour, which slowly and cyclically changed its pitch at a rate of three cycles per second (3 Hz).
To be able to track each listener’s brain activity on a millisecond basis, Henry and Obleser recorded the electroencephalographic signal from listeners’ scalps. First, the authors demonstrated that every listener’s brain was “dragged along” (this is what “entrainment”, from the French entraîner, literally means) by the slow cyclic changes in melody; listeners’ neural activity waxed and waned. Second, the listeners’ ability to detect the fleeting gaps hidden in the melodic changes was by no means constant over time. Instead, it also “oscillated” and was governed by the brain’s waxing and waning. The researchers could predict from a listener’s slow brain wave whether an upcoming gap would be detected or would slip under the radar.
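The logic of that prediction can be sketched with a toy model (this is an illustration of the idea, not the study’s actual analysis): if the brain is entrained at the 3 Hz stimulus rate, then the phase of the oscillation at the moment a gap occurs should modulate the probability of detecting it. The base hit rate and modulation depth below are made-up numbers.

```python
import math

F_MOD = 3.0  # stimulus modulation rate (Hz), as in the study

def oscillation_phase(t, freq=F_MOD, phase0=0.0):
    """Phase (radians) of an entrained neural oscillation at time t (s)."""
    return (2 * math.pi * freq * t + phase0) % (2 * math.pi)

def detection_probability(phase, base=0.5, depth=0.3):
    """Toy model: gap-detection probability waxes and wanes with
    oscillation phase (a cosine modulation around a base hit rate).
    The parameters are illustrative, not fitted to the data."""
    return base + depth * math.cos(phase)

# Gaps landing at different moments in the 3 Hz cycle are predicted
# to be detected with different probabilities.
for t in [0.0, 1 / 12, 1 / 6]:  # start, quarter and half of a 3 Hz cycle
    p = detection_probability(oscillation_phase(t))
    print(f"gap at t={t:.3f}s -> predicted hit rate {p:.2f}")
```

In the real experiment the direction of the relationship was estimated from each listener’s EEG phase rather than assumed, but the shape of the prediction is the same: detection rises and falls cyclically with the entrained rhythm.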
Why is that? “The slow waxings and wanings of brain activity are called neural oscillations. They regulate our ability to process incoming information,” Molly Henry explains. Jonas Obleser adds that “from these findings, an important conclusion emerges: All acoustic fluctuations we encounter appear to shape our brain’s activity. Apparently, our brain uses these rhythmic fluctuations to be prepared best for processing important upcoming information.”
The researchers hope to be able to use the brain’s coupling to its acoustic environment as a new measure to study the problems of listeners with hearing loss or people who stutter.
(via Science Daily)
Rejection and heartbreak can have effects every bit as physical as cuts and bruises, and understanding why could change your life.
It is only in the past 10 years that we have begun to unravel the basis of these hurt feelings in the brain. Scientists have found that the sting of rejection fires up the same neural pathways as the pain from a burn or bruise. Besides explaining why some people have thicker skins than others, this fact reveals an intimate link between your social life and your health - you really can die of loneliness.
Our language has long borrowed physical terms to describe our darkest emotions, with phrases such as “she broke my heart”, “he burned me”, and “he stabbed me in the back”. Such comparisons occur around the world: Germans talk about being emotionally “wounded”, while Tibetans describe rejection as a “hit in the heart”.
Although these expressions were always taken to be metaphorical, there had been some early hints that more was afoot. Animal studies in the 1990s, for instance, showed that morphine not only relieves pain after injury, but can also reduce the grief of rat pups separated from their mother.
The dorsal anterior cingulate cortex (dACC) region is known to be an important part of the brain’s “pain network”, determining how upsetting we find an injury. The response can vary depending on the situation; bumping your head might seem like a big deal in the office, but during a football game you might barely notice the blow.
Crucially, the more distressing you find an injury, the more the dACC lights up; in rejection experiments, those who reported feeling worst after a rejection showed the greatest activity in this region.
Other studies confirmed the link, finding that social rejection provokes not just the dACC but also the anterior insula, another part of the pain network that responds to our distress at a cut finger or broken bone. But although these results all suggest that our anguish after an insult mirrors our emotional response to a physical injury, it took until last year to show how those feelings might spill over into tangible bodily sensations.
Ethan Kross at the University of Michigan recruited 40 people who had been through a break-up within the past six months and asked them to view a photo of their ex while reclining in an fMRI scanner. He also instructed them to think in detail about the break-up. After a brief intermission, the volunteers’ forearms were given a painful jolt of heat, allowing Kross to compare brain activity associated with the two situations.
As expected, the dACC and the anterior insula lit up in both cases. But surprisingly, the brain’s sensory centres, which reflect the physical discomfort that accompanies a wound, also showed pronounced activity - the first evidence that the feeling of heartbreak can literally hurt (PNAS, vol 108, p 6270).
Cementing the connection between physical pain and emotional anguish, further studies have found that the two experiences sometimes feed off one another. When people feel excluded, they are more sensitive to the burn of a hot probe, and submerging a hand in ice water for 1 minute leads people to report feeling ignored and isolated.
Numbing the hurt
The converse is also true: soothing the body’s response to pain can alleviate the sting of an insult. Nathan DeWall of the University of Kentucky, Lexington, recruited 62 students who either dosed themselves up on two paracetamol (acetaminophen) pills every day for three weeks, or took a placebo. Each evening, the students completed a questionnaire measuring their feelings of rejection during the day. By the end of the three weeks, the group on paracetamol had developed significantly thicker skins, reporting fewer hurt feelings during their day-to-day encounters.
“The idea that you can actually affect people’s experience socially with what is seen as such a mild, common drug [as paracetamol], that was a rather important validation,” says Geoff MacDonald at the University of Toronto, Canada, one of the authors of the study. “This is exactly the kind of thing you would expect if this social pain thing is really true.” Needless to say, due to the harmful side-effects of pain-killing drugs, you should not try this for yourself.
The work might explain why certain people find it harder to withstand the rough and tumble of their social lives than others. Extroverts have been shown to have a higher pain tolerance than introverts, and this is mirrored by their greater tolerance for social rejection.
These diverse reactions may be partly genetic. Naomi Eisenberger’s team has shown that people with a small mutation to the gene OPRM1, which codes for one of the body’s opioid receptors, are more likely to slip into depressed feelings after rejection than are those without the mutation. This same mutation also makes people more sensitive to physical pain, and they typically need more morphine following surgery.
Importantly, these receptors are particularly dense in the dACC. As you might expect, in people with the mutation, the dACC tends to react more strongly to perceived insults (PNAS, vol 106, p 15079).
When you consider our ancestors’ dependence on their social connections for survival, it makes sense for us to have evolved to feel rejection so keenly. Being kicked out of a tribe would have been akin to a death sentence, exposing our predecessors to starvation and predation. As a result, we needed a warning system that alerts us to a potential spat, preventing us from causing further offence and teaching us to toe the line in the future. The pain network, able to give us a jolt when we face physical injury from a fire or knife edge, would have been ideally equipped to curb our social behaviour.
Some have taken this line of thinking further, suggesting it might hold the secret to some of the more mysterious symptoms of loneliness. People who are lonely tend to have an increase in the expression of genes for inflammation, particularly in immune cells, and a decrease in the expression of antiviral genes.
Why would the body deal with isolation in this way? “That was kind of a puzzle to us for the last five or 10 years,” says Steve Cole, a behavioural geneticist at the University of California, Los Angeles. An answer began to emerge when he looked at the way different conditions affect people with different social lives. Viruses spread quickly among large groups of people, whereas life-threatening bacterial infections generally come from wounds which our ancestors may have been more likely to receive when alone, without the protection of their peers. As a result, Cole suggests, our immune system may be “listening in” on our brain’s signals of social status. If it looks as if we are enjoying a lively social life in a big group, we are geared up to deal with viruses; if we feel alone, the dACC and other regions tune up inflammation, which helps us battle bacterial infection.
(via New Scientist)
For the very first time researchers have streamed braille patterns directly into a blind patient’s retina, allowing him to read four-letter words accurately and quickly with an ocular neuroprosthetic device. The device, the Argus II, has been implanted in over 50 patients, many of whom can now see color, movement and objects. It uses a small camera mounted on a pair of glasses, a portable processor to translate the signal from the camera into electrical stimulation, and a microchip with electrodes implanted directly on the retina.
The study was authored by researchers at Second Sight, the company that developed the device, and was published in Frontiers in Neuroprosthetics on 22 November.
“In this clinical test with a single blind patient, we bypassed the camera that is the usual input for the implant and directly stimulated the retina. Instead of feeling the braille on the tips of his fingers, the patient could see the patterns we projected and then read individual letters in less than a second with up to 89% accuracy,” explains researcher Thomas Lauritzen, lead author of the paper.
Similar in concept to successful cochlear implants, the visual implant uses a grid of 60 electrodes — attached to the retina — to stimulate patterns directly onto the nerve cells. For this study, the researchers at Second Sight used a computer to stimulate six of these points on the grid to project the braille letters. A series of tests was conducted with single letters as well as words from two to four letters long. The patient was shown each letter for half a second and had up to 80% accuracy for short words.
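Six electrodes suffice because a braille cell is a 2×3 grid of dots. A minimal sketch of the mapping from letters to electrode patterns might look like this (the dot numbering is the standard braille convention; the letter table here covers only a few letters, and the electrode assignment is illustrative, not Second Sight’s actual stimulation protocol):

```python
# Standard braille cell dot numbering:
#   1 4
#   2 5
#   3 6
# One retinal electrode is assigned per dot, so six electrodes
# can present any braille letter.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5},
    "e": {1, 5}, "f": {1, 2, 4}, "g": {1, 2, 4, 5},
}

def electrodes_for(letter):
    """Electrode indices (1-6) to stimulate for one braille letter."""
    return sorted(BRAILLE_DOTS[letter])

def spell(word):
    """Electrode patterns presented in sequence, one letter at a time
    (the study flashed each letter for about half a second)."""
    return [electrodes_for(ch) for ch in word]

print(spell("face"))  # [[1, 2, 4], [1], [1, 4], [1, 5]]
```

Because each letter is a distinct subset of six stimulation sites, reading accuracy becomes a direct test of how well the patient resolves individual electrodes, which is the point Lauritzen makes below.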
“There was no input except the electrode stimulation and the patient recognized the braille letters easily. This proves that the patient has good spatial resolution because he could easily distinguish between signals on different, individual electrodes.” says Lauritzen.
According to Silvestro Micera at EPFL’s Center for Neuroprosthetics and scientific reviewer for the article, “this study is a proof of concept that points to the importance of clinical experiments involving new neuroprosthetic devices to improve the technology and innovate adaptable solutions.”
Developed primarily for sufferers of the genetic disease retinitis pigmentosa (RP), the Argus II implant has been shown to restore limited reading capability of large conventional letters and short words when used with the camera. While reading should improve with future iterations of the Argus II, the current study shows how the Argus II could be adapted to provide an alternative and potentially faster method of text reading with the addition of letter recognition software. This ability to perform image processing in software prior to sending the signal to the implant is a unique advantage of the Argus II.
(via Science Daily)
Common wisdom has it that if the visual cortex in the brain is deprived of visual information in early infanthood, it may never properly develop its functional specialization, making sight restoration later in life almost impossible.
Scientists at the Hebrew University of Jerusalem and in France have now shown that blind people — using specialized photographic and sound equipment — can actually “see” and describe objects and even identify letters and words.
The images are converted into “soundscapes” using a predictable algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. As published in a previous study by the same group, blind participants using this device reach a level of visual acuity that technically surpasses the World Health Organization (WHO) criterion for blindness.
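The general scheme of such visual-to-auditory substitution can be sketched as follows (this is a simplified illustration in the spirit of systems like The vOICe; the group’s exact algorithm is not specified here): the image is scanned column by column from left to right over time, vertical position maps to pitch, and pixel brightness maps to loudness.

```python
# Illustrative sketch of an image-to-soundscape conversion.
# Assumptions (not from the article): a 500-5000 Hz pitch range,
# top rows mapped to higher frequencies, brightness in [0, 1].

def image_to_soundscape(image, f_low=500.0, f_high=5000.0):
    """image: list of rows (top first) of brightness values in [0, 1].
    Returns one time slice per column, each a list of
    (frequency_hz, amplitude) pairs to be synthesized as tones."""
    n_rows = len(image)
    n_cols = len(image[0])
    slices = []
    for col in range(n_cols):          # left-to-right scan = time axis
        partials = []
        for row in range(n_rows):      # top row = highest pitch
            freq = f_high - (f_high - f_low) * row / max(n_rows - 1, 1)
            amp = image[row][col]      # brightness = loudness
            if amp > 0:
                partials.append((freq, amp))
        slices.append(partials)
    return slices

# A bright dot in the top-left corner of a 3x3 image: the first time
# slice carries one high-pitched partial, the rest are silent.
img = [[1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0]]
print(image_to_soundscape(img))
```

Because the mapping is fixed and predictable, users can learn to decode it; with training, the spatial layout of a scene becomes recoverable from the temporal and spectral structure of the sound.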
The resulting sight, though not conventional in that it does not involve activation of the ophthalmological system of the body, is no less visual in the sense that it actually activates the visual identification network in the brain.
“The adult brain is more flexible than we thought,” says Prof. Amir Amedi. In fact, this and other recent research from various groups has demonstrated that multiple brain areas are not specific to their input sense (vision, audition or touch) but rather to the task, or computation, they perform, which may be carried out using various modalities.
Are spirit mediums really communicating with the dead? My Magic 8 Ball says “Outlook not so good.”
But a new brain study of Brazilian mediums shows that something decidedly strange is occurring during the famous “trance state,” and no one has a ready answer to explain exactly what’s going on.
Ten mediums—five less expert and five experienced—were injected with a radioactive tracer to capture their brain activity during normal writing and during the practice of psychography, which involves allegedly channeling written communication from the “other side” while in a trance-like state. The subjects were scanned using SPECT (single photon emission computed tomography) to highlight the areas of the brain that are active and inactive during the practice.
The mediums ranged from 15 to 47 years of automatic writing experience, performing up to 18 psychographies per month. All were right-handed, in good mental health, and not currently using any psychiatric drugs. All reported that during the study they were able to reach their usual trance-like state during the psychography task and were in their regular state of consciousness during the control task.
Here’s what happened: The experienced psychographers showed lower levels of activity in the left hippocampus (limbic system), right superior temporal gyrus, and the frontal lobe regions of the brain during psychography compared to their normal (non-trance) writing. The frontal lobe areas are associated with reasoning, planning, generating language, movement, and problem solving, suggesting that the mediums were experiencing reduced focus, lessened self-awareness and fuzzy consciousness during psychography.
For the less experienced mediums, exactly the opposite was observed–increased levels of activity in the same frontal areas during psychography compared to normal writing, and the difference was significant compared to the experienced mediums. What this probably means is that the less experienced mediums were trying really hard. The force is not yet strong with them.
But here’s the interesting part: the writing samples produced were analyzed and it was found that the complexity scores for the psychographed content were higher than those for the control writing across the board. In particular, the more experienced mediums showed higher complexity scores, which typically would require more activity in the frontal and temporal lobes–but that’s precisely the opposite of what was observed.
To put this another way, the low level of activity in the experienced mediums’ frontal lobes should have resulted in vague, unfocused, obtuse garble. Instead, it resulted in more complex writing samples than they were able to produce while not entranced.
Why? No one’s sure.
The researchers speculate that maybe as frontal lobe activity decreases, “the areas of the brain that support mediumistic writing are further disinhibited (similar to alcohol or drug use) so that the overall complexity can increase.” In a similar manner, they say, improvisational music performance is associated with lower levels of frontal lobe activity which allows for more creative activity.
The big problem with that explanation is that improvisational music performance and alcohol/drug consumption states are, in the researchers’ words, “quite peculiar and distinct from psychography.”
“While the exact reason is at this point elusive, our study suggests there are neurophysiological correlates of this state,” says study co-author Andrew Newberg, MD, director of Research at the Jefferson-Myrna Brind Center of Integrative Medicine.
Neurophysiological correlates indeed, but to what?
The study appears in the November 16th edition of the online journal PLOS ONE.
What’s the Latest Development?
To whom would you bequeath all the memories of your life? That is a question we may one day face if cellular preservation continues to advance, allowing neural connections to be stored and “read” after a person’s biological death. One group, the Brain Preservation Foundation, is offering a $100,000 prize to the first scientific team to demonstrate “that the entire synaptic connectivity (‘connectome’) of mammalian brains can be perfectly preserved using either chemical preservation or more expensive cryopreservation techniques.”
What’s the Big Idea?
Some chemical preservation procedures already allow for certain species’ connectomes—zebrafish, for example—to be scanned and uploaded. While the reading process is not yet sophisticated enough to extract entire memories, that is the direction in which research is going. John Smart of the Brain Preservation Foundation believes that “having the option of chemical brain preservation at death, if the science is validated, may help all our societies become significantly more science-, future-, progress-, preservation-, sustainability-, truth and justice-, and community-oriented in coming years.”
(via Big Think)
Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.
“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Cognitive Auditory Neuroscience. “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”
Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.
“Infants listen first to sounds of language and only later to its meaning,” Brandt said. He noted that newborns’ extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language — “the most musical aspects of speech.”
The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.
The authors define music as “creative play with sound.” They said the term “music” implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as “an intentional and often repetitive vocal performance,” Brandt said. “They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later.”
Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. “We show that music and language develop along similar time lines,” he said.
Infants initially don’t distinguish well between their native language and all the languages of the world, Brandt said. Throughout the first year of life, they gradually home in on their native language. Similarly, infants initially don’t distinguish well between their native musical traditions and those of other cultures; they start to home in on their own musical culture at the same time that they home in on their native language, he said.
The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed — a feature of musical hearing that has often been overlooked, Brandt said.
“You can’t distinguish between a piano and a trumpet if you can’t process what you’re hearing at the same speed that you listen for the difference between ‘ba’ and ‘da,’” he said. “In this and many other ways, listening to music and speech overlap.” The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.
“While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child’s view,” Brandt said. “We conclude that music merits a central place in our understanding of human development.”
Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. “A lot of people with language deficits also have musical deficits,” Brandt said.
More research could also shed light on rehabilitation for people who have suffered a stroke. “Music helps them reacquire language, because that may be how they acquired language in the first place,” Brandt said.
The research was supported by Rice’s Office of the Vice Provost for Interdisciplinary Initiatives, the Ken Kennedy Institute for Information Technology and the Shepherd School of Music.
(via Science Daily)
The human capacities for self-perception, self-reflection and the development of consciousness are among the unsolved mysteries of neuroscience. Despite modern imaging techniques, it is still impossible to fully visualize what goes on in the brain when people move from an unconscious state to consciousness. The problem lies in the fact that it is difficult to watch the brain during this transition. Although this process occurs every time a person awakens from sleep, the brain’s basal activity is usually greatly reduced during deep sleep. This makes it impossible to clearly delineate the specific brain activity underlying the regained self-perception and consciousness during the transition to wakefulness from the global changes in brain activity that take place at the same time.
Scientists from the Max Planck Institutes of Psychiatry in Munich and for Human Cognitive and Brain Sciences in Leipzig and from Charité in Berlin have now studied people who are aware that they are dreaming while being in a dream state, and are also able to deliberately control their dreams. Those so-called lucid dreamers have access to their memories during lucid dreaming, can perform actions and are aware of themselves – although remaining unmistakably in a dream state and not waking up. As author Martin Dresler explains, “In a normal dream, we have a very basal consciousness, we experience perceptions and emotions but we are not aware that we are only dreaming. It’s only in a lucid dream that the dreamer gets a meta-insight into his or her state.”
By comparing the activity of the brain during one of these lucid periods with the activity measured immediately before in a normal dream, the scientists were able to identify the characteristic brain activities of lucid awareness.
“The general basic activity of the brain is similar in a normal dream and in a lucid dream,” says Michael Czisch, head of a research group at the Max Planck Institute of Psychiatry. “In a lucid state, however, the activity in certain areas of the cerebral cortex increases markedly within seconds. The involved areas of the cerebral cortex are the right dorsolateral prefrontal cortex, to which commonly the function of self-assessment is attributed, and the frontopolar regions, which are responsible for evaluating our own thoughts and feelings. The precuneus is also especially active, a part of the brain that has long been linked with self-perception.” The findings confirm earlier studies and have made the neural networks of a conscious mental state visible for the first time.
(via Science Daily)