“Tetraphobia: the practice of avoiding instances of the number 4. It is a superstition most common in East Asian and Southeast Asian regions. In Cantonese-speaking regions of China, 14 and 24 are considered more unlucky than 4 on its own, since 14 sounds like “will certainly die” (實死) and 24 like “easy to die” (易死). Where East Asian and Western cultures blend, such as in Hong Kong and Singapore, some buildings skip both 13 and 14 as floor numbers, along with all the other 4s.
This is the building I work at. The main elevators show every floor on the buttons but you cannot access them. Here I have found the service elevator to prove there are no such floors.”
Loved this cultural curiosity from my friend’s travels abroad. In numerology, 13 reduces to 4 (1+3). For this reason, many feng shui consultants advise against these numbers when buildings are being outfitted. It is not uncommon to see these numbers skipped in street addresses as well.
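If you want to poke at the rules yourself, here is a minimal sketch in Python (my own toy, not from the original post); the skip rule of “13 plus anything containing a 4” is an assumption based on the buildings described above.

```python
# Toy sketch of the two superstition rules discussed above.
# Assumption: a tetraphobic building skips floor 13 and any floor
# whose label contains the digit 4 (4, 14, 24, 40-49, ...).

def digit_root(n: int) -> int:
    """Numerological reduction: sum digits until one remains (13 -> 1+3 -> 4)."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def floor_labels(count: int) -> list[int]:
    """First `count` displayed floor numbers after tetraphobic skipping."""
    labels, candidate = [], 0
    while len(labels) < count:
        candidate += 1
        if candidate == 13 or "4" in str(candidate):
            continue  # unlucky: never printed on a reachable button
        labels.append(candidate)
    return labels

print(digit_root(13))    # 4 -- why 13 gets swept up in the superstition
print(floor_labels(12))  # [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 15]
```

So a building whose top button reads “15” may only be twelve storeys tall, which is exactly the discrepancy a service elevator gives away.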
One of the stranger personality assessments.
Rejection and heartbreak can have effects every bit as physical as cuts and bruises, and understanding why could change your life.
It is only in the past 10 years that we have begun to unravel the basis of these hurt feelings in the brain. Scientists have found that the sting of rejection fires up the same neural pathways as the pain from a burn or bruise. Besides explaining why some people have thicker skins than others, this fact reveals an intimate link between your social life and your health - you really can die of loneliness.
Our language has long borrowed physical terms to describe our darkest emotions, with phrases such as “she broke my heart”, “he burned me”, and “he stabbed me in the back”. Such comparisons occur around the world: Germans talk about being emotionally “wounded”, while Tibetans describe rejection as a “hit in the heart”.
Although these expressions were always taken to be metaphorical, there had been some early hints that more was afoot. Animal studies in the 1990s, for instance, showed that morphine not only relieves pain after injury, but can also reduce the grief of rat pups separated from their mother.
The dorsal anterior cingulate cortex (dACC) region is known to be an important part of the brain’s “pain network”, determining how upsetting we find an injury. The response can vary depending on the situation; bumping your head might seem like a big deal in the office, but during a football game you might barely notice the blow.
Crucially, the more distressing you find an injury, the more the dACC lights up: those who reported feeling worst after rejection showed the greatest activity in this region.
Other studies confirmed the link, finding that social rejection provokes not just the dACC but also the anterior insula, another part of the pain network that responds to our distress at a cut finger or broken bone. But although these results all suggest that our anguish after an insult is the same as our emotional response to an injury, it took until last year to show how those feelings might spill over into tangible bodily sensations.
Ethan Kross at the University of Michigan recruited 40 people who had been through a break-up within the past six months and asked them to view a photo of their ex while reclining in an fMRI scanner. He also instructed them to think in detail about the break-up. After a brief intermission, the volunteers’ forearms were given a painful jolt of heat, allowing Kross to compare brain activity associated with the two situations.
As expected, the dACC and the anterior insula lit up in both cases. But surprisingly, the brain’s sensory centres, which reflect the physical discomfort that accompanies a wound, also showed pronounced activity - the first evidence that the feeling of heartbreak can literally hurt (PNAS, vol 108, p 6270).
Cementing the connection between physical pain and emotional anguish, further studies have found that the two experiences sometimes feed off one another. When people feel excluded, they are more sensitive to the burn of a hot probe, and submerging a hand in ice water for 1 minute leads people to report feeling ignored and isolated.
Numbing the hurt
The converse is also true: soothing the body’s response to pain can alleviate the sting of an insult. Nathan DeWall of the University of Kentucky, Lexington, recruited 62 students who either dosed themselves up on two paracetamol (acetaminophen) pills every day for three weeks, or took a placebo. Each evening, the students completed a questionnaire measuring their feelings of rejection during the day. By the end of the three weeks, the group on paracetamol had developed significantly thicker skins, reporting fewer hurt feelings during their day-to-day encounters.
“The idea that you can actually affect people’s experience socially with what is seen as such a mild, common drug [as paracetamol], that was a rather important validation,” says Geoff MacDonald at the University of Toronto, Canada, one of the authors of the study. “This is exactly the kind of thing you would expect if this social pain thing is really true.” Needless to say, due to the harmful side-effects of pain-killing drugs, you should not try this for yourself.
The work might explain why certain people find it harder to withstand the rough and tumble of their social lives than others. Extroverts have been shown to have a higher pain tolerance than introverts, and this is mirrored by their greater tolerance for social rejection.
These diverse reactions may be partly genetic. Naomi Eisenberger’s team has shown that people with a small mutation in the gene OPRM1, which codes for one of the body’s opioid receptors, are more likely to slip into depressed feelings after rejection than those without the mutation. The same mutation also makes people more sensitive to physical pain, and carriers typically need more morphine following surgery.
Importantly, these receptors are particularly dense in the dACC. As you might expect, in people with the mutation, the dACC tends to react more strongly to perceived insults (PNAS, vol 106, p 15079).
When you consider our ancestors’ dependence on their social connections for survival, it makes sense that we evolved to feel rejection so keenly. Being kicked out of a tribe would have been akin to a death sentence, exposing our predecessors to starvation and predation. As a result, we needed a warning system to alert us to a potential spat, preventing us from causing further offence and teaching us to toe the line in the future. The pain network, able to give us a jolt when we face physical injury from a fire or knife edge, would have been ideally equipped to curb our social behaviour.
Some have taken this line of thinking further, suggesting it might hold the secret to some of the more mysterious symptoms of loneliness. People who are lonely tend to have an increase in the expression of genes for inflammation, particularly in immune cells, and a decrease in the expression of antiviral genes.
Why would the body deal with isolation in this way? “That was kind of a puzzle to us for the last five or 10 years,” says Steve Cole, a behavioural geneticist at the University of California, Los Angeles. An answer began to emerge when he looked at the way different conditions affect people with different social lives. Viruses spread quickly among large groups of people, whereas life-threatening bacterial infections generally come from wounds which our ancestors may have been more likely to receive when alone, without the protection of their peers. As a result, Cole suggests, our immune system may be “listening in” on our brain’s signals of social status. If it looks as if we are enjoying a lively social life in a big group, we are geared up to deal with viruses; if we feel alone, the dACC and other regions tune up inflammation, which helps us battle bacterial infection.
(via New Scientist)
Chinese poet and palindromist Su Hui lost her husband to a concubine in the fourth century. To assuage her grief and to lure him back, she composed an ingenious array of 841 characters that can be read forward, backward, horizontally, vertically, and diagonally.
Each seven-character segment corresponds to a poetic line, and can be read in either direction. At the end of each segment, “you encounter a junction of meridians and can choose which direction to go,” explains anthologist David Hinton. “You can begin anywhere, and the poem ends after four lines have been chosen. This structure generates 2,848 possible poems.”
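The counting is easy to mimic even without the real grid. Here is a toy path count in the same spirit (entirely my invention; Su Hui’s actual 29-by-29 layout and its junction rules are far richer): segments form a directed graph, a poem is a walk of four segments, and each segment can be read in either direction.

```python
# Toy model of junction-based poem counting. The graph below is
# hypothetical and much smaller than Su Hui's actual grid.

# Which segment may follow which at each junction (invented).
graph = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D", "A"],
    "D": ["A", "B"],
}

def count_poems(length: int = 4) -> int:
    total = 0
    for start in graph:
        paths = [[start]]
        for _ in range(length - 1):
            paths = [p + [n] for p in paths for n in graph[p[-1]]]
        # every segment of a finished poem can be read in 2 directions
        total += len(paths) * 2 ** length
    return total

print(count_poems())  # 4 starts x 2^3 branchings x 2^4 readings = 512
```

Even this cramped four-segment toy yields 512 readings, so it is easy to believe that 29 columns of richer junctions climb into the thousands.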
It’s said that Su Hui’s husband was so moved that he sent away the concubine and rejoined her.
(via Futility Closet)
Contrary to the prevailing theories that music and language are cognitively separate or that music is a byproduct of language, theorists at Rice University’s Shepherd School of Music and the University of Maryland, College Park (UMCP) advocate that music underlies the ability to acquire language.
“Spoken language is a special type of music,” said Anthony Brandt, co-author of a theory paper published online this month in the journal Frontiers in Psychology (in its Auditory Cognitive Neuroscience section). “Language is typically viewed as fundamental to human intelligence, and music is often treated as being dependent on or derived from language. But from a developmental perspective, we argue that music comes first and language arises from music.”
Brandt, associate professor of composition and theory at the Shepherd School, co-authored the paper with Shepherd School graduate student Molly Gebrian and L. Robert Slevc, UMCP assistant professor of psychology and director of the Language and Music Cognition Lab.
“Infants listen first to sounds of language and only later to its meaning,” Brandt said. He noted that newborns’ extensive abilities in different aspects of speech perception depend on the discrimination of the sounds of language — “the most musical aspects of speech.”
The paper cites various studies that show what the newborn brain is capable of, such as the ability to distinguish the phonemes, or basic distinctive units of speech sound, and such attributes as pitch, rhythm and timbre.
The authors define music as “creative play with sound.” They said the term “music” implies an attention to the acoustic features of sound irrespective of any referential function. As adults, people focus primarily on the meaning of speech. But babies begin by hearing language as “an intentional and often repetitive vocal performance,” Brandt said. “They listen to it not only for its emotional content but also for its rhythmic and phonemic patterns and consistencies. The meaning of words comes later.”
Brandt and his co-authors challenge the prevailing view that music cognition matures more slowly than language cognition and is more difficult. “We show that music and language develop along similar time lines,” he said.
Infants initially don’t distinguish well between their native language and all the languages of the world, Brandt said. Throughout the first year of life, they gradually home in on their native language. Similarly, infants initially don’t distinguish well between their native musical traditions and those of other cultures; they start to home in on their own musical culture at the same time that they home in on their native language, he said.
The paper explores many connections between listening to speech and music. For example, recognizing the sound of different consonants requires rapid processing in the temporal lobe of the brain. Similarly, recognizing the timbre of different instruments requires temporal processing at the same speed — a feature of musical hearing that has often been overlooked, Brandt said.
“You can’t distinguish between a piano and a trumpet if you can’t process what you’re hearing at the same speed that you listen for the difference between ‘ba’ and ‘da,’” he said. “In this and many other ways, listening to music and speech overlap.” The authors argue that from a musical perspective, speech is a concert of phonemes and syllables.
“While music and language may be cognitively and neurally distinct in adults, we suggest that language is simply a subset of music from a child’s view,” Brandt said. “We conclude that music merits a central place in our understanding of human development.”
Brandt said more research on this topic might lead to a better understanding of why music therapy is helpful for people with reading and speech disorders. People with dyslexia often have problems with the performance of musical rhythm. “A lot of people with language deficits also have musical deficits,” Brandt said.
More research could also shed light on rehabilitation for people who have suffered a stroke. “Music helps them reacquire language, because that may be how they acquired language in the first place,” Brandt said.
The research was supported by Rice’s Office of the Vice Provost for Interdisciplinary Initiatives, the Ken Kennedy Institute for Information Technology and the Shepherd School of Music.
(via Science Daily)
In Japan, kotodama is the belief that mystical powers dwell in words and names. A curse invites sakanagi: the side effects of casting a spell eventually return to the caster, so those who curse someone have a price of their own to pay.
Transderivational search (often abbreviated to TDS) is a psychological and cybernetics term for a search for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory.
Unlike usual searches, which look for literal (i.e. exact, logical, or regular expression) matches, a transderivational search is a search for a possible meaning or possible match as part of communication, without which an incoming communication could not be made sense of at all. It is thus an integral part of processing language, and of attaching meaning to communication.
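In code, the contrast is between an exact lookup and an approximate one. Here is a minimal sketch using Python’s standard-library difflib; the candidate “meanings” and the utterance are invented for illustration.

```python
# Literal search vs. a TDS-style fuzzy search for a plausible match.
import difflib

# Hypothetical candidate meanings a listener might settle on.
meanings = ["comfortable", "come for a table", "car full of tables"]

heard = "come-for-table"  # an ambiguous utterance

# Literal search: no exact match, so the communication fails outright.
print(heard in meanings)  # False

# Fuzzy search: rank candidates by similarity and keep the close
# ones -- a crude analogue of searching for a possible meaning.
print(difflib.get_close_matches(heard, meanings, n=2, cutoff=0.5))
```

The fuzzy search returns the nearest candidates rather than failing, which is roughly what a listener does when an utterance admits more than one meaning.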
A psychological example of TDS is in Ericksonian hypnotherapy, where vague suggestions are used that the patient must process intensely to find their own meanings for, thus ensuring that the practitioner does not intrude his own beliefs into the subject’s inner world.
Because TDS is a compelling, automatic and unconscious state of internal focus and processing (i.e. a type of everyday trance state), often marked by a lack of internal certainty or an openness to finding an answer (since something is being checked at that moment), it can be utilized or interrupted in order to create or deepen trance.
TDS is a fundamental part of human language and cognitive processing. Arguably, every word or utterance a person hears, for example, and everything they see or feel and take note of, results in a very brief trance while TDS is carried out to establish a contextual meaning for it.
Although TDS is often associated with spoken language, it can be induced in any perceptual system. Thus Milton Erickson’s “hypnotic handshake” is a technique that leaves the other person performing TDS in search of meaning to a deliberately ambiguous use of touch.
Language deprivation experiments have been attempted several times through history, isolating infants from the normal use of spoken or signed language in an attempt to discover the fundamental character of human nature or the origin of language.
The American literary scholar Roger Shattuck called this kind of research study “The Forbidden Experiment” due to the exceptional deprivation of ordinary human contact it requires. Although not designed to study language, similar experiments on non-human primates utilising complete social deprivation resulted in psychosis.
Ancient records suggest that this kind of experiment was carried out from time to time, though the authenticity of the records is unconfirmable. An early account of such an experiment can be found in Herodotus’s Histories. According to Herodotus, after carrying out the experiment, the Egyptian pharaoh Psamtik I concluded that the Phrygians must predate the Egyptians, since the children had first spoken something similar to the Phrygian word bekos, meaning “bread.”
An alleged experiment carried out by Holy Roman Emperor Frederick II in the 13th century saw young infants raised without human interaction in an attempt to determine if there was a natural language that they might demonstrate once their voices matured. It is claimed he was seeking to discover what language would have been imparted unto Adam and Eve by God.
Computational models decode and reconstruct neural responses to speech.
The brain’s electrical activity can be decoded to reconstruct which words a person is hearing, researchers report today in PLoS Biology.
Brian Pasley, a neuroscientist at the University of California, Berkeley, and his colleagues recorded the brain activity of 15 people who were undergoing evaluation before unrelated neurosurgical procedures. The researchers placed electrodes on the surface of the superior temporal gyrus (STG), part of the brain’s auditory system, to record the subjects’ neuronal activity in response to pre-recorded words and sentences.
The STG is thought to participate in the intermediate stages of speech processing, such as the transformation of sounds into phonemes, or speech sounds, yet little is known about which specific features, such as syllable rate or volume fluctuations, it represents.
“A major goal is to figure out how the human brain allows us to understand speech despite all the variability, such as a male or female voice, or fast or slow talkers,” says Pasley. “We build computational models that test hypotheses about how the brain accomplishes this feat, and then see if these models match the brain recordings.”
To analyse the data from the electrode recordings, the researchers used an algorithm designed to extract key features of spoken words, such as the time period and volume changes between syllables.
They then entered these data into a computational model to reconstruct ‘voicegrams’ showing how these features change over time for each word. They found that these voicegrams could reproduce the sounds the patients heard accurately enough for individual words to be recognized.
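As a rough illustration of the idea, here is a minimal sketch of linear stimulus reconstruction on synthetic data (emphatically not Pasley’s actual model, electrode counts, or data): ridge regression learns a map from multi-electrode activity back to a spectrogram-like representation.

```python
# Toy stimulus reconstruction: ridge regression from simulated
# electrode activity back to a spectrogram. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 15, 32

# Hypothetical "true" spectrogram the subject heard (time x frequency).
spectrogram = rng.random((n_samples, n_freq_bins))

# Simulated neural responses: each electrode linearly mixes the
# frequency bands, plus noise -- a stand-in for STG recordings.
mixing = rng.normal(size=(n_freq_bins, n_electrodes))
neural = spectrogram @ mixing + 0.1 * rng.normal(size=(n_samples, n_electrodes))

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
X, Y = neural, spectrogram
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

reconstruction = X @ W
corr = np.corrcoef(reconstruction.ravel(), Y.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

Because the toy gives only 15 electrodes for 32 frequency bands, the reconstruction is necessarily partial, which echoes why the real voicegrams were recognizable rather than perfect.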
During speech perception, the brain encodes and interprets complex acoustic signals composed of multiple frequencies that change over timescales as small as ten-thousandths of a second. The latest findings are a step towards understanding the processes by which the human brain converts sounds into meanings, and could have a number of important clinical applications.
“If we can better understand how each brain area participates in this process,” says Pasley, “we can start to understand how these neural mechanisms malfunction during communication disorders such as aphasia.”
Pasley and his team are interested in the similarities between perceived and imagined speech. “There is some evidence that perception and imagery may be pretty similar in the brain,” he says.
These similarities could eventually lead to the development of brain–computer interfaces that decode brain activity associated with the imagined speech of people who are unable to communicate, such as stroke patients or those with motor neurone disease or locked-in syndrome.
Sophie Scott, a neuroscientist at University College London, who studies speech perception and production, says that she has some reservations about the accuracy of the voicegrams. She would also like to see the pattern of responses for non-speech stimuli, such as music or unintelligible sounds, for comparison. But the authors “did an amazing job of transforming recordings of the neural responses to speech and relating these to the original sounds,” she says. “This approach may enable them to start determining the kinds of transformations and representations underlying normal speech perception.”
“I have not here been considering the literary use of language, but merely language as an instrument for expressing and not for concealing or preventing thought. Others have come near to claiming that all abstract words are meaningless, and have used this as a pretext for advocating a kind of political quietism. Since you don’t know what Fascism is, how can you struggle against Fascism? One need not swallow such absurdities as this, but one ought to recognise that the present political chaos is connected with the decay of language, and that one can probably bring about some improvement by starting at the verbal end.
If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself. Political language — and with variations this is true of all political parties, from Conservatives to Anarchists — is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind. One cannot change this all in a moment, but one can at least change one’s own habits, and from time to time one can even, if one jeers loudly enough, send some worn-out and useless phrase — some jackboot, Achilles’ heel, hotbed, melting pot, acid test, veritable inferno, or other lump of verbal refuse — into the dustbin, where it belongs.”
Orwell knew a thing or two, but part of the trouble is that people do not care enough to understand what they hear; they are so distracted by the immediate sphere of their lives that they do not look closely beyond their own actions, and that ensures the state of our decay. The truth, or the absence thereof, resides in the actions our leaders take. It is there to see for those who are willing to observe. In a perfect world politicians would speak succinctly, but it is not a perfect world and they certainly do not. And it is precisely because I agree with Orwell’s view of language that I must play devil’s advocate a little.
Is it the politician’s fault for using convoluted language so that the citizen cannot easily see the truth, or is it the citizen’s fault for not pursuing clarity in the midst of confusion?
It is a bit of a chicken-and-egg scenario. If one can transcend the paradox, the mental exercise shows that the answer ultimately does not matter; each is responsible for the existence of the other, and the same is true of the politician and the citizen. It is not enough to begin within, as Orwell urges us; we must endeavor to understand what we do not before we can gain the momentum to triumphantly put doublespeak to rest.
Consider “democracy”. Western governments enforce uniformity with violence to ensure the “public good”. And they define the “truth”.
You are veering off into a whole other discussion of democracy’s many problems, and I won’t contest that there is much to say there, but to stay on topic: Orwell was speaking of language and how politicians manipulate it. The truths in question are the actual facts underlying events, facts covered with lies through obscure language. Yes, politics defines the public “truth”: the faux truth we’ll place inside quotes. Politicians shape it into a form more digestible to the masses, but they aren’t magicians who can undo what has been done. Every cover-up leaves behind the residue of its creation.
We certainly don’t have a perfect society here, but we’re not confronted with prevalent military street violence like some unfortunate people in other parts of the world. At least not yet. And the sad thing is that some of these democracies, ours among them, have had a hand in that violence. So unless you want to see it continue, more of us need to get up and demand justice. That begins with understanding the language orchestrating this public ignorance. Nothing is stopping us from digging deeper and taking it upon ourselves to get the full story rather than passively accepting what we hear from one or two mainstream sources, as is common. And it is people who feel as passionately as you do who must seize the opportunities and avenues of freedom left to us. You can comfortably rabble-rouse from your computer chair, or you can shine light where it is needed most. We are not so powerless as they would have you believe.