Odds and ends - themes and trends
Resilient circadian oscillator revealed in individual cyanobacteria
By Irina Mihalcescu, Weihong Hsing, Stanislas Leibler
source: www.nature.com/Nature 430, 81-85/1 July 2004
Circadian oscillators, which provide internal daily periodicity, are found in a variety of living organisms, including mammals, insects, plants, fungi and cyanobacteria. Remarkably, these biochemical oscillators are resilient to external and internal modifications, such as temperature and cell division cycles. They have to be 'fluctuation (noise) resistant' because relative fluctuations in the number of messenger RNA and protein molecules forming the intracellular oscillators are likely to be large.
In multicellular organisms, the strong temporal stability of circadian clocks, despite molecular fluctuations, can easily be explained by intercellular interactions. Here we study circadian rhythms and their stability in the unicellular cyanobacterium Synechococcus elongatus.
Low-light-level microscopy has allowed us to measure gene expression under circadian control in single bacteria, showing that the circadian clock is indeed a property of individual cells.
Our measurements show that the oscillators have a strong temporal stability with a correlation time of several months.
In contrast to many circadian clocks in multicellular organisms, this stability seems to be ensured by the intracellular biochemical network, because the interactions between oscillators seem to be negligible.
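A correlation time of this kind can be read off from how quickly a signal's autocorrelation envelope decays. The sketch below is purely illustrative, using a toy phase-diffusion oscillator rather than the paper's data or analysis; the diffusion coefficient and time step are invented:

```python
import numpy as np

def autocorrelation(x):
    """Normalised autocorrelation computed via zero-padded FFT."""
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * len(x))
    acf = np.fft.irfft(f * np.conj(f))[:len(x)]
    return acf / acf[0]

def correlation_time(signal, dt):
    """Lag at which the autocorrelation envelope first drops below 1/e."""
    acf = autocorrelation(signal)
    # monotone upper envelope: at each lag, the largest |acf| at any later lag
    env = np.maximum.accumulate(np.abs(acf)[::-1])[::-1]
    below = np.nonzero(env < np.exp(-1))[0]
    return below[0] * dt if below.size else np.inf

# Toy oscillator whose phase diffuses with coefficient D, so the
# autocorrelation envelope decays as exp(-D*t/2) and the correlation
# time is roughly 2/D. All numbers are illustrative.
rng = np.random.default_rng(0)
dt, n, D = 0.05, 200_000, 0.1
phase = 2 * np.pi * dt * np.arange(n) + np.cumsum(np.sqrt(D * dt) * rng.standard_normal(n))
tau = correlation_time(np.cos(phase), dt)
print(f"estimated correlation time: {tau:.1f} (theory ~ {2 / D:.1f})")
```

A months-long correlation time, as reported here, means the envelope barely decays over the whole observation window.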
What On Earth Is Music?
Source: www.oxtrust.org.uk/30 June 2004
Eureka! The Museum for Children is developing a radical new gallery that will use sound, music and creativity to help children engage with science thanks to a £117,000 award from NESTA (the National Endowment for Science, Technology & the Arts), the organisation that invests in UK creativity and innovation.
Based in Halifax, West Yorkshire, Eureka! is the first purpose-built museum aimed at children in the UK and has attracted nearly 4 million visitors since it opened in 1992.
Eureka!'s latest project, SoundSpace, is a new permanent exhibition due to open on 21st July 2004.
The exhibition is an innovative collaboration between the acclaimed sound artist Thor McIntyre-Burnie and Dutch exhibition designers Northern Light CoDesign.
It aims to use music and creativity to help children to understand science, technology, engineering and maths.
The gallery will include:
A stage, complete with dramatic sound and lighting, for visitors to take part in a 10-minute performance with alien Orby, who needs help to understand and interpret music and sound from Planet Earth.
An immersive 'Sensory Soundscape' where children will be surrounded by a medley of lights and a constantly evolving sound collage that will respond to the visitors' movements.
A DJ-style studio, where visitors can mix their own music using beats, samples and sounds from nature.
A music matrix with flashing patterns that revolutionises the way music is created, where children can generate their own unique musical compositions and select how they wish their music to sound - and look.
NESTA is supporting SoundSpace over six months from June 2004, through its Learning programme.
The funding will go towards developing the 'Sensory Soundscapes' elements of the project, and will enable Eureka! to be as innovative as possible in harnessing emerging technologies.
Tudor Gwynn, Exhibitions Manager at Eureka!, said:
"Our aim is to create a cross-disciplinary learning environment, where children can investigate science informally. We want to blur boundaries and empower children. Above all, we want them to have fun while they learn. The NESTA funding gives us the chance to create a truly innovative exhibition rather than recycling tried and tested techniques."
NESTA (the National Endowment for Science, Technology & the Arts)
Eureka! - The Museum for Children
'The Sounds in Space' - Interactive workshops exploring the spatial & contextual relationship of sound & music
Fylkingen Artists In Residence, April 2001
Northern Light CoDesign
Differential Corticostriatal Plasticity during Fast and Slow Motor Skill Learning in Mice
Rui M. Costa, Dana Cohen and Miguel A. L. Nicolelis
Motor skill learning usually comprises "fast" improvement in performance within the initial training session and "slow" improvement that develops across sessions. Previous studies have revealed changes in activity and connectivity in motor cortex and striatum during motor skill learning.
However, the nature and dynamics of the plastic changes in each of these brain structures during the different phases of motor learning remain unclear.
By using multielectrode arrays, we recorded the simultaneous activity of neuronal ensembles in motor cortex and dorsal striatum of mice during the different phases of skill learning on an accelerating rotarod.
Mice exhibited fast improvement in the task during the initial session and also slow improvement across days. Throughout training, a high percentage of striatal (57%) and motor cortex (55%) neurons were task-related; that is, they changed their firing rate while mice were running on the rotarod.
Improvement in performance was accompanied by substantial plastic changes in both striatum and motor cortex. We observed parallel recruitment of task-related neurons in both structures specifically during the first session. Conversely, during slow learning across sessions we observed differential refinement of the firing patterns in each structure.
At the neuronal ensemble level, we observed considerable changes in activity within the first session that became less evident during subsequent sessions.
These data indicate that cortical and striatal circuits exhibit remarkable but dissociable plasticity during fast and slow motor skill learning and suggest that distinct neural processes mediate the different phases of motor skill learning.
Babies babble in sign language too
source: Alison Motluk/www.newscientist.com/5 July 04
Babies exposed to sign language babble with their hands, even if they are not deaf. The finding supports the idea that human infants have an innate sensitivity to the rhythm of language and engage it however they can, the researchers who made the discovery claim.
Everyone accepts that babies babble as a way to acquire language, but researchers are polarised about its role. One camp says that children learn to adjust the opening and closing of their mouths to make vowels and consonants by mimicking adults, but the sounds are initially without meaning.
The other side argues that babbling is more than just random noise-making. Much of it, they contend, consists of phonetic-syllabic units - the rudimentary forms of language.
Laura-Ann Petitto at Dartmouth College in Hanover, New Hampshire, a leader in this camp, has argued that deaf babies who are exposed to sign language learn to babble using their hands the way hearing babies do with their mouths.
Petitto believes that the hand-babbling is functionally identical to verbal babbling - only the input is different. But critics counter that deaf children cannot be directly compared with their hearing counterparts.
Now Petitto and her colleagues have tested three hearing babies who, because their parents are deaf, were exposed only to sign. Three control infants had hearing, speaking parents.
To analyse the hand movements of the six children, the researchers placed infrared-emitting diodes on the babies' hands, forearms and feet. Sensors tracked the movements of the babies' limbs as they engaged in a variety of tasks, including grasping for toys and watching two people communicate.
Petitto reasoned that if her opponents were right, then what the babies did with their hands would be irrelevant - and indistinguishable. Instead the team found that the two groups had different hand movements.
Sign-exposed babies produced two distinct types of rhythmic hand activity, a low-frequency type at 1 hertz and a high-frequency one at 2.5 hertz. The speech-exposed babies had only high-frequency moves.
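Rhythmic components like the 1 Hz and 2.5 Hz movements above are typically found by peak-picking the power spectrum of the motion-tracking data. A minimal sketch on synthetic data follows; the sampling rate, amplitudes and noise level are invented, and this is not the researchers' actual pipeline:

```python
import numpy as np

def dominant_frequencies(trace, fs, n_peaks=2):
    """Return the n_peaks strongest frequencies (Hz) in a movement trace."""
    trace = trace - trace.mean()
    power = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=1 / fs)
    order = np.argsort(power[1:])[::-1] + 1   # skip the DC bin
    return sorted(float(f) for f in freqs[order[:n_peaks]])

# Synthetic 30 s trace sampled at 60 Hz: a 1 Hz and a 2.5 Hz rhythm plus
# noise. The two frequencies come from the article; all else is made up.
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / 60)
trace = (np.sin(2 * np.pi * 1.0 * t)
         + 0.8 * np.sin(2 * np.pi * 2.5 * t)
         + 0.3 * rng.standard_normal(t.size))
print(dominant_frequencies(trace, 60))   # the two strongest peaks, in Hz
```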
There was a "unique rhythmic signature of natural language" to the low-frequency movements. "What is really genetically passed on," Petitto says, "is a sensitivity to patterns."
But Peter MacNeilage, of the University of Texas at Austin, is not persuaded. "She makes a blanket statement that there is an exact correspondence between the structures of speech and sign," he says. "But there is no accepted evidence for this view at the level of phonological structure or in the form of a rhythm common to speech and sign."
Early blindness gives musical edge
source: Reuters/straitstimes.asia1.com.sg/JULY 17, 2004 SAT
Infants who go blind at a very young age develop musical abilities that are measurably better than those who lose their sight later in life or retain full vision, according to a new study.
It has long been known that blind people are far better than their sighted counterparts at orientating themselves by sound. But now, scientists at Canada's University of Montreal have found that blind people are also up to 10 times better at discerning pitch changes than the sighted - but only when they went blind before the age of two.
'This research confirms that blind people are indeed better at pitch discrimination than normal, sighted people,' lead researcher Pascal Belin said.
'You have great musicians who are blind, and a lot of piano-tuners are blind. But until this study, there was no quantifiable evidence to demonstrate that blind people were indeed better.'
But crucially, Mr Belin's team found that there was no difference in pitch change detection between sighted people and those who went blind later.
'We found that the superiority was correlated with the age of blindness,' he said.
'Only the subjects who had become blind before the age of two had a clearly superior performance. Late blind subjects - people who became blind after the age of five - were no different from the control subjects.'
The research, published in the journal Nature, attributed the clear differences in performance to brain plasticity - the formative period when the infant brain is akin to a sponge and soaks up all sorts of stimuli.
'When these people became blind, the part of their brain that would have been used to process visual information reorganises to take over other functions - in particular, auditory information,' Mr Belin said.
'And the earlier this reorganisation takes place, the more efficient it is.'
Pascal Belin, Assistant Professor
Article in nature: Neuropsychology: Pitch discrimination in the early blind
Absolute pitch in blind musicians - PDF
Visual cortex activation in blind humans during sound discrimination.
Are Schools Leaving Arts Behind?
By Karen MacPherson, Post-Gazette National Bureau
source: Karen MacPherson/www.post-gazette.com/July 12, 2004
Arts educators cheered when the arts were declared a "core" academic subject under the "No Child Left Behind" education reform measure signed into law two years ago by President Bush.
Since then, the cheers have turned to consternation as school districts around the nation have cut classroom time and funding for art and music. School officials say they need to focus attention and money on reading and math because students are tested annually on these subjects under the NCLB law.
Arts educators, joined by organizations of teachers, parents, school administrators and school boards, are fighting back. They are mounting national campaigns to preserve art classes, citing research that shows a strong correlation between schooling in the arts and academic success.
The National Art Education Association has created a "Tips For Parent Advocacy" booklet to help parents lobby for arts education in their schools, and the Arts Education Partnership has produced a guide, "No Subject Left Behind,'' designed to help state and local education leaders apply for federal arts funding.
Arts education will take center stage this week at the national conference of the Education Commission of the States, a Denver-based group of governors, legislators and state education officials. Gov. Mike Huckabee of Arkansas, the commission's incoming chairman, has chosen arts education as the theme of his tenure.
The issue also will be a focus at this week's convention of the National Assembly of State Arts Agencies and Americans for the Arts.
Donna Collins, executive director of the Ohio Alliance for Arts Education, is part of a convention panel that will explore ways to ensure the continuation and improvement of arts education in American public schools.
"We all want a high-quality education for our children and we want schools to be accountable for providing that,'' Collins said. "But we can't just be focused on reading, writing and science.''
A Phi Delta Kappa/Gallup poll done last year found that 80 percent of Americans have at least a "fair amount" of concern that arts and other subjects will be downgraded because of the NCLB law's focus on assessing student improvement only in reading, math and, eventually, science.
The law requires annual math and reading tests from third through eighth grades. It also sets higher educational standards for high school teachers, while establishing penalties for schools that consistently fail to improve.
Under NCLB, arts education was listed as a core subject for the first time in federal law. But reports released over the past several months have documented that arts classes are getting squeezed out because the law doesn't require that students be tested for proficiency in art, music, dance or drama.
Many people also see arts classes as "academic frills," so they often are the first ones eliminated when school districts run short of money.
As a result, art, music and other arts classes may become a "lost curriculum," said Brenda Welburn, executive director of the National Association of State Boards of Education. "The fact is, however, that these subjects should be considered as fundamental to a child's education as the three 'R's.'"
A recent report by the Council for Basic Education, a Washington, D.C.-based education nonprofit, found that schools are spending substantially less time on the arts -- as well as social studies, civics, geography and languages -- since NCLB became law.
The report, billed as the first to examine how NCLB is influencing instructional time in key subject areas, said such a trend is worrisome in light of research showing that active involvement in music, art and related subjects helps students do better in more traditional academic disciplines.
The report, titled "Academic Atrophy: The Condition of the Liberal Arts in America's Public Schools," found that schools with large numbers of minority students have been particularly affected by cutbacks in arts education. The authors said this was troubling in light of studies that suggest the arts can help blacks and Hispanics close the achievement gap with whites and Asians.
"Truly high expectations cannot begin and end with mathematics, science and reading," the report's authors stated. "Though we must certainly strive to close racial achievement gaps in mathematics and reading, we run the risk of substituting one form of inequity for another, ultimately denying our most vulnerable students the full liberal arts curriculum our most privileged youth receive almost as a matter of course."
One elementary principal who participated in the council study said improving his school's arts program helped give him a "hook" to engage students who otherwise were difficult to motivate. As a result, the principal said he was able to raise his school's readings scores, once among the lowest in his school district, to the highest in first, fourth and sixth grades, according to the report.
"The tendency to sacrifice time for the arts to extend time for mathematics and reading may ultimately prove counterproductive, especially for students at greatest risk of becoming disengaged from school," the report noted.
Michael Petrilli, a senior official of the U.S. Department of Education, said the NCLB law wasn't meant to undercut support for the arts.
"While accountability in the law is focused on reading and math, the two most basic subjects, there is a lot in the law that supports other academic subjects, such as the arts," he said. "The spirit of No Child Left Behind is to make sure that every child in America gets the kind of well-rounded education once reserved for children of the elite."
But Michael Blakeslee, deputy executive director of the National Association for Music Education, said there's obviously a need to convince school districts of the importance of the arts in achieving the goals of the NCLB law.
"School board members aren't evil people sitting up late at night trying to get rid of music education,'' Blakeslee said. "They are honestly trying to do the best for their constituents. We need to make sure they understand the stakes of cutting or eliminating music and other arts.''
John Broomall, executive director of the Pennsylvania Alliance for Arts Education, said arts educators must convince the public that students "don't need less of any academic area."
"There is no question that many children in schools are in deep academic trouble, and we've got to do whatever we can to help pull them out of it," he said. "But you don't start by taking things away from them."
National Art Education Association
Tips For Parent Advocacy
Gov. Mike Huckabee of Arkansas
National Assembly of State Arts Agencies
Americans for the Arts
Ohio Alliance for Arts Education
Council for Basic Education - An Independent Voice for Educational Excellence
"Academic Atrophy: The Condition of the Liberal Arts in America's Public Schools," PDF
National Association for Music Education
Pennsylvania Alliance for Arts Education
Oscillations of heart rate and respiration synchronize during poetry recitation
Dirk Cysarz, Dietrich von Bonin, Helmut Lackner, Peter Heusser, Maximilian Moser, and Henrik Bettermann
source: ajpheart.physiology.org/PMID: 15072959
The objective of this study was to investigate the synchronization between low-frequency breathing patterns and respiratory sinus arrhythmia (RSA) of heart rate during guided recitation of poetry, i.e. recitation of hexameter verse from ancient Greek literature performed in a therapeutic setting. Twenty healthy volunteers performed three different types of exercises in a cross-sectional comparison: recitation of hexameter verse, controlled breathing and spontaneous breathing.
Each exercise was divided into three successive measurements: a 15-minute baseline measurement (S1), 20 minutes of exercise and a 15-minute effect measurement (S2).
Breathing patterns and RSA were derived from respiratory traces and electrocardiograms, respectively, which were recorded simultaneously using an ambulatory device. The synchronization was then quantified by an index adopted from the analysis of weakly coupled chaotic oscillators. During recitation of hexameter verse this index was high, indicating prominent cardiorespiratory synchronization.
The controlled breathing exercise showed cardiorespiratory synchronization to a lesser extent, and all resting periods (S1 and S2) showed even less. During spontaneous breathing, cardiorespiratory synchronization was minimal and hardly observable. The results were largely determined by the extent of a low-frequency component in the breathing oscillations that emerged from the design of hexameter recitation.
In conclusion, recitation of hexameter verse exerts a strong influence on RSA by a prominent low frequency component in the breathing pattern, generating a strong cardiorespiratory synchronization.
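Synchronization indices of this family are commonly built from instantaneous phases: extract each signal's phase via the analytic signal, then measure how tightly the phase difference clusters. The sketch below computes a simple 1:1 phase-locking value on invented signals; it is not the paper's exact index, and the frequencies and durations are illustrative only:

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal (Hilbert transform built from the FFT)."""
    x = x - x.mean()
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def sync_index(x, y):
    """Phase-locking value in [0, 1]; 1 means perfect 1:1 phase locking."""
    dphi = instantaneous_phase(x) - instantaneous_phase(y)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Invented signals: slow 0.1 Hz "breathing", an RSA trace locked to it,
# and an unrelated rhythm for comparison.
t = np.linspace(0, 60, 6000)
breathing = np.sin(2 * np.pi * 0.1 * t)
locked_rsa = np.sin(2 * np.pi * 0.1 * t + 0.5)   # phase-locked to breathing
free_rsa = np.sin(2 * np.pi * 0.23 * t)          # drifting, unlocked rhythm
print(round(sync_index(breathing, locked_rsa), 2))  # near 1
print(round(sync_index(breathing, free_rsa), 2))    # near 0
```

A high index during hexameter recitation and a low one during spontaneous breathing would correspond to the study's contrast.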
Human intelligence determined by volume and location of gray matter tissue in brain
Single 'intelligence center' in brain unlikely, UCI study also finds
General human intelligence appears to be based on the volume of gray matter tissue in certain regions of the brain, UC Irvine College of Medicine researchers have found in the most comprehensive structural brain-scan study of intelligence to date.
The study also discovered that because these regions related to intelligence are located throughout the brain, a single "intelligence center," such as the frontal lobe, is unlikely.
Dr. Richard Haier, professor of psychology in the Department of Pediatrics and long-time human intelligence researcher, and colleagues at UCI and the University of New Mexico used MRI to obtain structural images of the brain in 47 normal adults who also took standard intelligence quotient tests. The researchers used a technique called voxel-based morphometry to determine gray matter volume throughout the brain which they correlated to IQ scores. Study results appear on the online version of NeuroImage.
Previous research had shown that larger brains are weakly related to higher IQ, but this study is the first to demonstrate that gray matter in specific regions of the brain is more related to IQ than is overall size. Multiple brain areas are related to IQ, the UCI and UNM researchers have found, and various combinations of these areas can similarly account for IQ scores. Therefore, it is likely that a person's mental strengths and weaknesses depend in large part on the individual pattern of gray matter across his or her brain. "This may be why one person is quite good at mathematics and not so good at spelling, and another person, with the same IQ, has the opposite pattern of abilities," Haier said.
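At its simplest, correlating a per-voxel measure with IQ across subjects is a mass-univariate Pearson correlation. The sketch below runs on simulated data; the effect size, voxel count and threshold are invented, and real voxel-based morphometry involves far more preprocessing (segmentation, spatial normalisation, smoothing):

```python
import numpy as np

def voxelwise_iq_correlation(gray_matter, iq):
    """Pearson r between IQ and gray-matter volume at every voxel.
    gray_matter: (n_subjects, n_voxels); iq: (n_subjects,)."""
    gm = gray_matter - gray_matter.mean(axis=0)
    q = iq - iq.mean()
    num = (gm * q[:, None]).sum(axis=0)
    den = np.sqrt((gm ** 2).sum(axis=0) * (q ** 2).sum())
    return num / den

# Toy data: 47 "subjects" (as in the study) and 1000 "voxels", with only
# the first 60 voxels (6%) built to covary with IQ.
rng = np.random.default_rng(3)
iq = rng.normal(100, 15, 47)
gm = rng.standard_normal((47, 1000))
gm[:, :60] += 0.08 * (iq[:, None] - 100)          # IQ-linked voxels
r = voxelwise_iq_correlation(gm, iq)
print((np.abs(r) > 0.4).sum(), "voxels pass an (arbitrary) |r| > 0.4 cut")
```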
While gray matter amounts are vital to intelligence levels, the researchers were surprised to find that only about 6 percent of all the gray matter in the brain appears related to IQ.
"There is a constant cascade of information being processed in the entire brain, but intelligence seems related to an efficient use of relatively few structures, where the more gray matter the better," Haier said. "In addition, these structures that are important for intelligence are also implicated in memory, attention and language."
The findings also suggest that the brain areas where gray matter is related to IQ show some differences between young-adult and middle-aged subjects. In middle age, more of the frontal and parietal lobes are related to IQ; less frontal and more temporal areas are related to IQ in the younger adults.
The research does not address why some people have more gray matter in some brain areas than other people, although previous research has shown the regional distribution of gray matter in humans is highly heritable. Haier and his colleagues are currently evaluating the MRI data to see if there are gender differences in IQ patterns.
Haier's colleagues in the study include Dr. Michael T. Alkire and Kevin Head of UCI and Drs. Rex E. Jung and Ronald A. Yeo of the University of New Mexico. The National Institute of Child Health and Human Development supported the study.
University of California
Phonological Processing in Adults Who Stutter: Electrophysiological and Behavioral Evidence
By Christine Weber-Fox, Rebecca M.C. Spencer, John E. Spruill III, and Anne Smith
Event-related brain potentials (ERPs), judgment accuracy and reaction times (RTs) were obtained for 11 adults who stutter and 11 normally fluent speakers as they performed a rhyme judgment task of visually presented word pairs.
Half of the word pairs (i.e., prime, target) were phonologically and orthographically congruent across words. That is, the words looked orthographically similar and rhymed (e.g., THROWN, OWN) or did not look similar and did not rhyme (e.g., CAKE, OWN).
The phonologic and orthographic information across the remaining pairs was incongruent. That is, the words looked similar but did not rhyme (e.g., GOWN, OWN) or did not look similar but rhymed (e.g., CONE, OWN). Adults who stutter and those who are normally fluent exhibited similar phonologic processing as indexed by ERPs, response accuracy and RTs.
However, longer RTs for adults who stutter indicated their greater sensitivity to the increased cognitive loads imposed by phonologic/orthographic incongruency. Also, unlike the normally fluent speakers, the adults who stutter exhibited a right hemisphere asymmetry in the rhyme judgment task, as indexed by the peak amplitude of the rhyming effect (difference wave) component.
Overall, these findings do not support theories of the etiology of stuttering that posit a core phonologic processing deficit.
Rather we provide evidence that adults who stutter are more vulnerable to increased cognitive loads and display greater right hemisphere involvement in late cognitive processes.
Family words came first for early humans
source: Anna Gosline/www.newscientist.com/26 July 04
One of a Neanderthal baby's first words was probably "papa", concludes one of the most comprehensive attempts to date to make out what the first human language was like.
Many of the estimated 6000 languages now spoken share common words and meanings, notably for kin names like "mama" and "papa". That has led some linguists to suggest that these words have been carried through from humans' original proto-language, spoken at least 50,000 years ago.
But without information on exactly how often these words occur across distantly related languages, there has been little evidence to support that claim.
What is more, some words of similar sound and meaning, such as the English "day" and the Spanish "dia", are known to have arisen independently.
Now Pierre Bancel and Alain Matthey de l'Etang from the Association for the Study of Linguistics and Prehistoric Anthropology in Paris have found that the word "papa" is present in almost 700 of the 1000 languages for which they have complete data on words for close family members.
Those languages come from all the 14 or so major language families. And the meaning of "papa" is remarkably consistent: in 71 per cent of cases it means father or a male relative on the father's side.
"There is only one explanation for the consistent meaning of the word 'papa': a common ancestry," Bancel says. He presented the findings at the Origins of Language and Psychosis conference in Oxford, UK, in July 2004.
But debate over whether modern languages carry the remnants of the language spoken at the dawn of humanity is likely to continue.
Don Ringe, a linguist at the University of Pennsylvania in Philadelphia, says that babies may simply associate the first sound they can make with the first people they see - their parents.
That, too, would lead to words like "papa" acquiring similar meaning in many languages.
Even Bancel admits that there will never be conclusive proof. "We have no Neanderthals around to ask."
It's The Brain Not The Body That Hits The Wall
source: James Randerson /www.alphagalileo.org/July 2004
Fatigue is in the mind, not the muscles. But it can still have a serious impact on athletic performance. The finding could lead to treatments for conditions like chronic fatigue syndrome, or the development of illicit performance-enhancing drugs.
Traditionally, fatigue was viewed as the result of over-worked muscles ceasing to function properly. But evidence is mounting that our brains make us feel weary after exercise (New Scientist, 20 March, p 42). The idea is that the brain steps in to prevent muscle damage.
Now Paula Robson-Ansley and her colleagues at the University of Cape Town in South Africa have demonstrated that a ubiquitous body signalling molecule called interleukin-6 plays a key role in telling the brain when to slow us down. Blood levels of IL-6 are 60 to 100 times higher than normal following prolonged exercise, and injecting healthy people with IL-6 makes them feel tired.
To work out whether IL-6 affects performance, Robson-Ansley injected seven club-standard runners with either IL-6 or a placebo and recorded their times over 10 kilometres. A week later, the treatments were swapped. On average the runners were nearly a minute faster after receiving the placebo, a significant difference given that their finishing times were around 41 minutes. The findings will appear in the Canadian Journal of Applied Physiology.
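A crossover design like this is naturally analysed with a paired test on each runner's two times. The sketch below uses invented times consistent with the figures quoted (about 41 minutes, roughly a minute apart); it is not the study's actual data or analysis:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for a within-subject (crossover) comparison."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))

# Invented 10 km times in minutes for seven runners, loosely matching the
# article (~41 min on placebo, roughly a minute slower on IL-6).
placebo = [40.8, 41.5, 40.2, 41.9, 41.1, 40.6, 41.3]
il6 = [41.9, 42.3, 41.0, 42.8, 42.4, 41.4, 42.1]
slowdown = float(np.mean(np.subtract(il6, placebo)))
print(f"mean slowdown: {slowdown:.2f} min, paired t = {paired_t(il6, placebo):.2f}")
```

Pairing each runner with themselves removes between-runner variation, which is why a one-minute effect can be significant with only seven subjects.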
Robson-Ansley has a personal interest because her own athletic career was cut short in part by a condition called underperformance syndrome. She was training for the British rowing squad for the 1996 Olympic games, when her obsessive schedule tipped her body over the edge. "Suddenly a 5-kilometre run felt like I'd run a marathon the next day," she recalls. She hopes her research will lead to treatments for UPS and chronic fatigue syndrome.
One approach would be to block IL-6 receptors in the brain using antibodies. This has already had some success in tackling symptoms of chronic fatigue, but it raises fears that unscrupulous athletes could try the same technique to train too hard. But IL-6 has many effects on the body so blocking its action could be counterproductive, or even dangerous. IL-6 receptors may also be less sensitive in top athletes.
University of Cape Town
Brain not body makes athletes feel tired
Studies into placebo effect and empathy suggest how the brain encodes subjective experience
By Eugene Russo
Quelle: www.the-scientist.com/Aug. 2, 2004
During a deep brain stimulation clinical trial, researchers detected elements of the placebo effect. The pre-placebo neuron was recorded from the left subthalamic nucleus as a control; the post-placebo neuron was recorded from the right subthalamic nucleus. Other neurons demonstrated a similar decrease in activity.
Revealing the complexities of the pain experience may offer a window into the mind-body interaction. Several recent studies into the placebo effect, human empathy, and their apparent interconnectedness are providing insight into the human subjective experience.
Such investigations, says Jon-Kar Zubieta, associate professor in psychiatry and radiology at the University of Michigan, help scientists understand the intersection of physical and emotional states. "The placebo effect gets at the core of how individuals react and modulate environmental events, whether positive or negative in nature," he says. If harnessed, the regulatory mechanisms involved could point to better treatments for pain, depression, and stress.
In earlier work, University of Turin physiology professor Fabrizio Benedetti showed that administering an opioid-blocking drug could reverse the psychological placebo effect.1 "People started believing there was something real there," says Columbia University assistant professor Tor Wager, lead author of a recent functional magnetic resonance imaging (fMRI) study of the placebo effect.
Wager's group took a different tack, uncovering regions of the brain that showed decreased activity during the placebo effect.2 In one trial, they told subjects that they were administering a powerful analgesic cream. In another, the subjects received the same cream but were told it had no effect. When subjects were experiencing the placebo effect, a subset of known pain-sensitive brain regions showed a signal reduction of 20% to 25%.
In a subsequent study, Benedetti's group observed patterns of neuronal firing, not visible via neuroimaging, that corresponded with Wager's findings.3 His group performed single-neuron recording in patients with Parkinson disease who had been administered a sham treatment.
Investigators took advantage of a legitimate therapy, deep-brain stimulation, which involves the chronic stimulation of the two subthalamic nuclei via electrodes implanted in the subject's brain. But when only a mock procedure was performed, and the patients were told that they'd have relief, scientists noted less muscle rigidity and decreased neuronal activity.
According to Benedetti, the placebo treatment interrupted the typical firing pattern of Parkinsonian neurons, namely bursts of activity. In both his study and the Wager study, placebos suppressed activity in pain pathways and motor pathways. "Unfortunately we do not know why and how," says Benedetti.
EMPATHY AND ANTICIPATION
Wager's group also separated pain from the anticipation of pain. They compared brain images taken just after subjects had been given a cue that pain was coming with images taken during the pain itself, both with and without placebo.
Two areas of the brain were more active with placebo: the dorsal lateral prefrontal cortex, an area integral to working memory, and the orbital frontal cortex, an area known to be involved in evaluating stimuli. "You see the cue, the pain is coming up, but [the subject thinks], 'It's not going to be so bad because I've had the placebo,'" says Wager. "So there's an active process there."
The brain areas involved in anticipation of pain relief overlap significantly with those involved in empathy, according to a study that accompanied Wager's.4 "You can imagine that these areas code for your subjective experience of how unpleasant something is, and not for the objective input," says lead author Tania Singer, a research fellow in the functional imaging laboratory at University College London.
The empathy study compared the fMRI brain images of persons receiving a painful stimulus with images of the same persons observing a signal indicating that a loved one was receiving a painful stimulus. Thus, subjects empathized with the emotions of others in the total absence of any external emotional sensory stimuli. Singer found that only the anterior cingulate cortex and the anterior insula were activated both in response to one's own pain and to another's. The suggestion: only those parts of the pain experience associated with affective, not sensory, evaluation are involved in empathy. "It's as though you have a central circuitry for pain processing," says Wager.
"These studies," says Zubieta, "more precisely get at how the brain is modulating different forms of experience, whether emotional in the case of empathy of pain, or things like placebo effects, which is also the expectation of a positive outcome. ... For the first time we're beginning to see the true mind-body kind of connection and integration."
Contact: Eugene Russo (email@example.com)
1. E. Russo, "The biological basis of the placebo effect," The Scientist, 16:30, Dec. 9, 2002.
2. T. Wager et al., "Placebo-induced changes in fMRI in the anticipation and experience of pain," Science, 303:1162-7, Feb. 20, 2004.
3. F. Benedetti et al., "Placebo-responsive Parkinson patients show decreased activity in single neurons of subthalamic nucleus," Nat Neurosci, 7:587-8, June 2004.
4. T. Singer et al., "Empathy for pain involves the affective but not sensory components of pain," Science, 303:1157-61, Feb. 20, 2004.
Dr Tania Singer
Social facilitation of wound healing
By Courtney E. Detillion, Tara K. S. Craft, Erica R. Glasper, Brian J. Prendergast and A. Courtney DeVries
It is well documented that psychological stress impairs wound healing in humans and rodents. However, most research effort into influences on wound healing has focused on factors that compromise, rather than promote, healing.
In the present study, we determined if positive social interaction, which influences hypothalamic-pituitary-adrenal (HPA) axis activity in social rodents, promotes wound healing.
Siberian hamsters received a cutaneous wound and then were exposed to immobilization stress. Stress increased cortisol concentrations and impaired wound healing in isolated, but not socially housed, hamsters.
Removal of endogenous cortisol via adrenalectomy eliminated the effects of stress on wound healing in isolated hamsters. Treatment of isolated hamsters with oxytocin (OT), a hormone released during social contact and associated with social bonding, also blocked stress-induced increases in cortisol concentrations and facilitated wound healing.
In contrast, treating socially housed hamsters with an OT antagonist delayed wound healing.
Taken together, these data suggest that social interactions buffer against stress and promote wound healing through a mechanism that involves OT-induced suppression of the HPA axis. The data imply that social isolation impairs wound healing, whereas OT treatment may ameliorate some effects of social isolation on health.
Time estimation: The effect of cortically mediated attention
By Anthony Chaston and Alan Kingstone
Do people tend to underestimate time when their attention is engaged?
Studies supporting this idea have routinely confounded attentional manipulations with changes in other factors, such as response complexity and memory load.
The aim of the present study was to obtain the first direct evidence that attentional engagement mediated by cortical brain mechanisms affects time estimation.
Participants were asked to perform a visual search task that either should not demand attention (simple feature search) or should demand cortical attentional engagement (conjunction search).
Observers searched through 2, 4, 8, 16, 24, 32, or 40 items, for blocks of 40 or 60 trials. At the conclusion of each block participants were required to provide a written estimate of block duration.
This time estimate was prospective in nature because subjects knew in advance that they would be asked to produce the estimate.
Results showed that an attentionally demanding conjunction search task produced a large underestimation of time; as the engagement of attention increased, so did the underestimation.
These findings provide strong support for an attentional model of prospective time estimation that is subserved by cortical brain mechanisms.
University of Alberta
The U of A Department of Psychology
Calmodulin and Munc13 Form a Ca2+ Sensor/Effector Complex that Controls Short-Term Synaptic Plasticity
By Harald J. Junge, Jeong-Seop Rhee, Olaf Jahn, Frederique Varoqueaux, Joachim Spiess, M. Neal Waxham, Christian Rosenmund, and Nils Brose
Source: Cell, Vol 118, 389-401, 6 August 2004
The efficacy of synaptic transmission between neurons can be altered transiently during neuronal network activity.
This phenomenon of short-term plasticity is a key determinant of network properties; is involved in many physiological processes such as motor control, sound localization, or sensory adaptation; and is critically dependent on cytosolic [Ca2+].
However, the underlying molecular mechanisms and the identity of the Ca2+ sensor/effector complexes involved are unclear.
We now identify a conserved calmodulin binding site in UNC-13/Munc13s, which are essential regulators of synaptic vesicle priming and synaptic efficacy. Ca2+ sensor/effector complexes consisting of calmodulin and Munc13s regulate synaptic vesicle priming and synaptic efficacy in response to a residual [Ca2+] signal and thus shape short-term plasticity characteristics during periods of sustained synaptic activity.
Cell August 6, 2004: 118 (3)
Department of Molecular Neurobiology/Director: Dr. Nils Brose
Pinyon jays use transitive inference to predict social dominance
By GUILLERMO PAZ-Y-MIÑO, ALAN B. BOND, ALAN C. KAMIL & RUSSELL P. BALDA
Source: www.nature.com/12 August 2004
Living in large, stable social groups is often considered to favour the evolution of enhanced cognitive abilities, such as recognizing group members, tracking their social status and inferring relationships among them.
An individual's place in the social order can be learned through direct interactions with others, but conflicts can be time-consuming and even injurious. Because the number of possible pairwise interactions increases rapidly with group size, members of large social groups will benefit if they can make judgments about relationships on the basis of indirect evidence.
Transitive reasoning should therefore be particularly important for social individuals, allowing assessment of relationships from observations of interactions among others. Although a variety of studies have suggested that transitive inference may be used in social settings, the phenomenon has not been demonstrated under controlled conditions in animals.
Here we show that highly social pinyon jays (Gymnorhinus cyanocephalus) draw sophisticated inferences about their own dominance status relative to that of strangers that they have observed interacting with known individuals.
These results directly demonstrate that animals use transitive inference in social settings and imply that such cognitive capabilities are widespread among social species.
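At its core, transitive inference amounts to chaining observed outcomes: if A dominates B and B dominates C, an observer can conclude that A dominates C without ever seeing them fight. A minimal sketch of that logic, as a reachability check over observed win/loss pairs (an illustration of the inference itself, not the authors' experimental procedure; the `dominates` helper and the pair encoding are hypothetical):

```python
def dominates(observed, a, b):
    """True if a's dominance over b follows transitively from observed wins.

    `observed` is a set of (winner, loser) pairs seen by the observer.
    """
    # Depth-first search along chains of observed wins starting from a.
    stack, seen = [a], set()
    while stack:
        x = stack.pop()
        if x == b:
            return True
        if x in seen:
            continue
        seen.add(x)
        stack.extend(loser for winner, loser in observed if winner == x)
    return False
```

A jay that has watched A beat B and B beat a known flockmate C can thus rank an otherwise unfamiliar A above C, which is the kind of indirect judgment the study demonstrates.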
Correspondence and requests for materials should be addressed to
Dr. Guillermo Paz-y-Miño
The School of Biological Sciences
Eocene evolution of whale hearing
By SIRPA NUMMELA, J. G. M. THEWISSEN, SUNIL BAJPAI, S. TASEER HUSSAIN & KISHOR KUMAR
The origin of whales (order Cetacea) is one of the best-documented examples of macroevolutionary change in vertebrates.
As the earliest whales became obligately marine, all of their organ systems adapted to the new environment. The fossil record indicates that this evolutionary transition took less than 15 million years, and that different organ systems followed different evolutionary trajectories.
Here we document the evolutionary changes that took place in the sound transmission mechanism of the outer and middle ear in early whales. Sound transmission mechanisms change early on in whale evolution and pass through a stage (in pakicetids) in which hearing in both air and water is unsophisticated.
This intermediate stage is soon abandoned and is replaced (in remingtonocetids and protocetids) by a sound transmission mechanism similar to that in modern toothed whales.
The mechanism of these fossil whales lacks sophistication, and still retains some of the key elements that land mammals use to hear airborne sound.
Correspondence and requests for materials should be addressed
to J.G.M.T. (firstname.lastname@example.org ).
THEWISSEN, Johannes G. M., Ph.D., Associate Professor
Music improves dopaminergic neurotransmission: demonstration based on the effect of music on blood pressure regulation
By Denetsu Sutoo, and Kayo Akiyama
The mechanism by which music modifies brain function is not clear. Clinical findings indicate that music reduces blood pressure in various patients.
We investigated the effect of music on blood pressure in spontaneously hypertensive rats (SHR).
Previous studies indicated that calcium increases brain dopamine (DA) synthesis through a calmodulin (CaM)-dependent system. Increased DA levels reduce blood pressure in SHR.
In this study, we examined the effects of music on this pathway.
Systolic blood pressure in SHR was reduced by exposure to Mozart's music (K.205), and the effect vanished when this pathway was inhibited.
Exposure to music also significantly increased serum calcium levels and neostriatal DA levels.
These results suggest that music leads to increased calcium/CaM-dependent DA synthesis in the brain, thus causing a reduction in blood pressure.
Music might regulate and/or affect various brain functions through dopaminergic neurotransmission, and might therefore be effective for rectification of symptoms in various diseases that involve DA dysfunction.
Effect of sighs on breathing memory and dynamics in healthy infants
David N Baldwin, Bela Suki, J J Pillow, Hanna L Roiha, Stefan Minocchieri, and Urs Frey
Deep inspirations (sighs) play a significant role in altering lung mechanics and airway wall function; however, their role in respiratory control remains unclear.
We examined whether sighs act via a resetting mechanism to improve control of the respiratory regulatory system.
Effects of sighs on system variability, short-range and long-range memory and stability were assessed in 25 healthy term infants at 1 month of age (mean 36 days, range: 28-57 days) during quiet sleep.
Variability was examined using moving window coefficient of variation (CV), short-range memory with autocorrelation function and long-range memory using detrended fluctuation analysis.
Stability was examined by studying the behaviour of the attractor using phase-space plots.
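Two of these measures, the moving-window coefficient of variation and an autocorrelation-based index of short-range memory, can be sketched on a generic breath-by-breath series (a minimal illustration, not the authors' analysis pipeline; the window length is arbitrary):

```python
import numpy as np

def moving_cv(x, window=10):
    """Coefficient of variation (SD / mean) over a sliding window of breaths."""
    x = np.asarray(x, dtype=float)
    return np.array([x[i:i + window].std(ddof=1) / x[i:i + window].mean()
                     for i in range(len(x) - window + 1)])

def autocorr(x, lag=1):
    """Sample autocorrelation at `lag`, an index of short-range memory:
    values near 0 mean successive breaths are uncorrelated (random)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))
```

Applied to a tidal-volume series, a drop of `autocorr` toward zero in the pre-sigh window would correspond to the loss of short-range memory the study reports, and a rise in `moving_cv` after a sigh to the increased variability.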
Variability of tidal volume (VT) and minute ventilation (V'E) increased during the initial 15 breaths post-sigh. Short-range memory of VT decreased during the 50 breaths preceding a sigh, becoming uncorrelated (random) during the 10-breath pre-sigh window.
Short-range memory increased post-sigh for the entire 50 breaths when compared to the randomised dataset, and for 20 breaths when compared to the pre-sigh window. Similar but shorter-lasting changes were noted in V'E.
No change in long-range memory was seen after sigh. CV and range of points located within a defined attractor segment increased after sigh.
Thus, control of breathing in healthy infants shows long-range stability and improvement in short-range memory and variability after sigh.
These results add new evidence that the role of sighs is not purely mechanical.
To whom correspondence should be addressed. E-mail: email@example.com.
Source: Karen Lurie/www.sciencentral.com/12.8.04
When schools cut their budgets, the arts are often first to go. That could be a big mistake. Recently, more and more research has found that the arts can improve kids' interest in school, and their grades, too. As this ScienCentral News video reports, one new study even says music lessons could make kids smarter.
We know that music lessons help cultivate kids' musical abilities. But could they actually make kids smarter?
At the University of Toronto at Mississauga, psychology professor Glenn Schellenberg has come up with some intriguing new evidence.
Through a newspaper ad, Schellenberg recruited suburban six-year-olds with an offer of free music lessons. After being inundated with replies from interested parents, Schellenberg picked 144 children at random. Then he randomly assigned them to 36 weeks of either piano lessons, singing lessons, drama lessons, or no lessons at all. On standard IQ tests given a year apart, the kids who had music lessons (piano or singing) gained the most points.
Because the children had also had a year of school in the interim, and because schooling enhances IQ, Schellenberg found that "all the children, regardless of the condition to which they were assigned, had little increases in IQ of three or four points. The increase in the two music groups was slightly, but significantly, higher than the increase in the two control groups (the drama group or the no-lessons group)."
A musician himself, Schellenberg concludes that music lessons offer "something special that promotes intellectual development." Why? "I started taking piano lessons when I was five, and throughout elementary school, junior high and high school, I practiced every morning before I went to school," he says. "My life was actually much different from somebody who's not taking music lessons, and we know that our experiences change us. So it seemed possible that music lessons might have some impact on development." But Schellenberg isn't ready to say yet whether there are effects on children's development that are specific to music lessons, and that may affect all children roughly the same way.
So why did Schellenberg find that music lessons improved his six-year olds' IQs? The simplest explanation, he says, is that "we know that schooling increases IQ. So, music lessons may be some additional form of schooling or school-like activity that causes a slightly larger increase in IQ," he speculates. On the other hand, "it's also possible that music lessons are an activity that involves all these little components: practicing; learning to memorize pieces; learning about musical structures, intervals, scales, chords, and so on; learning to express yourself emotionally in music; reading music. Either that unique constellation of activities, or one of them in particular, could be promoting the effect we saw in the six-year-olds as well."
Schellenberg's study resonates with James Catterall, an education professor at the University of California at Los Angeles who is also a musician. In 1999, for Champions of Change: The Impact of the Arts on Learning, a report for the Arts Education Partnership, Catterall reported on 25,000 kids he had followed through four years of high school. He found that those involved in music were more successful in school, and that music made a particular difference in the achievement of disadvantaged students. "By the time the poor kids involved in music had reached twelfth grade, fully 33 percent of them were doing solid eleventh-grade mathematics. In the general population, only 20 percent reach that level," Catterall says.
Catterall remains very interested in how music might affect children's intellectual and emotional growth. "It's got lots of implications for all kinds of ways kids think and behave, because you're fundamentally impacting the brain and how it reasons, how it relates one thing to another," he says. He has helped produce a 2004 report for the Arts Education Partnership, The Arts and Education: New Opportunities for Research, that calls for more research on how the arts affect children's intellectual and social development. He'd also like to know whether music lessons' effects last, and why.
"When you study visual art or dance or drama, or music, what you want to do is to take a look at kids in natural environments, over longer periods of time, to see what arts lessons really lead to," says Catterall. "We need longer term studies and we need to do a little more thinking about what it is we're really measuring."
But Schellenberg points out that following his six-year-olds for several more years might strike a false note. "To do a longer experiment of the same nature would be problematic because you start to see different dropout rates in each of the four conditions, and that would ruin the validity of the experiment," he says.
Schellenberg doubts that his six-year olds will enjoy lasting benefits from only 36 weeks of lessons. Still, from preliminary observations of undergraduates, he thinks years of music lessons could make a lasting difference. Catterall agrees. "The work on what the arts mean in terms of academic achievement and social development is definitely inspiring people to think more about keeping the arts in and putting more arts in the schools," says Catterall. "And one outcome is that you have more art in the lives of kids. That to me is very important and valuable in its own right."
This story was funded in part by Carnegie Corporation, promoting the advancement and diffusion of knowledge and understanding.
Schellenberg's research appeared in the August 2004 issue of Psychological Science and was funded by the Natural Sciences and Engineering Research Council of Canada. Catterall's two reports were funded by the Arts Education Partnership.
Abnormal cortical voice processing in autism
By Helene Gervais, Pascal Belin, Nathalie Boddaert, Marion Leboyer, Arnaud Coez, Ignacio Sfaello, Catherine Barthelemy, Francis Brunelle, Yves Samson, & Monica Zilbovicius
source: www.nature.com, 801-802 (2004)
Impairments in social interaction are a key feature of autism and are associated with atypical social information processing.
Here we report functional magnetic resonance imaging (fMRI) results showing that individuals with autism failed to activate superior temporal sulcus (STS) voice-selective regions in response to vocal sounds, whereas they showed a normal activation pattern in response to nonvocal sounds.
These findings suggest abnormal cortical processing of socially relevant auditory information in autism.
Correspondence should be addressed to
Monica Zilbovicius, Email: firstname.lastname@example.org
A Functional Genomics Strategy Reveals Rora as a Component of the Mammalian Circadian Clock
By Trey K. Sato, Satchidananda Panda, Loren J. Miraglia, Teresa M. Reyes, Radu D. Rudic, Peter McNamara, Kinnery A. Naik, Garret A. FitzGerald, Steve A. Kay, and John B. Hogenesch
source: www.neuron.org/19 August 2004
The mammalian circadian clock plays an integral role in timing rhythmic physiology and behavior, such as locomotor activity, with anticipated daily environmental changes.
The master oscillator resides within the suprachiasmatic nucleus (SCN), which can maintain circadian rhythms in the absence of synchronizing light input.
Here, we describe a genomics-based approach to identify circadian activators of Bmal1, itself a key transcriptional activator that is necessary for core oscillator function.
Using cell-based functional assays, as well as behavioral and molecular analyses, we identified Rora as an activator of Bmal1 transcription within the SCN.
Rora is required for normal Bmal1 expression and consolidation of daily locomotor activity and is regulated by the core clock in the SCN.
These results suggest that opposing activities of the orphan nuclear receptors Rora and Rev-erbα, which represses Bmal1 expression, are important in the maintenance of circadian clock function.
Correspondence: John B. Hogenesch
Genomics Institute of the Novartis Research Foundation, 10675 John J. Hopkins Drive, San Diego, CA 92121 USA
Department of Neuropharmacology, The Scripps Research Institute, 10550 North Torrey Pines Road, La Jolla, CA 92037 USA
Laboratory of Neuronal Structure and Function, Salk Institute, 10010 North Torrey Pines Road, La Jolla, CA 92037 USA
Center for Experimental Therapeutics, University of Pennsylvania School of Medicine, 153 Johnson Pavilion, 3620 Hamilton Walk, Philadelphia, PA 19104 USA
Phenomix Corporation, 11099 North Torrey Pines Road, La Jolla, CA 92037 USA
AudioID - Automatic Identification & Fingerprinting of Audio
Fuelled by the digital revolution, an overwhelming amount of audio material has become available to today's consumers. Finding desired content efficiently has become a key issue in this context. The AudioID system by the Fraunhofer Institute for Digital Media Technology IDMT performs an automatic identification/recognition of audio data based on a database of registered works and delivers the required information, e.g. title or name of the artist, in real time. The underlying feature technology is part of the international ISO/IEC MPEG-7 audio standard of the Moving Pictures Expert Group. As an example, the AudioID recognition system could be used to pick up sound from a microphone and instantly deliver all relevant information associated with the song.
The basic concept behind a fingerprinting system is to identify a piece of audio content by extracting a compact and unique signature from it (so-called content-based identification). In a training phase, such signatures are created from a set of known audio material, and finally stored in a database.
Unknown content can then be identified by comparing its signature to the ones contained in the database.
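The train-and-lookup scheme described above can be sketched in a few lines: extract a compact signature from known material, store it, and identify an unknown clip by nearest-neighbour comparison. This is a deliberately toy signature (coarse log-energy map over time and frequency with a Euclidean match), not the actual MPEG-7 AudioID descriptor; the function names are invented for the sketch:

```python
import numpy as np

def signature(samples, n_frames=16, n_bands=8):
    """Toy content-based signature: coarse log-energy map over time/frequency.
    Illustrative only; the real AudioID descriptor is the standardized
    MPEG-7 audio feature set."""
    x = np.asarray(samples, dtype=float)
    sig = []
    for frame in np.array_split(x, n_frames):
        power = np.abs(np.fft.rfft(frame)) ** 2        # frame power spectrum
        sig.extend(np.log1p(band.sum())                # energy per coarse band
                   for band in np.array_split(power, n_bands))
    return np.array(sig)

def identify(query_sig, database):
    """Return the registered title whose stored signature is closest."""
    return min(database, key=lambda title: np.linalg.norm(database[title] - query_sig))
```

Because the signature summarizes broad spectral energy rather than exact samples, mild degradations (added noise, light filtering) move the query only slightly in signature space, which is the intuition behind the robustness figures quoted below.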
Performance of the AudioID System
In order to assess the system's recognition performance, the registered audio items are subjected to a wide range of signal manipulations which influence the audio signal's quality (e.g. equalization, acoustic transmission or MP3 encoding/decoding).
Similar to human recognition behavior, which is surprisingly tolerant even of bad-sounding signal alterations, the system is designed to be robust against acoustic interference. Depending on the type of signal distortion applied, the achieved recognition rates are typically better than 99%, with a recognition speed (on standard PC hardware) several orders of magnitude faster than the audio playback time.
AudioID and MPEG-7 Audio
The AudioID system relies on a description core which has been standardized within the new MPEG-7 Audio Standard (Multimedia content description interface - part 4: Audio, ISO/IEC International Standard 15938-4) and thus brings a number of benefits associated with using an open standard rather than proprietary solutions:
- Identification relies on a published, open feature format rather than proprietary solutions.
- Also, MPEG-7 based signatures are likely to be produced as part of the standard metadata package which will accompany future advanced media formats.
- Due to the exact and standardized specification of the descriptor, inter operability is guaranteed on a worldwide basis, i.e. every search engine relying on the MPEG-7 specification will be able to use compliant descriptions, wherever they may have been produced.
As a unique feature, AudioID MPEG-7 signatures are scalable, i.e. they allow a flexible trade-off between signature compactness and recognition robustness.
Applications
There are a number of attractive applications for such an engine, including:
Identifying Music and Linking to Metadata
Today, most of the music distributed is provided either without additional information about the content or with only medium-specific information (CDs, CD-Text, the ID3v2 tag format). The AudioID system automatically links metadata from a database to a specific piece of music and thus represents a universal solution for all types of audio formats.
Automatic audio identification allows consumers to tag music using small handheld devices (e.g. PDAs or cell phones) and thus is well suited to stimulate music sales.
AudioID can identify and log broadcast audio program material without the need for special processing of the content, as is required in the case of watermarking. This allows, for example:
Automated Search of Illegal Content on the Internet
Automated search of illegal content via AudioID is an efficient way of securing audio-related intellectual property by monitoring the Internet. Today's search restrictions on file names and extensions (such as .mp3) belong to the past, since AudioID examines the actual audio content rather than just tag information.
Automatic audio identification may be an alternative approach to copy protection of music or conditional access of music and is considerably less prone to tampering than watermarking techniques.
Furthermore, no penalty in audio quality is incurred compared to watermarking.
Regulation of Copulation Duration by period and timeless in Drosophila melanogaster
By Laura M. Beaver and Jadwiga M. Giebultowicz
The circadian clock involves several clock genes encoding interacting transcriptional regulators.
Mutations in clock genes in Drosophila melanogaster, period (per), timeless (tim), Clock (Clk), and cycle (cyc), produce multiple phenotypes associated with physiology, behavior, development, and morphology.
It is not clear whether these genes always work as clock components or may also act in some unknown pleiotropic fashion. We report here that per and tim are involved in a novel, male-specific phenotype that affects behavioral timing on the order of minutes.
Males lacking per or tim copulate significantly longer than males with normal per or tim function, while females do not show this effect. No correlation between fertility and extended copulation duration was found. Several lines of evidence suggest that the time in copula (TIC) is not regulated by the known clock mechanism. First, the period of free-running clock oscillations does not appear to affect this phenotype. Second, constant light, which abolishes the clock function, does not alter TIC.
Finally, mutations in the positively acting clock transcription factors, Clk and cyc, do not affect TIC. Our study extends the repertoire of behavioral functions involving per and tim genes and uncovers another time scale over which these genes may act.
Jadwiga M. Giebultowicz
Jadwiga M. Giebultowicz
Department of Zoology, Oregon State University, Corvallis, OR 97331 USA
Music: a new cause of primary spontaneous pneumothorax
M. Noppen, S. Verbanck, J. Harvey, R. Van Herreweghe, M. Meysman, W. Vincken and M. Paiva
Most cases of primary spontaneous pneumothorax are thought to be caused by air leaks at so-called "emphysema-like changes" or in areas of pleural porosity at the surface of the lung.
Environmental pressure swings may cause air leaks as a result of transpulmonary pressure changes across areas of trapped gas in the distal lung.
This is the first report of music as a specific form of air pressure change causing pneumothorax (five episodes in four patients).
While rupture of the interface between the alveolar space and the pleural cavity in these patients may be linked to the mechanical effects of acute transpulmonary pressure differences caused by exposure to sound energy, in association with some form of distal air trapping, we speculate that repetitive pressure changes in the high-energy, low-frequency range of the sound exposures are more likely to be responsible.
Exposure to loud music should be included as a precipitating factor in the history of patients with spontaneous pneumothorax.
Correspondence to: Dr M Noppen
Head, Interventional Endoscopy Clinic, Academic Hospital AZ VUB, 101 Laarbeeklaan, B-1090 Brussels, Belgium;
Vocal-Tract Filtering by Lingual Articulation in a Parrot
Gabriël J. L. Beckers, Brian S. Nelson, and Roderick A. Suthers
Human speech and bird vocalization are complex communicative behaviors with notable similarities in development and underlying mechanisms.
However, there is an important difference between humans and birds in the way vocal complexity is generally produced.
Human speech originates from independent modulatory actions of a sound source (e.g., the vibrating vocal folds) and an acoustic filter, formed by the resonances of the vocal tract (formants). Modulation in bird vocalization, in contrast, is thought to originate predominantly from the sound source, whereas the role of the resonance filter is generally considered subsidiary, emphasizing the complex time-frequency patterns of the source.
However, it has been suggested that, analogous to human speech production, tongue movements observed in parrot vocalizations modulate formant characteristics independently from the vocal source.
As yet, direct evidence of such a causal relationship is lacking.
In five Monk parakeets, Myiopsitta monachus, we replaced the vocal source, the syrinx, with a small speaker that generated a broad-band sound, and we measured the effects of tongue placement on the sound emitted from the beak.
The results show that tongue movements cause significant frequency changes in two formants and cause amplitude changes in all four formants present between 0.5 and 10 kHz.
We suggest that lingual articulation may thus in part explain the well-known ability of parrots to mimic human speech, and, even more intriguingly, may also underlie a speech-like formant system in natural parrot vocalizations.
Gabriel J.L. Beckers
Journal: Current Biology
Experience can change the 'light-from-above' prior
Wendy J Adams, Erich W Graf & Marc O Ernst
Source: www.nature.com/07 September 2004
To interpret complex and ambiguous input, the human visual system uses prior knowledge or assumptions about the world.
We show that the 'light-from-above' prior, used to extract information about shape from shading, is modified in response to active experience with the scene.
The resultant adaptation is not specific to the learned scene but generalizes to a different task, demonstrating that priors are constantly adapted by interactive experience with the environment.
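The role of such a prior can be illustrated with a toy Bayesian calculation. A disc that is brighter on top is equally consistent with a convex shape lit from above and a concave shape lit from below, so the prior over light direction decides the percept; shifting that prior, as the adaptation experiment does, shifts the interpretation. The numbers and function below are hypothetical, not taken from the study:

```python
def shape_posterior(prior_light_above=0.8):
    """Posterior probability that a disc shaded bright-on-top is convex.

    The image likelihood is the same for (convex, light above) and
    (concave, light below), so only the prior on light direction
    differentiates the two interpretations. Toy numbers, for illustration.
    """
    p_convex = 0.5                          # flat prior over shape
    like_convex = prior_light_above         # P(image | convex) = P(light above)
    like_concave = 1.0 - prior_light_above  # P(image | concave) = P(light below)
    num = like_convex * p_convex
    return num / (num + like_concave * (1.0 - p_convex))
```

With a strong light-from-above prior the bright-on-top disc is seen as a bump; weaken or invert the prior and the same image tips toward a dent.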
Marc O. Ernst, Ph.D. E-Mail: email@example.com
Department of Psychology, University of Southampton, Southampton
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Nature Neuroscience, advance online publication 5 September 2004; October 2004 issue
Biophysics - Robustness properties of circadian clock architectures
Jörg Stelling, Ernst Dieter Gilles, and Francis J. Doyle III
Robustness, a relative insensitivity to perturbations, is a key characteristic of living cells.
However, the specific structural characteristics that are responsible for robust performance are not clear, even in genetic circuits of moderate complexity.
Formal sensitivity analysis allows the investigation of robustness and fragility properties of mathematical models representing regulatory networks, but it yields only local properties with respect to a particular choice of parameter values.
Here, we show that by systematically investigating the parameter space, more global properties linked to network structure can be derived. Our analysis focuses on the genetic oscillator responsible for generating circadian rhythms in Drosophila as a prototypic dynamical cellular system.
Analysis of two mathematical models of moderate complexity shows that the tradeoff between robustness and fragility is largely determined by the regulatory structure. Rank-ordered sensitivities, for instance, allow the correct identification of protein phosphorylation as an influential process determining the oscillator's period.
Furthermore, sensitivity analysis confirms the theoretical insight that hierarchical control might be important for achieving robustness.
The complex feedback structures encountered in vivo, however, do not seem to enhance robustness per se but confer robust precision and adjustability of the clock while avoiding catastrophic failure.
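The rank-ordering idea can be illustrated on a toy system. The sketch below is not the authors' Drosophila clock models (whose equations are not reproduced here); it uses a damped harmonic oscillator, whose period is known in closed form, so the finite-difference estimates of the normalized sensitivities d(ln T)/d(ln p) can be checked by hand.

```python
import math

# Toy sketch of rank-ordered sensitivity analysis, NOT the authors' Drosophila
# clock models: for a damped harmonic oscillator the period is known in closed
# form, T = 2*pi / sqrt(k/m - b^2/(4*m^2)), so the finite-difference estimates
# of the normalized sensitivities d(ln T)/d(ln p) can be checked analytically.

def period(m, k, b):
    return 2.0 * math.pi / math.sqrt(k / m - b**2 / (4.0 * m**2))

def normalized_sensitivity(f, params, name, rel_step=1e-6):
    """Finite-difference estimate of d(ln f)/d(ln p) for parameter `name`."""
    base = f(**params)
    bumped = dict(params, **{name: params[name] * (1.0 + rel_step)})
    return (f(**bumped) - base) / (base * rel_step)

params = {"m": 1.0, "k": 1.0, "b": 0.5}
sens = {p: normalized_sensitivity(period, params, p) for p in params}

# Rank parameters by influence on the period, the paper's rank-ordering step.
ranking = sorted(sens, key=lambda p: abs(sens[p]), reverse=True)
for p in ranking:
    print(f"{p}: dlnT/dln{p} = {sens[p]:+.4f}")
```

For these illustrative parameter values the stiffness k ranks as most influential and the damping b as least, mirroring how a ranked sensitivity list singles out the processes that most strongly set an oscillator's period.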
Original publication: Robustness properties of circadian clock architectures
Prof. Dr.-Ing. Ernst Gilles
Professor Francis J. Doyle Ph.D.
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany
Department of Chemical Engineering, University of California, Santa Barbara
Surround sound can be delivered to consumers more efficiently
Source: Press Release University of Surrey//www.alphagalileo.org/09 September 2004
Recent research conducted by scientists at the University of Surrey, in collaboration with Bang & Olufsen and the BBC, shows that surround sound can be delivered to the consumer more efficiently by taking into account the results of perceptual tests.
Although improvements in the audio quality of consumer entertainment systems such as DVDs, CDs, digital TV, home cinema and computer games are technically possible, they may no longer be necessary.
In fact, the intelligent limiting of sound quality, based on the results of perceptual tests using real programme material, could enable media companies to use network capacity to increase the number and types of services available, while still delivering good surround sound quality.
The research also revealed that if visual and audio images are presented simultaneously, such as when watching TV and films or playing computer games, subjects' perception of sound quality was altered.
Describing the outcomes of the project, Dr Francis Rumsey, the project leader, said: "We were surprised to find just how much we could reduce the sound quality of some of the channels in a typical five-channel home cinema system without our listening subjects reporting a large change in overall quality.
These were expert listeners, highly trained as sound engineers, so they would have noticed if there were major changes. Although we can't claim that all material can be delivered with these sorts of compromises, there is certainly a lot of typical music and audio-visual material that could be treated in this way.
We have developed a prototype expert system that can be used to predict the resulting quality when certain changes are made to the sound quality of specific audio channels.
This could be used by broadcasters and other media delivery organisations or equipment manufacturers to optimise the use of their networks. It means that they can still deliver good quality surround sound to their audiences without necessarily taking up as much bandwidth as would be needed normally.
Our results could be used in conjunction with existing technology that is used to squeeze high quality audio into a small space, such as MPEG and Dolby Digital coders.
Stroke study suggests humans can live without dreams - Dreamless woman remains healthy - One stroke victim has gone a year without dreaming - but feels just fine.
Source: Helen Pilcher//www.nature.com/10 September 2004
A woman who stopped dreaming after a stroke is helping researchers unravel the mysteries of sleep.
The 73-year-old patient was admitted to hospital after a stroke disrupted blood flow to an area at the back of her brain, called the occipital lobe. At first, her symptoms were not unusual - she lost some vision and was weak on one side of the body. But as the initial problems faded a few days later, a new symptom emerged: the woman had stopped dreaming.
Her story is recorded in the Annals of Neurology.
She used to experience 3 to 4 dreams per week, says Claudio Bassetti, now of University Hospital Zurich in Switzerland, who studied the woman. After the stroke, she had no dreams for a whole year, yet her sleep and mental functions appeared otherwise unaffected.
People have been fascinated by dreams for centuries. Psychologist Sigmund Freud believed that dreams offer a release for repressed feelings. Others think they help us empty our minds at the end of a busy day, or solve problems as we sleep.
But the stroke study suggests that humans can live without dreams. "I don't think they have any real function," comments Jim Horne, who studies sleep at Loughborough University, UK.
"I think that dreams are the cinema of the mind," he continues. "They help to keep the brain entertained while we are asleep."
Bassetti, however, cautions against drawing firm conclusions from a single case. "How dreams are generated, and what purpose they might serve, are completely open questions at this point," he says.
To try to discover what was going on, Bassetti's team monitored the woman's brain waves as she slept. The researchers took 4 night-long recordings over 6 weeks.
The patient reported no dreams even when woken in the midst of rapid eye movement (REM) sleep, which is normally associated with dreaming. But to the researchers' surprise, her sleep pattern was perfectly normal.
This shows that REM sleep and dreaming do not always go hand in hand, says Bassetti. The occipital lobe, which was damaged by the woman's stroke, is likely to play an important role in dreaming. But different neural areas, such as the brain stem and midbrain, are thought to control REM sleep.
The study also backs up reports of patients who lost both their dreams and their REM sleep for up to a year after taking certain antidepressant drugs. "These people don't go mad," says Horne. They are completely normal and have no memory problems.
At present, the functions of REM sleep are as elusive as those of dreaming. Adults spend a quarter of their nightly slumber in REM sleep, scattered throughout the night. The remaining time is spent deeper in unconsciousness. So REM may simply bring the brain back from deep sleep periodically to help us wake up if we need to, says Horne.
But the function may be different in newborns, who typically spend around 8 hours per day in REM sleep. Here, the sleep pattern may be related to brain development.
Prof. Dr. Claudio Bassetti
Brain scans show hypnosis at work - Being hypnotized really does have a physical effect on the brain
From the BA Festival of Science, Exeter, UK
Source: www.nature.com/09 September 2004
A brain-imaging study has shed light on why some people are more susceptible than others to hypnosis. By hinting at the brain processes involved, the analysis also suggests that hypnosis - both the stage and therapeutic varieties - does have genuine effects on the brain's workings.
Those who are easily hypnotized show different activity in a brain region called the anterior cingulate gyrus, which is involved in planning our future actions, reports John Gruzelier of Imperial College London.
In a hypnotic trance, the function of this region may be impaired, he says, meaning that subjects are more likely to follow a hypnotist's suggestion: "The hypnotist tells you to go with the flow, and so you don't evaluate what you're doing."
This is consistent with the idea that those who are easiest to hypnotize tend to describe themselves as generally letting go of their inhibitions quite easily, Gruzelier told the British Association Festival of Science in Exeter, UK, on Thursday.
Some experts have argued that hypnotism is not a real physiological phenomenon at all, but rather the result of hypnotists imposing themselves on their subjects, who may be simply swept along. Stage hypnotists are often accused of intimidating their 'volunteers' into playing along for the sake of the show.
This effect is certainly part of the picture in performance hypnotism, says Gruzelier. "Lots of it is due to personality and persuasiveness, but then that's showbusiness," he said. Such tactics can cause people to ignore the potential of genuine hypnosis to ease painful diseases, he adds: "Unquestionably, stage hypnotists give hypnotism a bad name."
"Humans like to comply; they don't like to be embarrassed," agrees Peter Naish, who studies hypnosis at the Open University in Milton Keynes, UK. But he insists that underneath the coercion used by charismatic stage acts, a physiological effect is occurring. "The evidence really is there; hypnosis is not miraculous," he adds.
Gruzelier studied 24 subjects, half of whom were categorized as succumbing easily to hypnotism, and half of whom were resistant. He scanned the volunteers' brains while they tackled a problem called the Stroop task, a test of mental flexibility in which subjects see a list of colour words, each printed in a non-matching ink colour - the word 'green' printed in blue, say - and must respond according either to the word's meaning or to the ink colour.
Gruzelier tested the subjects before and after they underwent a standard procedure used by hypnotists to put their subjects into a trance. In resistant subjects, the anterior cingulate gyrus was less strongly activated after the procedure than before, showing that their brains were working less hard as they got better at planning how to complete the task.
But in hypnotized volunteers, the anterior cingulate, and the regions that govern it, were more strongly activated when they were in a trance, showing that they were struggling harder to plot their actions, Gruzelier reported. He suspects that this impaired ability to plan for oneself makes people more suggestible.
This process may underlie hypnotists' ability to influence their subjects' behaviour, be it stopping smoking or barking like a dog whenever they hear Elvis Presley. Subjects frequently report that they feel compelled to do something even though they know they don't really want to.
Gruzelier also suspects that hypnotism may interfere with subjects' evaluation of future emotions such as embarrassment. A region in the brain's medio-frontal cortex, close to the anterior cingulate, governs our perception of how we will feel if we take a certain course of action, he says. If connections between the two regions are impaired, stage volunteers might happily act without thinking.
That may well be the final weapon in the showbiz hypnotist's arsenal, says Gruzelier. By not only making volunteers suggestible but also taking away their sense of shame, the possibilities for public ridicule are immense. "The structure that monitors the emotional consequences of future actions becomes disconnected," he suggests. "So you make a fool of yourself."
Professor John Gruzelier
British Association Festival of Science in Exeter
What is the gyrus cinguli (cingulate gyrus)?
Left and right ears not created equal as newborns process sound
Challenging decades of scientific belief that the decoding of sound originates from a preferred side of the brain, UCLA and University of Arizona scientists have demonstrated that right-left differences for the auditory processing of sound start at the ear.
Reported in the Sept. 10 edition of Science, the new research could hold profound implications for rehabilitation of persons with hearing loss in one or both ears, and help doctors enhance speech and language development in hearing-impaired newborns.
"From birth, the ear is structured to distinguish between various types of sounds and to send them to the optimal side in the brain for processing," explained Yvonne Sininger, Ph.D., visiting professor of head and neck surgery at the David Geffen School of Medicine at UCLA. "Yet no one has looked closely at the role played by the ear in processing auditory signals."
Scientists have long understood that the auditory regions of the two halves of the brain sort out sound differently. The left side dominates in deciphering speech and other rapidly changing signals, while the right side leads in processing tones and music. Because of how the brain's neural network is organized, the left half of the brain controls the right side of the body, and the left ear is more directly connected to the right side of the brain.
Prior research had assumed that a mechanism arising from cellular properties unique to each brain hemisphere explained why the two sides of the brain process sound differently. But Sininger's findings suggest that the difference is inherent in the ear itself. "We always assumed that our left and right ears worked exactly the same way," she said. "As a result, we tended to think it didn't matter which ear was impaired in a person. Now we see that it may have profound implications for the individual's speech and language development."
Working with co-author Barbara Cone-Wesson, Ph.D., associate professor of speech and hearing sciences at the University of Arizona, Sininger studied tiny amplifiers in the outer hair cells of the inner ear. "When we hear a sound, tiny cells in our ear expand and contract to amplify the vibrations," explained Sininger. "The inner hair cells convert the vibrations to neural signals and send them to the brain, which decodes the input." "These amplified vibrations also leak back out to the ear in a phenomenon called otoacoustic emission (OAE)," added Sininger. "We measured the OAE by inserting a microphone in the ear canal."
In a six-year study, the UCLA/UA team evaluated more than 3,000 newborns for hearing ability before they left the hospital. Sininger and Cone-Wesson placed a tiny probe device in the baby's ear to test its hearing. The probe emitted a sound and measured the ear's OAE.
The researchers measured the babies' OAEs with two types of sound: first rapid clicks, then sustained tones. They were surprised to find that the left ear provides extra amplification for tones like music, while the right ear provides extra amplification for rapid sounds timed like speech.
"We were intrigued to discover that the clicks triggered more amplification in the baby's right ear, while the tones induced more amplification in the baby's left ear," said Sininger. "This parallels how the brain processes speech and music, except the sides are reversed due to the brain's cross connections."
"Our findings demonstrate that auditory processing starts in the ear before it is ever seen in the brain," said Cone-Wesson. "Even at birth, the ear is structured to distinguish between different types of sound and to send it to the right place in the brain."
Previous research supports the team's new findings. For example, earlier research shows that children with impairment in the right ear encounter more trouble learning in school than children with hearing loss in the left ear.
"If a person is completely deaf, our findings may offer guidelines to surgeons for placing a cochlear implant in the individual's left or right ear and influence how cochlear implants or hearing aids are programmed to process sound," explained Cone-Wesson. "Sound-processing programs for hearing devices could be individualized for each ear to provide the best conditions for hearing speech or music."
"Our next step is to explore parallel processing in brain and ear simultaneously," said Sininger. "Do the ear and brain work together or independently in dealing with stimuli? How does one-sided hearing loss affect this process? And finally, how do the effects of hearing loss in the right ear compare with those of loss in the left ear?"
Sininger, Yvonne S. PhD.
Analytic solutions of the radiation modes problem and the active control of sound power
by Dr C Maury and Professor SJ Elliott
Source: papers from the Royal Society Journals/15 Sep 2004
Nowadays, considerable research in aerospace, aeronautics and ground transport concentrates on the control of the noise radiated from vibrating structures. Here mathematical solutions are given to the problem of determining a set of optimal surface velocity patterns which best radiate sound.
The exact nature of these solutions provides an insight that has not previously been identified. They could be incorporated into the design of noise control systems acting to suppress highly radiating velocity patterns. This could enhance the efficiency of such systems for controlling low-frequency sound in demanding applications such as reducing the flow-induced noise transmitted through aircraft fuselages.
Dr Cedric Maury, Institute of Sound and Vibration Research, University of Southampton, SOUTHAMPTON SO17 1BJ
Onset of Frailty in Older Adults and the Protective Role of Positive Affect
Glenn V. Ostir, Kenneth J. Ottenbacher, and Kyriakos S. Markides, University of Texas Medical Branch at Galveston
Source: Psychology and Aging, 2004, Vol. 19, No. 3, 402-408
©2004 American Psychological Association
The aim of this study was to examine the longitudinal association between positive affect and onset of frailty for 1,558 initially nonfrail older Mexican Americans from the Hispanic Established Populations for Epidemiological Studies of the Elderly database.
The incidence of frailty over the 7-year follow-up period was 7.9%. High positive affect was found to significantly lower the risk of frailty. Each unit increase in baseline positive affect score was associated with a 3% decreased risk of frailty after adjusting for relevant risk factors.
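As a rough sketch of what that per-unit figure implies, assuming the 3% reduction compounds multiplicatively across score units (as in a proportional-hazards model; the score increments below are illustrative, not taken from the study):

```python
# Illustrative only: a 3% decreased frailty risk per unit of positive affect,
# treated multiplicatively, as in a proportional-hazards model.
PER_UNIT_RR = 0.97  # relative risk per one-unit increase in affect score

def relative_risk(units):
    """Relative frailty risk for an increase of `units` in baseline affect."""
    return PER_UNIT_RR ** units

for units in (1, 5, 10):
    print(f"+{units} units -> relative risk {relative_risk(units):.2f}")
```

Under this assumption, a 10-unit-higher baseline affect score would correspond to roughly a quarter lower frailty risk, which conveys the practical scale of the reported association.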
Findings add to a growing positive psychology literature by showing that positive affect is protective against the functional and physical decline associated with frailty.
Glenn V. Ostir, Ph.D.
Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua