Scientists have developed a brain-computer interface (BCI) designed to restore the ability to communicate in people with spinal cord injuries and neurological disorders such as amyotrophic lateral sclerosis (ALS). The system has the potential to work more quickly than previous BCIs, and it does so by tapping into one of the oldest means of communication we have: handwriting.
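The study's decoding pipeline is not reproduced here, but the core idea is to map short windows of recorded neural activity to the characters the user is attempting to write. Below is a minimal, hypothetical sketch of such a decoder in PyTorch; the channel count, window length, alphabet size, and recurrent model are illustrative assumptions, not the researchers' actual implementation.

```python
# Hypothetical sketch: decoding attempted-handwriting neural activity into
# characters. Shapes, feature choices, and the model are illustrative only.
import torch
import torch.nn as nn

N_CHANNELS = 192   # assumed number of recording electrodes
N_CLASSES = 31     # assumed alphabet: 26 letters plus punctuation tokens
WINDOW = 100       # assumed window of 100 time bins of binned firing rates

class HandwritingDecoder(nn.Module):
    """Maps a window of binned firing rates to a character distribution."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, 128, batch_first=True)
        self.head = nn.Linear(128, N_CLASSES)

    def forward(self, x):               # x: (batch, WINDOW, N_CHANNELS)
        _, h = self.rnn(x)              # h: (1, batch, 128) final hidden state
        return self.head(h.squeeze(0))  # (batch, N_CLASSES) logits

decoder = HandwritingDecoder()
fake_rates = torch.randn(4, WINDOW, N_CHANNELS)  # stand-in neural data
logits = decoder(fake_rates)
print(logits.argmax(dim=1))  # predicted character indices
```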
There is no long-term benefit to surgically placing tubes in a young child's ears to reduce recurrent ear infections, compared with giving oral antibiotics, a randomized trial determined.
The brain alters our sense of time to synchronize our joint perception of sound and vision. A new study finds that this recalibration depends on brain signals that constantly adapt to our environment in order to sample, order, and associate competing sensory inputs.
Biologists have identified specific neural firing patterns that can induce stuttering and stammering in songbirds. The discovery offers a model system that could enable researchers to uncover the origins of speech dysfunction in humans and to identify possible treatments to restore normal speech.
Researchers at Linköping University, Sweden, have made several discoveries about the mechanisms by which the inner hair cells of the ear convert sounds into nerve signals that are processed in the brain. The results, presented in the scientific journal Nature Communications, challenge the picture of the anatomical organisation and workings of the hearing organ that has prevailed for decades.
A gene called GAS2 plays a key role in normal hearing, and its absence causes severe hearing loss, according to a study led by researchers in the Perelman School of Medicine at the University of Pennsylvania.
A new study led by Wits University scientist Professor Jonah Choiniere used CT scanning and detailed measurements of the relative size of the eyes and inner ears of nearly 100 living bird and extinct dinosaur species to investigate how the sensory adaptations of these two groups compared. The team found that a diminutive theropod named Shuvuuia had extraordinary hearing and night vision, suggesting that it could have hunted in complete darkness.
Humans are inherently emotional, and to understand them better, robots need to recognize emotions in human speech. Because of the complexity of auditory perception models, however, emotion recognition is a challenging task. In a new study, researchers from Japan and China designed a novel feature that captures temporal and contextual information and extracts the temporal variation of emotion using a parallel neural network architecture, opening doors to future applications in more complex speech analysis tasks.
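The paper's exact architecture is not reproduced here, but a "parallel" design typically runs two branches side by side, one modeling short-term temporal variation and one modeling utterance-level context, and fuses them before classification. The sketch below illustrates that general idea with assumed feature sizes, branch choices, and a four-emotion label set; none of these specifics come from the study.

```python
# Hedged sketch of a parallel speech-emotion classifier: one branch models
# local temporal patterns, the other longer-range context, and their outputs
# are fused. Branches, sizes, and the 4-class setup are assumptions.
import torch
import torch.nn as nn

N_MELS = 40      # assumed log-mel spectrogram features per frame
N_EMOTIONS = 4   # e.g., neutral / happy / sad / angry (illustrative)

class ParallelEmotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch 1: 1-D convolutions over time capture short-term variation.
        self.conv = nn.Sequential(
            nn.Conv1d(N_MELS, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Branch 2: a bidirectional LSTM captures utterance-level context.
        self.lstm = nn.LSTM(N_MELS, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64 + 128, N_EMOTIONS)

    def forward(self, x):                                # x: (batch, frames, N_MELS)
        c = self.conv(x.transpose(1, 2)).squeeze(-1)     # (batch, 64)
        _, (h, _) = self.lstm(x)                         # h: (2, batch, 64)
        ctx = torch.cat([h[0], h[1]], dim=1)             # (batch, 128)
        return self.head(torch.cat([c, ctx], dim=1))     # (batch, N_EMOTIONS)

net = ParallelEmotionNet()
print(net(torch.randn(2, 300, N_MELS)).shape)  # torch.Size([2, 4])
```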
People's ability to perceive speech sounds has been studied in depth, especially during the first year of life, but what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do neural encoding processes need time to mature?
Smart assistant devices often need to perform speech translation, which does not always preserve the desired voice identity because of drawbacks in conventional voice conversion (VC) models. In a new study, researchers from Japan Advanced Institute of Science and Technology designed a VC model that mimics and controls speaker voice identity during speech translation using two deep-learning-based training frameworks, opening doors to voice modification, voice restoration, and voice cloning applications.
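The study's two training frameworks are not detailed here, but the usual mechanism behind speaker-identity control in VC is conditioning a decoder on a target-speaker embedding, so every generated frame carries the desired voice. The following sketch shows only that conditioning step; all dimensions, modules, and names are assumptions rather than the authors' model.

```python
# Minimal sketch of speaker-conditioned voice conversion: linguistic content
# features from the source utterance are decoded into target-speaker acoustics
# by conditioning on a speaker embedding. Sizes and design are assumptions.
import torch
import torch.nn as nn

CONTENT_DIM = 256   # assumed content (e.g., phonetic) feature size per frame
SPK_DIM = 64        # assumed speaker-embedding size
MEL_DIM = 80        # assumed mel-spectrogram output size per frame

class VoiceConverter(nn.Module):
    def __init__(self):
        super().__init__()
        self.decoder = nn.GRU(CONTENT_DIM + SPK_DIM, 256, batch_first=True)
        self.proj = nn.Linear(256, MEL_DIM)

    def forward(self, content, spk_emb):
        # Broadcast the target-speaker embedding across all frames, so every
        # decoding step is conditioned on the desired voice identity.
        frames = content.size(1)
        spk = spk_emb.unsqueeze(1).expand(-1, frames, -1)
        out, _ = self.decoder(torch.cat([content, spk], dim=-1))
        return self.proj(out)            # (batch, frames, MEL_DIM)

vc = VoiceConverter()
content = torch.randn(1, 120, CONTENT_DIM)   # stand-in source content
target_voice = torch.randn(1, SPK_DIM)       # stand-in speaker embedding
mel = vc(content, target_voice)
print(mel.shape)  # torch.Size([1, 120, 80])
```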