The LENA™ (Language Environment Analysis) system automatically labelled infant and child vocalisations in the recordings; an automatic acoustic analysis designed by the researchers then distinguished the pre-verbal vocalisations of very young children with autism from those of typically developing children with 86 per cent accuracy.
Based on the same automated vocal analysis, the system also differentiated both typically developing children and children with autism from children with language delay.
The researchers analysed 1,486 all-day recordings from 232 children (more than 3.1 million automatically identified child utterances) using an algorithm based on 12 acoustic parameters associated with vocal development.
The most important of these parameters proved to be the ones targeting syllabification, the ability of children to produce well-formed syllables with rapid movements of the jaw and tongue during vocalisation. Infants show voluntary control of syllabification and voice in the first months of life and refine this skill as they acquire language.
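The analysis described above, a vector of acoustic parameters per child fed to a group classifier, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' published implementation: the nearest-centroid classifier, the choice of which parameters separate the groups, and all numbers here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 12  # acoustic parameters per child (specific parameters are hypothetical)

# Synthetic per-child feature vectors: the group means are made to differ
# on the first four parameters, standing in for syllabification-related
# measures (an assumption for illustration only).
def make_group(mean_shift, n):
    means = np.zeros(N_PARAMS)
    means[:4] = mean_shift
    return rng.normal(means, 1.0, size=(n, N_PARAMS))

typical = make_group(1.0, 100)   # typically developing children
autism = make_group(-1.0, 50)    # children with autism

X = np.vstack([typical, autism])
y = np.array([0] * 100 + [1] * 50)

# Nearest-centroid classifier: assign each child to the group whose mean
# feature vector is closest (a stand-in for the study's statistical model).
centroids = np.array([X[y == g].mean(axis=0) for g in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

With well-separated synthetic groups the accuracy is high; the study's reported 86 per cent figure came from real recordings and a different model.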
The autistic sample showed little evidence of development on these parameters, as indicated by low correlations between the parameter values and the children's ages (1 to 4 years).
On the other hand, all 12 parameters showed statistically significant development for both typically developing children and those with language delays.
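The development measure described in the two paragraphs above, correlating each parameter's value with age within a group, can be sketched like this on synthetic data. The data-generating assumptions (a linear age trend for the typically developing group, none for the autism group) are illustrative, not the study's actual results.

```python
import numpy as np

rng = np.random.default_rng(1)

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

ages = rng.uniform(1.0, 4.0, size=200)  # years, matching the study's age range

# Hypothetical values of one acoustic parameter per child: a clear age
# trend for the typically developing group, essentially none for the
# autism group (assumed effect sizes for illustration).
typical_param = 0.8 * ages + rng.normal(0.0, 0.5, size=ages.size)
autism_param = rng.normal(2.0, 0.8, size=ages.size)

print(f"typical group: r = {pearson_r(ages, typical_param):.2f}")
print(f"autism group:  r = {pearson_r(ages, autism_param):.2f}")
```

A high correlation indicates the parameter develops with age; a correlation near zero, as simulated for the autism group here, indicates little development on that parameter.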
The research team, led by D. Kimbrough Oller, professor and chair of excellence in audiology and speech language pathology at the University of Memphis, called the findings a proof of concept that automated analysis of massive samples of vocalisations can now be included in the scientific repertoire for research on vocal development. LENA, which allows the inexpensive collection and analysis of quantities of data previously unimagined in language research, could significantly impact the screening, assessment and treatment of autism, and the behavioural sciences in general.
Since the analysis is based not on words but on sound patterns, the technology could in principle be used to screen speakers of any language for autism spectrum disorders, Warren said. "The physics of human speech are the same in all people as far as we know."