Observational studies have noted that children with autism often speak more slowly than typically developing children, and that their speech differs in other ways as well, most notably in tone, intonation, and rhythm. These “prosodic” differences have proved very difficult to characterize consistently and objectively, and their origins have remained unclear for decades. A study by researchers from Northwestern University and collaborators in Hong Kong aims to shed light on the causes and diagnosis of the condition. Their approach uses machine learning to find speech patterns in autistic children that are shared across Cantonese and English. By separating hereditary influences on autistic individuals’ communication from environmental ones, researchers may be able to better understand the condition’s causes and develop new treatments. The results were recently published in the journal PLOS ONE.
The research team developed a supervised machine learning algorithm to identify speech differences associated with autism. The training dataset consisted of recordings of young children, with and without autism, narrating their own version of the story in the wordless children’s picture book “Frog, Where Are You?” in English or Cantonese. Because English and Cantonese differ substantially in structure, the researchers reasoned that any speech patterns shared by autistic children across both languages were likely due to hereditary factors. They also observed variation between the languages, pointing to more malleable aspects of speech that could serve as effective intervention targets. Using machine learning to pinpoint the speech features most predictive of autism was a considerable advance: earlier work had been constrained by the English-language bias of autism research and by the subjectivity of human raters in categorizing speech differences between individuals with and without autism.
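To make the approach concrete, the sketch below shows what such a pipeline could look like in Python. It is a minimal illustration under assumed choices, not the authors’ published method: it extracts a few prosodic features (pitch statistics and a crude speech-rate proxy) with librosa and fits a scikit-learn classifier whose feature importances indicate which features carry the most signal. The file names, labels, and feature set are hypothetical placeholders.

```python
# A minimal sketch of this kind of pipeline, not the authors' published
# method. Feature choices, file names, and labels are hypothetical
# placeholders; it assumes the narration recordings exist as WAV files.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier


def prosodic_features(wav_path: str) -> np.ndarray:
    """Extract a few simple prosodic features from one narration recording."""
    audio, sr = librosa.load(wav_path, sr=None)

    # Pitch (fundamental frequency) track; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Crude speech-rate proxy: detected acoustic onsets per second.
    onsets = librosa.onset.onset_detect(y=audio, sr=sr)
    duration = len(audio) / sr

    return np.array([np.nanmean(f0), np.nanstd(f0), len(onsets) / duration])


# Hypothetical dataset: (recording, label) pairs, 1 = autism, 0 = typical.
recordings = [
    ("cantonese_child_001.wav", 1),
    ("cantonese_child_002.wav", 0),
    # ... one entry per child in the corpus
]

X = np.vstack([prosodic_features(path) for path, _ in recordings])
labels = np.array([label for _, label in recordings])

# Supervised classifier; in practice performance would be estimated with
# cross-validation over the full corpus rather than a single fit.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Running this separately on the English and Cantonese recordings and
# comparing which features matter in both languages mirrors the study's
# logic: features predictive in both languages point to more stable,
# possibly hereditary, markers, while language-specific ones suggest
# more malleable intervention targets.
print(dict(zip(["f0_mean", "f0_std", "onset_rate"], clf.feature_importances_)))
```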