
Elon Musk says human language could cease to exist within 10 years

Elon Musk, the billionaire tech entrepreneur, predicts that given rapid progress in brain technology and artificial intelligence, spoken language may soon become obsolete. Evolution gave humans the ability to speak different languages, making expression easy. If Musk’s prediction comes true, what method of expression will humans adopt?

Elon Musk may have chosen a bizarre, cyborg-like name for his newborn child – just try pronouncing “X Æ A-12” – but the billionaire entrepreneur says spoken language itself may soon become obsolete with the rise of new brain tech.

When Musk appeared on another episode of the Joe Rogan Experience on Thursday, the UFC-commentator-slash-podcast-host congratulated him on the birth of his sixth son this week, but couldn’t help asking about the infant’s unique, headline-grabbing name.

“How do you say the name? Is it a placeholder?” Rogan asked, drawing an awkward laugh from Musk.

https://twitter.com/TruthQuest11/status/1258675152684044288

“Well, first of all, my partner is the one that mostly came up with the name… She’s great at names,” Musk said, adding: “It’s just X, the letter X, and then the Æ is pronounced ‘ash,’ and A-12 is my contribution” – which he says stands for “Archangel-12,” the CIA recon aircraft later developed into the SR-71 Blackbird, the “coolest plane ever.”

As the conversation drifted into neural nets and artificial intelligence, Musk said “Neuralink” technology – a battery-powered device implanted directly into the skull – could be rolled out within the next year, and potentially “fix almost anything that is wrong with the brain.”

Read more: Elon Musk launches Starlink satellites to provide worldwide internet access

Eventually, in addition to curing disorders like epilepsy, he said language itself could be made obsolete thanks to the new tech – and perhaps unpronounceable baby names along with it.

“You would be able to communicate very quickly and with far more precision… I’m not sure what would happen to language,” he said, explaining that human beings are “already partly a cyborg, or an AI symbiote” whose ‘hardware’ is merely in need of an upgrade.

Asked about how long it might take before mankind goes mute, Musk said it could happen in five to 10 years in a “best-case scenario” if the technology continues to develop at its current rapid pace. Of course, even in the entrepreneur’s brave new world, he said some might still choose to speak for “sentimental reasons,” even when “mouth noises” are but a primitive vestige of the past.

Rise of new brain technology: new horizons for humanity

Someday, people who have lost their ability to speak may get their voice back. A new study demonstrates that electrical activity in the brain can be decoded and used to synthesize speech.

The study, published on Wednesday in Nature, reported data from five patients whose brains were already being monitored for epileptic seizures, with stamp-size arrays of electrodes placed directly on the surfaces of their brains.

As the participants read off hundreds of sentences—some from classic children’s stories such as Sleeping Beauty and Alice in Wonderland—the electrodes monitored slight fluctuations in the brain’s voltage, which computer models learned to correlate with their speech.
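The decoding step here is essentially a learned mapping from neural voltage features to speech features. As a rough, purely illustrative sketch in Python (synthetic stand-in data and a simple linear model, not the study’s actual, far more sophisticated approach), it might look like this:

    # Purely illustrative: learn a linear map from windowed electrode
    # voltages to acoustic features. Synthetic data stands in for real
    # recordings; the study's actual models were far more sophisticated.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_frames, n_electrodes, n_acoustic = 5000, 128, 32

    # X: voltage features per electrode for each time frame.
    X = rng.standard_normal((n_frames, n_electrodes))
    # y: acoustic features aligned to the same frames (synthetic here).
    y = X @ rng.standard_normal((n_electrodes, n_acoustic))
    y += 0.1 * rng.standard_normal(y.shape)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"held-out R^2: {model.score(X_te, y_te):.3f}")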

Read more: Internet from Space: Elon Musk Halfway to fulfill his Dream

This translation was accomplished through an intermediate step that connected brain activity to a complex simulation of a vocal tract, a setup that builds on recent studies showing that the brain’s speech centers encode the movements of the lips, tongue, and jaw.
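In outline, that means the pipeline factors into two learned stages: neural activity is first decoded into articulator movements, and those movements are then rendered into sound. A minimal, hypothetical sketch of that two-stage structure (synthetic data, simple linear models standing in for the study’s actual models):

    # Hypothetical two-stage sketch of a vocal-tract-based decoder:
    # stage 1 decodes articulator movements (lips, tongue, jaw) from
    # neural features; stage 2 maps those movements to acoustics.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_frames, n_electrodes, n_artics, n_acoustic = 4000, 128, 12, 32

    neural = rng.standard_normal((n_frames, n_electrodes))
    # Synthetic "ground truth": kinematics derived from neural data,
    # acoustics derived from kinematics.
    kinematics = neural @ rng.standard_normal((n_electrodes, n_artics))
    acoustics = kinematics @ rng.standard_normal((n_artics, n_acoustic))

    # Stage 1: neural activity -> articulatory kinematics.
    stage1 = Ridge(alpha=1.0).fit(neural, kinematics)
    # Stage 2: articulatory kinematics -> acoustic features.
    stage2 = Ridge(alpha=1.0).fit(kinematics, acoustics)

    # At synthesis time the two stages are chained end to end.
    decoded = stage2.predict(stage1.predict(neural))
    print(decoded.shape)  # (4000, 32)

Factoring through the vocal tract exploits the finding that the brain’s speech centers encode articulator movements rather than sounds directly; the alternative approach described below maps neural activity to acoustics in a single stage.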

“It’s a very, very elegant approach,” says Christian Herff, a postdoctoral researcher at Maastricht University who studies similar brain-activity-to-speech methods.

The system marks the latest advance in a rapidly developing effort to map the brain and engineer methods of decoding its activity. Just weeks ago, a separate team including Herff published a model in the Journal of Neural Engineering that also synthesized speech from brain activity, using a slightly different approach without the simulated vocal tract.

“Speech decoding is an exciting new frontier for brain-machine interfaces,” says the University of Michigan’s Cynthia Chestek, who was not involved in either study. “And there is a subset of the population that has a really big use for this.”

RT with additional input from GVS News Desk