Intel’s Lama Nachman and Peter Scott-Morgan: Two Scientists, One a ‘Human Cyborg’

Lama Nachman, Intel fellow and director of Intel’s Anticipatory Computing Lab, works to help Dr. Peter Scott-Morgan communicate. Previously, Nachman helped physicist Stephen Hawking to speak. Nachman and her team developed the Assistive Context-Aware Toolkit, software that helps people with severe disabilities communicate through keyboard simulation, word prediction and speech synthesis. (Credit: Intel Corporation)

“I will continue to evolve, dying as a human, living as a cyborg.”

That’s British roboticist Dr. Peter Scott-Morgan. In 2017, he received a diagnosis of motor neurone disease (MND), also known as ALS or Lou Gehrig’s disease. MND attacks the brain and nerves and eventually paralyzes all muscles, even those that enable breathing and swallowing.

Doctors told the 62-year-old scientist he’d probably die by the end of 2019, but Scott-Morgan had other plans: He wants to replace all his organs with machinery to become the “world’s first full cyborg.” Scott-Morgan began his transformation late last year when he underwent a series of operations to extend his life using technology.

He now relies on synthetic speech and has developed a lifelike avatar of his face for more effective communication with others. “Peter 2.0 is now online,” Scott-Morgan announced after his surgeries late last year. “This is MND with attitude.”

Later this week, England’s Channel 4, a public-service television network, will air a documentary chronicling Scott-Morgan’s extraordinary journey.

Among the team of technologists working with Scott-Morgan is Lama Nachman, Intel fellow and director of Intel’s Anticipatory Computing Lab.

Nachman helped famed physicist Stephen Hawking speak; now she and her team are helping Scott-Morgan.

British roboticist Dr. Peter Scott-Morgan, who has motor neurone disease, began in 2019 to undergo a series of operations to extend his life using technology. (Credit: Cardiff Productions)

For almost eight years, Nachman helped Hawking communicate his almost mythical intellectual achievements through an open-source platform she and her team helped develop, called the Assistive Context-Aware Toolkit (ACAT). The software helps people with severe disabilities communicate through keyboard simulation, word prediction and speech synthesis. Hawking twitched a tiny muscle in his cheek to trigger a sensor on his glasses, which interfaced with his computer to type out sentences. For Scott-Morgan, Nachman’s team added gaze tracking, which lets him form sentences by staring at letters on his computer screen, as well as word prediction capabilities.
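
To make that interaction concrete, here is a minimal Python sketch, emphatically not Intel’s actual ACAT code, of trigger-driven “scanning” text entry: the interface cycles through letters, and a single binary signal, with a keypress standing in for Hawking’s cheek sensor or Scott-Morgan’s gaze dwell, selects the highlighted one. The prediction table is a toy invented for illustration.

```python
# Minimal sketch of trigger-driven scanning text entry (illustrative only,
# not Intel's ACAT source). Enter = the "trigger" that selects the
# highlighted letter; any other key lets the scan advance; 'q' quits.

import itertools

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

# Toy word-prediction table; a real system would use a language model.
PREDICTIONS = {
    "hel": ["hello", "help", "helmet"],
    "wor": ["world", "word", "work"],
}

def predict(text: str) -> list[str]:
    """Return candidate completions for the word currently being typed."""
    words = text.split()
    prefix = words[-1] if words and not text.endswith(" ") else ""
    return PREDICTIONS.get(prefix[:3], [])

def scan_and_type() -> str:
    typed = ""
    for letter in itertools.cycle(ALPHABET):
        key = input(f"typed: '{typed}' | highlighted: '{letter}' > ")
        if key == "q":
            break
        if key == "":  # the "twitch": select the highlighted letter
            typed += letter
            if suggestions := predict(typed):
                print("suggestions:", ", ".join(suggestions))
    return typed

if __name__ == "__main__":
    print("final text:", scan_and_type())
```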

“How can technology empower people? That’s been a thread in my life all along.”

A Palestinian who grew up in Kuwait, Nachman recalls neighbors calling her to fix their broken electronics and appliances. “I’ve always had this interest in figuring out the latest and greatest technologies and playing with them and breaking them and fixing them,” Nachman says.

Nachman’s team works on context-aware computing and human–artificial intelligence (AI) collaboration technologies that can help the elderly in their homes, students who might not thrive in standard classrooms and technicians in manufacturing facilities. “I’ve always felt that technology can empower people who are most marginalized,” Nachman says. “It can level the playing field and bring more equity into society, and that is most obvious for people with disabilities.”

While Hawking wanted more control over his conversations, Nachman says, “Peter is open to greater experimentation and the idea of him and the machine learning together. As a result, we have been researching how to build a response-generation capability that can listen to the conversation and suggest answers that he can quickly choose from or nudge in a different direction.”
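
The suggest-and-nudge loop she describes is easy to sketch. The snippet below is a hedged illustration rather than Intel’s research system: the canned reply bank stands in for a real dialogue model that would condition its candidates on what the partner actually said.

```python
# Hedged sketch of a response-suggestion loop (illustrative only).
# A real system would generate candidates with a dialogue model
# conditioned on `utterance`; this toy version ignores it and simply
# filters a canned reply bank by the user's chosen "nudge".

def generate_candidates(utterance: str, nudge: str = "neutral") -> list[str]:
    bank = {
        "neutral":  ["Yes.", "No.", "Tell me more."],
        "humorous": ["Only if there's tea involved.", "Bold of you to ask."],
        "formal":   ["I agree with that assessment.", "I would rather not."],
    }
    return bank.get(nudge, bank["neutral"])

def converse() -> None:
    while (heard := input("partner says (blank to stop): ")):
        nudge = input("nudge (neutral/humorous/formal): ") or "neutral"
        candidates = generate_candidates(heard, nudge)
        for i, reply in enumerate(candidates, 1):
            print(f"  {i}. {reply}")
        pick = input("choose a number (Enter to skip): ")
        if pick.isdigit() and 1 <= int(pick) <= len(candidates):
            print("speaking:", candidates[int(pick) - 1])

if __name__ == "__main__":
    converse()
```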

Intel’s Anticipatory Computing Lab team that developed Assistive Context-Aware Toolkit includes (from left) Alex Nguyen, Sangita Sharma, Max Pinaroc, Sai Prasad, Lama Nachman and Pete Denman. Not pictured are Bruna Girvent, Saurav Sahay and Shachi Kumar. (Credit: Lama Nachman)

While this approach isn’t as precise as the control Hawking preferred, Nachman says Scott-Morgan is willing to forgo some control in exchange for intuitive collaboration with his AI-powered communication interface because of the speed it affords him.

“My ventilator is a lot quieter than Darth Vader’s.”

Scott-Morgan is known for his wit and self-effacing humor, and he wants his artificial voice to be able to convey that. In addition to decreasing the latency, or “silence gaps,” between Scott-Morgan and his conversation partner, Nachman’s team is looking into how Scott-Morgan can express emotion. When we converse normally with one another, we read multiple cues, like expressions and tone, not just the words. For Scott-Morgan, the team is researching an AI system that listens to what’s going on and then prompts alternative suggestions and tones according to different criteria.
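
The article doesn’t say how Intel renders those tones, but one common mechanism in speech synthesis is prosody markup such as SSML. The sketch below, with invented tone presets, shows the general shape: a chosen reply is wrapped in rate and pitch tags before being handed to a synthesizer.

```python
# Illustrative only: wrap a chosen reply in SSML prosody tags so a
# speech synthesizer can vary its delivery. The tone presets are
# invented; the article does not state that Intel's system uses SSML.

TONES = {
    "neutral": {"rate": "medium", "pitch": "medium"},
    "witty":   {"rate": "fast",   "pitch": "+10%"},
    "serious": {"rate": "slow",   "pitch": "-5%"},
}

def to_ssml(text: str, tone: str = "neutral") -> str:
    p = TONES.get(tone, TONES["neutral"])
    return (f'<speak><prosody rate="{p["rate"]}" pitch="{p["pitch"]}">'
            f"{text}</prosody></speak>")

if __name__ == "__main__":
    print(to_ssml("My ventilator is a lot quieter than Darth Vader's.", "witty"))
```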

Someday, Scott-Morgan and others might use brainwaves to control their voices.

Nachman says some of her team’s research focuses on people who cannot move any part of their body, not even a twitch of the cheek or eyes. For them, she says, brain-computer interfaces (BCIs) offer an answer: skullcaps equipped with electrodes that monitor brainwaves, much like an electroencephalogram (EEG) test. Nachman and her team are looking to add BCIs to ACAT to ensure no one is left behind.
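
As a toy illustration of the principle, the sketch below turns a spike in EEG alpha-band power into the same kind of binary “select” event the cheek sensor produces. The frequency band, threshold and synthetic signal are all invented for the example and don’t reflect Intel’s research.

```python
# Hedged sketch of an EEG-based switch (illustrative only): detect a
# power change in a frequency band and emit a binary "select" event,
# the same signal a scanning interface like ACAT consumes.

import numpy as np

FS = 256  # sample rate in Hz

def band_power(signal: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(power[mask].mean())

def bci_trigger(window: np.ndarray, threshold: float = 1e4) -> bool:
    """Fire the 'select' event when alpha-band (8-12 Hz) power spikes."""
    return band_power(window, 8.0, 12.0) > threshold

# One second of synthetic EEG: noise plus a strong 10 Hz component,
# standing in for a deliberate brain-state change the user produces.
t = np.arange(FS) / FS
window = 40 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS)
print("trigger fired:", bci_trigger(window))
```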

As AI gets smarter, Nachman is particularly interested in exploring ways to preserve human control while giving the AI system greater agency so “the two diverse actors are working in concert to achieve better outcomes together.”

More: Peter: The Human Cyborg Q&A | PETER 2.0: World’s first full cyborg, Dr Peter Scott-Morgan – Episode #1 | PETER 2.0: The Cyborg is now online – Dr Peter Scott-Morgan – Episode #2 | PETER 2.0 Research Streams: Verbal Spontaneity – Dr. Peter Scott-Morgan Becomes a Cyborg – Episode #3

Tags: Artificial Intelligence, Assistive Context-Aware Toolkit, Lama Nachman, Peter Scott-Morgan, United Kingdom News