Sunnyvale, Calif. – May 9, 2023 – Researchers from NTT Research, Inc., the Massachusetts Institute of Technology (MIT) and several optical computing companies have demonstrated an approach to optically driven deep neural networks (DNN) that they say resolves the memory-access bottleneck in resource-constrained edge devices, enabling significant reductions in energy consumption and latency.
Conceived by NTT Research’s Physics and Informatics (PHI) Lab Senior Scientist Ryan Hamerly, the approach to optically driven deep neural networks, called Netcast, was demonstrated in work performed by MIT Ph.D. candidate Alexander Sludds, under the joint supervision of Dr. Hamerly and MIT Professor Dirk Englund, and with the help of a wide group of collaborators. Their research was summarized in a paper titled “Delocalized photonic deep learning on the internet’s edge,” published in the October 20 issue of Science, one of the world’s top academic journals.
Deep neural networks, now pervasive in science and engineering, feature several layers of interconnected neurons or nodes hidden between input and output layers. Once trained and calibrated with appropriate weights, DNN models can enable rapid classification of data and execution of other computationally intensive tasks, such as image or speech recognition.
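As a rough illustration of the structure described above, the sketch below runs a forward pass through a small three-layer network in NumPy. The weights here are random placeholders rather than a trained model; a real DNN would first be trained so that those weights encode the classification task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained weights for a small network:
# 784 inputs (a flattened 28x28 image) -> 100 hidden -> 100 hidden -> 10 outputs.
W1, W2, W3 = (rng.standard_normal(shape) * 0.1
              for shape in [(784, 100), (100, 100), (100, 10)])

def relu(x):
    # Simple nonlinearity applied at each hidden layer of neurons.
    return np.maximum(x, 0.0)

def forward(image):
    """Classify a flattened 28x28 image into one of 10 classes."""
    h1 = relu(image @ W1)       # input layer -> first hidden layer
    h2 = relu(h1 @ W2)          # first hidden layer -> second hidden layer
    logits = h2 @ W3            # second hidden layer -> output layer
    return int(np.argmax(logits))

digit = forward(rng.standard_normal(784))
```

Each `@` is a matrix-vector multiplication over a layer's weights, which is where the bulk of the computation, and the memory traffic, occurs.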
A critical bottleneck in such tasks on present-day computing devices relates to matrix algebra. Despite the offloading of DNN inference to cloud servers and the development of non-digital technologies, including optical computing, memory access and multiply-accumulate (MAC) functions have remained points of congestion. Offloading also adds security risks because data from the edge must be shared with the cloud.
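To see why matrix algebra dominates, one can simply count operations. The sketch below tallies multiply-accumulates for a hypothetical dense network of the size later tested in the paper (layer widths assumed for illustration); every MAC also implies a weight fetched from memory, which is the congestion point on digital hardware.

```python
def layer_macs(n_inputs, n_outputs):
    # Each of the n_outputs neurons accumulates n_inputs weighted values,
    # so one dense layer costs n_inputs * n_outputs MACs, and just as
    # many weight fetches from memory.
    return n_inputs * n_outputs

# A three-layer model on 28x28 images (784 -> 100 -> 100 -> 10):
macs = layer_macs(784, 100) + layer_macs(100, 100) + layer_macs(100, 10)
print(macs)  # 89400 MACs, and as many weight fetches, per image classified
```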
As the lead author of a 2019 paper on optical neural networks and photoelectric multiplication, Dr. Hamerly proposed addressing those challenges by encoding the DNN model in an optical signal and streaming it into an edge processor. Expanding on that proposal, the Science article introduces Netcast and its components: a smart transceiver that could be integrated into cloud computing infrastructure and a time-integrating optical receiver that resides in the client. The article explains how the scheme achieved up to 98.8% accuracy in image recognition, cut server-side latency by performing the actual computation at the client and vastly outperformed the status quo in energy consumption.
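The division of labor can be sketched numerically. In the toy model below (illustrative only, not the authors' code), the server streams rows of weights over the fiber while the client multiplies each arriving weight by its locally held activation and accumulates the products over time, so each dot product forms at the receiver, analogous to what the time-integrating optical receiver does physically.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((10, 784))  # model rows streamed by the server
activations = rng.standard_normal(784)    # data that never leaves the client

def client_receive(weight_stream, x):
    # A time-integrating receiver accumulates weight * input at each
    # time step; the running total is the dot product once the row ends.
    total = 0.0
    for w_t, x_t in zip(weight_stream, x):
        total += w_t * x_t
    return total

outputs = [client_receive(row, activations) for row in weights]
assert np.allclose(outputs, weights @ activations)  # matches matrix-vector product
```

Because the edge data stays in the client and only the model streams outward, this arrangement also avoids the security exposure of shipping edge data to the cloud.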
“The difference is that a GPU high-bandwidth memory link consumes around 100 watts and only connects to a single GPU, whereas a Netcast link can consume milliwatts, and by using trivial optical fan-out, one server can deploy a DNN model to many edge clients simultaneously,” Dr. Hamerly said. “This broadcasting of DNNs is one reason why we chose the term Netcast.”
Once the idea of a single physicist, Netcast now has a wide range of contributors who have helped bring it to reality. The Science article’s sixteen co-authors are affiliated with NTT Research; with MIT’s Research Laboratory of Electronics, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and MIT’s Lincoln Laboratory; and with the optical computing companies Elenion (purchased by Nokia), Luminous and Lightmatter. The project has enjoyed support from NTT Research, which announced a five-year joint research agreement with MIT in November 2019 that included the goal of developing photonic accelerators for deep learning. Corresponding author Alexander Sludds discussed Netcast at Coherent Network Computing (CNC) 2022, an NTT Research-organized conference that took place October 24–26 at Stanford University.
“We are delighted to see Dr. Hamerly’s first Netcast paper and congratulate him, Professor Englund and the entire team that supported the experimental demonstration of this innovative application of optical computing to deep learning,” PHI Lab Director Yoshihisa Yamamoto said. “As non-digital computing architectures continue to emerge, we anticipate more explorations of how best to amplify their benefits.”
To demonstrate Netcast experimentally, researchers at MIT connected the smart transceiver to the client receiver over 86 km of deployed optical fiber in the Boston area. The test itself involved a digit-classification task. Using a three-layer model with 100 neurons per hidden layer and 1,000 handwritten images drawn randomly from the MNIST dataset (digits 0–9), the test yielded results comparable with the model’s baseline accuracy of 98.7% when run locally, reaching 98.8% when using 3 THz of bandwidth over fiber. At the Netcast client, optical energy consumption has been shown to scale down to less than one photon per MAC, a fundamental quantum limit highly relevant in photon-starved scenarios. Overall client-side energy consumption can fall three orders of magnitude below what is possible in existing digital semiconductors. Whether the laws of quantum mechanics could secure a DNN model’s weights, which can be costly to derive, against eavesdropping remains unproven, an open question left for future work. Because the hardware used in this demonstration of Netcast is readily available, the authors indicate that it could deliver a near-term impact, although which applications to target also remains an open question.
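The one-photon-per-MAC quantum limit can be put in energy terms with a short calculation. The sketch below assumes a telecom-band carrier near 1550 nm (a common choice for deployed fiber, used here as an illustrative assumption rather than a figure from the paper).

```python
# Optical energy of the 1-photon-per-MAC quantum limit.
h = 6.62607015e-34    # Planck constant, J*s (exact in SI since 2019)
c = 2.99792458e8      # speed of light in vacuum, m/s
wavelength = 1550e-9  # assumed telecom-band wavelength, m

energy_per_mac = h * c / wavelength  # joules carried by one photon
print(f"{energy_per_mac:.2e} J per MAC")  # ~1.3e-19 J, i.e. ~0.13 aJ
```

At roughly 0.13 attojoules per photon, a receiver operating near this limit spends far less optical energy per MAC than the picojoule-scale energies typical of digital multiply-accumulate hardware, which is why the limit matters in photon-starved scenarios.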
The PHI Lab has broadly advocated for new approaches to computing, such as the Coherent Ising Machine (CIM), an information processing platform based on photonic oscillator networks. The scope of CNC 2022, which drew 40 speakers and more than 30 poster presenters, encompassed principles and technologies related to “novel computing machines that utilize analog physical dynamics.” In addition to the joint research with MIT, the PHI Lab has similar arrangements with nine other universities: the California Institute of Technology (Caltech), Cornell, Harvard, Notre Dame, Stanford, Swinburne University of Technology, the Tokyo Institute of Technology, the University of Michigan and the University of Tokyo. The NASA Ames Research Center in Silicon Valley and 1QBit, a private quantum computing software company, have also entered joint research agreements with the PHI Lab. In March 2022, NTT Research joined the MIT AI Hardware program as an inaugural industrial member.