LightOn, a Paris-based start-up developing optical computing hardware (OCH) for Artificial Intelligence (AI) applications, has published a technical paper on arXiv.org. In the paper, the LightOn scientists detail how they accelerated the training of an AI model using what they claim is one of the first optical co-processors.
The paper, published earlier this month, details how their co-processor, the Optical Processing Unit (OPU), helped train an AI model to recognise handwritten digits from the well-known MNIST dataset, which comprises a training set of 60,000 handwritten digits and a test set of 10,000. The OPU-trained model recognised these digits with 95.8% accuracy without a graphics card, and reached 97.6% accuracy when a graphics card was also used.
This work opens up several advantages for faster, more energy-efficient AI model training. The photonic integrated circuits at the heart of LightOn’s chip require less energy than their electronic counterparts because light produces less heat than electricity. Photonic integrated circuits are also less vulnerable to interference such as temperature changes and electromagnetic fields. While photonic designs are notorious for latency problems, newer designs have improved by up to 10,000 times over silicon-based designs at lower energy consumption, and a few model workloads have even been observed to run 100 times faster than on electronic chips.
“By switching from the off-axis to a phase-shifting holography scheme, it will be possible to scale input and output size up to 10⁶ and perform calculations involving more than a trillion parameters,” the researchers state. They pair this hardware with direct feedback alignment (DFA), a machine-learning training method. Instead of backpropagating the error through every layer, DFA projects the output error through fixed random matrices, enabling each layer to update independently of the other layers. The LightOn researchers encoded a vector, a numerical representation of the data, onto the chip using a component that modulates the light. As the light beam passes through the diffuser, an interference pattern is created and detected by a camera. This lets the chip perform the random projections that DFA needs at gigantic scales.
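To make the idea concrete, here is a minimal NumPy sketch of direct feedback alignment on a toy two-layer network. All sizes, the synthetic data, and the learning rate are illustrative assumptions, not values from the paper; the fixed random matrix `B` plays the role of the projection that LightOn’s OPU would compute optically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: hypothetical sizes, synthetic linearly-generated labels.
n_in, n_hid, n_out, n_samples = 20, 64, 3, 300
X = rng.normal(size=(n_samples, n_in))
true_W = rng.normal(size=(n_in, n_out))
y = np.eye(n_out)[np.argmax(X @ true_W, axis=1)]  # one-hot labels

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))    # trained weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))
B = rng.normal(scale=0.1, size=(n_out, n_hid))    # fixed random feedback matrix

def loss(X, y, W1, W2):
    h = np.tanh(X @ W1)
    return 0.5 * np.mean((h @ W2 - y) ** 2)

loss_before = loss(X, y, W1, W2)

lr = 0.05
for _ in range(300):
    h = np.tanh(X @ W1)          # hidden activations
    e = h @ W2 - y               # output error
    # DFA: the hidden layer receives the error through the *random*
    # matrix B rather than through W2.T as in backpropagation,
    # so its update needs no backward pass through W2.
    dh = (e @ B) * (1 - h ** 2)
    W2 -= lr * (h.T @ e) / n_samples
    W1 -= lr * (X.T @ dh) / n_samples

loss_after = loss(X, y, W1, W2)
```

Because `B` never changes, the error projection can be delegated to fixed analog hardware such as a scattering medium, which is exactly the role the OPU fills.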
The LightOn co-processor ran at 1.5 kHz and consumed 30 watts of power while performing 1,500 random projections per second. Thus, the LightOn co-processor is more energy-efficient than an average graphics card!
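A quick back-of-the-envelope calculation from the figures quoted above shows what that means per operation. Note the 250 W graphics-card power draw is an assumed typical value for comparison, not a number from the paper.

```python
# Figures quoted in the article.
opu_power_w = 30.0
projections_per_second = 1_500.0

# Energy spent per random projection: 30 W / 1500 s^-1 = 0.02 J.
energy_per_projection_j = opu_power_w / projections_per_second

# Assumption: a typical discrete graphics card draws around 250 W.
gpu_power_w = 250.0
power_ratio = gpu_power_w / opu_power_w  # the OPU draws roughly 8x less

print(f"{energy_per_projection_j * 1000:.0f} mJ per projection")
```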
The LightOn hardware’s most noteworthy feature, however, is that it can be fitted into a standard server or workstation. The innovation is not without limitations, speed among them, so LightOn is exploring a hybrid approach that combines optical circuits with silicon ones, dividing a workload into parts that run on each medium.