By 2020, you could have an exascale speed-of-light optical computer on your desk

Optalysys optical computing: Multiple lasers, firing through multiple liquid crystal grids

Optalysys, a UK technology company, says it’s on-target to demonstrate a novel optical computer, which performs calculations at the speed of light, in January 2015. If all goes to plan, Optalysys says its tech — which is really unlike anything you’ve ever heard of before — can put an exascale supercomputer on your desk by 2020.

When we talk about optical computing, we’re actually referring to a fairly large number of different and competing technologies. At its most basic, optical computing refers to computing that uses light instead of electricity. When we’ve previously written about optical computing, we were usually referring to chips and computers that replace their internal wiring with optical waveguides, and some kind of optical transistor that is controlled by photons instead of electrons. There are also optoelectronic devices, which use a mix of the two (usually optical interconnects and electronic transistors).

In the case of Optalysys, optical computing is something else entirely. At this point, because the Optalysys tech is rather complex, you should probably watch the video embedded below — not only will it probably do a better job than me at explaining it, but it’s also narrated by the adorable Heinz Wolff. If you can’t watch the video, read on and I’ll try my best.

It goes something like this. You start with a low-power laser. This laser is then directed through a massive liquid crystal grid, which works in much the same way as a liquid crystal display: by applying electricity to each “pixel,” you change how the laser light passes through it. A complex calculation would turn hundreds or thousands of these pixels on or off. After the laser has passed through the grid, the beam is picked up by a receiver. By analyzing the beam’s diffraction pattern and exploiting the principles of Fourier optics, the system can combine operations such as Fourier transforms and matrix multiplication to perform complex maths. You can also have multiple pixel grids in sequence or in parallel, significantly boosting the complexity and parallelism of the optical computer. There’s a little more technical info on the Optalysys website, but not much.
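Optalysys hasn’t published the nitty-gritty of its design, but if you want a feel for the maths the optics is doing, here’s a rough numerical sketch. The grid size, the simple on/off amplitude modulation, and the NumPy stand-in for the lens are all my assumptions, not details from Optalysys; the point is just that a lens placed after the grid produces, in its focal plane, the 2D Fourier transform of whatever pattern the grid encodes, which is the same thing a 2D FFT computes electronically.

```python
# Conceptual stand-in for the optics (not Optalysys's actual design):
# a lens after the liquid crystal grid produces, in its focal plane, the
# 2D Fourier transform of the light field leaving the grid. Numerically,
# that single optical step corresponds to one FFT over the entire grid.
import numpy as np

# Hypothetical 1024x1024 pixel grid; each element either blocks (0) or
# transmits (1) the laser. Real devices can also modulate phase.
grid = np.random.randint(0, 2, size=(1024, 1024)).astype(float)

# The field just behind the grid, assuming uniform laser illumination
field = grid

# What the lens does at the speed of light: the far-field diffraction pattern
diffraction = np.fft.fftshift(np.fft.fft2(field))

# The receiver can only measure intensity (the squared magnitude of the field)
intensity = np.abs(diffraction) ** 2
```

The key difference is that the FFT above takes millions of sequential arithmetic operations on a CPU, while the lens produces the equivalent pattern in a single pass of the light.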

Moving away from the technical nitty-gritty, Optalysys’s optical computer is exciting for two main reasons: it consumes very little power, and there’s essentially no limit on how parallel you can make it. There’s no direct analogy to transistor-based logic, but you could almost think of every liquid-crystal pixel as a tiny processing core (or at least a tiny transistor). In a normal computer chip, while there is some parallelism, most work happens sequentially, with each core (and each transistor) handling one operation at a time. In an Optalysys optical computer, the laser beam hits every single pixel at the same time — it essentially performs hundreds or thousands (or millions?) of small computations in parallel, at the speed of light.
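To make that parallelism point a bit more concrete, here’s a loose illustration (my framing, not Optalysys’s published architecture). A classic job for this kind of optical setup is pattern matching via Fourier-domain correlation, which on a conventional chip means two forward transforms, an element-wise multiply over every pixel, and an inverse transform, all done one operation after another.

```python
# A loose electronic analogue of an optical correlator (my framing, not
# Optalysys's published architecture): find where a template pattern best
# matches a scene using Fourier-domain correlation.
import numpy as np

N = 512
scene = np.random.rand(N, N)      # hypothetical data written to one grid
template = np.random.rand(N, N)   # hypothetical filter written to another grid

# On a CPU this is roughly O(N^2 log N) sequential operations:
# two forward FFTs, an element-wise multiply, and an inverse FFT.
correlation = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template)))

# Location of the strongest match
peak = np.unravel_index(np.argmax(np.abs(correlation)), correlation.shape)
print(peak)
```

Every one of those N×N element-wise products maps onto its own pixel in the liquid crystal grid, so optically they would all happen during a single transit of the laser beam.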

An Optalysys optical computer, on a desktop

In terms of power consumption, HPCwire says the estimated running costs of an Optalysys computer would be in the region of $3,500 per year. I think this is based on the assertion by Optalysys that it will be able to deliver a desktop-size computer with exascale performance. Compare this to the world’s fastest supercomputer, which would cost around $21 million in energy costs if it were run at peak (~34 petaflops) performance for a year. [Read: HP bets it all on The Machine, a new computer architecture based on memristors and silicon photonics.]
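As a sanity check on that $21 million figure, here’s some back-of-the-envelope arithmetic. The power draw and electricity price below are my own assumptions, not numbers from HPCwire or Optalysys; the world’s fastest machine at the time, Tianhe-2, reportedly draws somewhere in the region of 18-24MW once cooling is included.

```python
# Rough arithmetic behind the running-cost comparison. The power draw and
# electricity price are assumptions, not figures from HPCwire or Optalysys.
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
power_mw = 18                    # assumed average draw of a ~34-petaflop machine
price_per_kwh = 0.13             # assumed cost of electricity in USD

annual_cost = power_mw * 1_000 * HOURS_PER_YEAR * price_per_kwh
print(f"~${annual_cost / 1e6:.0f} million per year")  # ~$20 million, in the ballpark of the figure above
```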

Optalysys says that its technology is already at NASA Technology Readiness Level (TRL) 4, meaning it’s ready for full-scale testing in a laboratory environment. By January 2015, Optalysys says it will have a 340-gigaflop prototype ready to go. By 2017, the company wants to have two commercial systems in place: a big data analysis system that will add 1.32 petaflops of grunt to an existing, conventional supercomputer — and a standalone Optical Solver supercomputer, which will start at 9 petaflops. Optalysys thinks its Optical Solver could scale up to 17.1 exaflops by 2020. This seems like a very bold claim for an entirely novel and untested method of computing — but given how conventional computing has mostly stagnated by this point, I hope the folks at Optalysys can follow through.
