
Accelerating AI with Photonic Chips

Lightmatter cofounder and CEO Nick Harris. [Image: Courtesy of Lightmatter]

As deep-learning and artificial-intelligence computing techniques have seen explosive growth, researchers have increasingly looked at how integrated photonics might help to accelerate AI computing—and also cut the considerable energy costs inherent in electronics-only approaches. One company that is working to commercialize photonic chips for AI is Lightmatter.

Founded in late 2017, Lightmatter had snagged US$33 million in series A start-up funding by early 2019, which helped the company build up key staff, develop and refine its product line and ready it for launch. In early May 2021, Lightmatter announced that it had raised another US$80 million in a series B round, through an investment group including Viking Global Investors, GV (formerly Google Ventures), Hewlett Packard Enterprise, Lockheed Martin and others.

OPN recently had the opportunity to talk with Lightmatter cofounder and CEO Nick Harris about the company’s journey and its next steps, as it stands on the cusp of moving its products into the marketplace.

Let’s talk first about the financing news that was announced at the beginning of last month—US$80 million in series B capital. What will Lightmatter be doing with that chunk of cash?

Nick Harris: A couple of things. We have hardware that we’re going to be selling next year, and so a big chunk of that money goes into funding the production of that hardware. And we’re also building the team that’s involved in sales and production—taking this technology, packaging it in a way that’s consumable for companies and speaks their language, and ultimately selling it to them.

We had a number of other exciting announcements with this financing round. One is that we have brought on Olivia Nottebohm, who was vice president at Google Cloud and then chief operating officer at Dropbox; we’re really excited to have her expertise on the go-to-market side. We just brought on a VP of sales, a head of product and a lot of the team members that go into really productizing this kind of thing.

So that’s what the money’s for. And we’re growing a bunch—we already have about 74 employees now, and we double about every year.

It’s been quite a whirlwind trip for this company in the past few years. As I understand it, at this point you’ve got three product lines you’re developing or maturing—Envise, for acceleration of AI computing; Passage, for interconnects; and a software stack called Idiom. Could you talk a bit about what these are and where they stand at this point?

Photo of Lightmatter photonic chip. [Image: Courtesy of Lightmatter]

Envise is our photonic AI accelerator. It’s general purpose—it’s not just for image recognition or natural language processing; it’s general across AI. It’s very high performance in terms of throughput, and very low in energy consumption.

Our goal with Envise is really to help scale AI while minimizing its environmental footprint. If you look at chips that are out there right now, Nvidia’s chip draws about 450 watts—that’s an insanely hot computer chip. Our chip is looking at about 80 watts, and is multiple times faster.

We think that will bring down the projected energy consumption for compute and interconnect, especially as it relates to AI. The U.S. Department of Energy has estimated that about 10% of the planet’s energy consumption will go to compute and interconnect by 2030. We want to take a chunk out of that.

And then, to use that computer, you need software. Our Idiom software acts as a layer that lives beneath the popular machine-learning frameworks, like TensorFlow and PyTorch. Idiom is also able to automatically detect the configuration of processors—if you add another node, it will know it’s there, and the compiler will generate a program that will run across the cluster.
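Lightmatter has not published Idiom’s internals, but the general idea of a compiler layer that sits beneath a framework and maps its matrix operations onto accelerator hardware can be illustrated with PyTorch’s own graph-capture tooling. The sketch below is a hypothetical illustration under that assumption—it is not Idiom’s actual API—and simply lists the linear (matrix-multiply) layers such a compiler would need to place on accelerator tiles.

```python
import torch
import torch.nn as nn
import torch.fx as fx

# Hypothetical illustration only -- not Idiom's actual API.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(256, 512)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Capture the model as a graph, the way a compiler layer beneath
# PyTorch might ingest it before generating an accelerator program.
graph_module = fx.symbolic_trace(SmallNet())
modules = dict(graph_module.named_modules())

for node in graph_module.graph.nodes:
    if node.op == "call_module" and isinstance(modules[node.target], nn.Linear):
        lin = modules[node.target]
        # A photonic back end would map each weight matrix onto
        # fixed-size matrix-multiply tiles (see the tiling sketch below).
        print(f"{node.target}: {lin.out_features} x {lin.in_features} weight matrix")
```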

In 2017, you were a lead author of a paper demonstrating a multilayer neural network on a silicon photonic chip, using an array of Mach–Zehnder interferometers. How does what Lightmatter is doing now relate to that work?

The point of that paper was that you could build a single chip, and put down an entire neural net, and then shine light at the front and get the answer at the end. What we’re doing now at Lightmatter is really very different. We don’t unroll the neural net and then shine light through it. Our product actually acts a lot more like a tensor processing unit. What you’re really doing is taking a layer of a neural net, breaking it up into tiles, and doing matrix–vector products to compose the solution.

So this is an accelerator for the matrix multiplication that kind of sits at the center of these deep-learning systems.

Yes, but there’s a lot more going on. If you actually look at the operations, we have a lot of other hardware that’s on that chip.

Another market for Lightmatter is in optical data interconnects, through the Passage product. What is the real selling proposition of these chips over fiber optics for interconnects?

“The U.S. Department of Energy has estimated that about 10% of the planet’s energy consumption will go to compute and interconnect by 2030. We want to take a chunk out of that.”
—Nick Harris, Lightmatter

Well, there’s a few points to think about. Passage is a wafer-scale optical interconnect; it’s kind of like a chocolate bar, in that you can snap off as many squares as you want. So you could have a two by two, six by eight—name your size, as long as it fits in a 300-mm wafer.

And on these chips we have waveguides—up to thousands of them. They cross each of the dies, and you can dynamically program the interconnect between the whole system. So there’s integrated photonics—waveguides, and optical switches, lasers, all of that—sitting side by side with transistors. That allows the chip to act as both an interconnect and a switch.

What’s the big advantage over fiber optics? I think fiber optics are great. But one of the challenges, when you’re actually building products, is that if you look at the cost of attaching fibers to chips, it’s quite expensive. And when you work with the companies that assemble these things, it’s a flow they currently don’t support. So the companies that are selling fiber optic solutions are doing in-house packaging a lot of the time to attach fibers. It’s super expensive.

And then another thing to note: fibers are huge. They’re 127 microns, 150 microns; you can only fit maybe 100 on a chip. With Passage, we’re using silicon photonics waveguides. So the pitch is about a micron or two microns. It’s a factor of 40 improvement in density.

I’d like to go back a few years, to how the company got started. When did you first start thinking about this as a business opportunity? Was there some sort of “aha moment”?

I had been developing programmable photonics in Dirk Englund’s research group at MIT since about 2013. I had always had an interest in building a company, but it wasn’t clear what it would be. I was also looking into academia as a career choice, and I almost took that route.

But I was working at the time on quantum computing—which I thought was very exciting, but I knew that it was going to be 10 or 15 years away. And at the same time, I’m building these silicon photonics processors that can do quantum computing, but also lots of different applications. They’re just programmable platforms. And it became clear that there was something to do there.

Lightmatter rack, with one blade out. [Image: Courtesy of Lightmatter]

And then, when the AI boom hit at MIT, there was this class, Intro to Machine Learning. All my friends were going to this thing; 400 or 500 people were signed up for this class; students were spilling out into the hallways, and people were streaming their own screens to their friends outside. And, it was like, “Wow, this is crazy; we have to know what’s going on.”

So my interest in building cool stuff, and the AI boom, and silicon photonics being commercially viable all kind of phased together at the same time. But I wasn’t totally sold on the idea until we did some business competitions. We entered the MIT $100K, through one of the MIT Sloan School of Management classes, and won that; and we also went over to Harvard and took their competition, which is called the Harvard President’s Challenge.

And after that, people were just sort of asking to write checks, to invest. And it became pretty clear that this was something that might be a good idea.

What about the process of building a company? What have you found to be some of the challenges and the big milestones there?

I think the biggest milestone is that when you start a company, you have to convince someone to leave their usually safe job to join you. And if they have a family and these sorts of things, there’s this intrinsic feeling: Am I sure this is a good idea if I’m going to mess with this person’s life? So I think every entrepreneur, when they’re starting a company for the first time, goes through that. And I remember that was a bit nerve-wracking.

But then it really just became a game of collecting the nicest and most talented people that I could find. I learned a lot from MIT on that—there is a lot of talent there, but you have to be careful to avoid people who are talented but hard to work with. And so I took away that lesson from school.

There’s a lot to build in a company. Engineering organizations and science organizations make a lot of sense to me, because I’m a scientist, and so building that out hasn’t been too hard. There’s been a lot more learning on sales and product and marketing and all these other things. That’s been a huge learning experience.

And now you’re transitioning from what’s fundamentally an R&D company to a company that’s actually shipping product. How does that feel? I would think it brings a whole other set of challenges.

That’s right. I think that, coming from a science background, my intuition—and that of my cofounder, Darius Bunandar—was that we should always build the best thing that’s possible to build. But what you learn from a few years of development is, you sometimes have to back off a few things, and really focus on: It had better work. Let’s simplify things for production.

So that’s one of the things that we’ve been transitioning into. And that was a big learning point, and I’d say it’s a bit stressful. Because you can’t just say, Oh, I can build that. There has to be a stable supply chain, with multiple suppliers who can build this component; you have to have partners that you can trust, and build all the relationships with the partners and have them bought into your company’s mission. There’s a whole ecosystem that lives around your company that you really have to have supporting you. And that’s a lot of work.

You and your colleagues have made a huge effort and are now moving into selling your first products to early adopters. What is the longer-term vision for Lightmatter, in your view?

“I believe, strongly, that optical computing can be a big part of the future. I also see a lot of other opportunities out there.”
—Nick Harris, Lightmatter

I think that a lot of people are starting to realize that there are real challenges with electronics-based technology. And I believe, strongly, that optical computing can be a big part of the future. I also see a lot of other opportunities out there.

In general, I think it’s the case that we’re at a juncture in the history of computing, where the world will need to explore a lot of things. And we want to be a big part of that story. In the near term, it’ll be in data center and AI computing. But eventually, we’d like to move into autonomous driving and all the other things that go into the future of computing and AI.

I think that’s the story we’re building. There’s just a lot you can do with photonics besides the interconnects. It’s an exciting technology platform.

Publish Date: 09 June 2021
