Crucial Next Steps in the Microchip Revolution

For the last couple of days, I have been celebrating the new wafer scale chip launched by Cerebras of Los Altos, California.

To review, wafer scale has long been an industry dream. Rather than cutting up the wafer into thousands of tiny chips… putting them in packages 20 times bigger… and reconnecting them again on printed circuit boards with 250 times bigger wires… you do it all on the wafer.

As I wrote, the Cerebras device is a stunner. A new integrated circuit (IC), a processor on a dinner plate rather than on a flake of silicon. Called the “giganto,” it is a neural net artificial intelligence training chip integrated across a 12-inch silicon wafer bearing more than a trillion transistors.

Put into a chassis as the CS-1 computer, the giganto already has several customers who want to address the vast and time-consuming demands of AI machine learning.

I know these guys think they have answers, but self-driving cars that depend on machine learning to avoid obstacles in their paths have a problem when training the maps takes as long as six weeks.

The giganto is a big step forward. Even using relatively conservative 16-nanometer feature sizes, the giganto can deploy roughly 50 times more transistors than the biggest state-of-the-art chips in the industry (which push the envelope with TSMC’s leading-edge 7-nanometer geometries).

Fifty times bigger than an NVIDIA graphics processor (hundreds of which are normally arrayed across ten racks of apparatus for the task), the giganto consumes one-fifth the total power. Obviating much of the overhead apparatus of a GPU trainer, the Cerebras computer uses one-thirtieth the space and delivers three times the performance.
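That size comparison is easy to sanity-check. Here is a minimal back-of-envelope sketch assuming publicly cited vendor figures for the Cerebras WSE-1 and NVIDIA's V100 GPU; the numbers below are assumptions drawn from datasheets, not from this article:

```python
# Back-of-envelope comparison of the Cerebras WSE-1 ("giganto") against
# an NVIDIA V100 GPU. Transistor counts and die areas below are assumed
# public figures, not numbers taken from this article.
wse1 = {"transistors": 1.2e12, "area_mm2": 46_225}   # assumed WSE-1 specs
v100 = {"transistors": 21.1e9, "area_mm2": 815}      # assumed V100 specs

transistor_ratio = wse1["transistors"] / v100["transistors"]
area_ratio = wse1["area_mm2"] / v100["area_mm2"]

print(f"Transistors: ~{transistor_ratio:.0f}x")  # ~57x
print(f"Die area:    ~{area_ratio:.0f}x")        # ~57x
```

Both ratios land close to the "fifty times bigger" figure quoted above.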

Again, big silicon yields a smaller footprint.

However, lest CEO Andrew Feldman and his technical chief Gary Lauterbach rest on their laurels, they should contemplate the history of the first integrated circuit.

An Important Lesson from Tech History

Sometimes, history lies. Case in point: history says the first IC was created at Texas Instruments by Jack Kilby (later a Nobel laureate) in September 1958, when the company lab had emptied out for vacation.

Kilby’s kludge was an unmanufacturable pastiche with one transistor, a capacitor, and a couple of resistors linked by gold wires looping between “mesas” protruding above the surface of a germanium substrate. It never made it out of the lab except to be interred behind glass in a museum.

Earlier, Texas Instruments played an indelible role in the history of semiconductors by fabricating the first silicon transistors and combining them into a transistor radio.

But the real integrated circuit was conceived a few months later, in January 1959, by Robert Noyce of Fairchild Semiconductor, using a planar process on silicon with aluminum metallization (look, Ma, no looping wires!). Invented by his Swiss colleague Jean Hoerni, the planar process was suited to the bulk manufacturing techniques developed by Gordon Moore, Jay Last, and others at Fairchild.

Now, my sage counsel Nick Tredennick, himself a designer of the historic Motorola 68000 microprocessor in the original Mac, cautions me not to go overboard on Cerebras.

I am willing to acknowledge that unless the folks at Cerebras move ahead to perfect it, the giganto may more resemble the Kilby kludge than the Noyce IC.

The darn thing may save space in the data center, but as I wrote in Life After Google, data centers may already have passed their prime. A data center is mostly a gigantic refrigerator, chiefly devoted to getting rid of heat. As in a data center, what is really gigantic in the Cerebras CS-1 is its cooling equipment.

Moreover, to deliver refined power at exactly 0.8 volts to the trillion-plus transistors in its 400,000 processor cores, it can’t just run power lines across the silicon like an ordinary chip. The giganto requires a fiberglass cover that targets power vertically through a million minuscule copper posts to nodes on the chip.
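To see why power must come in vertically, consider the current involved. A rough sketch, assuming a wafer power draw on the order of 15 kilowatts; that wattage is an assumed figure (reported estimates for the Cerebras wafer vary), not one stated in this article:

```python
# Why the giganto needs vertical power delivery: at 0.8 volts, even a
# modest wafer-level power budget implies an enormous total current
# that ordinary lateral on-chip power rails could not carry.
volts = 0.8
watts = 15_000                # assumed wafer power draw, not from the article
amps = watts / volts          # I = P / V
posts = 1_000_000             # copper posts cited in the article

print(f"Total current:    ~{amps:,.0f} A")                 # ~18,750 A
print(f"Current per post: ~{amps / posts * 1000:.0f} mA")  # ~19 mA
```

Tens of thousands of amps in total, but spread over a million posts, each carries only a comfortable few milliamps.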

Hey, don’t knock it. It works. But the thousands of engineers around the world could do better as they work on bare chiplets and dielets and silicon interconnect fabrics as part of other schemes of wafer scale.

Today, passives (that cannot be miniaturized without badly skewing their performance) consume some 80 percent or more of the printed circuit board. The greatest challenge of the new era is to subdue the clunky passives — capacitors, resistors, and inductors — and bring them into the nanoscale regime.

Watch This Space Closely

That’s why my favorite project, still deep in stealth, mobilizes a “new phase of matter” to integrate all of these passives in nanoscale forms.

Then we can have a total integrated circuit at wafer scale that scales and may not have to be encased in a refrigerator. After all, the graphics processor in your laptop or teleputer is air-cooled.

Ultimately, with bandwidth continuing to advance three times faster than processing, the industry will continue to waste communications to save processing and refrigeration.

A new second-generation blockchain can pay you for the use of the mostly dormant chips in your laptop to mount a global, air-cooled exacomputer.

That would be a computer utilizing planetary-scale integration, not integration on a chip or on a wafer. From Poland to Los Angeles, such projects are under way in an efflorescence of new electronics combining the nano with the giganto. Watch for it.

Meanwhile, look out for plans for a Cerebras IPO.

Regards,

George Gilder
Editor, Gilder’s Daily Prophecy
