AI Blazes Path Towards Dissipationless Electronics
MIT researchers discovered hidden magnetic properties in multi-layered electronic material by analyzing polarized neutrons using neural networks.
Ella Maru Studio/MIT
A new AI algorithm promises to drastically trim the time needed to iterate on designs of a promising new material called the topological insulator.
The potential of topological insulators—which feature the strange property of being insulators on the inside but conductors on the outside—has transfixed electronics researchers for the last decade. One area of interest has been achieving electronics without dissipation, or loss to heat. For years, the only materials that seemed to offer electronics without resistivity were superconductors. However, superconductors lack robustness and are susceptible to the most minute of disturbances.
Topological insulators seemed to offer a reasonable alternative to the fragility of superconductors. However, developing and perfecting a topological insulator means first understanding how a material's magnetic and non-magnetic layers interact, including the induced magnetism in the non-magnetic layer, a phenomenon called the “magnetic proximity effect.” To detect this phenomenon, researchers use a technique known as polarized neutron reflectometry (PNR) to analyze how the magnetic structure varies as a function of depth in multilayered materials.
PNR, in other words, is a necessary element of developing topological insulators, but it has also been a substantial slowdown in the process of exploring and iterating on candidate materials. Both PNR's inherent complexities and the vast amounts of data it produces have been a challenge.
Now, however, researchers at MIT have developed an artificial intelligence (AI) algorithm that sorts through the PNR data and significantly reduces the analysis time.
“It has reduced the analysis from days to minutes without exaggeration,” said Mingda Li, professor at MIT and the principal researcher on this work. “In traditional methods, people needed to spend time guessing at tens of parameters again and again. With this AI approach there is no need to guess—and it's painless.”
PNR starts by aiming two polarized neutron beams with opposing spins at a sample. Those beams are reflected off the sample and collected on a detector. If a neutron comes in contact with a magnetic flux, such as that found inside a magnetic material, it will change its spin state, resulting in different signals measured from the spin-up and spin-down neutron beams. As a result, the magnetic proximity effect can be detected: a thin layer of a normally non-magnetic material, placed immediately adjacent to a magnetic material, is shown to become magnetized.
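The difference between those two reflected signals is often summarized as a spin asymmetry, the normalized gap between the spin-up and spin-down reflectivity curves. As a rough illustration of the kind of quantity PNR produces, and not the MIT group's analysis code, a minimal NumPy sketch might look like the following; the curves `r_up` and `r_down` and the momentum-transfer grid `q` are invented stand-ins for measured data.

```python
import numpy as np

def spin_asymmetry(r_up: np.ndarray, r_down: np.ndarray) -> np.ndarray:
    """Spin asymmetry of the two polarized reflectivity curves.

    A nonzero value at a given momentum transfer means the spin-up and
    spin-down beams saw different scattering potentials, i.e. some layer
    in the stack is magnetized.
    """
    return (r_up - r_down) / (r_up + r_down)

# Stand-in data: identical curves give zero asymmetry everywhere.
q = np.linspace(0.01, 0.15, 300)     # momentum transfer, in inverse angstroms
r = np.exp(-40.0 * q)                # invented reflectivity decay
print(np.allclose(spin_asymmetry(r, r), 0.0))   # True
```

A sample with a magnetized proximity layer would instead show a small but systematic nonzero asymmetry, which is exactly the subtle feature the AI is being asked to pick out.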
The PNR signal, as it's first fed into the AI, is a complex signal that's difficult to deconvolve. But by doubling the resolution of the signal, the AI essentially amplifies the proximity-effect component, making the data easier to interpret. In the group's work, the algorithm could discern proximity-effect properties at length scales down to 0.5 nanometers. (The typical spatial extent of the proximity effect, Li said, is on the order of one nanometer, so the AI is able to resolve the size scales needed.)
The AI method succeeds over traditional algorithms, Li said, because it transforms the PNR data into a hidden “latent space”—a sort of simplified but still useful representation of compressed data—that makes analysis much easier.
To leverage this ability to transform data into latent space, each piece of PNR data is first labeled according to the particular parameters most relevant to the researchers. The algorithm then looks for nuanced links between different data points and amplifies them, in contrast to the conventional method of treating each data point independently.
The MIT researchers built their algorithm with PyTorch, the open-source machine-learning framework.
“We are not an AI group designing things like convolutional neural networks, but the package is powerful enough to be adopted in existing research facilities, like the [U.S.] National Institute of Standards and Technology,” said Li.
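The article doesn't detail the network beyond the latent-space idea and the use of PyTorch, but that idea can be sketched as an autoencoder that squeezes each PNR curve into a handful of latent variables while a small head regresses the labeled physical parameters from the same code. Everything below, from the layer sizes to the class name PNRAutoencoder and the number of parameters, is illustrative rather than the group's actual network.

```python
import torch
import torch.nn as nn

class PNRAutoencoder(nn.Module):
    """Illustrative only: compress a PNR curve into a small latent vector,
    reconstruct it, and regress a few labeled physical parameters
    (thicknesses, magnetizations) from the same latent code."""

    def __init__(self, n_points: int = 300, latent_dim: int = 8, n_params: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_points, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_points),
        )
        self.param_head = nn.Linear(latent_dim, n_params)  # labeled parameters

    def forward(self, x):
        z = self.encoder(x)                 # the "latent space" representation
        return self.decoder(z), self.param_head(z)

model = PNRAutoencoder()
curves = torch.rand(16, 300)                # a batch of simulated PNR curves
recon, params = model(curves)
# Training would combine a reconstruction term with a supervised term on
# the labeled parameters; only the reconstruction term is shown here.
loss = nn.functional.mse_loss(recon, curves)
```

Because similar curves land near one another in the latent space, the nuanced links between data points that the method exploits become ordinary distances there.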
In addition to locating the proximity effect in PNR data, Li said, the algorithm can also be used to find other nuanced spectral signals, such as the signature of the SARS-CoV-2 virus in lipid bilayers (which is also measured by PNR). He also envisions using the algorithm to find materials that can host qubits for quantum computing. “Those are direct applications without need to modify much of [the] codes,” Li added.
In fact, quantum computing, Li said, is the most immediate application for this AI beyond PNR data mining.
“There have been some recent controversies in identifying whether some material systems may host qubits,” Li said. “This work will improve the resolvability and help on that.”
Dexter Johnson is a contributing editor at IEEE Spectrum, with a focus on nanotechnology.
You've probably played hundreds, maybe thousands, of videos on your smartphone. But have you ever thought about what happens when you press “play”?
The instant you touch that little triangle, many things happen at once. In microseconds, idle compute cores on your phone's processor spring to life. As they do so, their voltages and clock frequencies shoot up to ensure that the video decompresses and displays without delay. Meanwhile, other cores, running tasks in the background, throttle down. Charge surges into the active cores' millions of transistors and slows to a trickle in the newly idled ones.
This dance, called dynamic voltage and frequency scaling (DVFS), happens continually in the processor, called a system-on-chip (SoC), that runs your phone and your laptop as well as in the servers that back them. It's all done in an effort to balance computational performance with power consumption, something that's particularly challenging for smartphones. The circuits that orchestrate DVFS strive to ensure a steady clock and a rock-solid voltage level despite the surges in current, but they are also among the most backbreaking to design.
That's mainly because the clock-generation and voltage-regulation circuits are analog, unlike almost everything else on your smartphone SoC. We've grown accustomed to a near-yearly introduction of new processors with substantially more computational power, thanks to advances in semiconductor manufacturing. “Porting” a digital design from an old semiconductor process to a new one is no picnic, but it's nothing compared to trying to move analog circuits to a new process. The analog components that enable DVFS, especially a circuit called a low-dropout voltage regulator (LDO), don't scale down like digital circuits do and must basically be redesigned from scratch with every new generation.
If we could instead build LDOs—and perhaps other analog circuits—from digital components, they would be no more difficult to port than any other part of the processor, saving significant design cost and freeing up engineers for other problems that cutting-edge chip design has in store. What's more, the resulting digital LDOs could be much smaller than their analog counterparts and perform better in certain ways. Research groups in industry and academia have tested at least a dozen designs over the past few years, and despite some shortcomings, a commercially useful digital LDO may soon be in reach.
Low-dropout voltage regulators (LDOs) allow multiple processor cores on the same input voltage rail (VIN) to operate at different voltages according to their workloads. In this case, Core 1 has the highest performance requirement. Its head switch, really a group of transistors connected in parallel, is closed, bypassing the LDO and directly connecting Core 1 to VIN, which is supplied by an external power management IC. Cores 2 through 4, however, have less demanding workloads. Their LDOs are engaged to supply the cores with voltages that will save power.
The basic analog low-dropout voltage regulator [left] controls voltage through a feedback loop. It tries to make the output voltage (VDD) equal to the reference voltage by controlling the current through the power PFET. In the basic digital design [right], an independent clock triggers a comparator [triangle] that compares the reference voltage to VDD. The result tells control logic how many power PFETs to activate.
A typical system-on-chip for a smartphone is a marvel of integration. On a single sliver of silicon it integrates multiple CPU cores, a graphics processing unit, a digital signal processor, a neural processing unit, an image signal processor, as well as a modem and other specialized blocks of logic. Naturally, boosting the clock frequency that drives these logic blocks increases the rate at which they get their work done. But to operate at a higher frequency, they also need a higher voltage. Without that, transistors can't switch on or off before the next tick of the processor clock. Of course, a higher frequency and voltage come at the cost of power consumption. So these cores and logic units dynamically change their clock frequencies and supply voltages—often ranging from 0.95 to 0.45 volts—based on the balance of energy efficiency and performance they need to achieve for whatever workload they are assigned: shooting video, playing back a music file, conveying speech during a call, and so on.
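In practice, each core exposes a table of allowed frequency-voltage operating points, and a governor picks the lowest-power point that still meets the workload's demand. The sketch below is purely illustrative: the pairs merely span the 0.45-to-0.95-volt range mentioned above, and real SoCs use vendor-specific tables and far more elaborate selection policies.

```python
# Illustrative DVFS operating-point table; the pairs are invented but span
# the 0.45-0.95 V range described above.
OPERATING_POINTS = [
    # (frequency in MHz, supply voltage in volts)
    (500, 0.45),
    (1000, 0.60),
    (1500, 0.75),
    (2000, 0.85),
    (2400, 0.95),
]

def pick_operating_point(utilization: float):
    """Pick the lowest-power point that still meets the demanded load.

    `utilization` is the fraction of the core's peak throughput the
    current workload needs (0.0 to 1.0).
    """
    f_max = OPERATING_POINTS[-1][0]
    for freq, vdd in OPERATING_POINTS:
        if freq >= utilization * f_max:
            return freq, vdd
    return OPERATING_POINTS[-1]

print(pick_operating_point(0.30))   # (1000, 0.6): a background task
print(pick_operating_point(0.95))   # (2400, 0.95): a video just started
```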
Typically, an external power-management IC generates multiple input voltage (VIN) values for the phone's SoC. These voltages are delivered to areas of the SoC chip along wide interconnects called rails. But the number of connections between the power-management chip and the SoC is limited. So, multiple cores on the SoC must share the same VIN rail.
But they don't all have to get the same voltage, thanks to the low-dropout voltage regulators. LDOs, along with dedicated clock generators, allow each core on a shared rail to operate at a unique supply voltage and clock frequency. The core requiring the highest supply voltage determines the shared VIN value. The power-management chip sets VIN to this value, and this core bypasses the LDO altogether through transistors called head switches.
To keep power consumption to a minimum, other cores can operate at a lower supply voltage. Software determines what this voltage should be, and analog LDOs do a pretty good job of supplying it. They are compact, low cost to build, and relatively simple to integrate on a chip, as they do not require large inductors or capacitors.
But these LDOs can operate only in a particular window of voltage. On the high end, the target voltage must be lower than the difference between VIN and the voltage drop across the LDO itself (the eponymous “dropout” voltage). For example, if the supply voltage that would be most efficient for the core is 0.85 V, but VIN is 0.95 V and the LDO's dropout voltage is 0.15 V, that core can't use the LDO to reach 0.85 V and must work at 0.95 V instead, wasting some power. Similarly, if VIN has already been set below a certain voltage limit, the LDO's analog components won't work properly and the circuit can't be engaged to reduce the core supply voltage further.
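That bookkeeping boils down to a simple check against the LDO's window. The sketch below walks through the decision using the numbers from the example above; the 0.6-volt lower rail limit is an invented stand-in for the point at which the LDO's analog circuitry stops working.

```python
def core_supply(v_in: float, v_target: float, v_dropout: float,
                v_in_min: float = 0.6):
    """Decide what supply a core on a shared rail actually gets.

    v_in_min is a hypothetical floor below which the LDO's analog
    circuitry stops working; 0.6 V is an invented figure.
    """
    if v_in < v_in_min:
        return v_in, "LDO off: rail too low for its analog circuits"
    if v_target > v_in - v_dropout:
        return v_in, "LDO bypassed: target is inside the dropout window"
    return v_target, "LDO engaged"

# The example above: 0.95 V in, 0.15 V dropout, so 0.85 V is out of reach.
print(core_supply(v_in=0.95, v_target=0.85, v_dropout=0.15))
# (0.95, 'LDO bypassed: target is inside the dropout window')
```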
The main obstacle that has limited use of digital LDOs so far is the slow transient response.
However, if the desired voltage falls inside the LDO's window, software enables the circuit and activates a reference voltage equal to the target supply voltage.
How does the LDO supply the right voltage? In the basic analog LDO design, it's by means of an operational amplifier, feedback, and a specialized power p-channel field-effect transistor (PFET). The latter is a transistor that reduces its current with increasing voltage to its gate. The gate voltage to this power PFET is an analog signal coming from the op amp, ranging from 0 volts to VIN. The op amp continuously compares the circuit's output voltage—the core's supply voltage, or VDD—to the target reference voltage. If the LDO's output voltage falls below the reference voltage—as it would when newly active logic suddenly demands more current—the op amp reduces the power PFET's gate voltage, increasing current and lifting VDD toward the reference-voltage value. Conversely, if the output voltage rises above the reference voltage—as it would when a core's logic is less active—then the op amp increases the transistor's gate voltage to reduce current and lower VDD.
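One way to picture that loop is as a high-gain proportional controller: the further VDD sags below the reference, the harder the op amp turns on the power PFET. The toy model below captures only that intuition; the gain, capacitance, and time step are invented, and a real op-amp loop has bandwidth and stability concerns this sketch ignores.

```python
# Crude picture of the analog loop: a high-gain error amplifier drives the
# power PFET, pulling VDD toward the reference. All values are invented.
V_REF, V_IN = 0.80, 0.95        # reference and rail voltages (V)
GAIN = 200.0                    # effective loop gain, error volts -> PFET amps
C_LOAD, DT = 10e-9, 1e-11       # core decoupling capacitance (F), time step (s)

def analog_step(vdd: float, i_load: float) -> float:
    """One small time step of the continuous feedback loop."""
    i_pfet = max(0.0, GAIN * (V_REF - vdd))   # op amp opens the PFET as VDD sags
    return min(V_IN, max(0.0, vdd + (i_pfet - i_load) * DT / C_LOAD))

vdd = 0.70
for _ in range(2000):                          # settles within tens of nanoseconds
    vdd = analog_step(vdd, i_load=0.100)       # in this toy model
print(vdd)                                     # ~0.7995 V: V_REF minus i_load / GAIN
```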
A basic digital LDO, on the other hand, is made up of a voltage comparator, control logic, and a number of parallel power PFETs. (The LDO also has its own clock circuit, separate from those used by the processor core.) In the digital LDO, the gate voltages to the power PFETs are binary values instead of analog, either 0 V or VIN.
With each tick of the clock, the comparator measures whether the output voltage is below or above the target voltage provided by the reference source. The comparator output guides the control logic in determining how many of the power PFETs to activate. If the LDO's output is below target, the control logic will activate more power PFETs. Their combined current props up the core's supply voltage, and that value feeds back to the comparator to keep it on target. If it overshoots, the comparator signals the control logic to switch some of the PFETs off.
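The same loop can be sketched as a sampled bang-bang controller: once per LDO clock tick, compare, then switch a single PFET on or off. The component values below are invented, but the sketch does reproduce the small, sustained ripple that this kind of one-step-at-a-time control leaves around the target.

```python
# Toy sampled model of the digital loop: compare once per LDO clock tick,
# then switch a single PFET on or off. All values are invented.
V_REF, V_IN = 0.80, 0.95        # reference and rail voltages (V)
N_PFETS, I_PFET = 64, 5e-3      # number of parallel PFETs, current per PFET (A)
C_LOAD, DT = 10e-9, 1e-9        # decoupling capacitance (F), LDO clock period (s)

def step(vdd: float, n_on: int, i_load: float):
    """One LDO clock tick: comparator decision, then integrate the net current."""
    n_on = min(N_PFETS, n_on + 1) if vdd < V_REF else max(0, n_on - 1)
    vdd = min(V_IN, max(0.0, vdd + (n_on * I_PFET - i_load) * DT / C_LOAD))
    return vdd, n_on

vdd, n_on, trace = V_REF, 20, []
for _ in range(500):                    # the load just stepped from 100 mA
    vdd, n_on = step(vdd, n_on, i_load=0.110)   # (20 PFETs on) to 110 mA
    trace.append(vdd)
print(min(trace[100:]), max(trace[100:]))   # a small, sustained ripple around V_REF
```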
Neither the analog nor the digital LDO is ideal, of course. The key advantage of an analog design is that it can respond rapidly to transient droops and overshoots in the supply voltage, which is especially important when those events involve steep changes. These transients occur because a core's demand for current can go up or down greatly in a matter of nanoseconds. In addition to the fast response, analog LDOs are very good at suppressing variations in VIN that might come in from the other cores on the rail. And, finally, when current demands are not changing much, they control the output tightly, without constantly overshooting and undershooting the target in a way that introduces ripples in VDD.
When a core's current requirement changes suddenly it can cause the LDO's output voltage to overshoot or droop [top]. Basic digital LDO designs do not handle this well [bottom left]. However, a scheme called adaptive sampling with reduced dynamic stability [bottom right] can reduce the extent of the voltage excursion. It does this by ramping up the LDO's sample frequency when the droop gets too large, allowing the circuit to respond faster.
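The adaptive-sampling idea in the figure can be bolted onto the same toy model: when the error exceeds a droop threshold, the loop temporarily shortens its sample period so the PFET count catches up sooner. The threshold and speed-up factor below are invented, not taken from any published design.

```python
# Adaptive sampling sketch: a large droop temporarily boosts the sample rate.
V_REF, V_IN = 0.80, 0.95
N_PFETS, I_PFET = 64, 5e-3
C_LOAD = 10e-9
SLOW_DT, FAST_DT = 1e-9, 0.125e-9       # nominal and boosted sample periods (s)
DROOP_LIMIT = 0.02                      # error (V) that triggers fast sampling

def adaptive_step(vdd: float, n_on: int, i_load: float):
    """One tick; returns the sample period actually used so a caller can
    advance simulated time correctly."""
    dt = FAST_DT if abs(vdd - V_REF) > DROOP_LIMIT else SLOW_DT
    n_on = min(N_PFETS, n_on + 1) if vdd < V_REF else max(0, n_on - 1)
    vdd = min(V_IN, max(0.0, vdd + (n_on * I_PFET - i_load) * dt / C_LOAD))
    return vdd, n_on, dt

vdd, n_on, dt_used = adaptive_step(vdd=0.75, n_on=10, i_load=0.150)
print(dt_used)   # 1.25e-10: the 50 mV droop has put the loop in fast-sampling mode
```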