
I was reading the book "The Singularity Is Near" by Kurzweil, and he mentioned reversible gates such as the Fredkin gate. The advantage of using such gates is that we could get rid of the thermal waste from computation in which bits simply disappear into heat, and computation would not need any energy input. These claims make the gates sound like a miracle solution. So my question is: what technical hurdles are still preventing their large-scale use?

I also think it is a shame that I never heard about these gates during my bachelor's and master's studies in electrical engineering at a top German university...

Mehdi

6 Answers


I am by no means an expert on this topic, but just from casually reading Wikipedia:

it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly

... this sounds very realistic.

Nobody has actually figured out how to build such gates yet; they are merely of theoretical interest. That might explain why you have never heard of them, since engineering usually deals with practice.

The premise of reversible computing is that when a bit disappears, some amount of heat is generated. With reversible gates, no bits ever appear or disappear, so supposedly computation could be much more efficient.

The theoretical limit reversible computing claims to get around is that erasing one bit of information dissipates at least $kT\ln 2$ of energy as heat. For a computer running at a toasty $60\,{}^\circ\mathrm{C}$ with $10^9$ transistors, each erasing bits at a rate of $5\,\mathrm{GHz}$, that corresponds to about $16\,\mathrm{mW}$ of heat generation. That accounts for only a tiny fraction (roughly $1/10000$) of a computer's energy usage.
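
As a quick sanity check of those numbers, here is a small Python sketch using the assumptions above ($60\,{}^\circ\mathrm{C}$, $10^9$ transistors, $5\,\mathrm{GHz}$ erasure rate per transistor):

    import math

    k_B = 1.380649e-23               # Boltzmann constant, J/K
    T = 60 + 273.15                  # 60 degrees C in kelvin
    per_bit = k_B * T * math.log(2)  # Landauer limit per erased bit, joules

    transistors = 1e9
    erase_rate = 5e9                 # bit erasures per second per transistor

    power = per_bit * transistors * erase_rate
    print(f"Landauer heat: {power * 1e3:.1f} mW")  # prints ~15.9 mW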

Our current-day computers are not limited by heat generation associated with bits disappearing. They are limited by the inherent inefficiency in moving electrons around on tiny copper traces.

Tom van der Zanden

The problem with practical reversible gates (gates that can be, and have been, fabricated in silicon) is that the actual energy savings are linearly proportional to how slowly you run them.

I know that Tom Knight's research group at MIT fabricated a small adiabatic processor in the late 1990s. The practical logic family they developed is called split-level charge recovery logic, and it can be implemented using standard (CMOS) fabrication techniques. I believe the work has been continued by Michael P. Frank at Florida State University. An example of the work in Tom Knight's group is the following master's thesis, which has a pretty decent section on related work through the early 1990s: Vieri, C. J., Pendulum: A Reversible Computer Architecture, master's thesis, MIT EECS Department, 1995.

Reversible circuits need to be adiabatic (no heat may be exchanged between the circuit and its environment), which means they must stay in equilibrium at all times. But any process that changes something can only approximate equilibrium, by making the change happen as slowly as possible.

If I remember my thermodynamics correctly, you can make the energy of a reversible computation arbitrarily small, but the action (energy times time) is bounded below by a small constant.
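
As a rough illustration of that tradeoff, here is a Python sketch of the standard adiabatic-charging approximation: driving a capacitive node through resistance $R$ with a voltage ramp of duration $T \gg RC$ dissipates about $(RC/T)\,CV^2$, versus $\frac{1}{2}CV^2$ for abrupt switching. The component values here are assumptions chosen purely for illustration:

    R = 1e3      # ohms (assumed)
    C = 1e-15    # farads, roughly a 1 fF gate node (assumed)
    V = 1.0      # volts (assumed)

    conventional = 0.5 * C * V**2             # abrupt-switching loss
    for T in (1e-9, 1e-8, 1e-7):              # ramp times in seconds
        e_adiabatic = (R * C / T) * C * V**2  # slower ramp -> less heat
        print(f"T = {T:.0e} s: E = {e_adiabatic:.1e} J, "
              f"E*T = {e_adiabatic * T:.1e} J*s, "
              f"conventional = {conventional:.1e} J")

Note that the product $E \cdot T \approx RC^2V^2$ stays constant: the dissipation can be made arbitrarily small, but only by spending proportionally more time, which matches the energy-times-time bound above.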

Wandering Logic

The largest hurdle preventing their large-scale use is the same as for asynchronous circuits and pretty much any other non-standard circuit design: Moore's law.

Moore's law has become something of a self-fulfilling prophecy; as seen in Intel's tick-tock release schedule, chip manufacturers treat fulfilling Moore's law as a challenge. Because of that, we have gotten more and more adept at decreasing feature sizes by advancing lithography (and often by using tricks like multi-patterning).

What does all of this have to do with reversible gates? As foundries race to release ever smaller transistors, companies that want to print new chips see an easy path to more speed: simply add more cache and rework their conventional designs to use that cache better.

The killer of better isn't technological hurdles; it's the success of good enough.

Alice

Fredkin gates are realistic, and many have been implemented. There are entire FPGA boards built strictly from reversible logic gates, using Fredkin and Toffoli gates as their logic units.

There are several problems affecting their widespread use in computer architecture, because several "advertised" advantages of Fredkin gates do not work as expected in real circuits. The energy savings of reversible logic gates come mostly from the fact that they do not require entropy to be created when an operation is performed. As Tom van der Zanden stated, this is the primary reason why reversible logic could be much more efficient. Here is why this does not hold in real circuits:

  1. Currently, transistor technology is what limits computer speed and power consumption, and unfortunately more transistors are required to build a Fredkin gate than a traditional NAND or NOR gate. This means Fredkin gates waste more energy through transistor leakage and require more area on silicon (which makes them more expensive). This alone is enough to justify using NAND/NOR over Fredkin gates.
  2. Since the primary source of power loss is the transistors, not entropy creation from the computation itself, you still need to run power and ground lines to Fredkin gates to compensate for that loss. These large busses are fan-in busses, which also take up a lot of silicon area. Since a Fredkin gate has more transistors, this means more fan-in, and thus more wasted area.
  3. Although we have reversible Fredkin gates, they are built from non-reversible devices (transistors). This means some of the energy gains are not realized with current gate technology (outside of quantum circuits).
  4. Size and speed are related on silicon: the closer together things are, generally the faster you can make them. Since Fredkin gates use more transistors and have more connections for power and so on, they tend to be significantly slower.
  5. The power and speed advantages are only realized when bits are successfully reused, and most of our algorithms are horribly non-conservative. You can see this by looking at an implementation of a full adder or shift register using Fredkin gates. Since reversible logic does not allow logical fan-in and fan-out, a lot of bits get computed that are never used for anything useful. Adding two 8-bit numbers produces 9 bits of useful information (an 8-bit result plus a carry bit), but requires an input bus padded with many constant 0s and 1s and produces many junk output bits (see the sketch after this list). Since you have a wider bus, the circuit spreads out even further on silicon.
  6. Additionally, the junk bits must be dumped by the processor, which leads to energy loss anyway because they are never used.
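
To make point 5 concrete, here is a minimal Python model (my own sketch, not taken from any real implementation) of a Fredkin gate used as an AND gate; the constant-0 ancilla input and the garbage outputs are exactly the bus overhead described above:

    def fredkin(c, a, b):
        # Fredkin (controlled-swap) gate: swaps a and b when c is 1.
        # Reversible: applying it twice restores the original inputs.
        return (c, b, a) if c else (c, a, b)

    def and_gate(x, y):
        # AND from one Fredkin gate: needs a constant-0 ancilla input and
        # produces two extra outputs (x itself and a garbage bit).
        c, g, out = fredkin(x, y, 0)  # out = x AND y, g = (NOT x) AND y
        return out, (c, g)            # result plus the junk bits

    for x in (0, 1):
        for y in (0, 1):
            out, junk = and_gate(x, y)
            print(f"{x} AND {y} = {out}   (garbage: {junk})")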

Summary: Fredkin gates produce a lot of waste computation when implementing real algorithms, and waste computation means wasted energy. Because of this, bus sizes increase, which spreads things out and slows things down. On top of that, the physical implementation of Fredkin gates is the greater concern for current technology: it spreads things out even more by requiring extra power and ground lines to compensate for losses in the circuit (the dominant energy concern) and uses much more real estate on silicon (the dominant speed concern).

I realize this is an old thread, but many of the answers focus on the fact that transistors are inefficient. My goal is to show that our algorithms are also inefficient and do not map well to reversible computing. I am a computer engineer who enjoys researching reversible and quantum computing.

Ryan

Useful computing devices require feedback, which makes it possible for one circuit element to perform an essentially unlimited number of sequential computations. A usable feedback circuit must contain sections whose total number of inputs (counting both those fed back from outputs and those that are not) exceeds the number of outputs fed back to the inputs; otherwise the circuit could not respond to outside stimuli at all. Since perfectly reversible logic functions cannot have more inputs than outputs, it is not possible to build from them the feedback structures required to perform non-trivial computing tasks repeatedly. Note that with the CMOS technology used in today's computers, feedback is required to ensure that results computed in different parts of a circuit are made available to other parts simultaneously; if they weren't, the relative timing of the arriving signals would constitute "information" that could not be perfectly passed downstream. Other technologies might make it possible for many gates to propagate signals at precisely the same rate while remaining reversible, but I know of no practical technology for that.

Note that from a CS perspective, it is trivial to make a computing process reversible if one has an initially empty storage medium whose size is roughly proportional to the number of steps times the amount of state that can change in each step. This does not contradict the previous paragraph: storage proportional to the number of steps requires circuitry proportional to the number of steps, which is comparable to the circuitry that would be required if all feedback were eliminated.

If one is allowed to have outputs which are simply ignored because, given proper input conditions, they will never go high, then it might be possible to design a system that would in theory benefit from reversible logic. For example, suppose an algorithm operates on a 256-word chunk of RAM, and one wants a "reversible-logic CPU" that performs 1,000,000 operations per second, where each operation updates a register, the program counter, or one word of RAM. One could use a "reversible CPU" which would:

  • run a bunch of instructions and, on each, send whatever was overwritten to a LIFO buffer;
  • after a bunch of instructions have executed, copy the RAM into an initially blank "forwarding" buffer;
  • using the values in the LIFO, run all the computations in reverse;
  • overwrite the contents of the main RAM with the forwarding buffer, which would be erased in the process.

The above recipe could be repeated any number of times to run the algorithm for an arbitrary number of steps; only the last step of the recipe is irreversible. The amount of energy spent per algorithmic step on non-reversible operations would be inversely proportional to the size of the LIFO, and thus could be made arbitrarily small if one were willing to build a large enough LIFO.
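
Here is a toy Python rendering of that recipe (all names are hypothetical and purely illustrative; a real reversible CPU would do this in hardware):

    def run_forward(ram, program, lifo):
        # Each operation overwrites one word of RAM; the old value goes
        # onto the LIFO so the run can later be undone step by step.
        for addr, new_value in program:
            lifo.append((addr, ram[addr]))
            ram[addr] = new_value

    def run_backward(ram, lifo):
        # Undo every operation in reverse order, draining the LIFO.
        while lifo:
            addr, old_value = lifo.pop()
            ram[addr] = old_value

    ram = [0] * 4
    program = [(0, 7), (1, 3), (0, 9)]  # (address, value) update pairs
    lifo = []

    run_forward(ram, program, lifo)
    forwarding = list(ram)              # the single irreversible copy
    run_backward(ram, lifo)             # machine is back in its start state
    ram = forwarding                    # commit the result

    print(ram)                          # [9, 3, 0, 0]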

In order for that ability to translate into any real energy savings, however, the LIFO would need to store energy when information was pushed and usefully return that energy when it was popped. Further, the LIFO would have to be large enough to hold the state data for enough steps that the energy cost of using it was less than the energy it usefully saved. Given that the amount of energy lost in storing and retrieving N bytes from any practical LIFO is unlikely to be O(1), it is unclear that increasing N would meaningfully reduce energy consumption.

supercat

Practical applied reversible computing is an active area of research and is likely to become more prominent in the future. Much of quantum computing can be seen as an attempt to create reversible qubit gates; it is very hard experimentally to match the theoretical properties of the QM formalism, but steady progress is being made.

Another basic point: any time energy dissipation on a chip is decreased, the gate system is essentially being pushed toward "more reversible", and lowering chip dissipation has been a high priority in mobile computing for a long time now (a sort of industry-wide paradigm shift). For decades, chip performance gains (in line with Moore's law) came from being somewhat "relaxed", or even "sloppy", about energy dissipation, but that reached a point of diminishing returns a few years ago. Intel, the leading worldwide chip manufacturer, is attempting to pivot to lower-power chips to compete with ARM, which has the advantage of never having built anything else.

There is some recent, possibly breakthrough research using superconducting technology (June 2014), and there are other active research projects in this area.

See e.g. "Reversible logic gate using adiabatic superconducting devices" by Takeuchi, Yamanashi, and Yoshikawa, Nature:

Reversible computing has been studied since Rolf Landauer advanced the argument that has come to be known as Landauer's principle. This principle states that there is no minimum energy dissipation for logic operations in reversible computing, because it is not accompanied by reductions in information entropy. However, until now, no practical reversible logic gates have been demonstrated. One of the problems is that reversible logic gates must be built by using extremely energy-efficient logic devices. Another difficulty is that reversible logic gates must be both logically and physically reversible. Here we propose the first practical reversible logic gate using adiabatic superconducting devices and experimentally demonstrate the logical and physical reversibility of the gate. Additionally, we estimate the energy dissipation of the gate, and discuss the minimum energy dissipation required for reversible logic operations. It is expected that the results of this study will enable reversible computing to move from the theoretical stage into practical usage.

vzn