Johns Hopkins Magazine - April 1996 Issue

In Short - Science & Technology

By Elise Hancock and Melissa Hendricks
A molecule that could solve our energy problems... a microscope with an impact... an algorithm that works the way nature does (sort of)

Nanotech solar cell now competitive

Chemists at Hopkins have developed a breakthrough "supramolecule" for an experimental type of solar cell, one that works at the molecular level. The supramolecule ups the open-circuit voltage of the experimental cells by 50 percent, for the first time making this new technology competitive with conventional silicon solar cells (like those found on the roofs of many solar-heated homes).

"It's not quite as efficient [as conventional cells]," admits Gerald Meyer, associate professor of chemistry and leader of the research group, "but the cost per kilowatt hour will be 10 cents, compared with $3--$5." Fabrication is extremely simple, so labor costs will be low. Indeed, Meyer says that "the main expense actually is dye," a light-absorbing ruthenium compound.

Called "regenerative solar cells," the new devices are being developed in several labs around the world. The basic idea is to form a semi-conducting electrode by spraying onto glass a cluster of extremely small (14 Angstroms) particles of titanium dioxide, then soak the TiO2 in a dye that absorbs visible light. The completed electrode is sandwiched together with a counter-electrode, with iodine/iodide fluid in the middle, then exposed to visible light--photons.

Conceptually, here's how it works: The photons excite the dye molecules, so that an electron hops into the TiO2 and buzzes off around the circuit to light a lightbulb or do other useful work. In the process, the dye becomes oxidized--it's lost its electron, leaving a "hole," until it gets one back from an iodide electron "donor" and is ready to be excited again. Meanwhile, the original electron has come full circle and replenished the iodine/iodide donor. "So these dyes act as molecular pumps," says Meyer. There are many thousands of them in the solar cell, "each one able to go through this series thousands of times per second."
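For readers who want that cycle in chemical shorthand, here is a rough reaction scheme. It assumes the standard iodide/triiodide couple used in cells of this type; the article itself says only "iodine/iodide":

```latex
\begin{align*}
\mathrm{Dye} + h\nu &\longrightarrow \mathrm{Dye}^{*}
  && \text{photon excites the dye} \\
\mathrm{Dye}^{*} &\longrightarrow \mathrm{Dye}^{+} + e^{-}(\mathrm{TiO_2})
  && \text{electron hops into the TiO}_2\text{, runs the circuit} \\
2\,\mathrm{Dye}^{+} + 3\,\mathrm{I}^{-} &\longrightarrow 2\,\mathrm{Dye} + \mathrm{I_3^{-}}
  && \text{iodide donor refills the hole} \\
\mathrm{I_3^{-}} + 2\,e^{-} &\longrightarrow 3\,\mathrm{I}^{-}
  && \text{returning electrons regenerate the donor}
\end{align*}
```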

The stumbling block that has plagued groups working on molecular pumps is that too many of the departing electrons hop right back into the holes they had just left--a back reaction--instead of making the circuit. Light energy is wasted, and the cells produce only a weak dribble of current.

To solve the problem, Meyer and his group dreamed up a supramolecular compound: a ruthenium-based dye bonded to an electron donor, phenothiazine. ("Actually it's a drug they use to treat schizophrenia," says Meyer, "but we use it as an electron donor.") Now each ruthenium dye has a donor just 2 angstroms away, its electron able to hop into the hole before the departing electron has a chance to turn around and hop back. That's good, because the more the back reaction is inhibited, the more time the dye spends in an excited state, and the more electrons make the circuit.
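To see why slowing the back reaction pays off, here is a toy branching-ratio sketch in Python. The back-reaction rates come from the ~0.3 millisecond and ~2,000-fold figures Meyer quotes below; the rate of the productive path is a hypothetical number chosen only for illustration:

```python
# Toy two-path competition: an injected electron either completes the
# circuit or hops back into the dye's "hole." Attaching the phenothiazine
# donor slows the back reaction, so more electrons make the circuit.

def collection_fraction(k_useful, k_back):
    """Fraction of injected electrons that do useful work rather than
    recombining, in a simple competition between two first-order rates."""
    return k_useful / (k_useful + k_back)

k_useful = 1.0e4                     # hypothetical productive rate, per second
k_back_supra = 1 / 0.3e-3            # ~3,300 per second, from the 0.3 ms figure
k_back_model = 2000 * k_back_supra   # model compounds recombine ~2,000x faster

print(f"model compound: {collection_fraction(k_useful, k_back_model):.3f}")  # tiny
print(f"supramolecule:  {collection_fraction(k_useful, k_back_supra):.3f}")  # much larger
```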

Meyer says, "There's been nothing like this before, and it was basically our idea. This reaction takes roughly 0.3 milliseconds, which is slow--about 2,000 times slower than model compounds." And therefore about 50 percent more efficient.

Eventually, Meyer would like to make the cells solid-state with no liquid iodine. Perhaps the returning electron could hop straight from a platinum counter-electrode to the oxidized donor, the phenothiazine?

His research also gives him new heart for "the holy grail of solar conversion--using light to oxidize water into dioxygen, which is what we all breathe, and hydrogen. If you do that with visible light, we're all famous and can retire to Hawaii."

Such a technology would help solve the world's energy problems, says Meyer, "because when you burn hydrogen, you form water again." The process would be non-polluting, and the supply would never run out. "So in many ways it's an ideal fuel. And thermodynamically, visible light can drive this process."
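A back-of-the-envelope check of that last claim, using standard numbers rather than anything in the article: splitting water costs 1.23 electron-volts per electron transferred, and visible photons carry more than that.

```latex
\[
2\,\mathrm{H_2O} \xrightarrow{\;h\nu\;} 2\,\mathrm{H_2} + \mathrm{O_2},
\qquad E^{\circ} = 1.23~\mathrm{V},
\qquad
E_{\mathrm{photon}} = \frac{hc}{\lambda} \approx 1.7\text{--}3.1~\mathrm{eV}
\quad (\lambda \approx 400\text{--}720~\mathrm{nm}),
\]
```

so a single visible photon carries enough energy to drive each electron transfer, just as Meyer says.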

The difficulty has been making it happen, because electrons and holes recombine too fast to take part in reactions of this type. "So slowing down the recombination rate is really quite exciting."

It is probably unnecessary to say that Gerald Meyer has some ideas about how to slow it down even further.
--EH

Optimizing complexity

"People have a lot of difficulty with this algorithm," says Jim Spall, a systems engineer at the Applied Physics Laboratory. "It sounds like perpetual motion or something for nothing. They think, it can't be true. But it's been proved so many times, there's just no doubt."

Mathematically proved, peer-reviewed, and effective in practice: that's Spall's algorithm, a step-by-step procedure for optimizing the performance of extremely complex systems--complex as in traffic flow in a city, investment policy in economics, or dosages for multiple drugs in human patients.

In such a system, hundreds or thousands of independent factors will be at play, right? Yet not only can this algorithm handle any number of variables, it's also amazingly fast. Conventional optimizing methods work by trial and error: they change one variable at a time, observe the effect, change another, and so laboriously on. Spall's algorithm works more the way nature does: it randomly changes every variable at the same time, a method he calls "simultaneous perturbation." In that way, it gathers as much of the data essential to optimizing as trial-and-error does, but in a fraction of the time.

You may be thinking, Fast or not, this is nuts, because no one could tell which factor was affecting what. But hold on: that's okay. Spall's method does not depend on knowing which does what. It depends on each perturbation being random.

"Though not every type of randomness will work," he adds. "The algorithm will not work at all with the bell curve, nor with uniform distribution." Best is a coin-tossing distribution, heads or tails, so that the computer randomly perturbs each variable by plus-one or minus-one.

Technically, the algorithm works because it approximates the system's "gradient vector," to use a term from multivariate calculus. Spall explains that a gradient vector "gives you a direction, the best direction to go. And the algorithm creates an approximation to that gradient, such that the random approximation--on average--will effectively equal the true gradient, which is what you don't know. So [the algorithm] tends to move in the right direction."
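In symbols (standard notation for this class of methods; the article itself stays qualitative): if y is a noisy measurement of the loss at parameters theta, c_k a small step size, and Delta_k the coin-toss vector, the i-th component of the gradient estimate is

```latex
\[
\hat{g}_{ki}(\theta_k) \;=\;
\frac{y(\theta_k + c_k \Delta_k) \;-\; y(\theta_k - c_k \Delta_k)}{2\, c_k\, \Delta_{ki}},
\qquad i = 1, \dots, p.
\]
```

Because each Delta_ki is independently +1 or -1 with equal probability, the cross-terms contributed by the other variables average to zero, and the estimate matches the true gradient on average--which is exactly the "magic compass" property described next.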

It sounds like a magic compass, one that points you home even if you don't know where home is. "Precisely," says Spall.

The formula does have limits, of course. Because randomizing may take things briefly in a wrong direction, Spall would not recommend it for any system where error could be fatal. "For instance, if you were doing automatic brain surgery, this would not be good."

The system must be quantifiable--not necessarily understood, but quantifiable. Take Spall's favorite example, traffic lights in a city. "You don't need to understand the psychology of drivers," he says. "You just need input like time of day, weather, traffic flow, proportion of green to red time." Each intersection may have 6 to 10 such variables (or more), and the city has many intersections.

You must know what you want to optimize: Flow on the main traffic arteries? Minimum wait time at red lights? Minimum air pollution? All three, weighted? Also, information must be reasonably complete. Garbage in, garbage out--though the algorithm will zero out irrelevancies.
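In code, "all three, weighted" just means folding the goals into the single number the algorithm minimizes. A hypothetical sketch, with all names and weights as stand-ins:

```python
# Hypothetical composite objective for traffic timing: the algorithm only
# needs one number to minimize, so the planner's priorities become weights
# on the separate measurements.
def traffic_loss(arterial_delay, red_light_wait, pollution_index,
                 w_flow=0.5, w_wait=0.3, w_air=0.2):
    """Weighted sum of the three goals the article lists."""
    return (w_flow * arterial_delay
            + w_wait * red_light_wait
            + w_air * pollution_index)
```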

And feedback must come to some central place, so the computer can evaluate results. In the case of traffic, it must know what cars are doing via electrical sensors set in the pavement. (Virtually all cities have these devices.) Then let the drivers express their psychology--the algorithm will tinker with settings and record results, then tinker again. It will discover optimal settings for midnight, midmorning, rush hour, rain, snow, ballgame traffic.... "This is way beyond the state-of-the-art," says Spall.

Though the algorithm has not actually been tried on traffic yet (cities have already invested time, money, and thought in their traffic systems, and are reluctant to change), Maryland's Howard County has offered its facilities for a test run; Spall is seeking funds for the experiment.

Meanwhile, the algorithm is making its way. Spall says researchers all over the world are publishing theories on why it works. Already it is used in Japan to design advanced pattern and character recognition systems, while Italian engineers use it to detect faults in a power plant. And a senior engineer at a nuclear processing plant recently wrote Spall that after a several-day trial, "we're going full speed ahead. [The algorithm] looks like the most promising avenue that we have encountered."
--EH

Probing new cellular depths

With a powerful new light microscope, Hopkins biologists are resolving cellular architecture finer than any previously seen through light microscopes--components as small as a tenth of a wavelength of light. "We're getting 5- to 10-fold better resolution," says biology professor Michael Edidin.
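The arithmetic behind "a tenth of a wavelength" is easy to check against the usual diffraction limit (standard optics figures, not from the article):

```latex
\[
d_{\text{near-field}} \approx \frac{\lambda}{10} \approx 50~\mathrm{nm},
\qquad
d_{\text{far-field limit}} \approx \frac{\lambda}{2} \approx 250~\mathrm{nm}
\qquad (\lambda \approx 500~\mathrm{nm}),
\]
```

which is where the 5- to 10-fold figure comes from.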

AT&T Bell Laboratories developed the device, which is called the near-field scanning microscope. It employs an optical fiber tip (see photo) and visible light to scan a field. But Bell Lab's version could only view non-biological (i.e. dry) samples. Edidin enlisted postdoctoral students Jeeseong Hwang and Levi Gheber, along with former Bell Labs physicist Eric Betzig, to assemble a modified version of the microscope, which can be used to view biological (wet) samples.

The device already has helped revise a theory about how cell surfaces are organized. Biologists had thought that lipids, or fat molecules, randomly dotted the landscape of the cell membrane rather like polka dots on a dress. More recently, they have come to view the membrane as a patchier structure. And that's what the near-field scanning microscope revealed: interconnected clumps of lipids. "That's the kind of thing that makes your heart go pitter-patter," says Edidin.

The team is currently using the microscope to examine the ridges of fruit fly chromosomes. In the future, this form of microscopy, along with a technique called fluorescence in situ hybridization, may help pinpoint the location of genes along chromosomes.

Edidin's funding comes from the National Institutes of Health and the Biology Department.
--MH

