Technological Developments


dremanu

Technological Developments
« on: April 29, 2004, 09:36:07 pm »
The First Nanochips  
 
As scientists and engineers continue to push back the limits of chipmaking technology, they have quietly entered into the nanometer realm  
 
By G. Dan Hutcheson    
 
For most people, the notion of harnessing nanotechnology for electronic circuitry suggests something wildly futuristic. In fact, if you have used a personal computer made in the past few years, your work was most likely processed by semiconductors built with nanometer-scale features. These immensely sophisticated microchips--or rather, nanochips--are now manufactured by the millions, yet the scientists and engineers responsible for their development receive little recognition. You might say that these people are the Rodney Dangerfields of nanotechnology. So here I would like to trumpet their accomplishments and explain how their efforts have maintained the steady advance in circuit performance to which consumers have grown accustomed.

The recent strides are certainly impressive, but, you might ask, is semiconductor manufacture really nanotechnology? Indeed it is. After all, the most widely accepted definition of that word applies to something with dimensions smaller than 100 nanometers, and the first transistor gates under this mark went into production in 2000. Integrated circuits coming to market now have gates that are a scant 50 nanometers wide. That's 50 billionths of a meter, about a thousandth the width of a human hair.

Having such minuscule components conveniently allows one to stuff a lot into a compact package, but saving space per se is not the impetus behind the push for extreme miniaturization. The reason to make things small is that it lowers the unit cost for each transistor. As a bonus, this overall miniaturization shrinks the size of the gates, which are the parts of the transistors that switch between blocking electric current and allowing it to pass. The narrower the gates, the faster the transistors can turn on and off, thereby raising the speed limits for the circuits using them. So as microprocessors gain more transistors, they also gain more speed.
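To see the economics in rough numbers: because chip area scales as the square of the linear feature size, the classic per-generation shrink factor of about 0.7 doubles the number of transistors that fit in a given area. A quick sketch (the starting feature size and shrink factor here are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope transistor-density scaling (illustrative values).
feature_nm = 100.0      # assumed starting gate length, nm
shrink = 0.7            # classic per-generation linear shrink factor

for generation in range(4):
    density_gain = (1 / shrink) ** (2 * generation)  # area scales as length^2
    print(f"gen {generation}: feature ~{feature_nm * shrink**generation:.0f} nm, "
          f"~{density_gain:.1f}x transistors per unit area")
```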

The desire to boost the number of transistors on a chip and to run it faster explains why the semiconductor industry, just as it crossed into the new millennium, shifted from manufacturing microchips to making nanochips. How it quietly passed this milestone, and how it continues to advance, is an amazing story of people overcoming some of the greatest engineering challenges of our time--challenges every bit as formidable as those encountered in building the first atomic bomb or sending a person to the moon.

Straining to Accelerate

The best way to get a flavor for the technical innovations that helped to usher in the current era of nanochips is to survey improvements that have been made in each of the stages required to manufacture a modern semiconductor--say, the microprocessor that powers the computer on which I typed this text. That chip, a Pentium 4, contains some 42 million transistors intricately wired together. How in the world was this marvel of engineering constructed? Let us survey the steps.

Before the chipmaking process even begins, one needs to obtain a large crystal of pure silicon. The traditional method for doing so is to grow it from a small seed crystal that is immersed in a batch of molten silicon. This process yields a cylindrical ingot--a massive gem-quality crystal--from which many thin wafers are then cut.

It turns out that such single-crystal ingots are no longer good enough for the job: they have too many "defects," dislocations in the atomic lattice that hamper the silicon's ability to conduct and otherwise cause trouble during chip manufacture. So chipmakers now routinely deposit a thin, defect-free layer of single-crystal silicon on top of each wafer by exposing it to a gas containing silicon. This technique improves the speed of the transistors, but engineers have been pushing hard to do even better using something called silicon-on-insulator technology, which involves putting a thin layer of insulating oxide slightly below the surface of the wafer. Doing so lowers the capacitance (the ability to store electrical charge) between parts of the transistors and the underlying silicon substrate, capacitance that would otherwise sap speed and waste power. Adopting a silicon-on-insulator geometry can boost the rate at which the transistors can be made to switch on and off (or, alternatively, reduce the power needed) by up to 30 percent. The gain is equivalent to what one gets in moving one generation ahead in feature size.
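One way to get a feel for the speed argument is a crude parallel-plate model of the parasitic capacitance: the switching delay of a node scales roughly with the product of the driving resistance and the node capacitance, so trimming capacitance trims delay. In the sketch below, every value (junction area, depth, driving resistance and the 30 percent SOI reduction) is an assumed, illustrative number, not a real device parameter:

```python
# Minimal RC-delay sketch: why lowering parasitic capacitance speeds switching.
# All values are illustrative assumptions, not real device parameters.
eps0 = 8.854e-12          # vacuum permittivity, F/m
k_si = 11.7               # relative permittivity of silicon
area = (1e-6) ** 2        # assumed 1 um x 1 um junction area, m^2
depth = 100e-9            # assumed distance to the conducting substrate, m
R = 1e3                   # assumed effective resistance driving the node, ohms

C_bulk = k_si * eps0 * area / depth        # junction sitting on bulk silicon
C_soi = 0.7 * C_bulk                       # assume SOI trims parasitics ~30%

for label, C in [("bulk", C_bulk), ("SOI", C_soi)]:
    print(f"{label}: C = {C*1e15:.2f} fF, RC delay ~ {R*C*1e12:.2f} ps")
```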

IBM pioneered this technology and has been selling integrated circuits made with it for the past five years. The process IBM developed, dubbed SIMOX, short for separation by implantation of oxygen, was to bombard the silicon with oxygen atoms (or rather, oxygen ions, which have electrical charge and can thus be readily accelerated to high speeds). These ions implant themselves deep down, relatively speaking, where they combine with atoms in the wafer and form a layer of silicon dioxide. One difficulty with this approach is that the passage of oxygen ions through the silicon creates many defects, so the surface has to be carefully heated afterward to mend disruptions to the crystal lattice. The greater problem is that oxygen implantation is inherently slow, which makes it costly. Hence, IBM reserved its silicon-on-insulator technology for its most expensive chips.

A new, faster method for accomplishing the same thing is, however, gaining ground. The idea is to first form an insulating oxide layer directly on top of a silicon wafer. One then flips the oxidized surface over and attaches it onto another, untreated wafer. After cleverly pruning off most of the silicon above the oxide layer, one ends up with the desired arrangement: a thin stratum of silicon on top of the insulating oxide layer on top of a bulk piece of silicon, which just provides physical support.

The key was in developing a precision slicing method. The French company that did so, Soitec, aptly trademarked the name Smart Cut for this technique, which requires shooting hydrogen ions through the oxidized surface of the first wafer so that they implant themselves at a prescribed depth within the underlying silicon. (Implanting hydrogen can be done more rapidly than implanting oxygen, making this process relatively inexpensive.) Because the hydrogen ions do most of their damage right where they stop, they produce a level within the silicon that is quite fragile. So after flipping this treated wafer over and attaching it to a wafer of bulk silicon, one can readily cleave the top off at the weakened plane. Any residual roughness in the surface can be easily polished smooth. Even IBM now employs Smart Cut for making some of its high-performance chips, and AMD (Advanced Micro Devices in Sunnyvale, Calif.) will use it in its upcoming generation of microprocessors.

The never-ending push to boost the switching speed of transistors has also brought another very basic change to the foundations of chip manufacture, something called strained silicon. It turns out that forcing the crystal lattice of silicon to stretch slightly (by about 1 percent) increases the mobility of electrons passing through it considerably, which in turn allows the transistors built on it to operate faster. Chipmakers induce strain in silicon by bonding it to another crystalline material--in this case, a silicon-germanium blend--for which the lattice spacing is greater. Although the technical details of how this strategy is being employed remain closely held, it is well known that many manufacturers are adopting this approach. Intel, for example, is using strained silicon in an advanced version of its Pentium 4 processor called Prescott, which began selling late last year.

Honey, I Shrunk the Features

Advances in the engineering of the silicon substrate are only part of the story: the design of the transistors constructed atop the silicon has also improved tremendously in recent years. One of the first steps in the fabrication of transistors on a digital chip is growing a thin layer of silicon dioxide on the surface of a wafer, which is done by exposing it to oxygen and water vapor, allowing the silicon, in a sense, to rust (oxidize). But unlike what happens to the steel body of an old car, the oxide does not crumble away from the surface. Instead it clings firmly, and oxygen atoms required for further oxidization must diffuse through the oxide coating to reach fresh silicon underneath. The regularity of this diffusion provides chipmakers with a way to control the thickness of the oxide layers they create.

For example, the thin oxide layers required to insulate the gates of today's tiny transistors can be made by allowing oxygen to diffuse for only a short time. The problem is that the gate oxide, which in modern chips is just several atoms thick, is becoming too slim to lay down reliably. One fix, of course, is to make this layer thicker. The rub here is that as the thickness of the oxide increases, the capacitance of the gate decreases. You might ask: Isn't that a good thing? Isn't capacitance bad? Often capacitance is indeed something to be avoided, but the gate of a transistor operates by inducing electrical charge in the silicon below it, which provides a channel for current to flow. If the capacitance of the gate is too low, not enough charge will be present in this channel for it to conduct.

The solution is to use something other than the usual silicon dioxide to insulate the gate. In particular, semiconductor manufacturers have been looking hard at what are known as high-K (high-dielectric-constant) materials, such as hafnium oxide and strontium titanate, ones that allow the oxide layer to be made thicker, and thus more robust, without compromising the ability of the gate to act as a tiny electrical switch.

Placing a high-K insulator on top of silicon is, however, not nearly as straightforward as just allowing it to oxidize. The task is best accomplished with a technique called atomic-layer deposition, which employs a gas made of small molecules that naturally stick to the surface but do not bond to one another. A single-molecule-thick film can be laid down simply by exposing the wafer to this gas long enough so that every spot becomes covered. Treatment with a second gas, one that reacts with the first to form the material in the coating, creates the molecule-thin veneer. Repeated applications of these two gases, one after the next, deposit layer over layer of this substance until the desired thickness is built up.
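Because each two-gas cycle adds a fixed, self-limited increment of material, the final thickness is simply the cycle count multiplied by the growth per cycle, which is what makes the process so controllable. A minimal sketch (the growth-per-cycle and target-thickness figures are assumed, typical-order values, not ones given in the article):

```python
# Atomic-layer deposition: thickness grows by a fixed amount per gas cycle.
growth_per_cycle_nm = 0.1   # assumed self-limited growth per cycle, nm
target_nm = 3.0             # assumed target gate-insulator thickness, nm

cycles = round(target_nm / growth_per_cycle_nm)
print(f"{cycles} ALD cycles -> ~{cycles * growth_per_cycle_nm:.1f} nm film")
```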

After the gate insulator is put in place, parts of it must be selectively removed to achieve the appropriate pattern on the wafer. The procedure for doing so (lithography) constitutes a key part of the technology needed to create transistors and their interconnections. Semiconductor lithography employs a photographic mask to generate a pattern of light and shadows, which is projected on a wafer after it is coated with a light-sensitive substance called photoresist. Chemical processing and baking harden the unexposed photoresist, which protects those places in shadow from later stages of chemical etching.

Practitioners once believed it impossible to use lithography to define features smaller than the wavelength of light employed, but for a few years now, 70-nanometer features have been routinely made using ultraviolet light with a wavelength of 248 nanometers. To accomplish this magic, lithography had to undergo some dramatic changes. The tools brought to bear have complicated names--optical proximity correction, phase-shifting masks, excimer lasers--but the idea behind them is simple, at least in principle. When the size of the features is smaller than the wavelength of the light, the distortions, which arise through optical diffraction, can be readily calculated and corrected for. That is, one can figure out an arrangement for that mask that, after diffraction takes place, yields the desired pattern on the silicon. For example, suppose a rectangle is needed. If the mask held a plain rectangular shape, diffraction would severely round the four corners projected on the silicon. If, however, the pattern on the mask were designed to look more like a dog bone, the result would better approximate a rectangle with sharp corners.
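The standard rule of thumb for what projection optics can print is the Rayleigh criterion: the minimum feature size is roughly k1 times the wavelength divided by the numerical aperture (NA) of the lens, with the factor k1 absorbing all the resolution-enhancement tricks just described. A quick sketch (the NA and k1 values are assumed for illustration) shows how sub-wavelength features fall out once k1 is pushed low:

```python
# Rayleigh criterion: printable feature size ~ k1 * wavelength / NA.
# The NA and k1 values below are assumed for illustration.
def min_feature_nm(wavelength_nm, na, k1):
    return k1 * wavelength_nm / na

print(min_feature_nm(248, na=0.70, k1=0.20))   # ~71 nm with aggressive tricks
print(min_feature_nm(193, na=0.75, k1=0.19))   # ~49 nm
print(min_feature_nm(13, na=0.25, k1=0.50))    # ~26 nm, extreme ultraviolet
```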

This general strategy now allows transistors with 50-nanometer features to be produced using light with a wavelength of 193 nanometers. But one can push these diffraction-correction techniques only so far, which is why investigators are trying to develop the means for higher-resolution patterning. The most promising approach employs lithography, but with light of much shorter wavelength--what astronomers would call "soft" x-rays or, to keep with the preferred term in the semiconductor industry, extreme ultraviolet.

Semiconductor manufacturers face daunting challenges as they move to extreme ultraviolet lithography, which reduces the wavelengths (and thus the size of the features that can be printed) by an order of magnitude. The prototype systems built so far are configured for a 13-nanometer wavelength. They are truly marvels of engineering--on both macroscales and nanoscales.

Take, for instance, the equipment needed to project images onto wafers. Because all materials absorb strongly at extreme ultraviolet wavelengths, these cameras cannot employ lenses, which would be essentially opaque. Instead the projectors must use rather sophisticated mirrors. For the same reason, the masks must be quite different from the glass screens used in conventional lithography. Extreme ultraviolet work demands masks that absorb and reflect light. To construct them, dozens of layers of molybdenum and silicon are laid down, each just a few nanometers thick. Doing so produces a highly reflective surface onto which a patterned layer of chromium is applied to absorb light in just the appropriate places.

As with other aspects of chipmaking, these masks must be free from imperfections. But because the wavelengths are so small, probing for defects proves a considerable challenge. Scientists and engineers from industry, academe and government laboratories from across the U.S. and Europe are collaboratively seeking solutions to this and other technical hurdles that must be overcome before extreme ultraviolet lithography becomes practical. But for the time being, chipmakers must accept the limits of conventional lithography and maintain feature sizes of at least 50 nanometers or so.

Using lithography to imprint such features on a film of photoresist is only the first in a series of manipulations used to sculpt the wafer below. Process engineers must also figure out how to remove the exposed parts of the photoresist and to etch the material that is uncovered in ways that do not eat into adjacent areas. And one must be able to wash off the photoresist and the residues left over after etching--a mundane task that becomes rather complicated as the size of the features shrinks.

The problem is that, seen at the nanometer level, the tiny features put on the chip resemble tall, thin skyscrapers, separated by narrow chasms. At this scale, traditional cleaning fluids act as viscous tidal waves and could easily cause things to topple. Even if that catastrophe can be avoided, these liquids have a troubling tendency to get stuck in the nanotechnology canyons.

An ingenious solution to this problem emerged during the 1990s from work done at Los Alamos National Laboratory: supercritical fluids. The basic idea is to use carbon dioxide at elevated pressure and temperature, enough to put it above its so-called critical point. Under these conditions, CO2 looks something like a liquid but retains an important property of a gas--the lack of viscosity. Supercritical carbon dioxide thus flows easily under particles and can mechanically dislodge them more effectively than can any wet chemical. (It is no coincidence that supercritical carbon dioxide has recently become a popular means to dry-clean clothes.) And mixed with the proper co-solvents, supercritical carbon dioxide can be quite effective in dissolving photoresist. What is more, once the cleaning is done, supercritical fluids are easy to remove: lowering the pressure--say, to atmospheric levels--causes them to evaporate away as a normal gas.
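For reference, carbon dioxide's critical point lies near 31 degrees Celsius and 7.4 megapascals; above both, the separate liquid and gas phases cease to exist. A tiny sketch of that state test (the sample conditions are assumed ones):

```python
# Is CO2 supercritical? Compare against its critical point (~304.1 K, ~7.38 MPa).
CO2_CRIT_T_K = 304.1
CO2_CRIT_P_MPA = 7.38

def is_supercritical(temp_k, pressure_mpa):
    return temp_k > CO2_CRIT_T_K and pressure_mpa > CO2_CRIT_P_MPA

print(is_supercritical(313.0, 10.0))  # True: assumed cleaning-chamber conditions
print(is_supercritical(298.0, 0.1))   # False: room temperature and pressure
```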

With the wafer cleaned and dried in this way, it is ready for the next step: adding the junctions of the transistors--tubs on either side of the gate that serve as the current "source" and "drain." Such junctions are made by infusing the silicon with trace elements that transform it from a semiconductor to a conductor. The usual tactic is to fire arsenic or boron ions into the surface of the silicon using a device called an ion implanter. Once emplaced, these ions must be "activated," that is, given the energy they need to incorporate themselves into the crystal lattice. Activation requires heating the silicon, which often has the unfortunate consequence of causing the arsenic and boron to diffuse downward.

To limit this unwanted side effect, the temperature must be raised quickly enough that only a thin layer on top heats up. Restricting the heating in this way ensures that the surface will cool rapidly on its own. Today's systems ramp up and down by thousands of degrees a second. Still, the arsenic and boron atoms diffuse too much for comfort, making the junctions thicker than desired for optimum speed. A remedy is, however, on the drawing board--laser thermal processing, which can vary the temperature at a rate of up to five billion degrees a second. This technology, which should soon break out of the lab and onto the factory floor, holds the promise of preventing virtually all diffusion and yielding extremely shallow junctions.
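The payoff of those ramp rates is easy to work out: the faster the surface traverses the activation temperature swing, the less time the dopants spend hot enough to diffuse. A quick comparison, assuming a 1,000-degree swing:

```python
# Time for the surface to climb an assumed 1,000-degree activation swing.
delta_t_kelvin = 1000.0
for name, ramp_k_per_s in [("rapid thermal", 1e3), ("laser thermal", 5e9)]:
    print(f"{name}: {delta_t_kelvin / ramp_k_per_s:.2e} s to ramp")
# rapid thermal: 1.00e+00 s; laser thermal: 2.00e-07 s -- far less diffusion time
```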

Once the transistors are completed, millions of capacitors are often added to make dynamic random-access memory, or DRAM. The capacitors used for DRAM have lately become so small that manufacturing engineers are experiencing the same kinds of problems they encounter in fashioning transistor gates. Indeed, here the problems are even more urgent, and the answer, again, appears to be atomic-layer deposition, which was adopted for the production of the latest generation of DRAM chips.

New Meets Old

Atomic-layer deposition can also help in the next phase of chip manufacture, hooking everything together. The procedure is to first lay down an insulating layer of glass on which a pattern of lines is printed and etched. The grooves are then filled with metal to form the wires. These steps are repeated to create six to eight layers of crisscrossing interconnections. Although the semiconductor industry has traditionally used aluminum for this bevy of wires, in recent years it has shifted to copper, which allows the chips to operate faster and helps to maintain signal integrity. The problem is that copper contaminates the junctions, so a thin conductive barrier (one that does not slow the chip down) needs to be placed below it. The solution was atomic-layer deposition.

The switch to copper also proved challenging for another reason: laying down copper is inherently tricky. Many high-tech approaches were attempted, but none worked well. Then, out of frustration, engineers at IBM tried an old-fashioned method: electroplating, which leaves an uneven surface and has to be followed with mechanical polishing. At the time, the thought of polishing a wafer--that is, introducing an abrasive grit--was anathema to managers in this industry, which is downright obsessed with cleanliness. Hence, the engineers who originally experimented with this approach at IBM did so without seeking permission from their supervisor. They were delighted to discover that the polishing made the wafer more amenable to lithographic patterning (because the projection equipment has a limited depth of focus), that it removed troublesome defects from the surface and that it made it easier to deposit films for subsequent processing steps.

The lesson to be learned here is that seemingly antiquated methods can be just as valuable as cutting-edge techniques. Indeed, the semiconductor industry has benefited a great deal in recent years from combinations of old and new. That it has advanced as far as it has is a testament to the ingenious ability of countless scientists and engineers to continually refine the basic method of chip manufacture, which is now more than four decades old.

Will the procedures used for fabricating electronic devices four decades down the road look anything like those currently employed? Although some futurists would argue that exotic forms of nanotechnology will revolutionize electronics by midcentury, I'm betting that the semiconductor industry remains pretty much intact, having by then carried out another dazzling series of incremental technical advances, ones that are today beyond anyone's imagination.
"Esta é a ditosa pátria minha amada."
 


dremanu

(no subject)
« Reply #1 on: April 29, 2004, 09:40:03 pm »
More interesting news on scientific developments

-------------------------------------------------------------------------------------

Synthetic Life  
 
Biologists are crafting libraries of interchangeable DNA parts and assembling them inside microbes to create programmable, living machines  
 
By W. Wayt Gibbs    
 
Evolution is a wellspring of creativity; 3.6 billion years of mutation and competition have endowed living things with an impressive range of useful skills. But there is still plenty of room for improvement. Certain microbes can digest the explosive and carcinogenic chemical TNT, for example--but wouldn't it be handy if they glowed as they did so, highlighting the location of buried land mines or contaminated soil? Wormwood shrubs generate a potent medicine against malaria but only in trace quantities that are expensive to extract. How many millions of lives could be saved if the compound, artemisinin, could instead be synthesized cheaply by vats of bacteria? And although many cancer researchers would trade their eyeteeth for a cell with a built-in, easy-to-read counter that ticks over reliably each time it divides, nature apparently has not deemed such a thing fit enough to survive in the wild.

It may seem a simple matter of genetic engineering to rewire cells to glow in the presence of a particular toxin, to manufacture an intricate drug, or to keep track of the cells' age. But creating such biological devices is far from easy. Biologists have been transplanting genes from one species to another for 30 years, yet genetic engineering is still more of a craft than a mature engineering discipline.

"Say I want to modify a plant so that it changes color in the presence of TNT," posits Drew Endy, a biologist at the Massachusetts Institute of Technology. "I can start tweaking genetic pathways in the plant to do that, and if I am lucky, then after a year or two I may get a 'device'--one system. But doing that once doesn't help me build a cell that swims around and eats plaque from artery walls. It doesn't help me grow a little microlens. Basically the current practice produces pieces of art."

--------------------------------------------------------------------------------
Living machines reproduce, but as they do, they mutate.
--------------------------------------------------------------------------------


Endy is one of a small but rapidly growing number of scientists who have set out in recent years to buttress the foundation of genetic engineering with what they call synthetic biology. They are designing and building living systems that behave in predictable ways, that use interchangeable parts, and in some cases that operate with an expanded genetic code, which allows them to do things that no natural organism can.

This nascent field has three major goals: One, learn about life by building it, rather than by tearing it apart. Two, make genetic engineering worthy of its name--a discipline that continuously improves by standardizing its previous creations and recombining them to make new and more sophisticated systems. And three, stretch the boundaries of life and of machines until the two overlap to yield truly programmable organisms. Already TNT-detecting and artemisinin-producing microbes seem within reach. The current prototypes are relatively primitive, but the vision is undeniably grand: think of it as Life, version 2.0.

A Light Blinks On

The roots of synthetic biology extend back 15 years to pioneering work by Steven A. Benner and Peter G. Schultz. In 1989 Benner led a team at ETH Zurich that created DNA containing two artificial genetic "letters" in addition to the four that appear in life as we know it. He and others have since invented several varieties of artificially enhanced DNA. So far no one has made genes from altered DNA that are functional--transcribed to RNA and then translated to protein form--within living cells. Just within the past year, however, Schultz's group at the Scripps Research Institute developed cells (containing normal DNA) that generate unnatural amino acids and string them together to make novel proteins.

Benner and other "old school" synthetic biologists see artificial genetics as a way to explore basic questions, such as how life got started on earth and what forms it may take elsewhere in the universe. Interesting as that is, the recent buzz growing around synthetic biology arises from its technological promise as a way to design and build machines that work inside cells. Two such devices, reported simultaneously in 2000, inspired much of the work that has happened since.

Both devices were constructed by inserting selected DNA sequences into Escherichia coli, a normally innocuous bacterium in the human gut. The two performed very different functions, however. Michael Elowitz and Stanislas Leibler, then at Princeton University, assembled three interacting genes in a way that made the E. coli blink predictably, like microscopic Christmas tree lights. Meanwhile James J. Collins, Charles R. Cantor and Timothy S. Gardner of Boston University made a genetic toggle switch. A feedback loop--two genes that each repress the other--allows the toggle circuit to flip between two stable states. It effectively endows each modified bacterium with a rudimentary digital memory.
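The bistability behind that memory can be captured in a few lines. In the standard two-repressor model (sketched below with assumed parameter values, not the Boston University group's actual equations), each gene product decays while repressing the other's synthesis; a transient pulse of either one flips the system into the opposite stable state, where it remains after the pulse ends:

```python
# Two-repressor toggle switch: a minimal Euler-integrated sketch.
# Parameters (alpha, beta) are assumed for illustration, not measured values.
alpha, beta, dt = 10.0, 2.0, 0.01

def step(u, v, drive_u=0.0):
    du = alpha / (1 + v**beta) - u + drive_u   # v represses u's synthesis
    dv = alpha / (1 + u**beta) - v             # u represses v's synthesis
    return u + dt * du, v + dt * dv

u, v = 0.1, 5.0                      # start in the "v high" state
for _ in range(5000):                # let the circuit settle
    u, v = step(u, v)
print(f"before pulse: u={u:.2f}, v={v:.2f}")

for _ in range(2000):                # transient inducer pulse favoring u
    u, v = step(u, v, drive_u=15.0)
for _ in range(5000):                # remove the pulse; the flip persists
    u, v = step(u, v)
print(f"after pulse:  u={u:.2f}, v={v:.2f}")   # now "u high": a stored bit
```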

To engineering-minded biologists, these experiments were energizing but also frustrating. It had taken nearly a year to create the toggle switch and about twice that time to build the flashing microbes. And no one could see a way to connect the two devices to make, for example, blinking bacteria that could be switched on and off.

"We would like to be able to routinely assemble systems from pieces that are well described and well behaved," Endy remarks. "That way, if in the future someone asks me to make an organism that, say, counts to 3,000 and then turns left, I can grab the parts I need off the shelf, hook them together and predict how they will perform." Four years ago parts such as these were just a dream. Today they fill a box on Endy's desk.

Building with BioBricks

"These are genetic parts," Endy says as he holds out a container filled with more than 50 vials of clear, syrupy fluid. "Each of these vials contains copies of a distinct section of DNA that either performs some function on its own or can be used by a cell to make a protein that does something useful. What is important here is that each genetic part has been carefully designed to interact well with other parts, on two levels." At a mechanical level, individual BioBricks (as the M.I.T. group calls the parts) can be fabricated and stored separately, then later stitched together to form larger bits of DNA. And on a functional level, each part sends and receives standard biochemical signals. So a scientist can change the behavior of an assembly just by substituting a different part at a given spot.

"Interchangeable components are something we take for granted in other kinds of engineering," Endy notes, but genetic engineering is only beginning to draw on the power of the concept. One advantage it offers is abstraction. Just as electrical engineers need not know what is inside a capacitor before they use it in a circuit, biological engineers would like to be able to use a genetic toggle switch while remaining blissfully ignorant of the binding coefficients and biochemical makeup of the promoters, repressors, activators, inducers and other genetic elements that make the switch work. One of the vials in Endy's box, for example, contains an inverter BioBrick (also called a NOT operator). When its input signal is high, its output signal is low, and vice versa. Another BioBrick performs a Boolean AND function, emitting an output signal only when it receives high levels of both its inputs. Because the two parts work with compatible signals, connecting them creates a NAND (NOT AND) operator. Virtually any binary computation can be performed with enough NAND operators.

Beyond abstraction, standardized parts offer another powerful advantage: the ability to design a functional genetic system without knowing exactly how to make it. Early last year a class of 16 students was able in one month to specify four genetic programs to make groups of E. coli cells flash in unison, as fireflies sometimes do. The students did not know how to create DNA sequences, but they had no need to. Endy hired a DNA-synthesis company to manufacture the 58 parts called for in their designs. These new BioBricks were then added to M.I.T.'s Registry of Standard Biological Parts. That online database today lists more than 140 parts, with the number growing by the month.


Hijacking Cells

As useful as it has been to apply the lessons of other fields of engineering to genetics, beyond a certain point the analogy breaks down. Electrical and mechanical machines are generally self-contained. That is true for a select few genetic devices: earlier this year, for example, Milan Stojanovic of Columbia University contrived test tubes of DNA-like biomolecules that play a chemical version of tic-tac-toe. But synthetic biologists are mainly interested in building genetic devices within living cells, so that the systems can move, reproduce and interact with the real world. From a cell's point of view, the synthetic device inside it is a parasite. The cell provides it with energy, raw materials and the biochemical infrastructure that decodes DNA to messenger RNA and then to protein.

The host cell, however, also adds a great deal of complexity. Biologists have invested years of work in computer models of E. coli and other single-celled organisms [see "Cybernetic Cells," Scientific American, August 2001]. And yet, acknowledges Ron Weiss of Princeton, "if you give me the DNA sequence of your genetic system, I can't tell you what the bacteria will do with it." Indeed, Endy recalls, "about half of the 60 parts we designed in 2003 initially couldn't be synthesized because they killed the cells that were copying them. We had to figure out a way to lower the burden that carrying and replicating the engineered DNA imposed on the cells." (Eventually 58 of the 60 parts were produced successfully.)

One way to deal with the complexity added by the cells' native genome is to dodge it: the genetic device can be sequestered on its own loop of DNA, separate from the chromosome of the organism. Physical separation is only half the solution, however, because there are no wires in cells. Life runs on "wetware," with many protein signals simply floating randomly from one part to another. "So if I have one inverter over here made out of proteins and DNA," Endy explains, "a protein signal meant for that part will also act on any other instance of that inverter anywhere else in the cell," whether it lies on the artificial loop or on the natural chromosome.

--------------------------------------------------------------------------------
"The people in this class are happy and building nice, constructive things, as opposed to new species of virus or new kinds of bioweapons." --Drew Endy, M.I.T.
--------------------------------------------------------------------------------


One way to prevent crossed signals is to avoid using the same part twice. Weiss has taken this approach in constructing a "Goldilocks" genetic circuit, one that lights up when a target chemical is present but only when the concentration is not too high and not too low. Tucked inside its various parts are four inverters, each of which responds to a different protein signal. But this strategy makes it much more difficult to design parts that are truly interchangeable and can be rearranged.
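Stripped of its biochemistry, the Goldilocks behavior is a band detector: report the target only between a low and a high threshold. A sketch of the logic (the threshold values are assumed; the real circuit realizes the comparison with four distinct protein-signal inverters):

```python
# Goldilocks band detection: output high only for mid-range concentrations.
# Threshold values are assumed for illustration, in arbitrary units.
LOW, HIGH = 0.2, 2.0

def above(x, threshold):           # a thresholded sensor stage
    return x > threshold

def goldilocks(x):
    # "not too low" AND "not too high"
    return above(x, LOW) and not above(x, HIGH)

for x in (0.05, 0.5, 1.5, 5.0):
    print(f"concentration {x}: light {'on' if goldilocks(x) else 'off'}")
```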

Endy is testing a solution that may be better for some systems. "Our inverter uses the same components [as one of Weiss's], just arranged differently," Endy says. "Now the input is not a protein but rather a rate, specifically the rate at which a gene is transcribed. The inverter responds to how many messenger RNAs are produced per second. It makes a protein, and that protein determines the rate of transcription going out [by switching on a second gene]. So I send in TIPS--transcription events per second--and as output, I get TIPS. That is the common currency, like a current in an electrical circuit." In principle, the inverter could be removed and replaced with any other BioBrick that processes TIPS. And TIPS signals are location-specific, so the same part can be used at several places in a circuit without interference.
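One way to picture the common-currency idea is to model each inverter as a function that accepts a transcription rate and returns a transcription rate, so that devices compose the way circuit stages do. The sketch below uses an assumed Hill-type response curve, not Endy's measured transfer function; note how two inversions in series restore the original signal level, just as a pair of electronic inverters forms a buffer:

```python
# TIPS in, TIPS out: inverters as composable rate-to-rate functions.
# The Hill-style response below is an assumed shape, not measured data.
def inverter_tips(tips_in, max_out=100.0, half=10.0, steepness=2.0):
    """High input transcription rate -> low output rate, and vice versa."""
    return max_out / (1.0 + (tips_in / half) ** steepness)

for tips in (1.0, 10.0, 100.0):
    once = inverter_tips(tips)       # one inversion
    twice = inverter_tips(once)      # two inversions ~ a buffer
    print(f"in {tips:6.1f} -> NOT {once:6.1f} -> NOT NOT {twice:6.1f}")
```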

The TIPS technique will be tested by a new set of genetic systems designed by students who took a winter course at M.I.T. this past January. The aim this year was to reprogram cells to work cooperatively to form patterns, such as polka dots, in a petri dish. To do this the cells must communicate with one another by secreting and sensing chemical nutrients.

"This year's systems were about twice the size of the 2003 projects," Endy says. It took 13 months to get the blinking E. coli designs built and into cells. But in the intervening year the inventory of BioBricks has grown, the speed of DNA synthesis has shot up, and the engineers have gained experience assembling genetic circuits. So Endy expects to have the 2004 designs ready for testing in just five months, in time to show off at the first synthetic biology conference, scheduled for this June.

Rewriting the Book of Life

The scientists who attend that conference will no doubt commiserate about the inherent difficulty of engineering a relatively puny stretch of DNA to work reliably within a cell that is constantly changing. Living machines reproduce, but as they do they mutate.

"Replication is far from perfect. We've built circuits and seen them mutate in half the cells within five hours," Weiss reports. "The larger the circuit is, the faster it tends to mutate." Weiss and Frances H. Arnold of the California Institute of Technology have evolved circuits with improved performance using multiple rounds of mutation followed by selection of those cells most fit for the desired task. But left unsupervised, evolution will tend to break genetic machines.

"I would like to make a genetically encoded device that accepts an input signal and simply counts: 1, 2, 3, ... up to 256," Endy suggests. "That's not much more complex than what we're building now, and it would allow you to quickly and precisely detect certain types of cells that had lost control of their reproduction and gone cancerous. But how do I design a counter so that the design persists when the machine makes copies of itself that contain mistakes? I don't have a clue. Maybe we have to build in redundancy--or maybe we need to make the function of the counter somehow good for the cell."

Or perhaps the engineers will have to understand better how simple forms of life, such as viruses, have solved the problem of persistence. Synthetic biology may help here, too. Last November, Hamilton O. Smith and J. Craig Venter announced that their group at the Institute for Biological Energy Alternatives had re-created a bacteriophage (a virus that infects bacteria) called phiX174 from scratch, in just two weeks. The synthetic virus, Venter said, has the same 5,386 base pairs of DNA as the natural form and is just as active.

"Synthesis of a large chromosome is now clearly in reach," said Venter, who for several years led a project to identify the minimal set of genes required for survival by the bacterium Mycoplasma genitalium. "What we don't know is whether we can insert that chromosome into a cell and transform the cell's operating system to work off the new chromosome. We will have to understand life at its most basic level, and we're a long way from doing that."

Re-creating a virus letter-for-letter does not reveal much about it, but what if the genome were dissected into its constituent genes and then methodically put back together in a way that makes sense to human engineers? That is what Endy and colleagues are doing with the T7 bacteriophage. "We've rebuilt T7--not just resynthesized it but reengineered the genome and synthesized that," Endy reports. The scientists are separating genes that overlap, editing out redundancies, and so on. The group has completed about 11.5 kilobases so far and expects to finish the remaining 30,000 base pairs by the end of 2004.

Beta-Testing Life 2.0

Synthetic biologists have so far built living genetic systems as experiments and demonstrations. But a number of research laboratories are already working on applications. Martin Fussenegger and his colleagues at ETH Zurich have graduated from bacteria to mammals. Last year they infused hamster cells with networks of genes that have a kind of volume control: adding small amounts of various antibiotics turned the output of the synthetic genes to low, medium or high. Controlling gene expression in this way could prove quite handy for gene therapies and the manufacture of pharmaceutical proteins.

Living machines will probably find their first uses for jobs that require sophisticated chemistry, such as detecting toxins or synthesizing drugs. Last year Homme W. Hellinga of Duke University invented a way to redesign natural sensor proteins in E. coli so that they would latch onto TNT or any other compound of interest instead of their normal targets. Weiss says that he and Hellinga have discussed combining his Goldilocks circuit with Hellinga's sensor to make land-mine detectors.

Jay Keasling, who recently founded a synthetic biology department at Lawrence Berkeley National Laboratory (LBNL), reports that his group has engineered a large network of wormwood and yeast genes into E. coli. The circuit enables the bacterium to fabricate a chemical precursor to artemisinin, a next-generation antimalarial drug that is currently too expensive for the parts of the developing world that need it most.

Keasling says that three years of work have increased yields by a factor of one million. By boosting the yields another 25- to 50-fold, he adds, "we will be able to provide artemisinin-based dual cocktail drugs to the Third World for about one tenth the current price." With relatively simple modifications, the bioengineered bacteria could be altered to produce expensive chemicals used in perfumes, flavorings and the cancer drug Taxol.

Other scientists at LBNL are using E. coli to help dispose of nuclear waste as well as biological and chemical weapons. One team is modifying the bacteria's sense of "smell" so that the bugs will swim toward a nerve agent, such as VX, and digest it. "We have engineered E. coli and Pseudomonas aeruginosa to precipitate heavy metals, uranium and plutonium on their cell wall," Keasling says. "Once the cells have accumulated the metals, they settle out of solution, leaving cleaned wastewater."

Worthy goals, all. But if you become a touch uneasy at the thought of undergraduates creating new kinds of germs, of private labs synthesizing viruses, and of scientists publishing papers on how to use bacteria to collect plutonium, you are not alone.

In 1975 leading biologists called for a moratorium on the use of recombinant-DNA technology and held a conference at the Asilomar Conference Grounds in California to discuss how to regulate its use. Self-policing seemed to work: there has yet to be a major accident with genetically engineered organisms. "But recently three things have changed the landscape," Endy points out. "First, anyone can now download the DNA sequence for anthrax toxin genes or for any number of bad things. Second, anyone can order synthetic DNA from offshore companies. And third, we are now more worried about intentional misapplication."

So how does society counter the risks of a new technology without also denying itself all the benefits? "The Internet stays up because there are more people who want to keep it running than there are people who want to bring it down," Endy suggests. He pulls out a photograph of the class he taught last year. "Look. The people in this class are happy and building nice, constructive things, as opposed to new species of virus or new kinds of bioweapons. Ultimately we deal with the risks of biological technology by creating a society that can use the technology constructively."

But he also believes that a meeting to address potential problems makes sense. "I think," he says, "it would be entirely appropriate to convene a meeting like Asilomar to discuss the current state and future of biological technology." This June, as leaders in the field meet to share their latest ideas about what can now be created, perhaps they will also devote some thought to what shouldn't.
"Esta é a ditosa pátria minha amada."