An unlimited source of clean energy just got a little easier to attain

Researchers in the US have overcome a key barrier to making nuclear fusion reactors a reality. In results published in Nature, scientists have shown that they can now produce more energy from a fusion reaction than was deposited in the nuclear fuel that drove it. The use of fusion as a source of energy remains a long way off, but the latest development is an important step towards that goal.

Nuclear fusion is the process that powers the sun and billions of other stars in the universe. If mastered, it could provide an unlimited source of clean energy because the raw materials are plentiful and the operation produces no carbon emissions.

During the fusion process, smaller atoms fuse into larger ones, releasing huge amounts of energy. To achieve this on Earth, scientists have to create conditions similar to those at the centre of the sun, which involves creating very high pressures and temperatures.

There are two ways to achieve this: one uses lasers and is called inertial confinement fusion (ICF); the other uses magnets and is called magnetic confinement fusion (MCF). Omar Hurricane and colleagues at the Lawrence Livermore National Laboratory opted for ICF with the help of 192 high-energy lasers at the National Ignition Facility in the US, which was designed specifically to boost fusion research.

A typical fusion reaction at the facility takes weeks of preparation. But the fusion reaction is completed in an instant (150 picoseconds, to be precise, which is less than a billionth of a second). In that moment, at the core of the reaction the pressure is 150 billion times atmospheric pressure. The density and temperature of the plasma created is nearly three times that at the centre of the sun.

The most critical part of the reaction, and one that had been a real concern for Hurricane’s team, is the shape of the fuel capsule. The capsule is made from a polymer and is about 2mm in diameter – about the size of a pinhead. On the inside it is coated with deuterium and tritium – isotopes of hydrogen – frozen into a solid state.

Hohlraum geometry with a capsule inside. Dr. Eddie Dewald/LLNL

This capsule is placed inside a gold cylinder, at which the 192 lasers are fired. The lasers hit the gold container, which emits X-rays that heat the pellet and make it implode almost instantly, causing a fusion reaction. According to Debbie Callahan, a co-author of the study: “When the lasers are fired, the capsule is compressed 35 times. That is like compressing a basketball to the size of a pea.”
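Callahan’s analogy checks out as a quick back-of-the-envelope calculation, treating “compressed 35 times” as a linear (diameter) compression. The basketball size below is an assumed typical value, not a figure from the study:

```python
# Sanity check of the basketball-to-pea analogy (assumed typical sizes).
basketball_d = 0.24          # basketball diameter, metres (assumed)
pea_d = basketball_d / 35    # linear compression by a factor of 35
print(round(pea_d * 1000, 1))  # ≈ 6.9 mm, indeed about pea-sized
```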

The compression produces immense pressure and temperature, leading to a fusion reaction. Problems with the process were overcome last September when, for the first time, Hurricane was able to produce more energy from a fusion reaction than was deposited in the fuel. Since then he has been able to repeat the experiment.

Hurricane’s current output, although greater than the energy deposited in the hydrogen fuel, is still 100 times less than the total energy put into the system, most of which is consumed by the lasers. Yet this is a big achievement, because reaching ignition just became easier.

Hurricane hasn’t yet reached the NIF’s stated goal of achieving “ignition”, the point at which nuclear fusion generates as much energy as the lasers supply. Only then would it be possible to build a sustainable power plant based on the technology.

Scientists have been trying to tame fusion power for more than 50 years, but with little success. Although the National Ignition Facility, a US$3.5-billion operation, was built for classified government research, half of its laser time was devoted to fusion with the aim of accelerating research.

Zulfikar Najmudin, a plasma physicist at Imperial College London, said: “These results will come as a huge relief to scientists at NIF, who were very sure they could have achieved this a few years ago.”

With laser-mediated ICF showing positive results, the obvious question is how does it compare with magnet-mediated fusion? According to Stephen Cowley, director of Culham Centre for Fusion Energy in Oxfordshire, “The different measures of success make it hard to compare NIF’s results with those of ‘magnetic confinement’ fusion devices.”

Culham works with magnetic confinement where, in 1997, the facility generated 16MW of power for 24MW put into the device. “We have waited 60 years to get close to controlled fusion. We are now close in both magnetic and inertial. The engineering milestone is when the whole plant produces more energy than it consumes,” Cowley said.

That may happen at the fusion reactor ITER, under construction in France, which is expected to be the first power plant that produces more energy than it consumes to sustain a fusion reaction.

First published on The Conversation. Image credit: LLNL.

2013 Physics Nobel

This time the pundits were right. The 2013 Nobel Prize in Physics was indeed awarded for the discovery of the Higgs boson. Peter Higgs and François Englert shared the prize for suggesting the mechanism that gives subatomic particles their mass.

The Higgs boson is a key part of the Standard Model, which is by far the best theory we have to explain how the universe works at the basic level. If the Higgs boson were not to exist, physicists would have had to go back to the drawing board.

The easiest way to understand the importance of the Higgs boson is to go back to the beginning of the universe. After the Big Bang, for a short time, all particles were massless. But soon after, when the temperature fell below a trillion degrees, the Higgs field switched on. Some particles interacting with this field slowed down and others did not. Those that did, such as quarks and electrons, gained mass, while others, like photons and gluons, remained massless. Only when this happened did matter as we know it come into existence in the form of atoms.

This is what Englert and Higgs suggested independently. But others at the time were involved too. In 1964, six physicists came up with similar ideas. First was Englert at the Université Libre de Bruxelles who did it with Robert Brout. Then Higgs at the University of Edinburgh did it on his own. And finally it was a group of three researchers from Imperial College – Thomas Kibble, Gerald Guralnik and Carl Hagen.

These others, many lament, deserved credit too. That, though, would not have been possible. “It is no surprise that the Swedish Academy felt unable to include us, constrained as they are by a self-imposed rule that the Prize cannot be shared by more than three people,” Kibble said of himself and his colleagues. Another candidate would have been Robert Brout, Englert’s colleague, were he still alive.

Still others decried the prize-awarding committee’s exclusion of the experimentalists who proved the existence of the Higgs boson. The Large Hadron Collider (LHC) outside Geneva, where these experiments were conducted by the Atlas and CMS teams, was acknowledged in the official citation, but the rules of the prize restrict it to individuals.

Jon Butterworth of University College London, who was involved in the Higgs experiments at the LHC, wrote:

The discovery of a Higgs boson, showing that the theoretical ideas are manifested in the real world, was thanks to the work of many thousands. There are 3,000 or so people on Atlas, a similar number on CMS, and hundreds who worked on the LHC.

Paul Newman at the University of Birmingham, who is also involved in work at the LHC, said, “At first sight, the Higgs mechanism is a very strange idea.” So it is fitting that, 50 years after the theory was suggested, it took the world’s biggest experiment, thousands of scientists and many billions of pounds to prove the existence of the Higgs boson and thus the Higgs mechanism.

However, the repeated delays in this morning’s announcement of the prize, as the committee debated over who to give the prize to, were a sign that the most-deserved prize will also remain one of the most controversial ones.

First published on The Conversation.

Image credit: CERN

The price of gaining an accurate theory has been the erosion of our common sense

Review of Richard Feynman’s QED: The strange theory of light and matter

The title of the post is a quote from Feynman’s book. Written by a Nobel laureate and one of the most beloved of scientists, it is perhaps the best explainer of a theory that flips everything we know about physical phenomena on its head: quantum electrodynamics (QED), which accounts for 99% of all phenomena involving photons and electrons.

But to be able to understand it one must, as Feynman puts it, “accept some very bizarre behaviour: a single beam of light reflecting from all parts of a mirror, light travelling in paths other than a straight line, photons going faster or slower than the speed of light, electrons going backwards in time, photons disintegrating into a positron-electron pair, and so on.”

This book is a series of four lectures that Feynman gave in 1983 at the University of California, Los Angeles. It is a short and entertaining, if intense, read. Feynman goes into quite a lot of detail about how QED can be explained by the use of arrows drawn on a sheet of paper (!). But, as Feynman claims more than a few times in the book, what you get from it is the spirit of the theory. To be able to use it accurately, students regularly study it for several years. (Here’s an example of how I used QED to explain a new type of flat lens).

There is as much packed into the last few pages of the book as into the remainder. In them Feynman, who says “Being a professor means having the habit of not being able to stop talking at the right time”, tries to explain the rest of physics beyond QED. His aim is to show that physicists’ search for elegance in nature through theories of physics is necessary, mostly because of the complexity of how nature works. Perhaps we are being too naive, perhaps not. We won’t know till we make theories and test them. QED has withstood 70 years of rigorous testing.

A revolution in lens-making

Understanding of optics has changed no end since the world’s oldest known lens was ground nearly 3,000 years ago in modern-day Iraq. Yet its Assyrian maker would instantly recognise today’s lenses, which continue to be made much as they were then: by fashioning a piece of transparent material into a solid with curved surfaces. Just as invariably, the curves introduce optical aberrations whose correction requires tweaking the lens’s geometry in complicated ways. As a consequence, lenses remain bulky, especially by the standards of modern electronics.

Enter Federico Capasso, of Harvard University. He and his colleagues have created a lens that is completely flat and about as thick as two human hairs. It works because its features, measured in nanometres (billionths of a metre), make it a “metamaterial”, endowed with some weird and useful properties.

According to the laws of quantum mechanics, a particle of light, called a photon, can take literally any possible path between source A and point B. However, those same laws stipulate that the path of least time is the most likely. When a photon is travelling through a uniform medium, like a vacuum, that amounts to a straight line. But although its speed in a vacuum is constant, light travels at different (lower) speeds in different media. For example, it moves more slowly in glass than it does in air. So in a medium composed of both air and glass, light’s most likely path from A to B will depend on the thickness of glass it needs to traverse, as well as the total distance it needs to cover. That means that the light may sometimes prefer to bend. This is the quantum-mechanical basis of refraction.
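The least-time picture above can be checked numerically. The sketch below uses an illustrative geometry and refractive indices (not figures from the article): it scans for the interface crossing point that minimises the optical path from A in air to B in glass, and recovers Snell’s law, n₁·sin θ₁ = n₂·sin θ₂.

```python
import math

n1, n2 = 1.0, 1.5        # refractive indices: air, glass (illustrative)
A = (0.0, 1.0)           # source, 1 m above the interface (the line y = 0)
B = (1.0, -1.0)          # destination, 1 m below it

def optical_path(x):
    """Optical path length (proportional to travel time) via crossing point (x, 0)."""
    d1 = math.hypot(x - A[0], A[1])      # distance covered in air
    d2 = math.hypot(B[0] - x, B[1])      # distance covered in glass
    return n1 * d1 + n2 * d2

# Crude scan for the least-time crossing point.
xs = [i / 10000 for i in range(10001)]
x_best = min(xs, key=optical_path)

theta1 = math.atan2(x_best - A[0], A[1])   # angle of incidence
theta2 = math.atan2(B[0] - x_best, -B[1])  # angle of refraction
print(n1 * math.sin(theta1), n2 * math.sin(theta2))  # nearly equal: Snell's law
```

The most likely (least-time) path bends at the interface exactly as classical refraction says it should.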

In order to maximise the probability that photons from A will end up precisely at B, those going in a straight line need to be slowed down relative to those taking a more circuitous route, so that, in effect, all hit B at the same time. This can be done by forcing the former to pass through more glass than the latter. The result is a round piece of glass that is thick in the middle, where the straight-line path crosses, and tapers off towards the edge, where the less direct routes do—in other words, a focusing lens, with its focal point at B.

Dr Capasso’s lens, described in Nano Letters, also slows photons down. But instead of using varying thickness of glass to do the job, he and his team created an array of antennae which absorb photons, hold on to them for a short time and then release them. In order for this trick to work, though, the distance between the antennae has to be smaller than the wavelength of the light being focused. In Dr Capasso’s case that means less than 1,550 nanometres, though he thinks that with tweaking it could be made to work with shorter-wavelength visible light, too.

Creating the array involved coating a standard silicon wafer, 250 microns thick, with a 60-nanometre layer of gold. Most of this layer was then stripped away using a technique called electron-beam lithography, leaving behind a forest of V-shaped antennae arranged in concentric circles. By fiddling with their precise shape, after much trial and error, antennae lying on different circles could be coaxed into holding on to the photons for slightly different lengths of time, mimicking an ordinary glass lens. The whole fragile system can be sandwiched between two sheets of transparent material to make it more robust.

At present the new-fangled lens only works for monochromatic light and so is unlikely to replace the glass sort in smartphone cameras anytime soon. But it could revolutionise instruments that rely on single-colour lasers, by making further miniaturisation possible while eliminating the optical aberrations inherent to glass lenses. Such devices include laser microscopes, which are used to capture high-resolution images of cells, or optical data storage, where a more accurate and smaller lens could help squeeze more information into ever less space.

First published on economist.com.

References: 

  1. Capasso et al., Aberration-Free Ultrathin Flat Lenses and Axicons at Telecom Wavelengths Based on Plasmonic Metasurfaces, Nano Letters, 2012.
  2. Capasso et al., Light Propagation with Phase Discontinuities: Generalized Laws of Reflection and Refraction, Science, 2011.

Also appeared in The Economist. Also available in audio here.

Image credit: Francesco Aieta

Printing at the highest resolution possible

How high can you get? Resolution-wise, that is. In 2010, when launching the Apple iPhone 4, Steve Jobs claimed that the 326 dots per inch (dpi) resolution of that machine’s display would make it impossible to pick the pixels apart. His reason was that this density of dots is at the limit of the resolving power of the human eye when something is held at reading distance. This limit is not, however, the theoretical maximum resolution of an image. That is about 100,000 dpi, a figure imposed by the laws of physics. Place any more dots in an inch and the light waves coming from them start to interfere with each other, leading to a loss of clarity.
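Both figures are easy to verify: the dot pitch is simply one inch divided by the dpi. A quick sketch:

```python
# Dot pitch (centre-to-centre spacing) at a given print or display density.
def dot_pitch_m(dpi):
    return 0.0254 / dpi   # one inch is 25.4 mm

retina_pitch = dot_pitch_m(326)        # ≈ 78 µm: near the eye's limit at reading distance
physical_pitch = dot_pitch_m(100_000)  # = 254 nm: about half a visible-light wavelength
print(retina_pitch, physical_pitch)
```

At 100,000 dpi the spacing (254 nm) is comparable to half the wavelength of visible light, which is why diffraction blurs anything packed more densely.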

Printing at 100,000 dpi using either the inkjet technique (in which droplets of liquid ink are laid down side by side) or the laserjet technique (in which static electricity is used to direct bits of powdered ink onto paper, where a laser melts them) is impossible. Neither can manage more than about 10,000 dpi. But Karthik Kumar, a materials scientist at Singapore’s Agency for Science, Technology and Research, thinks he can do better. As he and his colleagues report in Nature Nanotechnology, they have a prototype that can manage the full 100,000. The catch is that it uses “ink” made out of silver and gold.

Actually, that is not the only catch. For the image has to be created using an electron beam, rather than a laser or an inkjet, and such beams are rather hard to handle. But as a proof of principle it is interesting, and it might lead to cheaper and faster methods.

Dr Kumar and his team start with a plate of silicon. The electron beam carves bits of this away, leaving a pattern of cylindrical posts each about 140 nanometres (billionths of a metre) across and 50 nanometres apart. That “about” is important, though. The exact diameters of the posts and the distances between them are crucial. Varying them changes the colour that forms between the posts.

To create this colour, the plate is coated with a layer of silver and another of gold. The outer electrons of the atoms of these heavy metals often come loose, to form a cloud akin to an electronic gas. When light falls on this gas, it absorbs all frequencies bar one, which is reflected. Exactly which frequency is reflected depends on the resonant frequency at which the electron gas vibrates. And that, in turn, depends on how far apart the silicon posts (which constrain the gas’s movements) are.

A coloured image can thus be made by varying the size and spacing of the posts. This, the team did. Specifically, they recreated a widely used test image: that of Lenna, a pin-up girl from the 1970s whose picture is reckoned (ahem) a challenge to reproduce because of its wide range of tones. Dr Kumar’s version of Lenna was only 100 microns (about the thickness of a human hair) across, but matched the original with reasonable fidelity.

Carving images on silicon using electron beams, and then coating the result with precious metals, is unlikely ever to be a viable technology for the mass printing of images. It might, though, be a good way of storing data permanently—better, in terms of density, at least, than existing optical techniques such as CDs, DVDs and Blu-ray discs. It is also strangely reminiscent of the Daguerreotype, an early form of photography that formed images of silver on a copper plate. Bearing in mind the multi-billion dollar industry that Louis Daguerre’s idea eventually turned into, perhaps Dr Kumar’s version is not so strange, after all.

Also published on economist.com.

References:

  1. Kumar et al., Nature Nanotechnology, 2012
  2. Lenna Image
  3. Retina Display

Image from here.

The physics of sand castles: Just add water

A day out on the beach would be incomplete without a sand castle. The mightier the castle, the better. But sand is next to useless as a building material. Without water it simply spreads out as wide as possible. So, in search of a good recipe, Daniel Bonn, a physicist at the University of Amsterdam, and his colleagues have stumbled upon a formula for making the perfect sandy redoubt.

As they reveal in a paper published this week in Scientific Reports, the key is to use sand with only 1% water by volume. Wet sand has grains coated with a thin layer of water. Owing to water’s surface tension, this thin coat acts like a skin stretched over many grains, holding them together by creating bridges between them. The strength of these bridges is enough to fight Earth’s gravity and prevent the structures from buckling under their own weight.

An easy way to achieve the right amount of water, Dr Bonn suggests, is to tamp wet sand in a mould (open at the top and the bottom) with a thumper at least 70 times, as he did in his experiments.

As for the design itself, unsurprisingly, the wider the base the taller the castle. According to calculations, using ideally moist sand, a column with a three-inch diameter could rise as high as two metres. The current world record for the tallest sandcastle, 12 metres, set by Ed Jarrett in 2011, used a base of roughly 11 metres. If Dr Bonn is right, sand engineers could in principle beat that with a castle thrice the height upon the same foundation.
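Bonn’s paper finds that the maximum height of a sand column grows as the 2/3 power of its base diameter. The sketch below calibrates that scaling against the three-inch/two-metre figure quoted above; the prefactor is a rough fit to those numbers, not a value from the paper, and real castles taper and need safety margins:

```python
# Maximum sand-column height under the h ∝ D^(2/3) scaling of Bonn et al.,
# with the prefactor calibrated from the article's figures (3 inches → 2 m).
D0 = 3 * 0.0254            # reference base diameter, metres
H0 = 2.0                   # corresponding maximum height, metres
c = H0 / D0 ** (2 / 3)     # rough calibrated prefactor

def max_height(diameter_m):
    return c * diameter_m ** (2 / 3)

print(round(max_height(11.0)))  # roughly 55 m for an 11-metre base
```

On this crude estimate an 11-metre base could in principle support well over thrice the 12-metre record, so the article’s claim is, if anything, conservative.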

First published in The Economist.

Image from here.

Through twists of light

Two centuries ago a French engineer noticed something special about light from the sun and discovered the phenomenon of polarisation. Now, using that property of light, scientists have developed a technique to spot the presence of water and other biological molecules on Earth-like planets. With more powerful telescopes being built, they might just be able to search for life through the twists of light.

In search of light through the twists of light, Science Oxford, 20 March 2012