Metals in your smartphone have no substitutes

A few centuries ago, there were just a few widely used materials: wood, brick, iron, copper, gold and silver. Today’s material diversity is astounding. A chip in your smartphone, for instance, contains 60 different elements. Our lives are so dependent on these materials that a scarcity of a handful of elements could send us back in time by decades.

If we do ever face such scarcity, what can be done? Not a lot, according to Thomas Graedel of Yale University and his colleagues, who decided to investigate the materials we rely on. They restricted their analysis to metals and metalloids, which could face more critical constraints because many of them are relatively rare.

The authors’ first task was to make a comprehensive list of uses for these 62 elements. This proved surprisingly difficult. Much of the modern use of metals happens behind the closed doors of corporations, under the veil of trade secrets. Even when we can find out how certain metals are used, it is not always possible to determine the proportions in which they are used. Their compromise was to account for the use of 80% of the material that is made available each year through extraction and recycling.

The next task was to determine whether there were any substitutes for these uses. But, as Graedel writes, “the best substitute for a metal in a particular use is not always readily apparent.” Each element’s properties are unique, and substitution often reduces the performance of the product. But it can be done.

Two examples stand testament to that. In the 1970s, cobalt was commonly used in magnets. When a civil war in Zaire caused scarcity of cobalt, scientists at General Motors and elsewhere were forced to develop magnets that used no cobalt. More recently, a shortage of rhenium, which is used in superalloys for gas turbines, forced General Electric to develop alternatives that use little or no rhenium.

Graedel’s analysis of substitutes involved ploughing through scientific literature and interviewing product designers and material scientists. The results are a sobering reminder of how critical some metals are. On seeing the data, Andrea Sella of University College London said, “This is an important wake-up call.”

Which metals have good substitutes and which don’t. PNAS

None of the 62 elements have substitutes that perform equally well. And some have no adequate substitutes at all. They include rhenium, rhodium, lanthanum, europium, dysprosium, thulium, ytterbium, yttrium, strontium and thallium.

Economists have long assumed that a shortage of anything will promptly lead to the development of suitable substitutes, an attitude fostered in part because there have been successful substitutions in the past, such as the cobalt and rhenium examples. But metals are special, Graedel said: “We have shown that metal substitution is very problematic. Substitution would need to mimic these special properties – a real challenge in many applications.”

“The clarity of Graedel’s thinking is impressive,” said Sella. “No one has analysed metal criticality in such detail.” One of Graedel’s biggest contributions has been developing a visual way of understanding how critical metals are. He and his colleagues created a 3D map, where the three axes represent supply risk, environmental implications and vulnerability to supply restriction.

The Yale analytical framework for determining metal criticality. PNAS

The scarcity of metals came to public attention in 2010 when China suddenly decided to restrict its export of a group of metals called the rare earths. Prices of these metals shot up by as much as five times and caused companies around the world to consider reopening their rare earth mines. This had knock-on effects on the prices of everything from gadgets to wind turbines.

Some comfort may be drawn from the fact that consumption of some metals can peak. For example, the use of iron has reached saturation in many countries. And, in the US, this seems to have happened for aluminium too. This, however, is the case only for bulk metals. Scarcer metals, even with superior recycling, may never reach saturation.

Apart from China, a handful of countries, including the US, South Africa, Australia, Congo, and Canada, hold the most diverse and largest metal reserves. “A national disaster or extended political turmoil in any of them would significantly ripple throughout the material world in which we live,” said Graedel.

As Sella puts it, Graedel’s measured analysis, published in the Proceedings of the National Academy of Sciences, is a warning of a serious issue. “But he has a thoughtful way of putting it.”

First published in The Conversation.

Image: intelfreepress

Scientists falter as much as bankers in pursuit of answers

Bankers aim to maximise profits. Scientists aim to understand reality. But Mike Peacey of the University of Bristol suggests, based on a new model he has just published in Nature, that both groups are equally likely to conform to whatever views are prevalent, whether those views are right or wrong.

In the past decade scientists have raised serious doubts about whether science is as self-correcting as is commonly assumed. Many published findings, including those in the most prestigious journals, have been found to be wrong. One of the reasons is that, once a hypothesis becomes widely accepted, it becomes very difficult to refute it, which makes it, as Jeremy Freese of Northwestern University recently put it, “vampirical more than empirical – unable to be killed by mere evidence”.

There are three possible explanations for why scientists converge on mistaken conclusions. First, as humans, scientists try to be rational but remain stuck on certain views in the face of contrary evidence. Second, some scientists make up data to further their careers, as happened in a high-profile case last year. Third, the “publish or perish” culture pushes scientists, consciously or unconsciously, towards results that support their conclusions.

At the heart of science’s attempt to be self-correcting is the peer review system. The hope is that scientists’ aim to understand the world will guide them in evaluating the research, and that multiple independent reviews will get rid of some of the biases that usually affect the authors and the reviewers.

Sadly the peer review system does not always live up to its high aims. Some have called for it to be abandoned, while others insist that, like democracy, it is the least worst system on offer. “Peer review isn’t as bad as many think,” Peacey said. He and his colleagues decided to investigate what some of its faults are and how they could be fixed. They built a computer model to understand how scientists may behave, based on some simplified parameters.

Subjectivity wins

Assume a group of scientists is deciding between Hypothesis A and Hypothesis B. Each scientist has some probability of leaning towards one hypothesis or the other. The computer model begins when a scientist submits a manuscript based on one of these views to a journal. To keep things simple, editors always pass the manuscript on for peer review. The reviewers must then decide whether the manuscript should be published, and afterwards which hypothesis they will lean towards in their own future submissions.

(In reality, one of the hypotheses may be correct, and if herding occurs on the correct one it does no harm. But that wasn’t the point of the experiment, so the researchers attached no value judgement to either hypothesis.)

They ran the model in three different conditions. In M1 scientists were allowed to use their own subjective and unpublished results to evaluate the manuscript. In M2 scientists were forced to remain as objective as possible. In M3 all manuscripts were published without peer review.

Park, Peacey, Munafo

They looked at the probability of three outcomes – herding (scientists submitting manuscripts on a hypothesis that others agree with but they do not), “misperception” (the distance between scientific perception and the truth) and acceptance for publication. On all three outcomes M1 appears to win: in that model herding took the longest to occur, misperception was at its lowest and the probability of acceptance was about the same.
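The mechanics described above can be sketched as a toy agent-based simulation. The sketch below is illustrative only: the function name, the belief-update rule, the reviewer count and all parameter values are assumptions made for the sake of a runnable example, not the published model’s actual specification.

```python
import random

def run_model(n_scientists=50, rounds=500, subjective=True, seed=1):
    """Toy simulation of herding under peer review.

    All names and parameters here are illustrative assumptions,
    not the published model's actual specification.
    """
    rng = random.Random(seed)
    # Each scientist's probability of leaning towards Hypothesis A.
    beliefs = [rng.random() for _ in range(n_scientists)]
    published_a = published_b = 0

    for _ in range(rounds):
        # A random scientist submits a manuscript backing the
        # hypothesis they currently lean towards.
        author = rng.randrange(n_scientists)
        backs_a = rng.random() < beliefs[author]

        # Three randomly chosen reviewers judge the manuscript.
        reviewers = rng.sample(range(n_scientists), 3)
        if subjective:
            # Subjective review (like M1): each reviewer votes to accept
            # when their own leaning matches the manuscript's claim.
            votes = sum((rng.random() < beliefs[r]) == backs_a
                        for r in reviewers)
            accepted = votes >= 2
        else:
            # "Objective" review (like M2): acceptance is independent
            # of the reviewers' own views.
            accepted = rng.random() < 0.5

        if accepted:
            published_a += backs_a
            published_b += not backs_a
            # Publication nudges every scientist's belief towards
            # the published hypothesis - the source of herding.
            target = 1.0 if backs_a else 0.0
            beliefs = [b + 0.05 * (target - b) for b in beliefs]

    total = published_a + published_b
    # Fraction of the literature backing the majority hypothesis:
    # values near 1.0 mean the field has herded onto one view.
    herding = max(published_a, published_b) / total if total else 0.0
    return herding, beliefs
```

Comparing `run_model(subjective=True)` against `run_model(subjective=False)` gives a crude sense of how the review rule changes the pace at which the literature converges on a single hypothesis.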

“The simple conclusion is that subjective views of scientists should be encouraged in peer-review,” Peacey said. A moderate degree of subjectivity is optimal, further analysis revealed. “This doesn’t happen that much. A lot of journals insist reviewers be as objective as possible in their analysis. Instead, questions like ‘how interesting do you think this paper is?’ or ‘do you think this paper will make significant impact on the field?’ should be asked.”

Herd mentality

The most troubling aspect, however, is that herding occurs in all models. Bankers, particularly, have been blamed for making bad decisions because of herding.

Behavioural economics shows that one way to counter herding is to aggregate private signals across markets, rather than the public signals (buying or selling) that are used currently. For science this would mean a more open system of review, including that which involves peer review after a paper is published.

This form of herding should affect all journals that do not include subjective parameters. John Holmwood of the University of Nottingham said, “High impact factors for journals may well be the outcome of herding. It would be interesting to find out if low impact factor journals offer greater heterogeneity.” Harry Collins of Cardiff University said, “I doubt this sort of herding occurs among top scientists, who are a much smaller group than top journals, which I believe are not publishing the best ideas out there.”

“But what is described is a model not an empirical study,” Holmwood said. And that is one limitation of the study: human behaviour is very difficult to model.

The other limitation might be that herding is not a new phenomenon, and Peacey’s conclusions agree with other scientific literature on human behaviour. The fact that this study was published in the prestigious journal Nature might itself be an example of herding. Or perhaps, for once, scientists are actually closer to a truth.

First published on The Conversation.

Image: smanography

Ageing cells reveal features of cancer

The older we get, the higher our risk of cancer. With age, we accumulate exposure to environments and chemicals that increase the risk of acquiring cancer-causing mutations. But the danger doesn’t increase in a linear manner, and we know little about why there is such a dramatic increase with ageing.

Accumulated damage isn’t the only thing going on as we age. The body’s cells also go through a process called senescence. Chief among the changes that come with senescence are alterations to the epigenome, the proteins and chemical modifications that are attached to our DNA. These epigenetic changes can influence which genes are active in different tissues.

During this phase of a human cell’s life, these changes are an attempt to shut down the process of cell division. Cell division involves copying the chromosomes and distributing them between two daughter cells identical to the parent. But cells that become senescent must stop multiplying.

Cancer cells manage to bypass the mechanisms that stop them multiplying, including those put in place during senescence.

In the new study, published in Nature Cell Biology, Peter Adams at the University of Glasgow followed the ageing process in fibroblasts, which are cells that form connective tissue.

Adams and his colleagues found that ageing cells have less control over their epigenome, leading to widespread changes across the DNA. Many sections of the genome that were supposed to be under the control of DNA methyltransferase (DNMT1) end up with fewer methyl groups than expected, while other sections, known as CpG islands, gain more. Surprisingly, comparing these epigenetic changes with those found in cancer cells revealed many similarities.

According to co-author of the study Richard Meehan, a researcher at the University of Edinburgh’s Human Genetics Unit, the study shows that ageing cells have some of the same features as cancer. “But we must be careful about interpreting the results,” he said. The study involved looking at human cells in Petri dishes, so the experiments must be repeated in animals and then humans before we can draw firm conclusions.

If the study stands that test, however, then we will have a strong hint of why ageing increases our risk of cancer and better understanding of the ageing process. “I don’t know if the results will help us fight cancer, but if I am able to delay the ageing of my fibroblasts, one thing’s for sure: I’ll look a hell of a lot better when I’m older,” Meehan said.

Avi Roy, a researcher at the University of Buckingham, has also worked on the senescence of cells. He said, “What they have done is not completely new, but it is a big piece of work. And they have a lot of evidence to back up their claim.” Roy agrees with Meehan and warns that any conclusions about revealing how cancer works based on this work would be premature.

A 2011 study points to the difficulty of drawing wider conclusions. In that study researchers removed a particular kind of senescent cell from ageing mice. They found that in these mice many age-related diseases, such as cataracts, were delayed. “But the mice didn’t have their life extended. They died of either cardiac arrest or cancer,” Roy said. Much remains to be understood about how ageing causes cancer, and with the latest study from Adams and Meehan we take a few steps closer.

First published on The Conversation.

Image credit: lnmurrey