New cyber-attack model helps hackers time the next Stuxnet

Of the many tricks used by the world’s greatest military strategists, one usually works well—taking the enemy by surprise. It is an approach that goes back to the horse that brought down Troy. But surprise can only be achieved if you get the timing right. And that timing, researchers at the University of Michigan argue, can be calculated using a mathematical model—at least in the case of cyber-wars.

James Clapper, the US Director of National Intelligence, said cybersecurity is “first among threats facing America today,” and the same is true for other world powers. In many ways, cyber-weapons are even more threatening than conventional ones, since attacks can take place in the absence of open conflict. And attacks are waged not just to damage the enemy, but often to steal its secrets.

Timing is key for these attacks, as the name of a common class of vulnerability—the zero-day—makes apparent. A zero-day attack exploits a flaw that defenders do not yet know exists, leaving them zero days to prepare for or defend against the attack. That is why cyber-attacks are usually carried out before an opponent has had time to fix its vulnerabilities.

As Robert Axelrod and Rumen Iliev at the University of Michigan write in a paper just published in the Proceedings of the National Academy of Sciences, “The question of timing is analogous to the question of when to use a double agent to mislead the enemy, where it may be worth waiting for an important event but waiting too long may mean the double agent has been discovered.”

Equations are as good as weapons

Axelrod and Iliev decided the best way to answer the question of timing would be through the use of a simple mathematical model. They built the model using four variables:

  1. Cyber-weapons exploit a specific vulnerability.
  2. Stealth of the weapon measures the chance that the enemy does not discover the weapon’s use and so cannot take steps to prevent its reuse.
  3. Persistence of the weapon measures the chance that the weapon can still be used in the future if it is not used now. Put another way, it is the chance that the enemy does not find and fix the vulnerability on its own, which would render the weapon useless.
  4. Threshold defines the time when the stakes are high enough to risk the use of a weapon. Beyond the threshold you will gain more than you will lose.

Using their model, it is possible to calculate the optimum time of a cyber-attack:

When the persistence of a weapon increases, the optimal threshold increases—that is, the longer a vulnerability exists, the longer one can wait before using it.

When the stealth of a weapon increases, the optimal threshold decreases—if a weapon is likely to stay hidden even after it is used, little is lost by using it early.

The stakes of the outcome matter too. When the stakes are roughly constant over time, it is best to attack as soon as the weapon is ready, since waiting gains nothing. When the stakes vary, it pays to be patient and hold the weapon back for a moment when the stakes are unusually high.
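These qualitative predictions can be reproduced with a toy version of such a model. The sketch below is a simplified illustration, not Axelrod and Iliev’s exact formulation: the uniform distribution of stakes, the discount factor and the function name `optimal_threshold` are all assumptions made here for the sake of a runnable example.

```python
# Toy timing model: each period the stakes g are drawn uniformly from [0, 1].
# Using the weapon pays g now and, with probability `stealth`, leaves it usable
# next period; holding it back keeps it usable with probability `persistence`.
# The optimal policy is a threshold: attack whenever the stakes exceed T.

def optimal_threshold(persistence, stealth, discount=0.9, iters=5000):
    """Find the stakes threshold T above which it pays to attack."""
    v = 0.0  # value of holding a usable weapon at the start of a period
    t = 0.0
    for _ in range(iters):
        # Use the weapon iff g + stealth*discount*v >= persistence*discount*v
        t = max(0.0, min(1.0, (persistence - stealth) * discount * v))
        # Expected value: wait when g < t; use when g >= t,
        # where E[g | g >= t] = (1 + t) / 2 for uniform stakes.
        v = (t * persistence * discount * v
             + (1 - t) * ((1 + t) / 2 + stealth * discount * v))
    return t

# Higher persistence -> higher threshold (worth waiting for bigger stakes);
# higher stealth -> lower threshold (little is lost by using the weapon early).
print(optimal_threshold(0.9, 0.2))  # patient: wait for high stakes
print(optimal_threshold(0.5, 0.2))  # low persistence: attack sooner
print(optimal_threshold(0.9, 0.8))  # high stealth: attack sooner
```

Running the sketch shows the two effects described above: raising persistence raises the threshold, while raising stealth lowers it.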

How to plan the next Stuxnet

Axelrod and Iliev’s model has merit, according to Alan Woodward, a cybersecurity expert at the University of Surrey, because it fits past examples well. It is consistent with the timing of both the Stuxnet attack and Iran’s apparent response to it.

Stuxnet was a worm aimed at interfering with Iran’s attempts to enrich uranium for nuclear weapons. From an American perspective, then, the stakes were very high. The worm remained hidden for nearly 17 months, meaning its stealth was high; its persistence was low, because the vulnerabilities it exploited could have been discovered and fixed at any time. According to the model, the US and Israel should have attacked as soon as Stuxnet was ready, and that is what seems to have happened.

Iran may have responded to this attack by targeting the workstations of Aramco, an oil company in Saudi Arabia that supplied oil to the US. Although the US called it the “most destructive cyber-assault the private sector has seen to date,” the attack achieved little. For Iran, however, the result mattered less than the speed of the response. In a high-stakes case the model predicts immediate use of a cyber-weapon, which is what seems to have happened here, too.

Although the model has been developed for cyber-attacks, it can be equally effective in modeling cyber-defense. Also, the model need not be limited to cyber-weapons; small changes in the variables can be made so that the model can be used to consider other military actions or economic sanctions.

Just like the atomic bomb

Eerke Boiten, a computer scientist at the University of Kent, said: “These models are a good start, but they are far too simplistic. The Stuxnet worm, for example, attacked four vulnerabilities in Iran’s nuclear enrichment facility. Had even one been fixed, the attack would have failed. The model doesn’t take that into account.”

In their book Cyber War: The Next Threat to National Security and What to Do About It, Richard Clarke and Robert Knake write:

It took a decade and a half after nuclear weapons were first used before a complex strategy for employing them, and better yet, for not using them, was articulated and implemented.

That transition period is what current cyber-weapons are going through. In that light, the simplicity of Axelrod and Iliev’s model may be more a strength than a weakness for now.

First published at The Conversation. Image credit: usairforce

Nanoparticles cause cancer cells to die and stop spreading

More than nine in ten cancer-related deaths occur because of metastasis, the spread of cancer cells from a primary tumour to other parts of the body. While primary tumours can often be treated with radiation or surgery, the spread of cancer throughout the body limits treatment options. This could change, however, if work by Michael King and his colleagues at Cornell University delivers on its promise: they have developed a way of hunting down and killing metastatic cancer cells.

When diagnosed with cancer, the best news can be that the tumour is small and restricted to one area. Many treatments, including non-selective ones such as radiation therapy, can be used to get rid of such tumours. But if a tumour remains untreated for too long, it starts to spread. It may do so by invading nearby, healthy tissue or by entering the bloodstream. At that point, a doctor’s job becomes much more difficult.

Cancer is the unrestricted growth of the body’s own cells, which occurs when mutations allow a cell to bypass a key mechanism called apoptosis (or programmed cell death) that the body uses to clear away old cells. Since the 1990s, however, researchers have been studying a protein called TRAIL which, on binding to receptors on a cell’s surface, can reactivate apoptosis. But so far, using TRAIL as a treatment for metastatic cancer hasn’t worked, because cancer cells suppress their TRAIL receptors.

When attempting to develop a treatment for metastases, King faced two problems: targeting moving cancer cells, and ensuring cell death could be activated once they were located. To handle both issues, he built fat-based nanoparticles, about a thousand times smaller than the width of a human hair, and attached two proteins to them. One is E-selectin, which selectively binds to white blood cells; the other is TRAIL.

He chose to stick the nanoparticles to white blood cells because it would keep the body from excreting them easily. This means the nanoparticles, made from fat molecules, remain in the blood longer, and thus have a greater chance of bumping into freely moving cancer cells.

There is an added advantage. Red blood cells tend to travel in the centre of a blood vessel while white blood cells stick to the edges, because red blood cells have a lower density and deform easily to slide around obstacles. Cancer cells have a density similar to white blood cells and also remain close to the vessel walls. As a result, the nanoparticles are more likely to bump into cancer cells and bind their TRAIL receptors.

Image: leukocytes are white blood cells; the liposomes attached to them are the nanoparticles. King/PNAS

King, with help from Chris Schaffer, also at Cornell University, tested these nanoparticles in mice. They first injected healthy mice with cancer cells, and then after a 30-minute delay injected the nanoparticles. These treated mice developed far fewer cancers, compared to a control group that did not receive the nanoparticles.

“Previous attempts have not succeeded, probably because they couldn’t get the response that was needed to reactivate apoptosis. With multiple TRAIL molecules attached on the nanoparticle, we are able to achieve this,” Schaffer said. The work has been published in the Proceedings of the National Academy of Sciences.

While these are exciting results, the research is at an early stage. Schaffer said that the next step would be to test mice that already have a primary tumour.

“While this is an exciting and novel strategy,” according to Sue Eccles, professor of experimental cancer therapeutics at London’s Institute of Cancer Research, “it would be important to show that cancer cells already resident in distant organs (the usual clinical reality) could be accessed and destroyed by this approach. Preventing cancer cells from getting out of the blood in the first place may only have limited clinical utility.”

But there is hope for cancers whose cells spend a lot of time in the bloodstream, such as cancers of the blood, bone marrow and lymph nodes. As Schaffer said, any attempt to control the spread of cancer is bound to help. It remains one of the most exciting areas of research into future cancer treatment.

First published at The Conversation.

Image credit: Cornell University

Why one hectare of tropical forest grows more tree species than the US and Canada combined

One hectare of land in a tropical forest can hold 650 tree species – more than in all of Canada and the continental US. This has left biologists baffled for decades. Now, with advances in data analysis, Phyllis Coley and Thomas Kursar of the University of Utah may have finally found an explanation.

From a broad perspective, evolution is pretty simple. Successful species survive and reproduce, which depends on how readily they obtain resources. So if two species are too similar in their use of resources, they would compete with each other – unless one evolves to use a different resource and exploits a niche that hasn’t been filled. However, in any environment, niches are limited. That is why the diversity in a tropical forest cannot be explained by the exploitation of niches alone.

The competition for niches is shaped by species’ interactions with the environment, which includes both abiotic elements (climate, water, soil and such) and biotic elements (in other words, other species). Tropical forests have stable abiotic environments, so Coley and Kursar concluded it must be the biotic interactions that explain the extraordinary diversity in these forests.

They argue, in an article just published in Science, that an arms race between plants and plant-eaters is what drives evolutionary change. When a plant-eater finds a new way to attack a plant, the plant must evolve a new defence to fight back. Over many generations, these changes force the formation of new species, leading to the observed tropical diversity.

This explanation is known as the Red Queen hypothesis, which gets its name from a statement the Red Queen made to Alice in Lewis Carroll’s “Through the Looking-Glass”:

Now, here, you see, it takes all the running you can do, to keep in the same place.

The Red Queen hypothesis is not new. It was first suggested in 1973 and has since been applied to many other ecological scenarios. So far, however, biologists have found it hard to determine whether it applies to tropical forests because of the sheer size of the task. Tropical forests contain thousands of plant species, each of which may have hundreds of plant-eaters. All of these millions of interactions would need to be taken into account to show the Red Queen hypothesis at work.

Also, in such an arms race, plants have it harder than herbivores: a tree’s lifespan can be hundreds of times longer than that of the average leaf-eater, usually a small insect, so plants evolve new defences far more slowly. That is why a single tropical tree may carry hundreds of distinct chemical compounds in its defensive arsenal against herbivores, which also makes the analysis harder.

This is where advances in data analysis prove handy. To understand these defences on an ecosystem scale requires the use of metabolomics, which is the study of chemical fingerprints left behind by an organism.

Metabolomic analyses across forests in Mexico, the Amazon and Panama show that neighbouring plants have more distinct defences than would be expected by chance – in other words, the Red Queen seems to be in action. Most convincingly, closely related trees and shrubs often have divergent defences, a sign of exploring biotic-interaction niches, yet similar non-defence traits, reflecting the similar abiotic conditions in which they grow.

Coley said that, while the data seems convincing, there are still limitations. Tropical forests have been studied well, but there is no comparable data from the temperate regions, which would be needed as a control to validate the hypothesis. Perhaps such an arms race also occurs in temperate regions that have been studied less. Also, temperate regions are purported to have less diversity in tree species, but that may not actually be true, according to Jeff Ollerton, professor of Biodiversity at the University of Northampton.

In a 2011 study published in the journal Functional Ecology, Angela Moles, the head of the Big Ecology Lab at the University of New South Wales, reviewed the data on interactions between plants and plant-eaters. Only a third of the studies showed more interactions among tropical species than among species at higher latitudes, such as temperate regions, and her meta-analysis (a method for meaningfully comparing different datasets) showed that these positive results were not statistically significant. Worse still, only nine out of 56 comparisons showed chemical defences to be higher in tropical plants than in temperate ones.

Some recent work has also criticised biologists for leaning too heavily on the Red Queen hypothesis. A small but vocal group of researchers argues that other processes can explain the diversity. Chief among the alternative explanations is genetic drift, in which some genetic mutations are passed on to progeny purely by chance. This differs from natural selection, in which mutations spread because they improve survival or reproduction.

While Coley remains confident that the Red Queen hypothesis will indeed prove to be a satisfactory explanation, she also knows that a lot more data will be needed to get there. Previously, the limitation was data analysis; now it is data collection. Researchers have no option but to go out in a tropical forest, search for plants and their herbivores, and then record their interactions.

While other explanations will certainly have some role to play, Coley and Kursar make a persuasive case for why nature seems to have endowed tropical regions with so many plant and plant-eating species. Although Alice may not like it, we may have to thank the Red Queen for it.

First published at The Conversation.