Researchers have discovered a new way in which computers based on quantum physics could beat the performance of classical computers. The work implies that a Matrix-like simulation of reality would require less memory on a quantum computer than on a classical computer. It also hints at a way to investigate whether a deeper theory lies beneath quantum theory.
The finding emerges from a fundamental consideration of how much information is needed to predict the future. Researchers know how to calculate the amount of information inherently transferred in any stochastic process. In theory, this sets the minimum amount of information needed to simulate the process. In reality, however, classical simulations of stochastic processes require more storage than this.
Gu, Wiesner, Rieper and Vedral showed that quantum simulators need to store less information than the optimal classical simulators. That is because quantum simulations can encode information about the probabilities in a “superposition”, where one quantum bit of information can represent more than one classical bit.
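The memory saving can be illustrated with a toy calculation (my own sketch, not the authors' construction): store two internal states of a simulator either as perfectly distinguishable classical states or as two non-orthogonal qubit states, and compare the entropies of the two encodings.

```python
import numpy as np

# Toy illustration: two equally likely internal states of a simulator.
p = 0.5
theta = np.pi / 8  # assumed overlap angle between the two qubit encodings

# Classical memory: Shannon entropy of two distinguishable states = 1 bit.
shannon = -2 * p * np.log2(p)

# Quantum memory: von Neumann entropy of a mixture of NON-orthogonal states.
s0 = np.array([1.0, 0.0])
s1 = np.array([np.cos(theta), np.sin(theta)])
rho = p * np.outer(s0, s0) + p * np.outer(s1, s1)
eigs = np.linalg.eigvalsh(rho)
von_neumann = -sum(e * np.log2(e) for e in eigs if e > 1e-12)

print(shannon, von_neumann)  # the quantum encoding needs less than 1 bit
```

Because the two qubit states overlap, the mixture is less mixed than its classical counterpart, which is exactly the sense in which one quantum bit can stand in for more than one classical bit.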
What surprised the researchers is that the quantum simulations are still not as efficient as they could be: they still have to store more information than the process would seem to need.
A new X-ray study of the remains of an exploded star indicates that the supernova that disrupted the massive star may have turned it inside out in the process. Using very long observations of Cassiopeia A (or Cas A), a team of scientists has mapped the distribution of elements in the supernova remnant in unprecedented detail. This information shows where the different layers of the pre-supernova star are located three hundred years after the explosion, and provides insight into the nature of the supernova.
The data show that the distributions of sulfur and silicon are similar, as are the distributions of magnesium and neon. Oxygen, which according to theoretical models is the most abundant element in the remnant, is difficult to detect because the X-ray emission characteristic of oxygen ions is strongly absorbed by gas along the line of sight to Cas A, and because almost all the oxygen ions have had all their electrons stripped away.
Most of the iron, which according to theoretical models of the pre-supernova was originally on the inside of the star, is now located near the outer edges of the remnant. Surprisingly, there is no evidence from X-ray (Chandra) or infrared (Spitzer Space Telescope) observations for iron near the center of the remnant, where it was formed. Also, much of the silicon and sulfur, as well as the magnesium, is now found toward the outer edges of the still-expanding debris. The distribution of the elements indicates that a strong instability in the explosion process somehow turned the star inside out.
The pulsar at the center of the Crab Nebula is a bundle of energy. This was confirmed by the two MAGIC Telescopes on the Canary island of La Palma. They observed the pulsar in very high energy gamma radiation from 25 up to 400 gigaelectronvolts (GeV), a region that was previously difficult to access with high energy instruments, and discovered that it actually emits pulses with the maximum measurable energy of up to 400 GeV – at least 50 to 100 times higher than theorists thought possible. These latest observations are difficult for astrophysicists to explain. “There must be processes behind this that are as yet unknown”, says Razmik Mirzoyan, project head at the Max Planck Institute for Physics.
Data measured by MAGIC over the past two years show that the pulsed emissions far exceed all expectations, reaching 400 GeV in extremely short pulses of about a millisecond duration. This finding casts doubt on existing theories, since it was thought that all pulsars had significantly lower energy limits. The recent measurements by MAGIC, together with those of the orbiting Fermi satellite at much lower energies, provide an uninterrupted spectrum of the pulses from 0.1 GeV to 400 GeV. These clear observational results create major difficulties for most of the existing pulsar theories that predict significantly lower limits for highest energy emission.
A group of European astronomers has discovered an ancient planetary system that is likely to be a survivor from one of the earliest cosmic eras, 13 billion years ago. The system consists of the star HIP 11952 and two planets. Whereas planets usually form within clouds that include heavier chemical elements, the star HIP 11952 contains very little other than hydrogen and helium. The system promises to shed light on planet formation in the early universe – under conditions quite different from those of later planetary systems, such as our own.
Statistically, a star that contains more “metals” (chemical elements other than hydrogen and helium) is more likely to have planets. This raises a question: originally, the universe contained almost no chemical elements other than hydrogen and helium. Almost all heavier elements have been produced over time inside stars and then flung into space as massive stars end their lives in supernovae. So what about planet formation under conditions like those of the very early universe, say 13 billion years ago? If metal-rich stars are more likely to form planets, are there stars with a metal content so low that they cannot form planets at all? And if so, when in cosmic history should we expect the very first planets to have formed?
Now a group of astronomers has discovered a planetary system that could help provide answers to those questions. As part of a survey targeting especially metal-poor stars, they identified two giant planets around a star known by its catalogue number as HIP 11952, at a distance of about 375 light-years from Earth. By themselves, these planets, HIP 11952b and HIP 11952c, are not unusual. What is unusual is the fact that they orbit such an extremely metal-poor and, in particular, such a very old star.
A new result from ESO’s HARPS planet finder shows that rocky planets not much bigger than Earth are very common in the habitable zones around faint red stars. The international team estimates that there are tens of billions of such planets in the Milky Way galaxy alone, and probably about one hundred in the Sun’s immediate neighbourhood. This is the first direct measurement of the frequency of super-Earths around red dwarfs, which account for 80% of the stars in the Milky Way.
The HARPS team surveyed a carefully chosen sample of 102 red dwarf stars in the southern skies over a six-year period. A total of nine super-Earths (planets with masses between one and ten times that of Earth) were found, including one each inside the habitable zones of Gliese 581 and Gliese 667 C. The astronomers could estimate how heavy the planets were and how far from their stars they orbited.
By combining all the data, including observations of stars that did not have planets, and looking at the fraction of existing planets that could be discovered, the team has been able to work out how common different sorts of planets are around red dwarfs. They find that the frequency of occurrence of super-Earths in the habitable zone is 41% with a range from 28% to 95%.
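The logic of such a correction can be sketched with a toy calculation (the numbers below are made up for illustration; they are not the HARPS team's actual detection-efficiency model):

```python
# Toy completeness-corrected occurrence estimate.
stars = 102          # red dwarfs surveyed
detections = 2       # habitable-zone super-Earths actually found
completeness = 0.05  # assumed fraction of such planets the survey could detect

# Each detection stands in for many planets the survey would have missed.
occurrence = detections / (stars * completeness)
print(f"{occurrence:.0%}")  # prints 39% with these toy numbers
```

Dividing by the detection completeness is what turns a handful of found planets into a galaxy-wide frequency, and the uncertainty in that completeness is why the quoted range (28% to 95%) is so wide.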
A study of galaxies in the deepest far-infrared image of the sky, obtained by the Herschel Space Observatory, highlights the two contrasting ways that stars formed in galaxies up to 12 billion years ago.
Recent results from Herschel show that gas-rich galaxies in the early universe were able to create stars at an intense rate. In the nearby universe, we only see such high rates of star formation when galaxies collide. However, the Herschel data show that while star formation in some galaxies in the early universe was triggered by mergers, the majority of star-forming galaxies were not undergoing interactions. Their star formation was driven instead by the amount of gas present.
“The aim of this study was to estimate the amount of gas that the galaxies contained and understand how that affected the way that they formed stars. In contrast to what we see in the nearby universe, it was only in the minority of the intensively star forming galaxies that star formation activity was triggered by merging of galaxies,” said Magdis.
“The dominant population had very large gas reservoirs that could induce and maintain a high birth rate of stars without the need of galaxy ‘cannibalism’. Such episodes of star-formation naturally resulted from steady, long lasting accretion of gas, forming these ‘cosmic beacons’ of our Universe. However, our study shows that the other population – merging galaxies – had ten times less gas, but the interactions made them much more efficient in converting gas into stars. These galaxies experienced an extreme but nevertheless short-lived firework of star formation,” said Magdis.
Researchers have devised a nanoscale sensor to electronically read the sequence of a single DNA molecule, a technique that is fast and inexpensive.
The researchers previously reported creating the nanopore by genetically engineering a protein pore from a mycobacterium. The nanopore, from Mycobacterium smegmatis porin A, has an opening 1 billionth of a meter in size, just large enough for a single DNA strand to pass through.
To make it work as a reader, the nanopore was placed in a membrane surrounded by potassium-chloride solution, with a small voltage applied to create an ion current flowing through the nanopore. The electrical signature changes depending on the type of nucleotide traveling through the nanopore. Each type of DNA nucleotide – cytosine, guanine, adenine and thymine – produces a distinctive signature.
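The readout principle can be sketched in a few lines (the current levels below are invented for illustration; they are not measured MspA values): each base blocks the ion current to a characteristic level, and the sequence is recovered by matching each level to the nearest reference signature.

```python
# Hypothetical reference blockade currents, in picoamps, one per base.
reference = {"C": 20.0, "G": 28.0, "A": 35.0, "T": 44.0}

def call_base(current_pa):
    """Assign the base whose signature is closest to the measured current."""
    return min(reference, key=lambda base: abs(reference[base] - current_pa))

# A simulated noisy current trace, one blockade level per nucleotide.
trace = [34.1, 27.5, 44.8, 20.9, 35.6]
print("".join(call_base(level) for level in trace))  # → "AGTCA"
```

Real base-calling must also contend with several nucleotides influencing the current at once, which is part of why the molecular motor's controlled, one-step-at-a-time translocation matters.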
The researchers attached a molecular motor, taken from an enzyme associated with replication of a virus, to pull the DNA strand through the nanopore reader.
A single drug can shrink or cure human breast, ovary, colon, bladder, brain, liver, and prostate tumors that have been transplanted into mice, researchers have found. The treatment, an antibody that blocks a “do not eat” signal normally displayed on tumor cells, coaxes the immune system to destroy the cancer cells.
A decade ago, biologist Irving Weissman discovered that leukemia cells produce higher levels of a protein called CD47 than do healthy cells. CD47, he and other scientists found, is also displayed on healthy blood cells; it’s a marker that blocks the immune system from destroying them as they circulate. Cancers take advantage of this flag to trick the immune system into ignoring them. In the past few years, Weissman’s lab showed that blocking CD47 with an antibody cured some cases of lymphomas and leukemias in mice by stimulating the immune system to recognize the cancer cells as invaders. Now, he and colleagues have shown that the CD47-blocking antibody may have a far wider impact than just blood cancers.
In two new studies, researchers have found a way to stimulate the brains of rodents to activate a specific memory trace. This research could help explain how we form our own memories, and why competing recollections sometimes make it hard to learn new information.
In the first study, cell biologist Mark Mayford and colleagues genetically engineered mice to be able to relive a memory when injected with the schizophrenia drug clozapine. Certain activities, such as exploring a new environment, cause these mice to create receptors for the drug, and when they’re given the drug later, the same neurons fire as did when the mice explored the new environment. In effect, clozapine recreates the memory.
In the second study, researchers were also able to reactivate an old memory in mice. In this case, molecular biologist and Nobel laureate Susumu Tonegawa and colleagues added a light-sensitive receptor to a group of cells in the hippocampus. These cells are known to be involved in fear-related learning. The mice went through the same shock-conditioning process as in the Mayford study and were then returned to their home cage. When the receptors were activated by a pulse of laser light, the animals immediately froze, though there were no cues, visual or otherwise, to remind them of the shock. “Our finding shows that activating these cells is absolutely sufficient to produce recall in the mice,” Tonegawa says.
A new analysis of isotopes found in lunar minerals challenges the prevailing view of how Earth’s nearest neighbor formed.
Most scientists believe Earth collided with a hypothetical, Mars-sized planet called Theia early in its existence, and the resulting smash-up produced a disc of magma orbiting our planet that later coalesced to form the moon. This is called the giant impact hypothesis. Computer models indicate that, for the collision to remain consistent with the laws of physics, at least 40% of the magma would have had to come from Theia.
In the new research, geochemists led by Junjun Zhang at the University of Chicago in Illinois, together with a colleague, looked at titanium isotopes in 24 separate samples of lunar rock and soil. Just as with oxygen, the researchers found the moon’s proportion was effectively the same as Earth’s and different from elsewhere in the solar system. Zhang explains that it’s unlikely Earth could have exchanged titanium gas with the magma disk because titanium has a very high boiling point. “The oxygen isotopic composition would be very easily homogenized because oxygen is much more volatile, but we would expect homogenizing titanium to be very difficult.”
By 2050, global average temperature could be between 1.4°C and 3°C warmer than it was just a couple of decades ago, according to a new study that seeks to address the largest sources of uncertainty in current climate models. That’s substantially higher than estimates produced by other climate analyses, suggesting that Earth’s climate could warm much more quickly than previously thought.
Many factors affect global and regional climate, including planet-warming “greenhouse” gases, solar activity, light-scattering atmospheric pollutants, and heat transfer among the land, sea, and air, to name just a few. With so many influences to consider, determining the effect of any one factor is difficult, despite years and sometimes decades of measurements.
Daniel Rowlands, a climate scientist at the University of Oxford, and his colleagues took a stab at addressing the largest sources of short-term climate uncertainty by modifying a version of one climate model used by the United Kingdom’s meteorological agency.
The higher end of the team’s range of likely warming scenarios is between 0.5°C and 0.75°C warmer than the scenarios published in the last report of the Intergovernmental Panel on Climate Change, Rowlands says.
One of the strange features of quantum information is that, unlike almost every other type of information, it cannot be perfectly copied. For example, it is impossible to take a single photon and make a number of photons that are in the exact same quantum state. This may seem minor, but it’s not. If perfect copying were possible, it would, among other things, be possible to send signals faster than the speed of light. This is forbidden by Einstein’s theory of relativity.
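The standard textbook argument for this "no-cloning" rule (not spelled out in the article) is short. A quantum copying machine would be a unitary operation $U$ satisfying, for any two states it can clone,

```latex
U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,
\qquad
U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle .
```

Because unitary operations preserve inner products, taking the inner product of the two equations gives $\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2}$, so $\langle\psi|\phi\rangle$ must be 0 or 1: only identical or perfectly distinguishable states can be cloned, never an arbitrary unknown state.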
For years, scientists have been experimenting with the idea of approximate quantum copying. A recent paper published by Sadegh Raeisi, Wolfgang Tittel and Christoph Simon takes another step in that research. They showed that it is possible to perfectly recover the original from the imperfect quantum copies, and they proposed a way that this could be done in practice.
The research can be used in a variety of ways. First, it shows clearly that quantum information is preserved when copied. Even though the copies may be imperfect, the original quantum state can be recovered. In practical terms, it might lead to a precision measurement technique based on quantum physics for samples that have very low contrast, such as living cells.
Shortly after a mouse embryo starts to form, some of its stem cells undergo a dramatic metabolic shift to enter the next stage of development, Seattle researchers report today. These stem cells start using and producing energy like cancer cells.
The metabolic transition they discovered occurs very early as the mouse embryo, barely more than a speck of dividing cells, implants in the mother’s uterus. The change is driven by low oxygen conditions.
The researchers also saw a specific type of biochemical slowdown in the stem cells’ mitochondria – the cells’ powerhouses. This phenomenon had previously been associated with aging and disease; this is the first example of the same downshift controlling normal early embryonic development.
Astronomers have put forward a new theory about why black holes become so hugely massive – claiming some of them have no ‘table manners’, and tip their ‘food’ directly into their mouths, eating more than one course simultaneously. Researchers from the UK and Australia investigated how some black holes grow so fast that they are billions of times heavier than the sun.
Professor Andrew King from the Department of Physics and Astronomy, University of Leicester, said: “Almost every galaxy has an enormously massive black hole in its centre. Our own galaxy, the Milky Way, has one about four million times heavier than the sun. But some galaxies have black holes a thousand times heavier still. We know they grew very quickly after the Big Bang. These hugely massive black holes were already full-grown when the universe was very young, less than a tenth of its present age.”
Nixon, King and their colleague Daniel Price in Australia made a computer simulation of two gas discs orbiting a black hole at different angles. After a short time the discs spread and collide, and large amounts of gas fall into the hole. According to their calculations black holes can grow 1,000 times faster when this happens.
Toby Cubitt at the Complutense University of Madrid, Spain, and colleagues have applied the tools used by computer science researchers to link notions of computational hardness to the difficulty of inferring dynamical equations from observational data.
Computer scientists classify computational hardness by how long it takes to solve a problem as a function of the size of the problem: cases in which the time is a polynomial function of size are in class P and are said to be “tractable” or “efficiently solvable” problems; problems for which the required time is believed to explode exponentially with size are called NP-hard. The latter are something of a gold standard for computational complexity and, assuming P ≠ NP, cannot be solved efficiently.
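The contrast can be made concrete with a classic NP-hard problem, subset-sum (this example is illustrative and not from the paper): the only known general strategies amount to searching among up to 2^n subsets, so the running time explodes exponentially with the number of values n.

```python
from itertools import combinations

def subset_sum(values, target):
    """Brute-force search: try every subset, smallest first.

    Examines up to 2^n subsets in the worst case, so doubling the
    input size roughly squares the running time.
    """
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # → (8, 7)
print(subset_sum([2, 4], 7))               # → None
```

A polynomial-time task such as sorting the same list takes only about n·log(n) steps; it is the gap between these two growth rates that the hardness classes capture.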
Cubitt et al. prove theoretically that, regardless of experimental precision, deducing the dynamical equations that describe a system’s evolution is an NP-hard problem. The result applies to both classical and quantum systems, regardless of dynamical details.
Researchers have developed a computer program that has tracked the manner in which different forms of dementia spread within a human brain. They say their mathematical model can be used to predict where and approximately when an individual patient’s brain will suffer from the spread, neuron to neuron, of “prion-like” toxic proteins — a process they say underlies all forms of dementia.
Their findings could help patients and their families confirm a diagnosis of dementia and prepare for future cognitive decline. In the future — in an era where targeted drugs against dementia exist — the program might also help physicians identify suitable brain targets for therapeutic intervention, says the study’s lead researcher, Ashish Raj, Ph.D., an assistant professor of computer science in radiology.
The computational model, which Dr. Raj developed, is the latest, and one of the most significant, validations of the idea that dementia is caused by proteins that spread through the brain along networks of neurons. It extends findings that were widely reported in February that Alzheimer’s disease starts in a particular brain region, but spreads further via misfolded, toxic “tau” proteins. Those studies were conducted in mouse models and focused only on Alzheimer’s disease.
In this study, Dr. Raj details how he developed the mathematical model of the flow of toxic proteins, and then demonstrates that it correctly predicted the patterns of degeneration seen in a number of different forms of dementia.
Many people who go to a hospital with chest pain don’t end up having a heart attack. A new study that identifies certain abnormal cells in patients’ blood might lead to screening tests to spot those who remain at risk, despite passing standard diagnostic tests.
In people experiencing the opening throes of a heart attack, cells from the inner lining of blood vessels — called endothelial cells — get set adrift in the bloodstream, researchers report in the March 21 Science Translational Medicine. Heart attack patients have higher numbers of these endothelial cells in their blood than healthy people, and the patients’ cells take abnormal shapes, says Eric Topol, a cardiologist at Scripps Research Institute in La Jolla, Calif.
Studies using X-ray and ultraviolet observations from NASA’s Swift satellite provide new insights into the elusive origins of an important class of exploding star called Type Ia supernovae.
“A missing detail is what types of stars reside in these systems. They may be a mix of stars like the sun or much more massive red- and blue-supergiant stars,” said Brock Russell, a physics graduate student at the University of Maryland, College Park, and lead author of the X-ray study.
Russell and Immler combined X-ray data for 53 of the nearest known Type Ia supernovae but could not detect an X-ray point source. Stars shed gas and dust throughout their lives. When a supernova shock wave plows into this material, it becomes heated and emits X-rays. The lack of X-rays from the combined supernovae shows that supergiant stars, and even sun-like stars in a later red giant phase, likely aren’t present in the host binaries.
An exploding star known as a Type Ia supernova plays a key role in our understanding of the universe. Studies of Type Ia supernovae led to the discovery of dark energy, which garnered the 2011 Nobel Prize in Physics. Yet the cause of this variety of exploding star remains elusive.
All evidence points to a white dwarf that feeds off its companion star, gaining mass, growing unstable, and ultimately detonating. But does that white dwarf draw material from a Sun-like star, an evolved red giant star, or from a second white dwarf? Or is something more exotic going on? Clues can be collected by searching for “cosmic crumbs” left over from the white dwarf’s last meal.
In two comprehensive studies of SN 2011fe – the closest Type Ia supernova in the past two decades – new evidence indicates that the white dwarf progenitor was a particularly picky eater, leading scientists to conclude that the companion star was unlikely to be a Sun-like star or an evolved giant.
The first observation of a cosmic effect theorized 40 years ago could provide astronomers with a more precise tool for understanding the forces behind the universe’s formation and growth, including the enigmatic phenomena of dark energy and dark matter.
A large research team from two major astronomy surveys reports in a paper submitted to the journal Physical Review Letters that scientists detected the movement of distant galaxy clusters via the kinematic Sunyaev-Zel’dovich (kSZ) effect, which has never before been seen. The paper was recently posted on the arXiv preprint database, and was initiated at Princeton University by lead author Nick Hand as part of his senior thesis. Fifty-eight collaborators from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS) projects are listed as co-authors.
Proposed in 1972 by Russian physicists Rashid Sunyaev and Yakov Zel’dovich, the kSZ effect results when the hot gas in galaxy clusters distorts the cosmic microwave background radiation — which is the glow of the heat left over from the Big Bang — that fills our universe. Radiation passing through a galaxy cluster moving toward Earth appears hotter by a few millionths of a degree, while radiation passing through a cluster moving away appears slightly cooler.
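In its simplest form (a standard textbook expression, not quoted from the paper), the kinematic shift along a line of sight through a cluster is

```latex
\frac{\Delta T}{T_{\mathrm{CMB}}} \;\approx\; -\,\tau_e \,\frac{v_r}{c},
```

where $\tau_e$ is the cluster's optical depth to Thomson scattering and $v_r$ is its line-of-sight velocity, taken positive for a receding cluster. Because $\tau_e$ is tiny, the shift is only a few millionths of a degree, which is why the effect took 40 years to detect.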