Researchers have discovered a new way in which computers based on quantum physics could beat the performance of classical computers. The work implies that a Matrix-like simulation of reality would require less memory on a quantum computer than on a classical computer. It also hints at a way to investigate whether a deeper theory lies beneath quantum theory.
The finding emerges from a fundamental consideration of how much information is needed to predict the future. Researchers know how to calculate how much information any stochastic process inherently carries from its past into its future. In theory, this sets the minimum amount of information needed to simulate the process. In practice, however, classical simulations of stochastic processes require more storage than this.
Gu, Wiesner, Rieper and Vedral showed that quantum simulators need to store less information than the optimal classical simulators. That is because quantum simulations can encode information about the probabilities in a “superposition”, where one quantum bit of information can represent more than one classical bit.
What surprised the researchers is that the quantum simulations are still not as efficient as they could be: they still have to store more information than the process would seem to need.
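In the framework typically used for such analyses (standard notation, not quoted from the article), the classical memory cost is the Shannon entropy over the simulator's internal "causal states," while the intrinsic requirement is the information the past carries about the future; the quantum memory cost identified by Gu and colleagues sits between the two:

\[
C_\mu = -\sum_i p_i \log_2 p_i, \qquad E = I(\mathrm{past};\mathrm{future}), \qquad E \;\le\; C_q \;\le\; C_\mu .
\]

The gap between C_q and E is the surprise mentioned above: even the quantum simulator must store more information than the process itself seems to require.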
A new X-ray study of the remains of an exploded star indicates that the supernova that disrupted the massive star may have turned it inside out in the process. Using very long observations of Cassiopeia A (or Cas A), a team of scientists has mapped the distribution of elements in the supernova remnant in unprecedented detail. This information shows where the different layers of the pre-supernova star are located three hundred years after the explosion, and provides insight into the nature of the supernova.
The data show that the distributions of sulfur and silicon are similar, as are the distributions of magnesium and neon. Oxygen, which according to theoretical models is the most abundant element in the remnant, is difficult to detect because the X-ray emission characteristic of oxygen ions is strongly absorbed by gas along the line of sight to Cas A, and because almost all the oxygen ions have had all their electrons stripped away.
Most of the iron, which according to theoretical models of the pre-supernova was originally on the inside of the star, is now located near the outer edges of the remnant. Surprisingly, there is no evidence from X-ray (Chandra) or infrared (Spitzer Space Telescope) observations for iron near the center of the remnant, where it was formed. Also, much of the silicon and sulfur, as well as the magnesium, is now found toward the outer edges of the still-expanding debris. The distribution of the elements indicates that a strong instability in the explosion process somehow turned the star inside out.
The pulsar at the center of the Crab Nebula is a bundle of energy. This was confirmed by the two MAGIC Telescopes on the Canary island of La Palma. They observed the pulsar in very high energy gamma radiation from 25 up to 400 gigaelectronvolts (GeV), a region that was previously difficult to access with high energy instruments, and discovered that it actually emits pulses with the maximum measurable energy of up to 400 GeV – at least 50 to 100 times higher than theorists thought possible. These latest observations are difficult for astrophysicists to explain. “There must be processes behind this that are as yet unknown”, says Razmik Mirzoyan, project head at the Max Planck Institute for Physics.
Data measured by MAGIC over the past two years show that the pulsed emissions far exceed all expectations, reaching 400 GeV in extremely short pulses of about a millisecond duration. This finding casts doubt on existing theories, since it was thought that all pulsars had significantly lower energy limits. The recent measurements by MAGIC, together with those of the orbiting Fermi satellite at much lower energies, provide an uninterrupted spectrum of the pulses from 0.1 GeV to 400 GeV. These clear observational results create major difficulties for most existing pulsar theories, which predict significantly lower limits for the highest-energy emission.
A group of European astronomers has discovered an ancient planetary system that is likely to be a survivor from one of the earliest cosmic eras, 13 billion years ago. The system consists of the star HIP 11952 and two planets. Whereas planets usually form within clouds that include heavier chemical elements, the star HIP 11952 contains very little other than hydrogen and helium. The system promises to shed light on planet formation in the early universe – under conditions quite different from those of later planetary systems, such as our own.
Statistically, a star that contains more “metals” (chemical elements other than hydrogen and helium) is more likely to have planets. This suggests a question. Originally, the universe contained almost no chemical elements other than hydrogen and helium; almost all heavier elements have been produced over time inside stars and then flung into space as massive stars end their lives in supernovae. So what about planet formation under conditions like those of the very early universe, say, 13 billion years ago? If metal-rich stars are more likely to form planets, are there stars with a metal content so low that they cannot form planets at all? And if the answer is yes, when, over cosmic history, should we expect the very first planets to have formed?
Now a group of astronomers has discovered a planetary system that could help provide answers to those questions. As part of a survey targeting especially metal-poor stars, they identified two giant planets around a star known by its catalogue number as HIP 11952, at a distance of about 375 light-years from Earth. By themselves, these planets, HIP 11952b and HIP 11952c, are not unusual. What is unusual is the fact that they orbit such an extremely metal-poor and, in particular, such a very old star.
A new result from ESO’s HARPS planet finder shows that rocky planets not much bigger than Earth are very common in the habitable zones around faint red stars. The international team estimates that there are tens of billions of such planets in the Milky Way galaxy alone, and probably about one hundred in the Sun’s immediate neighbourhood. This is the first direct measurement of the frequency of super-Earths around red dwarfs, which account for 80% of the stars in the Milky Way.
The HARPS team surveyed a carefully chosen sample of 102 red dwarf stars in the southern skies over a six-year period. A total of nine super-Earths (planets with masses between one and ten times that of Earth) were found, including two inside the habitable zones of Gliese 581 and Gliese 667 C respectively. The astronomers could estimate how heavy the planets were and how far from their stars they orbited.
By combining all the data, including observations of stars that did not have planets, and looking at the fraction of existing planets that could be discovered, the team has been able to work out how common different sorts of planets are around red dwarfs. They find that the frequency of occurrence of super-Earths in the habitable zone is 41%, with an uncertainty range of 28% to 95%.
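As a rough illustration of where such a number comes from (a toy calculation, not the HARPS team's actual analysis), an occurrence rate follows from counting detections and correcting for the fraction of planets the survey could realistically have found:

```python
# Toy completeness-corrected occurrence-rate estimate (illustrative only).
# n_stars and detections come from the survey described above; the average
# completeness value is an assumed placeholder, not the published one.

n_stars = 102          # red dwarfs surveyed
detections = 2         # habitable-zone super-Earths found
completeness = 0.05    # assumed average chance of detecting such a planet if present

# Each detection stands in for roughly 1/completeness planets in the sample.
occurrence = detections / (completeness * n_stars)
print(f"Estimated habitable-zone super-Earth frequency: {occurrence:.0%}")
# With these placeholder numbers the estimate is ~39%, in the spirit of the
# published 41% (range 28%-95%); the real analysis marginalizes over many
# detection-efficiency and orbital-geometry factors.
```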
A study of galaxies in the deepest far-infrared image of the sky, obtained by the Herschel Space Observatory, highlights the two contrasting ways that stars formed in galaxies up to 12 billion years ago.
Recent results from Herschel show that gas-rich galaxies in the early universe were able to create stars at an intense rate. In the nearby universe, we only see such high rates of star formation when galaxies collide. However, the Herschel data show that while star formation in some early-universe galaxies was triggered by mergers, the majority of star-forming galaxies were not undergoing interactions; their star formation was instead driven by the amount of gas present.
"The aim of this study was to estimate the amount of gas that the galaxies contained and understand how that affected the way that they formed stars. In contrast to what we see in the nearby universe, it was only in the minority of the intensively star forming galaxies that star formation activity was triggered by merging of galaxies," said Magdis.
"The dominant population had very large gas reservoirs that could induce and maintain a high birth rate of stars without the need of galaxy ‘cannibalism’. Such episodes of star-formation naturally resulted from steady, long lasting accretion of gas, forming these ‘cosmic beacons’ of our Universe. However, our study shows that the other population – merging galaxies – had ten times less gas, but the interactions made them much more efficient in converting gas into stars. These galaxies experienced an extreme but nevertheless short-lived firework of star formation," said Magdis.
Researchers have devised a nanoscale sensor to electronically read the sequence of a single DNA molecule, a technique that is fast and inexpensive.
The researchers previously reported creating the nanopore by genetically engineering a protein pore from a mycobacterium. The nanopore, from Mycobacterium smegmatis porin A, has an opening 1 billionth of a meter in size, just large enough for a single DNA strand to pass through.
To make it work as a reader, the nanopore was placed in a membrane surrounded by potassium-chloride solution, with a small voltage applied to create an ion current flowing through the nanopore. The electrical signature changes depending on the type of nucleotide traveling through the nanopore. Each type of DNA nucleotide – cytosine, guanine, adenine and thymine – produces a distinctive signature.
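In outline (a toy sketch with made-up current values, not the actual MspA read-out, which depends on several bases sitting in the pore at once), the base-calling step amounts to matching each measured current blockade to the nearest reference level:

```python
# Toy nanopore base caller: assign each measured current level to the nearest
# reference level for A, C, G, T. Reference values are hypothetical.

reference_pa = {"A": 54.0, "C": 48.0, "G": 60.0, "T": 42.0}  # picoamps (illustrative)

def call_base(current_pa: float) -> str:
    return min(reference_pa, key=lambda base: abs(reference_pa[base] - current_pa))

measured = [53.1, 47.2, 60.8, 41.5, 54.6]
print("".join(call_base(level) for level in measured))  # -> "ACGTA"
```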
The researchers attached a molecular motor, taken from an enzyme associated with replication of a virus, to pull the DNA strand through the nanopore reader.
A single drug can shrink or cure human breast, ovary, colon, bladder, brain, liver, and prostate tumors that have been transplanted into mice, researchers have found. The treatment, an antibody that blocks a “do not eat” signal normally displayed on tumor cells, coaxes the immune system to destroy the cancer cells.
A decade ago, biologist Irving Weissman discovered that leukemia cells produce higher levels of a protein called CD47 than do healthy cells. CD47, he and other scientists found, is also displayed on healthy blood cells; it’s a marker that blocks the immune system from destroying them as they circulate. Cancers take advantage of this flag to trick the immune system into ignoring them. In the past few years, Weissman’s lab showed that blocking CD47 with an antibody cured some cases of lymphomas and leukemias in mice by stimulating the immune system to recognize the cancer cells as invaders. Now, he and colleagues have shown that the CD47-blocking antibody may have a far wider impact than just blood cancers.
In two new studies, researchers have found a way to stimulate the brains of rodents to activate a specific memory trace. This research could help explain how we form our own memories, and why competing recollections sometimes make it hard to learn new information.
In the first study, cell biologist Mark Mayford and colleagues genetically engineered mice to be able to relive a memory when injected with the schizophrenia drug clozapine. Certain activities, such as exploring a new environment, cause these mice to create receptors for the drug; and when they’re given the drug later, the same neurons fire as did when the mice explored the new environment. In effect, clozapine recreates the memory.
In the second study, researchers were also able to reactivate an old memory in mice. In this case, molecular biologist and Nobel laureate Susumu Tonegawa and colleagues added a light-sensitive receptor to a group of cells in the hippocampus. These cells are known to be involved in fear-related learning. The mice went through the same shock-conditioning process as in the Mayford study and were then returned to their home cage. When the receptors were activated by a pulse of laser light, the animals immediately froze, though there were no cues, visual or otherwise, to remind them of the shock. “Our finding shows that activating these cells is absolutely sufficient to produce recall in the mice,” Tonegawa says.
A new analysis of isotopes found in lunar minerals challenges the prevailing view of how Earth’s nearest neighbor formed.
Most scientists believe Earth collided with a hypothetical, Mars-sized planet called Theia early in its existence, and the resulting smash-up produced a disc of magma orbiting our planet that later coalesced to form the moon. This is called the giant impact hypothesis. Computer models indicate that, for the collision to remain consistent with the laws of physics, at least 40% of the magma would have had to come from Theia.
In the new research, geochemists led by Junjun Zhang at the University of Chicago in Illinois, together with a colleague, looked at titanium isotopes in 24 separate samples of lunar rock and soil. Just as with oxygen, the researchers found the moon’s proportion was effectively the same as Earth’s and different from elsewhere in the solar system. Zhang explains that it’s unlikely Earth could have exchanged titanium gas with the magma disk because titanium has a very high boiling point. “The oxygen isotopic composition would be very easily homogenized because oxygen is much more volatile, but we would expect homogenizing titanium to be very difficult.”
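The logic rests on a simple two-component mixing relation (generic notation, assuming comparable titanium concentrations in both bodies): if a fraction f of the Moon-forming material came from Theia, the lunar isotopic anomaly should be the weighted average

\[
\varepsilon_{\mathrm{Moon}} \;=\; f\,\varepsilon_{\mathrm{Theia}} + (1-f)\,\varepsilon_{\mathrm{Earth}} .
\]

With f of roughly 0.4 or more from the impact simulations, a lunar value indistinguishable from Earth's requires Theia to have had nearly the same composition as Earth, which is hard to arrange for an independently formed body unless some homogenizing process operated after the impact.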
By 2050, global average temperature could be between 1.4°C and 3°C warmer than it was just a couple of decades ago, according to a new study that seeks to address the largest sources of uncertainty in current climate models. That’s substantially higher than estimates produced by other climate analyses, suggesting that Earth’s climate could warm much more quickly than previously thought.
Many factors affect global and regional climate, including planet-warming “greenhouse” gases, solar activity, light-scattering atmospheric pollutants, and heat transfer among the land, sea, and air, to name just a few. There are so many influences to consider that determining the effect of any one factor, despite years and sometimes decades of measurements, is difficult.
Daniel Rowlands, a climate scientist at the University of Oxford, and his colleagues took a stab at addressing the largest sources of short-term climate uncertainty by modifying a version of one climate model used by the United Kingdom’s meteorological agency.
The higher end of the team’s range of likely warming scenarios is between 0.5°C and 0.75°C warmer than the scenarios published in the last report of the Intergovernmental Panel on Climate Change, Rowlands says.
One of the strange features of quantum information is that, unlike almost every other type of information, it cannot be perfectly copied. For example, it is impossible to take a single photon and make a number of photons that are in exactly the same quantum state. This may seem minor, but it is not: if perfect copying were possible, it would, among other things, be possible to send signals faster than the speed of light, which is forbidden by Einstein’s theory of relativity.
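The impossibility being invoked is the no-cloning theorem: no physical (unitary) operation can copy an arbitrary unknown quantum state,

\[
\nexists\, U \ \text{such that}\ \ U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle .
\]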
For years, scientists have been experimenting with the idea of approximate quantum copying. A recent paper published by Sadegh Raeisi, Wolfgang Tittel and Christoph Simon takes another step in that research. They showed that it is possible to perfectly recover the original state from the imperfect quantum copies, and they proposed a way that this could be done in practice.
The research can be used in a variety of ways. First, it shows clearly that quantum information is preserved when copied. Even though the copies may be imperfect, the original quantum state can be recovered. In practical terms, it might lead to a precision measurement technique based on quantum physics for samples that have very low contrast, such as living cells.
Shortly after a mouse embryo starts to form, some of its stem cells undergo a dramatic metabolic shift to enter the next stage of development, Seattle researchers report today. These stem cells start using and producing energy like cancer cells.
The metabolic transition they discovered occurs very early as the mouse embryo, barely more than a speck of dividing cells, implants in the mother’s uterus. The change is driven by low oxygen conditions.
The researchers also saw a specific type of biochemical slowdown in the stem cells’ mitochondria – the cells’ powerhouses. The phenomenon had previously been associated with aging and disease; this is the first example of the same downshift controlling normal early embryonic development.
Astronomers have put forward a new theory about why black holes become so hugely massive – claiming some of them have no ‘table manners’, and tip their ‘food’ directly into their mouths, eating more than one course simultaneously. Researchers from the UK and Australia investigated how some black holes grow so fast that they are billions of times heavier than the sun.
Professor Andrew King from the Department of Physics and Astronomy, University of Leicester, said: “Almost every galaxy has an enormously massive black hole in its centre. Our own galaxy, the Milky Way, has one about four million times heavier than the sun. But some galaxies have black holes a thousand times heavier still. We know they grew very quickly after the Big Bang. These hugely massive black holes were already full-grown when the universe was very young, less than a tenth of its present age.”
Nixon, King and their colleague Daniel Price in Australia made a computer simulation of two gas discs orbiting a black hole at different angles. After a short time the discs spread and collide, and large amounts of gas fall into the hole. According to their calculations black holes can grow 1,000 times faster when this happens.
Toby Cubitt at the Complutense University of Madrid, Spain, and colleagues have applied the tools used by computer science researchers to link notions of computational hardness to the difficulty of inferring dynamical equations from observational data.
Computer scientists classify computational hardness by how long it takes to solve a problem as a function of the size of the problem: cases in which the time is a polynomial function of size are in class P and are said to be “tractable” or “efficiently solvable” problems; problems at least as hard as the hardest problems in NP, for which every known algorithm requires time that explodes exponentially with size, are called NP-hard. The latter are something of a gold standard for computational hardness and are widely believed to be impossible to solve efficiently.
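As a generic illustration of that distinction (not tied to the paper itself), compare how polynomial and exponential running times grow with problem size:

```python
# Polynomial vs. exponential growth: why exponential-time problems become
# intractable even at modest sizes.
for n in (10, 20, 40, 80):
    polynomial = n ** 3        # e.g. a tractable O(n^3) algorithm
    exponential = 2 ** n       # e.g. brute force over all 2^n possibilities
    print(f"n={n:3d}  n^3={polynomial:>9,}  2^n={exponential:>27,}")
```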
Cubitt et al. prove theoretically that, regardless of experimental precision, deducing the dynamical equations that describe a system’s evolution is an NP-hard problem. The result applies to classical and quantum systems both, regardless of dynamical details.
Researchers have developed a computer program that has tracked the manner in which different forms of dementia spread within a human brain. They say their mathematical model can be used to predict where and approximately when an individual patient’s brain will suffer from the spread, neuron to neuron, of “prion-like” toxic proteins — a process they say underlies all forms of dementia.
Their findings could help patients and their families confirm a diagnosis of dementia and prepare in advance for future cognitive declines over time. In the future — in an era where targeted drugs against dementia exist — the program might also help physicians identify suitable brain targets for therapeutic intervention, says the study’s lead researcher, Ashish Raj, Ph.D., an assistant professor of computer science in radiology.
The computational model, which Dr. Raj developed, is the latest, and one of the most significant, validations of the idea that dementia is caused by proteins that spread through the brain along networks of neurons. It extends findings that were widely reported in February that Alzheimer’s disease starts in a particular brain region, but spreads further via misfolded, toxic “tau” proteins. Those studies were conducted in mouse models and focused only on Alzheimer’s disease.
In this study, Dr. Raj details how he developed the mathematical model of the flow of toxic proteins, and then demonstrates that it correctly predicted the patterns of degeneration that result in a number of different forms of dementia.
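A minimal sketch of a network-diffusion model of this general kind (an illustration with made-up connectivity values, not Dr. Raj's actual model or connectome data): pathology spreads between connected brain regions at rates set by the connection strengths, which gives a diffusion equation on the brain network, dx/dt = -beta * H * x, with the closed-form solution x(t) = exp(-beta * H * t) * x(0).

```python
# Sketch of a network-diffusion model of pathology spread (illustrative only).
# Regions are nodes of a connectivity graph; pathology diffuses along edges.
import numpy as np
from scipy.linalg import expm

# Hypothetical 4-region connectivity matrix (symmetric, made-up weights)
C = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.3, 0.2],
              [0.5, 0.3, 0.0, 0.8],
              [0.0, 0.2, 0.8, 0.0]])
H = np.diag(C.sum(axis=1)) - C        # graph Laplacian of the network
beta = 0.5                            # diffusion rate (arbitrary units)

x0 = np.array([1.0, 0.0, 0.0, 0.0])   # pathology seeded in region 0
for t in (0, 1, 2, 5):
    x_t = expm(-beta * H * t) @ x0    # closed-form solution of the diffusion ODE
    print(t, np.round(x_t, 3))
```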
Many people who go to a hospital with chest pain don’t end up having a heart attack. A new study that identifies certain abnormal cells in patients’ blood might lead to screening tests to spot those who remain at risk, despite passing standard diagnostic tests.
In people experiencing the opening throes of a heart attack, cells from the inner lining of blood vessels — called endothelial cells — get set adrift in the bloodstream, researchers report in the March 21 Science Translational Medicine. Heart attack patients have higher numbers of these endothelial cells in their blood than healthy people, and the patients’ cells take abnormal shapes, says Eric Topol, a cardiologist at Scripps Research Institute in La Jolla, Calif.
Studies using X-ray and ultraviolet observations from NASA’s Swift satellite provide new insights into the elusive origins of an important class of exploding star called Type Ia supernovae.
"A missing detail is what types of stars reside in these systems. They may be a mix of stars like the sun or much more massive red- and blue-supergiant stars," said Brock Russell, a physics graduate student at the University of Maryland, College Park, and lead author of the X-ray study.
Russell and Immler combined X-ray data for 53 of the nearest known Type Ia supernovae but could not detect an X-ray point source. Stars shed gas and dust throughout their lives. When a supernova shock wave plows into this material, it becomes heated and emits X-rays. The lack of X-rays from the combined supernovae shows that supergiant stars, and even sun-like stars in a later red giant phase, likely aren’t present in the host binaries.
An exploding star known as a Type Ia supernova plays a key role in our understanding of the universe. Studies of Type Ia supernovae led to the discovery of dark energy, which garnered the 2011 Nobel Prize in Physics. Yet the cause of this variety of exploding star remains elusive.
All evidence points to a white dwarf that feeds off its companion star, gaining mass, growing unstable, and ultimately detonating. But does that white dwarf draw material from a Sun-like star, an evolved red giant star, or from a second white dwarf? Or is something more exotic going on? Clues can be collected by searching for “cosmic crumbs” left over from the white dwarf’s last meal.
In two comprehensive studies of SN 2011fe - the closest Type Ia supernova in the past two decades - new evidence indicates that the white dwarf progenitor was a particularly picky eater, leading scientists to conclude that the companion star was not likely to be a Sun-like star or an evolved giant.
The first observation of a cosmic effect theorized 40 years ago could provide astronomers with a more precise tool for understanding the forces behind the universe’s formation and growth, including the enigmatic phenomena of dark energy and dark matter.
A large research team from two major astronomy surveys reports in a paper submitted to the journal Physical Review Letters that scientists detected the movement of distant galaxy clusters via the kinematic Sunyaev-Zel’dovich (kSZ) effect, which has never before been seen. The paper was recently posted on the arXiv preprint database, and was initiated at Princeton University by lead author Nick Hand as part of his senior thesis. Fifty-eight collaborators from the Atacama Cosmology Telescope (ACT) and the Baryon Oscillation Spectroscopic Survey (BOSS) projects are listed as co-authors.
Proposed in 1972 by Russian physicists Rashid Sunyaev and Yakov Zel’dovich, the kSZ effect results when the hot gas in galaxy clusters distorts the cosmic microwave background radiation — which is the glow of the heat left over from the Big Bang — that fills our universe. Radiation passing through a galaxy cluster moving toward Earth appears hotter by a few millionths of a degree, while radiation passing through a cluster moving away appears slightly cooler.
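The size of the effect is set by the cluster's line-of-sight velocity and the small probability that a microwave-background photon scatters off an electron in the cluster gas. In the usual notation (not quoted from the paper),

\[
\frac{\Delta T_{\mathrm{kSZ}}}{T_{\mathrm{CMB}}} \;\approx\; -\,\frac{v_{\parallel}}{c}\,\tau_{e},
\]

where the velocity is taken along the line of sight (positive for motion away from us) and the Thomson optical depth τ_e is only of order 10^-3 to 10^-2 for a massive cluster, which is why the signal amounts to just a few millionths of a degree.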
Princeton University researchers have used a novel virtual reality and brain imaging system to detect a form of neural activity underlying how the brain forms short-term memories that are used in making decisions.
By following the brain activity of mice as they navigated a virtual reality maze, the researchers found that populations of neurons fire in distinctive sequences when the brain is holding a memory. Previous research had centered on the idea that populations of neurons fire together in similar patterns throughout the memory period.
The findings give insight into what happens in the brain during “working memory,” which is used when the mind stores information for short periods of time prior to acting on it or integrating it with other information. Working memory is a central component of reasoning, comprehension and learning.
A ‘see-sawing’ atmosphere over 2.5 billion years ago preceded the oxygenation of our planet and the development of complex life on Earth, a new study has shown.
Research published in the journal Nature Geoscience, reveals that the Earth’s early atmosphere periodically flipped from a hydrocarbon-free state into a hydrocarbon-rich state similar to that of Saturn’s moon, Titan.
This switch between “organic haze” and a “haze-free” environment was the result of intense microbial activity and would have had a profound effect on the climate of the Earth system.
The team say their findings, which suggest the early atmosphere behaved in ways similar to how scientists believe our climate behaves today, provide an insight into the Earth’s surface environment prior to the oxygenation of the planet.
A series of discoveries in southern Scotland have helped fill a 15-million-year gap in the fossil record and provide key information about the early evolution of terrestrial vertebrates. Jennifer A. Clack and colleagues found a variety of invertebrate and limbed vertebrate, or tetrapod, fossils representing both terrestrial and aquatic life forms. The fossils help populate the gap between Devonian tetrapods, shown to be mainly aquatic with many primitive features, and the post-Devonian tetrapods that were effectively terrestrial and developed features of modern four-footed animals.
The findings suggest that the multimillion-year gap in the fossil record reflected an incomplete fossil collection rather than an absence of terrestrial animals. The fossils provide crucial information about the timing and processes of early terrestrialization and suggest that many tetrapod lineages originated much earlier than previously thought. The discoveries could be used to test previous hypotheses about the evolution of terrestrial vertebrates and could help researchers understand the ecology and environment in which modern terrestrial fauna emerged, the authors suggest.
Biologists may need to rethink where to look for evolutionary changes responsible for the origin of vertebrates, including humans, as a result of research at Stanford University and the University of Chicago.
Chris Lowe and Ari Pani, biologists at Stanford’s Hopkins Marine Station, discovered some of the essential genetic machinery previously thought exclusive to vertebrate brains in a surprising place – a sea dwelling, bottom-feeding acorn worm, Saccoglossus kowalevskii.
These worms lack vertebrate-like brains, and are, in fact, separated from vertebrates by over 500 million years of evolution. The worms are even classified in a different phylum, the hemichordates.
By temporarily silencing a hyperactive gene, scientists dramatically boost the efficiency of mouse cloning.
In principle, somatic cell nuclear transfer (SCNT) is a potent tool for scientists looking to produce exact genetic replicas of a particular animal. By injecting a nucleus from an adult cell into an oocyte from which the nucleus has been removed, one can initiate the embryonic development process and derive a clone of the ‘donor’ animal.
Unfortunately, this technique is terribly inefficient, with a success rate of 1–2% in mice. “This must be due to some errors in the reprogramming of the donor genome into the ‘totipotent’ state, which is equivalent to the state observed in conventionally fertilized embryos,” explains Atsuo Ogura. However, Ogura and colleagues have now made significant progress in clearing a major roadblock thwarting SCNT success.
There’s yet another indication that neutrinos cannot travel faster than the speed of light after all, provided by a neighbor of the OPERA detector that set off the fuss in the first place. OPERA’s detector sits deep underground at Gran Sasso in Italy, where it receives neutrinos from a beam generated at CERN, 730km away on the French-Swiss border. Because the neutrino beam spreads out over the intervening distance, it’s possible to run multiple detectors at the same site, all listening in on the same beam. The team running one of Gran Sasso’s other detectors (called ICARUS) has now performed time-of-flight measurements on neutrinos and determined that they don’t seem to be moving faster than light.
These results are significant because they largely took advantage of precisely the same infrastructure used to generate the OPERA results. ICARUS used the short, widely spaced bunches of neutrinos produced by CERN to help narrow down potential errors in the earlier results (read our discussion of these errors). The ICARUS team also used the same timing and position infrastructure used by OPERA, which gives them uncertainties of only nanoseconds and centimeters, respectively.
Astronomers using NASA’s Hubble Space Telescope have found several examples of galaxies containing quasars, which act as gravitational lenses, amplifying and distorting images of galaxies aligned behind them.
To find these rare cases of galaxy-quasar combinations acting as lenses, a team of astronomers led by Frederic Courbin at the Ecole Polytechnique Federale de Lausanne (EPFL, Switzerland) selected 23,000 quasar spectra in the Sloan Digital Sky Survey (SDSS). They looked for the spectral imprint of galaxies at much greater distances that happened to align with foreground galaxies. Once candidates were identified, Hubble’s sharp view was used to look for the gravitational arcs and rings that would be produced by gravitational lensing.
Quasar host galaxies are hard or even impossible to see because the central quasar far outshines the galaxy. Therefore, it is difficult to estimate the mass of a host galaxy based on the collective brightness of its stars. However, gravitational lensing candidates are invaluable for estimating the mass of a quasar’s host galaxy because the amount of distortion in the lens can be used to estimate a galaxy’s mass.
In our home galaxy, the Milky Way, on average one solar mass’s worth of matter per year is turned into stars. Yet a survey of the available raw material, clouds of gas and dust, shows that, using only its own resources, our galaxy could not keep up this rate of star formation for longer than a couple of billion years. Is our home galaxy currently undergoing a rather special, short-lived era of star formation? Both stellar age determinations and comparison with other spiral galaxies show that not to be the case. One solar mass per year is a typical star formation rate, and the problem of insufficient raw matter appears to be universal as well.
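The bookkeeping behind that statement is a simple gas-depletion-time estimate (round numbers for illustration):

\[
t_{\mathrm{dep}} \;=\; \frac{M_{\mathrm{gas}}}{\dot M_{\star}} \;\sim\; \frac{\text{a few} \times 10^{9}\,M_{\odot}}{1\,M_{\odot}\,\mathrm{yr}^{-1}} \;\sim\; \text{a few billion years},
\]

far shorter than the roughly ten billion years over which galaxies like ours have been forming stars.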
Evidently, additional matter finds its way into galaxies. One possibility is an inflow from huge low-density gas reservoirs filling the intergalactic voids; there is, however, very little evidence that this is happening. Another possibility, closer to home, involves a gigantic cosmic matter cycle. Gas is observed to flow away from many galaxies, and may be pushed by several different mechanisms, including violent supernova explosions (which are how massive stars end their lives), and the sheer pressure exerted by light emitted by bright stars on gas in their cosmic neighbourhood.
Now, a team of astronomers led by Kate Rubin (MPIA) has used the Keck I telescope on Mauna Kea, Hawai’i, to examine gas associated with a hundred galaxies at distances between 5 and 8 billion light-years (z ~ 0.5 – 1), finding, in six of those galaxies, the first direct evidence that gas adrift in intergalactic space does indeed flow back into star-forming galaxies.
Fossils from two caves in south-west China have revealed a previously unknown Stone Age people and give a rare glimpse of a recent stage of human evolution with startling implications for the early peopling of Asia.
The fossils are of a people with a highly unusual mix of archaic and modern anatomical features and are the youngest of their kind ever found in mainland East Asia.
Dated to just 14,500 to 11,500 years old, these people would have shared the landscape with modern-looking people at a time when China’s earliest farming cultures were beginning, says an international team of scientists led by Associate Professor Darren Curnoe, of the University of New South Wales, and Professor Ji Xueping of the Yunnan Institute of Cultural Relics and Archeology.
Details of the discovery are published in the journal PLoS One. The team has been cautious about classifying the fossils because of their unusual mosaic of features.
Two teams of astronomers have used data from NASA’s Chandra X-ray Observatory and other telescopes to map the distribution of dark matter in a galaxy cluster known as Abell 383, which is located about 2.3 billion light years from Earth. Not only were the researchers able to find where the dark matter lies in the two dimensions across the sky, they were also able to determine how the dark matter is distributed along the line of sight.
The recent work on Abell 383 provides one of the most detailed 3-D pictures yet taken of dark matter in a galaxy cluster. Both teams have found that the dark matter is stretched out like a gigantic football, rather than being spherical like a basketball, and that the point of the football is aligned close to the line of sight.
The X-ray data (purple) from Chandra in the composite image show the hot gas, which is by far the dominant type of normal matter in the cluster. Galaxies are shown with the optical data from the Hubble Space Telescope (HST), the Very Large Telescope, and the Sloan Digital Sky Survey, colored in blue and white.
Using radio and infrared telescopes, astronomers have obtained a first tantalizing look at a crucial early stage in star formation. The new observations promise to help scientists understand the early stages of a sequence of events through which a giant cloud of gas and dust collapses into dense cores that, in turn, form new stars.
The scientists studied a giant cloud about 770 light-years from Earth in the constellation Perseus. They used the European Space Agency’s Herschel Space Observatory and the National Science Foundation’s Green Bank Telescope (GBT) to make detailed observations of a clump, containing nearly 100 times the mass of the Sun, within that cloud.
Stars are formed, astronomers think, when such a cloud of gas and dust collapses gravitationally, first into clumps, then into dense cores, each of which can then begin to further collapse and form a young star. The details of how this happens are not well understood. One difficulty is that most regions where this process is underway already have formed stars nearby. Those stars affect subsequent nearby star formation through their stellar winds and shock waves when they explode as supernovae.
Researchers at the RIKEN Omics Science Center (OSC) have successfully developed and demonstrated a new experimental technique for producing cells with specific functions through the artificial reconstruction of transcriptional regulatory networks. As an alternative to induced pluripotent stem cells, the technique promises to enable faster and more efficient production of functional cells for use in cancer therapy and a variety of other areas.
The OSC research team explored an alternative to iPS cells based on the use of transcriptional regulatory networks (TRNs), networks of transcription factors and the genes they regulate. Previous research by the team characterized the dynamic regulatory activities of such transcription factors during cellular differentiation from immature cell (monoblast) to developed (monocyte-like) cell using human acute monocytic leukemia cell lines (THP-1). Their findings led them to hypothesize that functional characteristics of the cell-type are maintained by its specific TRN.
New observations made with ESO’s Very Large Telescope are making a major contribution to understanding the growth of adolescent galaxies. In the biggest survey of its kind, astronomers have found that galaxies changed their eating habits during their teenage years - the period from about 3 to 5 billion years after the Big Bang. At the start of this phase, smooth gas flow was the preferred snack, but later galaxies mostly grew by cannibalising other smaller galaxies.
Astronomers have known for some time that the earliest galaxies were much smaller than the impressive spiral and elliptical galaxies that now fill the Universe. Over the lifetime of the cosmos galaxies have put on a great deal of weight but their food, and eating habits, are still mysterious. A new survey of carefully selected galaxies has focussed on their teenage years — roughly the period from about 3 to 5 billion years after the Big Bang.
These days graphene is the rock star of materials science, but it has an Achilles heel: It is exceptionally sensitive to its electrical environment.
This single-atom-thick honeycomb of carbon atoms is lighter than aluminum, stronger than steel and conducts heat and electricity better than copper. As a result, scientists around the world are trying to turn it into better computer displays, solar panels, touch screens, integrated circuits and biomedical sensors, among other possible applications. However, it has proven extremely difficult to reliably create graphene-based devices that live up to its electrical potential when operating at room temperature and pressure.
Now, writing in the Mar. 13 issue of the journal Nature Communications, a team of Vanderbilt physicists reports that they have nailed down the source of the interference inhibiting the rapid flow of electrons through graphene-based devices and found a way to suppress it. This allowed them to achieve record levels of room-temperature electron mobility – the measure of how quickly electrons travel through a material – three times greater than those reported in previous graphene-based devices.
According to the experts, graphene may have the highest electron mobility of any known material. In practice, however, the measured levels of mobility, while significantly higher than in other materials like silicon, have been considerably below its potential.
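Mobility has a precise meaning here (standard definitions, not specific to this study): it is the constant of proportionality between the drift velocity of charge carriers and the applied electric field, and together with the carrier density it sets the conductivity,

\[
v_{d} = \mu E, \qquad \sigma = n e \mu .
\]

Anything in the electrical environment that scatters electrons, such as stray charges near the graphene sheet, lowers the mobility below the material's intrinsic limit.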
Quasars, and other galaxies with less dramatic but still active nuclei, come in a variety of subgroups. Some, for example, contain hot gas moving at huge velocities, while others do not; some are seen with strong dust absorption features, but others are not. One problem in unraveling the mystery of quasars is that many (perhaps most) quasar nuclei seem to be surrounded by a torus of obscuring dust that makes them difficult to study. In fact, the standard model of these objects proposes that the various subgroups result from viewing the active nucleus at different angles with respect to its dusty torus. If the nucleus happens to be seen face-on, and if there is a jet present, the gas velocities are large and the dust is not apparent; if seen edge-on through the torus, the observed velocities are much smaller and the dust absorption features are dominant. But so far no one knows for sure how quasars form, how they develop in time, or what physical processes generate their stupendous energies.
The situation may be about to change. The violent activity around a black hole is very difficult to analyze with just pen and paper, and so for years researchers have tried to use computer simulations to identify what happens. But these simulations have faced a major challenge: tracing the detailed flow of material from galaxy-wide scales of hundreds of thousands of light-years down into the central tenth of a light-year around the black hole. It has just been too hard to keep track of everything at such a fine scale across such a large one.
CfA astronomers Chris Hayward and Lars Hernquist, together with ex-CfA member Phil Hopkins and a fourth colleague, have figured out a way to deal with the computational dilemma.
The Greenland ice sheet is likely to be more vulnerable to global warming than previously thought. The temperature threshold for melting the ice sheet completely is in the range of 0.8 to 3.2 degrees Celsius global warming, with a best estimate of 1.6 degrees above pre-industrial levels, shows a new study by scientists from the Potsdam Institute for Climate Impact Research (PIK) and the Universidad Complutense de Madrid. Today, 0.8 degrees of global warming has already been observed. Substantial melting of land ice could contribute to a long-term sea-level rise of several meters, potentially affecting the lives of many millions of people.
The time it takes before most of the ice in Greenland is lost strongly depends on the level of warming. “The more we exceed the threshold, the faster it melts,” says Alexander Robinson, lead-author of the study now published in Nature Climate Change. In a business-as-usual scenario of greenhouse-gas emissions, in the long run humanity might be aiming at 8 degrees Celsius of global warming. This would result in one fifth of the ice sheet melting within 500 years and a complete loss in 2000 years, according to the study. “This is not what one would call a rapid collapse,” says Robinson. “However, compared to what has happened in our planet’s history, it is fast. And we might already be approaching the critical threshold.”
In contrast, if global warming were limited to 2 degrees Celsius, complete melting would happen on a timescale of 50,000 years. Still, even within this temperature range, often considered a global guardrail, the Greenland ice sheet is not secure. Previous research suggested a threshold in global temperature increase for melting the Greenland ice sheet of a best estimate of 3.1 degrees, with a range of 1.9 to 5.1 degrees. The new study’s best estimate is about half as much.
The well-being of living cells requires specialized squads of proteins that maintain order. Degraders chew up worn-out proteins, recyclers wrap up damaged organelles, and, most importantly, DNA repair crews restitch anything that resembles a broken chromosome. If repair is impossible, the crew foreman calls in executioners to annihilate the cell. As unsavory as this last bunch sounds, failure to summon them is one aspect of what makes a cancer cell a cancer cell.
A recent study from scientists at the Salk Institute for Biological Studies showed exactly how cells sense possible DNA damage as a first step in responding to a failure to divide properly. That study, published March 11 in Nature Structural and Molecular Biology, found that if cells take too long to undergo cell division, structures at the tips of their chromosomes, known as telomeres, send out a molecular SOS signal.
These findings have dual implications for cancer chemotherapy. First, they show how a class of anti-cancer drugs that slows cell division—known as mitotic inhibitors—kills cells. This class includes the common chemotherapy drugs Vinblastine, Taxol and Velcade. More significantly, the findings suggest ways to make therapy with those inhibitors more potent.
A new study by Harvard School of Public Health (HSPH) researchers has found that red meat consumption is associated with an increased risk of total, cardiovascular, and cancer mortality. The results also showed that substituting other healthy protein sources, such as fish, poultry, nuts, and legumes, was associated with a lower risk of mortality. The study was published online in Archives of Internal Medicine.
“Our study adds more evidence to the health risks of eating high amounts of red meat, which has been associated with type 2 diabetes, coronary heart disease, stroke, and certain cancers in other studies,” said lead author An Pan, research fellow in the Department of Nutrition at HSPH.
Regular consumption of red meat, particularly processed red meat, was associated with increased mortality risk. One daily serving of unprocessed red meat (about the size of a deck of cards) was associated with a 13 percent increased risk of mortality, and one daily serving of processed red meat (one hot dog or two slices of bacon) was associated with a 20 percent increased risk.
Among specific causes, the corresponding increases in risk were 18 percent and 21 percent for cardiovascular mortality, and 10 percent and 16 percent for cancer mortality. These analyses took into account chronic disease risk factors such as age, body mass index, physical activity, and family history of heart disease or major cancers.
In the beginning – of the ribosome, the cell’s protein-building workbench – there were ribonucleic acids, the molecules we call RNA that today perform a host of vital functions in cells. And according to a new analysis, even before the ribosome’s many working parts were recruited for protein synthesis, proteins also were on the scene and interacting with RNA. This finding challenges a long-held hypothesis about the early evolution of life. The study appears in the journal PLoS ONE.
The “RNA world” hypothesis, first promoted in 1986 in a paper in the journal Nature and defended and elaborated on for more than 25 years, posits that the first stages of molecular evolution involved RNA and not proteins, and that proteins (and DNA) emerged later, said University of Illinois crop sciences and Institute for Genomic Biology professor Gustavo Caetano-Anollés, who led the new study.
"I’m convinced that the RNA world (hypothesis) is not correct," Caetano-Anollés said. "That world of nucleic acids could not have existed if not tethered to proteins."
Differences between the decay properties of charm-carrying mesons and those of their antiparticles may carry clues to the mystery of the missing antimatter in the Universe.
From what we can observe, there is no symmetry between matter and antimatter in the Universe. All structures, from unimaginably large clusters of galaxies to microscopic human cells, are made of matter: protons, neutrons, and electrons. Though it is possible to produce antimatter—antiprotons, antineutrons, and positrons—in high-energy particle collisions, and even make medical use of positrons, antimatter seems to have disappeared from the Universe at large.
So far, all experimentally observed differences between the way matter and antimatter behave are well explained by the standard model of particle physics. When applying this theory to the behavior of matter and antimatter in the early Universe, however, the differences are far too small to explain the asymmetry observed today. Now, in a paper appearing in Physical Review Letters, the Large Hadron Collider beauty (LHCb) collaboration at CERN has found a difference in the decay properties of D mesons and their antiparticles that is perhaps too large to be explained by the standard model.
A new study is the first to demonstrate that the biological mechanism that keeps the HIV virus hidden and unreachable by current antiviral therapies can be targeted and interrupted in humans, providing new hope for a strategy to eradicate HIV completely.
In a clinical trial, six HIV-infected men who were medically stable on anti-AIDS drugs received vorinostat, an oncology drug. Recent studies by Margolis and others have shown that vorinostat also attacks the enzymes that keep HIV hiding in certain CD4+ T cells, specialized immune system cells that the virus uses to replicate. Within hours of receiving the vorinostat, all six patients had a significant increase in HIV RNA in these cells, evidence that the virus was being forced out of its hiding place.
“This proves for the first time that there are ways to specifically treat viral latency, the first step towards curing HIV infection,” said Margolis, who led the study. “It shows that this class of drugs, HDAC inhibitors, can attack persistent virus. Vorinostat may not be the magic bullet, but this success shows us a new way to test drugs to target latency, and suggests that we can build a path that may lead to a cure.”
Experimental physicists are pushing across the assumed divide between the quantum and the ordinary by demonstrating quantum effects in more familiar environments. Now a group of researchers has furthered that cause by encoding quantum information into a room-temperature solid for time spans that can be ticked off on a stopwatch. The new quantum memory scheme can store information for more than a second, which extends by orders of magnitude the lifetime of information encoded as a quantum bit, or qubit, on a particle at ordinary temperatures. The American, German and British researchers have only just submitted the research to a peer-reviewed journal, but in late February they presented their findings to a meeting of the American Physical Society.
A qubit, much like an ordinary bit in commonplace electronic devices, has a 0 state and a 1 state. But unlike a classical bit, a qubit can be in a so-called superposition of 0 and 1. That property, along with other phenomena such as quantum entanglement, means that quantum computers based on qubits would be phenomenally powerful—that is, if a practical machine could ever be built.
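Formally, a qubit's state is a normalized combination of the two basis states,

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

and a measurement yields 0 with probability |α|^2 and 1 with probability |β|^2.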
In both animals and humans, vocal signals used for communication contain a wide array of different sounds that are determined by the vibrational frequencies of the vocal cords. For example, the pitch of someone’s voice, and how it changes as they are speaking, depends on a complex series of varying frequencies. Knowing how the brain sorts out these changing frequencies, called frequency-modulated (FM) sweeps, is believed to be essential to understanding many hearing-related behaviors, like speech. Now, a pair of biologists at the California Institute of Technology (Caltech) has identified how and where the brain processes this type of sound signal.
Knowing the direction of an FM sweep—if it is rising or falling, for example—and decoding its meaning, is important in every language. The significance of the direction of an FM sweep is most evident in tone languages such as Mandarin Chinese, in which rising or dipping frequencies within a single syllable can change the meaning of a word.
In their paper, the researchers pinpointed the brain region in rats where the task of sorting FM sweeps begins.
How do we recognize a face? To date, most research has answered “holistically”: We look at all the features—eyes, nose, mouth—simultaneously and, perceiving the relationships among them, gain an advantage over taking in each feature individually. Now a new study overturns this theory. The researchers found that people’s performance in recognizing a whole face is no better than their performance with each individual feature shown alone. “Surprisingly, the whole was not greater than the sum of its parts,” says Gold.
To predict each participant’s best possible performance in putting together the individual features, the investigators used a theoretical model called an “optimal Bayesian integrator” (OBI). The OBI takes someone’s success in perceiving each of a series of sources of information, in this case facial features, one at a time, and predicts how well they would do if they combined those sources optimally but gained nothing extra from the relationships among them. Under that prediction, the score for recognizing the combination of features (the whole face) follows directly from the individual-feature scores. If whole-face performance exceeds the prediction, it implies that the relationships among the features enhanced the information processing, that is, that “holistic” facial recognition exists.
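One common way to state this benchmark in signal-detection terms (a standard formulation, not necessarily the exact statistic the authors report): if each feature i supports recognition with sensitivity d'_i, an observer who combines the independent features optimally, but gains nothing extra from the relationships among them, should achieve

\[
d'_{\mathrm{whole}} \;=\; \sqrt{\sum_{i} \left(d'_{i}\right)^{2}} .
\]

Whole-face performance above this prediction would be evidence for holistic processing; the study's point is that measured performance does not exceed it.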
Observations using the OASIS integral field spectrograph on the William Herschel Telescope (WHT) have revealed a long, thin plume of ionised gas stretching out from the brightest cluster galaxy (BCG) of Abell 2146 (z=0.243) (Canning et al. 2012). Extended optical emission-line nebulae are not uncommon in the cores of clusters, but the discovery of this particular structure is unexpected, as the host cluster is in the throes of a major merger event.
How can a plume more than 15 kpc long survive in the environment of such a turbulent intracluster medium? Chandra X-ray observations of the system show that a merging subcluster has created large shock fronts, each several hundred kiloparsecs across. These surround a dense, relatively cool X-ray core which is being stripped of its material in the collision.
The situation in A2146 is unusual. While many of the member galaxies of the merging subcluster are located ahead of the X-ray peak, the dominant cluster galaxy lags behind. A large offset of 36 kpc is observed between the brightest cluster galaxy and the X-ray cool core. The new OASIS observations reveal a thin plume of ionised gas stretching out from the brightest cluster galaxy to bridge the gap to the X-ray cool core. The shape of the plume is tracked by a tail of cooler X-ray gas, linking the dense X-ray peak to the brightest cluster galaxy. This also means that the optical plume could trace an intermediate stage of gas cooling directly from the hot phase and on to the BCG from the X-ray cool core. Whether or not such a process results in significant star formation - thus contributing to the growth of the massive central galaxy - awaits further observation.
A neutron star is the closest thing to a black hole that astronomers can observe directly, crushing half a million times more mass than Earth into a sphere no larger than a city. In October 2010, a neutron star near the center of our galaxy erupted with hundreds of X-ray bursts that were powered by a barrage of thermonuclear explosions on the star’s surface. NASA’s Rossi X-ray Timing Explorer (RXTE) captured the month-long fusillade in extreme detail. Using this data, an international team of astronomers has been able to bridge a long-standing gap between theory and observation.
Models designed to explain these processes made one prediction that had never been confirmed by observation. At the highest rates of accretion, they said, the flow of fuel onto the neutron star can support continuous and stable thermonuclear reactions without building up and triggering episodic explosions.
At the highest accretion rates, however, the strong spikes disappeared and the pattern transformed into gentle waves of emission. Linares and his colleagues interpret this as a sign of marginally stable nuclear fusion, where the reactions take place evenly throughout the fuel layer, just as theory predicted.
Researchers at the University of California, Davis, have figured out how the human body keeps essential genes switched “on” and silences the vast stretches of genetic repeats and “junk” DNA.
Frédéric Chédin, associate professor in the Department of Molecular and Cellular Biology, describes the research in a paper published March 1 in the journal Molecular Cell. The work could lead to treatments for lupus and other autoimmune diseases, by reversing the gene-silencing process known as cytosine methylation.
“R-loops” are the key, say graduate student Paul Ginno, Chédin and colleagues. The loops emerge in the RNA transcription process in DNA sections that are rich in cytosine and guanine, the C and G in the four-letter DNA code. These C and G stretches serve as “on” switches, or promoters, for about 60 percent of human genes.
Scientists have known since the 1980s that these so-called CG island promoters are not subject to methylation. But, Chédin said, the mechanism has been a long-standing mystery.
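The promoters in question are distinguished not only by overall GC richness but also by strand asymmetry between G and C (GC skew), which is thought to favor R-loop formation during transcription. A small sketch of how such a signal can be computed from a sequence (illustrative only; the example sequence is made up and this is not the authors' pipeline):

```python
# Compute GC skew, (G - C) / (G + C), in sliding windows across a DNA sequence.
# Positive skew on the non-template strand marks regions prone to R-loop formation.

def gc_skew(seq: str, window: int = 50, step: int = 10):
    skews = []
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window].upper()
        g, c = w.count("G"), w.count("C")
        skews.append((start, (g - c) / (g + c) if g + c else 0.0))
    return skews

promoter = "GGAGGCGGAGCTGGGAGGAGG" * 10   # hypothetical G-rich promoter stretch
for pos, skew in gc_skew(promoter)[:5]:
    print(f"pos {pos:4d}  GC skew {skew:+.2f}")
```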
A research group led by Dr. A. Claudio Cuello of McGill University has uncovered a critical process in understanding the degeneration of brain cells sensitive to Alzheimer’s disease (AD). The study, published in the February issue of the Journal of Neuroscience, suggests that this discovery could help in developing alternative AD therapies.
A breakdown in communication between the brain’s neurons is thought to contribute to the memory loss and cognitive failure seen in people with AD. The likely suspect is NGF (Nerve Growth Factor), a molecule responsible for generating signals that maintain healthy cholinergic neurons – a subset of brain cells that are particularly sensitive to AD – throughout a person’s lifetime. Oddly, scientists had never been able to find anything wrong with this molecule to explain the degeneration of cholinergic neurons in patients with AD.
This new study, however, has elucidated the process by which NGF is released in the brain, matures to an active form and is ultimately degraded. The researchers were also able to determine how this process is altered in AD. The group demonstrated that treatment of healthy adult rats with a drug that blocks the maturation of active NGF leads to AD-like losses of cholinergic functional units, which result in cognitive impairments. By contrast, when treated with a drug to prevent degradation of active NGF, the numbers of cholinergic contacts increased significantly.
University of California, San Diego electrical engineers are building a forest of tiny nanowire trees to cleanly capture solar energy and harvest it for hydrogen fuel generation, without using fossil fuels. Reporting in the journal Nanoscale, the team said nanowires, which are made from abundant natural materials like silicon and zinc oxide, also offer a cheap way to deliver hydrogen fuel on a mass scale. “This is a clean way to generate clean fuel,” said Deli Wang, professor at UC San Diego.
The trees’ vertical structure and branches are keys to capturing the maximum amount of solar energy, according to Wang. That’s because the vertical structure of trees grabs and absorbs light while flat surfaces simply reflect it, Wang said, adding that the structure is also similar to that of retinal photoreceptor cells in the human eye.