Wednesday, December 21, 2011
Astronomers detect first Earth-size planets orbiting another star
By Brian Vastag, Published: December 20 - Washington Post
In a milestone hailed by scientists as a key step toward finding another Earth-like world, astronomers Tuesday announced the discovery of two blazingly hot planets roughly the size of Earth some 950 light years distant.
The discovery “demonstrates for the first time that Earth-size planets exist around other stars, and that we can detect them,” said Francois Fressin, an astronomer at the Harvard-Smithsonian Center for Astrophysics who led the discovery team.
The two planets orbit a star much like our sun, but they whiz around it so fast and so close that their surfaces sizzle like frying pans.
“They’re way too hot to be anything like our own Earth,” said Sara Seager, a planet hunter at the Massachusetts Institute of Technology and a member of the discovery team.
The implication: No life — at least life as we can conceive it — is possible on the new planets.
Still, finding these hot Earth-sized planets is “seriously cool,” said Lisa Kaltenegger, who studies so-called exoplanets at the Max Planck Institute in Heidelberg, Germany, and was not involved in the research. “These discoveries are a great technological step forward.”
The planets were announced Tuesday in the journal Nature and during a NASA teleconference.
Detected by NASA’s Kepler space telescope, the two planets, dubbed Kepler 20e and Kepler 20f, are almost certainly rocky like Earth and not gaseous like Jupiter, Kaltenegger said.
The smaller planet, Kepler 20e, is about the size of Venus but much closer to its star, zooming around it every six days. An Earth year, by contrast, is 365 days.
The larger planet, Kepler 20f, is just three percent larger than Earth. “It’s the first Earth-sized planet” ever detected orbiting another star, Seager said. “It is a big milestone.”
Kepler 20f is a bit farther out from its star, completing an orbit about every 20 days. Its surface temperature is hotter than a pizza oven — about 800 degrees Fahrenheit.
The two planets nestle in among three other larger planets tightly circling the star Kepler 20.
“It’s a beautiful planetary system,” said Dimitar Sasselov, a planet hunter at the Harvard-Smithsonian center and a member of the discovery team.
But it’s also a puzzling one. None of the five planets lie within the so-called habitable zone, the narrow band of space around a star where water can exist as liquid. Instead, all five planets hug their star, orbiting closer than Mercury is to the sun.
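For a sense of how tightly these planets hug their star, Kepler's third law gives rough orbital distances from the periods quoted above. The sketch below is a back-of-the-envelope check that assumes Kepler 20 has roughly the mass of the sun (an approximation; the precise stellar parameters are in the Nature paper).

    # Rough orbital distances from Kepler's third law, assuming a roughly
    # solar-mass star as a stand-in for Kepler 20. Illustrative only.
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # kg
    AU = 1.496e11        # meters

    def semi_major_axis_au(period_days, stellar_mass=M_SUN):
        """Orbital distance in AU for a given period, via Kepler's third law."""
        T = period_days * 86400.0
        a = (G * stellar_mass * T ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
        return a / AU

    print(semi_major_axis_au(6))     # Kepler 20e: roughly 0.06 AU
    print(semi_major_axis_au(20))    # Kepler 20f: roughly 0.14 AU
    # Mercury, for comparison, orbits at about 0.39 AU.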
And, unlike our solar system, the two newly-found rocky planets are interspersed with three larger, gassy, Neptune-like planets.
“The architecture of that solar system is crazy,” said David Charbonneau of Harvard University. “In our solar system, the two different kinds of planets don’t mingle. This is the first time we’ve seen anything like this.”
The finds mark a key moment in the accelerating search to bag and tag planets outside our solar system. Since the first such detection in 1995, multiple teams employing ground and space telescopes have found more than 700 planets orbiting other stars, according to an online catalogue.
The message, says Seager, is simple: Planets abound wherever we look. “We think every star has planets,” she said.
NASA launched the $600 million Kepler space telescope in 2009 on a mission to find other Earth-like planets. So far, the telescope has found 33 confirmed planets and 2,326 possible planets, but they are all too big or too hot to qualify as Earth-like. The telescope detects planets by staring at 150,000 stars near the constellation Cygnus. When light from a star dims, or winks, it indicates a possible planet passing by. If Kepler sees the same wink three times, astronomers infer a planet. The time that passes between winks indicates the planet’s orbital period, or year.
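To make the transit method concrete, here is a minimal, hypothetical sketch (not Kepler's actual pipeline): simulate a star whose brightness dips periodically, flag the dips, and read the orbital period off the spacing between them.

    # Toy transit detection: find periodic "winks" in a simulated light curve
    # and infer the orbital period from the spacing between them. This is a
    # simplified illustration, not the actual Kepler analysis pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 90.0, 0.02)                        # 90 days of observations
    flux = 1.0 + 0.0005 * rng.standard_normal(t.size)     # quiet star plus noise
    period, depth, duration = 20.0, 0.01, 0.3             # hypothetical planet
    flux[(t % period) < duration] -= depth                # planet blocks 1% of light

    dips = flux < 1.0 - 0.005                             # threshold above the noise
    starts = np.flatnonzero(np.diff(dips.astype(int)) == 1) + 1
    print("transits seen:", starts.size)                  # three or more -> a planet
    print("inferred period (days):", np.diff(t[starts]).mean())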
Earlier this month, Kepler scientists announced a planet square in its star’s habitable zone. Dubbed Kepler 22b, that planet is about 1.4 times as wide as Earth, likely too large to host a rocky surface. “It’s too big, we think, for life,” Seager said.
The next milestone for Kepler will be the big one: The detection of an Earth-sized world with a surface temperature just right for life. Kepler scientists are confident they will soon spot such an Earth 2.0.
“One of these days — whether next year or two years from now — Kepler will confirm a true Earth analog,” Sasselov said. “And that will be a historic moment.”
When asked whether the Kepler team planned to give the newly found Earth-sized planets a catchier name, Sasselov balked. “Everybody wants pretty names,” he said. “But what do we do? There will be thousands of these planets.”
Saturday, December 10, 2011
Higgs boson "the God particle" getting closer
Higgs particle: Getting closer
By Joel Achenbach
The Higgs particle is also known as the Higgs boson, or “the God particle,” a term that Leon Lederman used some years ago and which delighted journalists but surely offended photons and electrons throughout the universe. The Higgs is named after Peter Higgs, a theorist who four decades ago predicted its existence as part of the Standard Model of particle physics. No one’s ever found one. Discovering the Higgs is a central purpose of two very elaborate experiments being conducted at the Large Hadron Collider at CERN. On Tuesday, the CERN scientists will announce their latest batch of results, and, as Scientific American has reported, rumors abound that they’re homing in on the Higgs. More here from Nature.
CERN has itself said that there will be no “discovery” announcement, and the best bet is that the two experiments haven’t quite nailed the Higgs with certainty but are getting very close. “I am looking for closure, and I don’t expect to get it next week,” a leading theorist tells me by email.
As SciAm notes, certainty in this case is made difficult by the fact that, even with the elaborate infrastructure in place at the LHC, there’s no way to catch a Higgs and bottle it up like a lightning bug.
“...the CMS and ATLAS detectors cannot directly catch Higgs bosons; those particles would decay into other particles immediately after being created in the LHC’s proton collisions. Instead, physicists must analyze the subatomic debris from the decays and reconstruct what happened.”
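A stripped-down example of what that reconstruction means: if a hypothetical Higgs decays into two photons, the detectors see only the photons, and their combined invariant mass points back to the mass of the parent particle. The numbers below are invented purely for illustration.

    # Toy reconstruction of a parent particle's mass from two decay photons,
    # the basic idea behind searches such as Higgs -> two photons.
    # Energies and momenta (in GeV) are made up for illustration.
    import math

    def invariant_mass(p1, p2):
        """Invariant mass of two massless particles given as (E, px, py, pz)."""
        E = p1[0] + p2[0]
        px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
        return math.sqrt(max(E ** 2 - (px ** 2 + py ** 2 + pz ** 2), 0.0))

    photon1 = (62.5, 30.0, 40.0, 37.5)       # hypothetical photon four-momentum
    photon2 = (62.5, -30.0, -40.0, -37.5)    # its back-to-back partner
    print(invariant_mass(photon1, photon2))  # 125.0 GeV, the invented parent mass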
As my editor, Claudia, has pointed out, we’re at a point where a lot of major discoveries are indirect. No one shouts “Land ho!” from the crow’s nest anymore. Instead we find planets like Kepler 22-b, utterly invisible even with the most advanced telescope, but found through fluctuations in the light of its parent star. We are devising new ways to peel back layers of the onion.
Physicists are hoping to discover some “new physics” with the LHC. At the very least, they’d like to find a new particle they hadn’t even imagined. The Higgs, however, is kind of a familiar particle, as undiscovered particles go. It’s supposed to be lurking there somewhere because otherwise the Standard Model has a gaping hole in it. What matters most about the Higgs, beyond whether it exists at all, is how massive it is. If it’s high-mass, that gives you a different universe than a low-mass Higgs. Among other things, the “stability” of the vacuum is in play. A low-mass Higgs leads to a less stable vacuum, is what I hear.
I hope that we can all agree that a stable vacuum is better than an unstable one. We’ve got enough problems.
By Joel Achenbach | 10:15 AM ET, 12/10/2011
Friday, November 18, 2011
Neutrinos Faster Than Light
Second experiment confirms faster-than-light particles
By Brian Vastag, Published: November 17
Washington Post
A second experiment at the European facility that reported subatomic particles zooming faster than the speed of light — stunning the world of physics — has reached the same result, scientists said late Thursday.
The “positive outcome of the [second] test makes us more confident in the result,” said Fernando Ferroni, president of the Italian Institute for Nuclear Physics, in a statement released late Thursday. Ferroni is one of 160 physicists involved in the international collaboration known as OPERA (Oscillation Project with Emulsion Tracking Apparatus) that performed the experiment.
While the second experiment “has made an important test of consistency of its result,” Ferroni added, “a final word can only be said by analogous measurements performed elsewhere in the world.”
That is, more tests are needed, and on other experimental setups. There is still a large crowd of skeptical physicists who suspect that the original measurement done in September was an error.
Should the results stand, they would upend more than a century of modern physics.
In the first round of experiments, a massive detector buried in a mountain in Gran Sasso, Italy, recorded neutrinos generated at the CERN particle accelerator on the French-Swiss border arriving 60 nanoseconds sooner than expected. CERN is the French acronym for European Council for Nuclear Research.
A chorus of critiques from physicists soon followed. Among other possible errors, some suggested that the neutrinos generated at CERN were smeared into bunches too wide to measure precisely.
So in recent weeks, the OPERA team tightened the packets of neutrinos that CERN sent sailing toward Italy. Such tightening removed some uncertainty in the neutrinos’ speed.
The detector still saw neutrinos moving faster than light.
“One of the eventual systematic errors is now out of the way,” said Jacques Martino, director of the National Institute of Nuclear and Particle Physics in France, in a statement.
But the faster-than-light drama is far from over, Martino added. The OPERA team is discussing more cross-checks, he added, including possibly running a fiber the 454 miles between the sites.
For more than a century, the speed of light has been locked in as the universe’s ultimate speed limit. No experiment had seen anything moving faster than light, which zips along at 186,000 miles per second.
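To put those numbers side by side: over the roughly 454-mile baseline, light itself needs a few milliseconds, and the reported 60-nanosecond head start amounts to a few parts in 100,000. A quick check:

    # Scale of the OPERA anomaly, using the figures quoted in this article.
    c_miles_per_second = 186_000.0     # speed of light
    baseline_miles = 454.0             # approximate CERN-to-Gran Sasso distance
    early_seconds = 60e-9              # reported early arrival

    light_time = baseline_miles / c_miles_per_second
    print(light_time * 1e3)            # about 2.44 milliseconds of light travel time
    print(early_seconds / light_time)  # about 2.5e-5, i.e. 0.0025 percent faster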
Much of modern physics — including Albert Einstein’s famous theory of relativity — is built on that ultimate speed limit.
The scientific world stopped and gaped in September when the OPERA team announced it had seen neutrinos moving just a hint faster than light.
“If it’s correct, it’s phenomenal,” said Rob Plunkett, a scientist at Fermilab, the Department of Energy physics laboratory in Illinois, in September. “We’d be looking at a whole new set of rules” for how the universe works.
Tuesday, March 1, 2011
HP Rethinking the Modern Computer
February 28, 2011
Remapping Computer Circuitry to Avert Impending Bottlenecks
By JOHN MARKOFF
PALO ALTO, Calif. — Hewlett-Packard researchers have proposed a fundamental rethinking of the modern computer for the coming era of nanoelectronics — a marriage of memory and computing power that could drastically limit the energy used by computers.
Today the microprocessor is in the center of the computing universe, and information is moved, at heavy energy cost, first to be used in computation and then stored. The new approach would be to marry processing to memory to cut down transportation of data and reduce energy use.
The semiconductor industry has long warned about a set of impending bottlenecks described as “the wall,” a point in time where more than five decades of progress in continuously shrinking the size of transistors used in computation will end. If progress stops it will not only slow the rate of consumer electronics innovation, but also end the exponential increase in the speed of the world’s most powerful supercomputers — 1,000 times faster each decade.
However, in an article published in IEEE Computer in January, Parthasarathy Ranganathan, a Hewlett-Packard electrical engineer, offers a radical alternative to today’s computer designs that would permit new designs for consumer electronics products as well as the next generation of supercomputers, known as exascale processors.
Today, computers constantly shuttle data back and forth among faster and slower memories. The systems keep frequently used data close to the processor and then move it to slower and more permanent storage when it is no longer needed for the ongoing calculations.
In this approach, the microprocessor is in the center of the computing universe, but in terms of energy costs, moving the information, first to be computed upon and then stored, dwarfs the energy used in the actual computing operation.
Moreover, the problem is rapidly worsening because the amount of data consumed by computers is growing even more quickly than the increase in computer performance.
“What’s going to be the killer app 10 years from now?” asked Dr. Ranganathan. “It’s fairly clear it’s going to be about data; that’s not rocket science. In the future every piece of storage on the planet will come with a built-in computer.”
To distinguish the new type of computing from today’s designs, he said that systems will be based on memory chips he calls “nanostores” as distinct from today’s microprocessors. They will be hybrids, three-dimensional systems in which lower-level circuits will be based on a nanoelectronic technology called the memristor, which Hewlett-Packard is developing to store data. The nanostore chips will have a multistory design, and computing circuits made with conventional silicon will sit directly on top of the memory to process the data, with minimal energy costs.
Within seven years or so, experts estimate that one such chip might store a trillion bytes of memory (about 220 high-definition digital movies) in addition to containing 128 processors, Dr. Ranganathan wrote. If these devices become ubiquitous, it would radically reduce the amount of information that would need to be shuttled back and forth in future data processing schemes.
For years, computer architects have been saying that a big new idea in computing was needed. Indeed, as transistors have continued to shrink, rather than continuing to innovate, computer designers have simply adopted a so-called “multicore” approach, where multiple processors are added as more chip real estate became available.
The absence of a major breakthrough was referred to in a remarkable confrontation that took place two years ago during Hot Chips, an annual computer design conference held each summer at Stanford University.
John L. Hennessy, the president of Stanford and a computer design expert, stood before a panel of some of the world’s best computer designers and challenged them to present one fundamentally new idea. He was effectively greeted with silence.
“What is your one big idea?” he asked the panel. “I believe that the next big idea is going to come from someone who is considerably younger than the average age of the people in this room.”
Dr. Ranganathan, who was 36 at the time, was there. He said that he took Dr. Hennessy’s criticism as an inspiration for his work and he believes that nanostore chip design is an example of the kind of big idea that has been missing.
It is not just Dr. Hennessy who has been warning about the end of the era of rapidly increasing computer performance. In 2008, Darpa, the Defense Advanced Research Projects Agency, assembled a panel of the nation’s best supercomputer experts and asked them to think about ways in which it might be possible to reach an exascale computer — a supercomputer capable of executing one quintillion mathematical calculations in a second, about 1,000 times faster than today’s fastest systems.
The panel, which was led by Peter Kogge, a University of Notre Dame supercomputer designer, came back with pessimistic conclusions. “Will the next decade see the same kind of spectacular progress as the last two did?” he wrote in the January issue of IEEE Spectrum. “Alas, no.” He added: “The party isn’t over, but the police have arrived and the music has been turned way down.”
One reason is computing’s enormous energy appetite. A 10-petaflop supercomputer — scheduled to be built by I.B.M. next year — will consume 15 megawatts of power, roughly the electricity consumed by a city of 15,000 homes. An exascale computer, built with today’s microprocessors, would require 1.6 gigawatts. That would be roughly one and a half times the amount of electricity produced by a nuclear power plant.
The panel did, however, support Dr. Ranganathan’s memory-centric approach. It found that the energy cost of a single calculation was about 70 picojoules (a picojoule is one millionth of one millionth of a joule; the energy needed to keep a 100-watt bulb lit for a day is more than eight million joules). However, when the energy costs of moving the data needed for that single calculation are counted — moving 200 bits of data in and out of memory multiple times — the real energy cost might be anywhere from 1,000 to 10,000 picojoules.
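A quick sanity check of the scaling implied by the figures above, taking the quoted numbers at face value:

    # Energy per operation implied by a 10-petaflop, 15-megawatt machine, and
    # what an exascale machine would draw at the same overall efficiency.
    # Figures are the ones quoted in this article, not independent estimates.
    ten_petaflops = 10e15                  # operations per second
    power_watts = 15e6                     # 15 megawatts
    joules_per_op = power_watts / ten_petaflops
    print(joules_per_op * 1e12)            # about 1,500 picojoules per operation,
                                           # squarely in the 1,000-10,000 pJ range
    exaflop = 1e18
    print(exaflop * joules_per_op / 1e9)   # about 1.5 gigawatts at exascale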
A range of other technologies are being explored to allow the continued growth of computing power, including ways to build electronic switches smaller than 10 nanometers — thought to be the minimum size for current chip-making techniques.
Last month, for example, researchers at Harvard and Mitre Corporation reported the development of nanoprocessor “tiles” based on electronic switches fabricated from ultrathin germanium-silicon wires.
I.B.M. researchers have been pursuing so-called phase-change memories based on the ability to use an electric current to switch a material from a crystalline to an amorphous state and back again. This technology was commercialized by Samsung last year. More recently, I.B.M. researchers have said that they are excited about the possibility of using carbon nanotubes as a partial step to build hybrid systems that straddle the nanoelectronic and microelectronic worlds.
Veteran computer designers note that whichever technology wins, the idea of moving computer processing closer to memory has been around for some time, and it may simply be the arrival of nanoscale electronics that finally makes the new architecture possible.
An early effort, called iRAM, was a research project at the University of California, Berkeley, in the late 1990s. Today pressure for memory-oriented computing is coming both from computing challenges posed by smartphones and from the data center, said Christoforos Kozyrakis, a Stanford University computer scientist who worked on the iRAM project in graduate school.
Wednesday, February 23, 2011
Dark Matter and Galaxy Formation
Dark Matter: New Evidence on How Galaxies Are Born
By Michael D. Lemonick Wednesday, Feb. 23, 2011
If you think it's hard to swallow the concept of dark matter, you're not alone. Decades ago, a few astronomers began to suspect that the universe was swarming with some mysterious, invisible substance that was yanking galaxies around with its own powerful gravity. And for those same decades, most of those astronomers' colleagues dismissed the notion as pretty much nuts.
But the evidence kept mounting, and nowadays dark matter is a firmly established concept in modern astrophysics. It pretty much has to exist, in fact, to explain why individual galaxies spin as fast as they do without flying apart, and why groups of galaxies move the way they do in relation to one another. If there weren't 10 times as much dark matter as there are stars and gas clouds and other visible matter, the universe would make no sense. Nature abhors irrationality, and so we live in a universe in which just about every galaxy, including the Milky Way, is held safely inside a huge blob of dark matter like a butterfly floating inside a glass paperweight. (See "The Hubble Space Telescope's Greatest Hits.")
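The rotation argument above comes down to one formula: for a circular orbit, speed is v = sqrt(G M(<r) / r), where M(<r) is the mass enclosed within radius r. If a galaxy held only its visible mass, concentrated toward the center, orbital speeds should fall off with distance; measured speeds instead stay roughly flat, which is the signature of unseen mass farther out. A minimal sketch with illustrative numbers:

    # Illustrative only: circular speed from a galaxy's visible mass alone.
    # It falls like 1/sqrt(r), whereas measured rotation curves stay roughly
    # flat, which is the classic argument for dark matter.
    import math

    G = 6.674e-11                 # m^3 kg^-1 s^-2
    M_VISIBLE = 1e41              # kg, roughly 5e10 solar masses (illustrative)
    KPC = 3.086e19                # one kiloparsec in meters

    for r_kpc in (5, 10, 20, 40):
        v = math.sqrt(G * M_VISIBLE / (r_kpc * KPC))
        print(r_kpc, round(v / 1000), "km/s")   # drops from ~208 to ~74 km/s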
Astrophysicists are also convinced that the dark matter came first, in blobs of various sizes. Those invisible masses then pulled in ordinary matter to make the galaxies. Not all galaxies are created equal, however. Some are pipsqueaks, some are giants and some are true stellar overachievers — so feverishly prolific in their star creation that they churned out up to 1,000 new suns a year for 100 million years. These so-called starburst galaxies have long been a puzzle to astronomers, but a new paper published in Nature may have finally explained them. The answer — once again — is that the dark matter did it.
The creation of a starburst galaxy, says study co-author Asantha Cooray of the University of California, Irvine, is all a matter of blob size. If your blob is too big, hydrogen gas can't fall together efficiently enough to sustain a starmaking frenzy. Instead the gas breaks apart to make several separate, reasonably sedate galaxies. If the dark-matter blob is too small, by contrast, the hydrogen falls together too efficiently. Stars form so quickly and so furiously that their heat keeps the rest of the hydrogen from falling in. The frenzy is short-lived. (Watch TIME's video "Herschel: The Telescope for Invisible Stars.")
Cooray and his colleagues figured all of this out with data from the Herschel space telescope. Herschel is sensitive to infrared radiation, a type of light originally discovered by the astronomer William Herschel at the turn of the 19th century — which is why the telescope carries his name. Young, far-off, dusty galaxies are especially bright in infrared, and while Herschel couldn't generate images of individual galaxies, it could measure brighter and dimmer spots in the overall wash of infrared energy streaming in from across the universe. The brighter spots represent denser clots of galaxies; the dimmer spots are sparse regions.
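The measurement is statistical rather than image-based: the team characterized how strongly the infrared brightness varies from patch to patch of sky. As a toy version of that idea (random numbers, not Herschel data), the sketch below measures the "clumpiness" of a fake map on a few different scales; the analogous statistic on real maps is what gets compared with dark-matter models.

    # Toy fluctuation measurement on a fake sky map. For pure noise the variance
    # of block-averaged patches falls as 1/cell^2; clustered galaxies would
    # leave extra power on their characteristic scales. Illustration only.
    import numpy as np

    rng = np.random.default_rng(3)
    sky = rng.standard_normal((256, 256))      # stand-in for an infrared map

    def clumpiness(sky_map, cell):
        """Variance of the map averaged over cell-by-cell blocks."""
        n = sky_map.shape[0] // cell
        trimmed = sky_map[:n * cell, :n * cell]
        blocks = trimmed.reshape(n, cell, n, cell).mean(axis=(1, 3))
        return blocks.var()

    for cell in (2, 8, 32):
        print(cell, clumpiness(sky, cell))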
The scientists then compared what they saw with computer simulations of the early universe, which reveal how dark matter should have been distributed. The comparison showed a good match between medium-size lumps of dark matter and starburst galaxies. In other words: the ancient model is consistent with the current reality. "It's not like a new planet, where everyone goes, 'Wow!' " admits Cooray, "but it's a pretty cool result."
Comments on this article by bloggers:
Contrary to the misconception which you perpetuate here, dark matter is not the only option for explaining our observations of galaxies and space. Plasmas are widely accepted to represent 99% of the universe's visible matter. Thus, quite a lot depends upon the accuracy of those models.
Hannes Alfven received the Nobel Physics prize for creating the plasma models around 1970. During his acceptance speech, he warned that he had made mistakes early in his career. Those magnetohydrodynamic models -- the plasma models -- which theorists and astrophysicists to this day rely upon for their computations were in fact "pseudo-pedagogical", meaning that they appeared to help, but in fact were dangerously misleading. He was widely ignored, and we continue to use the same models to this day.
Astrophysicists and cosmologists today claim that galactic rotation curves demand some huge amount of invisible, theoretical matter placed at just the right spot. But, it's worth noting that we also observe magnetic fields to be associated with intergalactic space and the galaxies themselves. This is an incredibly important clue which this space reporter appears to be completely ignoring.
It's important because in the laboratory, magnetic fields and electric currents go hand-in-hand. It's why there is a term "electromagnetic". Where you see one, it is ASSUMED that there exists the other nearby causing it.
But, in space, astrophysicists and cosmologists would prefer to specifically avoid that inference. And yet, plasma is an electrified gas in the laboratory. So, 99% of the matter we see in space with our telescopes is inherently electric. And we can see the magnetic fields to demonstrate it.
Monday, February 7, 2011
IceCube - Window on Energy in the Universe
IceCube opens up a window on energy in the universe
AMUNDSEN-SCOTT BASE, ANTARCTICA - The world's newest astronomical observatory is defined by a field of 86 colored flags rippling across an ice-covered polar landscape. Each banner marks a line of glass-covered orbs that stretches down a mile and a half into the ice, like beads on a frozen string.
Known as IceCube, this massive underground array is designed to do what no other observatory has done before - catch a glimpse of elusive neutrinos, ghostly particles that are formed in the hearts of supernovas, black holes and other deep-space objects and may give scientists new information about the origins of the universe.
"The idea with IceCube is to do astronomy, but instead of using light, we're using neutrinos," said Greg Sullivan, a physicist at the University of Maryland who is one of the collaborators on the $279 million project.
"It opens up a window on energy in the universe," he explained. "We've seen particles in outer space that are 10 million times more energetic than the ones we can accelerate on Earth. Neutrinos are a way to try and find out what's causing those very high energy [particles]. It's been a mystery for 100 years."
Astronomers have flocked to the South Pole in the winter for decades, drawn by the sunless skies and atmospheric conditions that make for superb stargazing. A permanent U.S. station has been at the pole since 1956, and several telescopes have been built here to take advantage of the darkness that lasts from late February to early October.
But IceCube is something different, an observatory built entirely beneath the ice. Along each of the 86 cables are strung 60 three-foot spherical detectors, called digital optical modules or DOMs. These glass-covered orbs are designed to find evidence of neutrinos - particles formed in the hearts of stars that are so small they pass right through the Earth (and our bodies) without hitting molecules or other matter.
Since neutrinos have almost no mass and are too small to be seen with a normal telescope, researchers instead are looking for the extremely small and extremely brief flashes of bluish light that are given off when a neutrino's energy trail strikes an oxygen atom in the ice and creates a third particle, called a muon.
"We thought that if we could . . . detect that light, we could reconstruct the direction and energy of that muon, which would give us the direction of the neutrinos," Sullivan said during a visit last month to the South Pole sponsored by the National Science Foundation.
In the past, scientists have tried to build neutrino detectors in the deep ocean, abandoned mine shafts and the bottom of deep lakes. All the projects failed for different reasons: salt corroded the detectors, for example, or the muon trails were obscured by the natural light given off by plankton.
Astrophysicists have high hopes for the South Pole location. One advantage of the massive icepack is that it provides a "scaffolding . . . infrastructure for the detectors," holding them steady, Jonathan Feng, a particle physicist and cosmologist at the University of California at Irvine, explained in a phone interview. It also presented extreme challenges: Constructing IceCube involved more than 400 technicians and engineers and took seven summers of tough drilling through polar ice.
IceCube's detectors are pointed northward, toward the center of the Earth, so the planet's mass serves as a filter to block most cosmic rays and other particles. Feng noted that in addition to passing through most matter, neutrinos also are not bent by electric and magnetic fields, which can bend other forms of radiation - potentially bringing information more directly from farther corners of the universe.
The National Science Foundation picked up $242 million of IceCube's $279 million price tag. The rest was split among science agencies from Germany, Sweden and Belgium, which also cooperated on construction. The University of Wisconsin at Madison, the project's lead institution, coordinated the design, construction and software to run it. The university is also coordinating the data distribution, making information available to scientists around the world.
Now that IceCube is up and running, Feng says he's especially interested in what it might reveal about dark matter, mysterious material that scientists postulate makes up five-sixths of the mass of the universe, but which has never been detected directly.
"The entire periodic table is just small fraction of total matter in the universe," Feng said. "The rest is dark matter but it doesn't reflect light or shine light. We don't see it the way we see stars." When dark matter particles inside the sun and other stars collide with each other, neutrinos are created. If IceCube can detect these neutrinos and glean useful data about where they come from, Feng said, "there will be hundreds of scientists jumping up and down to see if it's a signal of dark matter."
Credit for coming up with the idea behind IceCube is generally given to Francis Halzen, a theoretical physicist at the University of Wisconsin. In the late 1980s, Halzen was intrigued by the problem of building a neutrino detector and had studied the failure of other projects. Interviewed at his office in Madison, Halzen said he's forgotten his "eureka" moment back in 1987. "One of my former graduate students says I told him one morning coming out of the elevator," Halzen said. "But I really don't remember. I didn't realize that I would spend most of the rest of my career doing this."
Halzen got together with colleagues at the University of California at Berkeley and began planning a pilot project, called the Antarctic Muon and Neutrino Detector Array, or AMANDA. It began operation at the South Pole in 1993, but only laid a few strings of detectors into the ice.
IceCube, which was conceived in 1999 as a collaboration between U.S. and European agencies, was on a much grander scale. Engineers on the project ran into formidable obstacles. "You can't just buy a drill in Texas and bring it to Antarctica," Halzen said. "We had to figure all these things out."
During the first summer of drilling, in 2004-05, technicians laid only one string of detectors, and Halzen said they nearly gave up. But a University of Wisconsin team developed a special drill that used hot water to bore nearly two miles deep into the ice. Once cooled, the water was pumped back to the surface, reheated and recycled in a closed-loop system. Then the huge hose that carried the water kept breaking under its own weight. "It was a struggle," Halzen said. Finally, one of the engineers found a firm in Venice with the right equipment, "and we eventually made it work."
There were also logistical challenges. Because of limited space at the South Pole station, the IceCube team could deploy no more than 40 workers at a time. Construction crews had to be rotated in by a three-hour flight from the main U.S. facility at McMurdo Station. "It was like solving a crossword puzzle," Halzen said. "Everything and everyone had to fit just perfectly." By the 2008 drilling season, they had put in 20 strings of detectors. The 86th and final string was laid Dec. 19.
IceCube has already found a strange asymmetry in cosmic rays reaching Earth from the southern sky, from the direction of a supernova named Vela. "Nobody knows what it means; that's why it's interesting," Halzen said.
For all his work in pushing to get IceCube built, Halzen has never been to the South Pole. During the building phase, he said, he was loath to take up valuable space that could have been used for an engineer or construction worker.
"I have had no use to go there, but maybe now," Halzen said. "Last week it was colder here in Madison that at the South Pole."
AMUNDSEN-SCOTT BASE, ANTARCTICA - The world's newest astronomical observatory is defined by a field of 86 colored flags rippling across an ice-covered polar landscape. Each banner marks a line of glass-covered orbs that stretches down a mile and a half into the ice, like beads on a frozen string.
Known as IceCube, this massive underground array is designed to do what no other observatory has done before - catch a glimpse of elusive neutrinos, ghostly particles that are formed in the hearts of supernovas, black holes and other deep-space objects and may give scientists new information about the origins of the universe.
"The idea with IceCube is to do astronomy, but instead of using light, we're using neutrinos," said Greg Sullivan, a physicist at the University of Maryland who is one of the collaborators on the $279 million project.
"It opens up a window on energy in the universe," he explained. "We've seen particles in outer space that are 10 million times more energetic than the ones we can accelerate on Earth. Neutrinos are a way to try and find out what's causing those very high energy [particles]. It's been a mystery for 100 years."
Astronomers have flocked to the South Pole in the winter for decades, drawn by the sunless skies and atmospheric conditions that make superb star-gazing. A permanent U.S. station has been at the pole since 1956, and several telescopes have been built here to take advantage of the darkness that lasts from late February to early October.
But IceCube is something different, an observatory built entirely beneath the ice. Along each of the 86 cables are strung 60 three-foot spherical detectors, called digital optical modules or DOMs. These glass-covered orbs are designed to find evidence of neutrinos - particles formed in the hearts of stars that are so small they pass right through the Earth (and our bodies) without hitting molecules or other matter.
Since neutrinos have almost no mass and are too small to be seen with a normal telescope, researchers instead are looking for the extremely small and extremely brief flashes of bluish light that are given off when a neutrino's energy trail strikes an oxygen atom in the ice and creates a third particle, called a muon.
ad_icon
"We thought that if we could . . . detect that light, we could reconstruct the direction and energy of that muon, which would give us the direction of the neutrinos," Sullivan said during a visit last month to the South Pole sponsored by the National Science Foundation.
In the past, scientists have tried to build neutrino detectors in the deep ocean, abandoned mine shafts and the bottom of deep lakes. All the projects failed for different reasons: salt corroded the detectors, for example, or the muon trails were obscured by the natural light given off by plankton.
Astrophysicists have high hopes for the South Pole location. One advantage of the massive icepack is that it provides a "scaffolding . . . infrastructure for the detectors," holding them steady, Jonathan Feng, a particle physicist and cosmologist at the University of California at Irvine, explained in a phone interview. It also presented extreme challenges: Constructing IceCube involved more than 400 technicians and engineers and took seven summers of tough drilling through polar ice.
IceCube's detectors are pointed northward, toward the center of the Earth, so the planet's mass serves as a filter to block most cosmic rays and other particles. Feng noted that in addition to passing through most matter, neutrinos also are not bent by electric and magnetic fields, which can bend other forms of radiation - potentially bringing information more directly from farther corners of the universe.
The National Science Foundation picked up $242 million of Ice Cube's $279 million price tag. The rest was split among science agencies from Germany, Sweden and Belgium, which also cooperated on construction. The University of Wisconsin at Madison, the project's lead institution, coordinated the design, build and software to run it. The university is also coordinating the data distribution, making information available to scientists around the world.
Now that IceCube is up and running, Feng says he's especially interested in what it might reveal about dark matter, mysterious material that scientists postulate makes up five-sixths of the mass of the universe, but which has never been detected directly.
"The entire periodic table is just small fraction of total matter in the universe," Feng said. "The rest is dark matter but it doesn't reflect light or shine light. We don't see it the way we see stars." When dark matter particles inside the sun and other stars collide with each other, neutrinos are created. If IceCube can detect these neutrinos and glean useful data about where they come from, Feng said, "there will be hundreds of scientists jumping up and down to see if it's a signal of dark matter."
Credit for coming up with the idea behind IceCube is generally given to Francis Halzen, a theoretical physicist at the University of Wisconsin. In the late 1980s, Halzen was intrigued by the problem of building a neutrino detector and had studied the failure of other projects. Interviewed at his office in Madison, Halzen said he's forgotten his "eureka" moment back in 1987. "One of my former graduate students says I told him one morning coming out of the elevator," Halzen said. "But I really don't remember. I didn't realize that I would spend most of the rest of my career doing this."
Halzen got together with colleagues at the University of California at Berkeley and began planning a pilot project, called the Antarctic Muon and Neutrino Detector Array, or AMANDA. It began operation at the South Pole in 1993, but only laid a few strings of detectors into the ice.
IceCube, which was conceived in 1999 as a collaboration between U.S. and European agencies, was on a much grander scale. Engineers on the project ran into formidable obstacles. "You can't just buy a drill in Texas and bring it to Antarctica," Halzen said. "We had to figure all these things out."
During the first year summer of drilling in 2004-05, technicians laid only one string of detectors, and Halzen said they nearly gave up. But a University of Wisconsin team developed a special drill that used hot water to drill nearly two miles deep into the ice. Once cooled, the water was pumped back to the surface, reheated and recycled in a closed-loop system. Then the huge hose that carried the water kept breaking under its own weight. "It was a struggle," Halzen said. Finally, one of the engineers found a firm in Venice with the right equipment, "and we eventually made it work."
There were also logistical challenges. Because of limited space at the South Pole station, the IceCube team could deploy no more than 40 workers at a time. Construction crews had to be rotated in by a three-hour flight from the main U.S. facility at McMurdo Station. "It was like solving a crossword puzzle," Halzen said. "Everything and everyone had to fit just perfectly." By the 2008 drilling season, they had put in 20 strings of detectors. The 86th and final string was laid Dec. 19.
IceCube has already found a strange asymmetry in the cosmic rays reaching Earth from the Southern Hemisphere, in the direction of a supernova named Vela. "Nobody knows what it means; that's why it's interesting," Halzen said.
For all his work in pushing to get IceCube built, Halzen has never been to the South Pole. During the building phase, he said, he was loath to take up valuable space that could have been used for an engineer or construction worker.
"I have had no use to go there, but maybe now," Halzen said. "Last week it was colder here in Madison that at the South Pole."
Kepler Planet Hunter Finds 1,200 Possibilities
By DENNIS OVERBYE
Published: February 2, 2011
New York Times
In a long-awaited announcement, scientists operating NASA’s Kepler planet-hunting satellite reported Wednesday that they had identified 1,235 possible planets orbiting other stars, potentially tripling the number of known planets in the universe.
Of the new candidates, 68 are one-and-a-quarter times the size of the Earth or smaller — smaller, that is, than any previously discovered planets outside the solar system. Fifty-four of the possible exoplanets lie in the so-called habitable zones of stars dimmer and cooler than the Sun, where temperatures should be moderate enough for liquid water; four of these are less than twice the size of Earth, and one is even smaller.
Astronomers said that it would take years to confirm that all these candidates are really planets — by using ground-based telescopes to try to measure their masses, for example — and not just double stars or other strange systems. Many of them might never be vetted because of the dimness of their stars and the lack of telescope time and astronomers to do it all. But statistical tests of a sample of the list suggest that 80 to 95 percent of the objects on it were real, as opposed to blips in the data.
“It boggles the mind,” said William Borucki of the Ames Research Center, Kepler’s leader.
At first glance, none of them appears to be another Earth, the kind of cosmic Eden fit for life as we know it. But the new results represent only four months' worth of data from a three-and-a-half-year project, and they have left astronomers enthusiastic about the chances of ultimately reaching their goal of finding Earth-like planets in the universe.
“For the first time in human history we have a pool of potentially rocky habitable zone planets,” said Sara Seager of M.I.T., who works with Kepler. “This is the first big step forward to answering the ancient question, ‘How common are other Earths?’ ”
Mr. Borucki noted that since the Kepler telescope surveys only one four-hundredth of the sky, the numbers extrapolated to some 20,000 habitable-zone planets within 3,000 light-years of Earth. He is the lead author of a paper that has been submitted to The Astrophysical Journal describing the new results.
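As a rough, back-of-the-envelope check on that extrapolation (not the team's published calculation), the arithmetic is simply the habitable-zone count scaled up by the surveyed sky fraction. A minimal Python sketch, with illustrative variable names:

# Scale Kepler's habitable-zone tally to the whole sky, assuming the
# candidates are representative of the 1/400th of the sky Kepler watches.
habitable_zone_candidates = 54
sky_fraction_surveyed = 1.0 / 400.0

estimated_total = habitable_zone_candidates / sky_fraction_surveyed
print(f"Extrapolated habitable-zone planets: ~{estimated_total:,.0f}")
# Prints ~21,600, the same order of magnitude as the "some 20,000"
# within 3,000 light-years cited above.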
In a separate announcement, to be published in Nature on Thursday, a group of Kepler astronomers led by Jack Lissauer of Ames said they had found a star with six planets — the most Kepler has yet found around one star — orbiting in close ranks in the same plane, no farther from their star than Mercury is from the Sun.
This dense packing, Dr. Lissauer said, seems to violate all the rules astronomers thought they had begun to discern about how planetary systems form and evolve.
“This is sending me back to the drawing board,” he said.
Summarizing the news from the cosmos, Geoffrey W. Marcy of the University of California, Berkeley, a veteran exoplanet hunter and a mainstay of the Kepler work, said, “There are so many messages here that it’s hard to know where to begin.” He called the Borucki team’s announcement “an extraordinary planet windfall, a moment that will be written in textbooks. It will be thought of as a watershed.”
Kepler, launched into orbit around the Sun in March 2009, stares at a patch of the Milky Way near the Northern Cross, measuring the brightness of 156,000 stars every 30 minutes, looking for a pattern of dips that would be caused by planets crossing in front of their suns.
The goal is to assess the frequency of Earth-like planets around Sun-like stars in the galaxy. But in the four months of data analyzed so far, a Kepler looking at our own Sun would be lucky to have seen the Earth pass even once. Three transits are required for a planet to show up in Kepler’s elaborate data-processing pipeline, which means that Kepler’s next scheduled data release, in June 2012, could be a moment of truth for the mission.
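The transit-counting logic behind that requirement can be sketched in a few lines. The Python below is a deliberately simplified, hypothetical stand-in for Kepler's actual pipeline: it scans a toy light curve for repeated dips in brightness, counts them, and estimates an orbital period from their spacing.

import numpy as np

def count_transits(times, flux, depth_threshold=0.999):
    # Flag samples where the star looks dimmer than the threshold, then
    # count the places where the flux first drops below that threshold;
    # each such drop marks the start of one transit-like dip.
    below = flux < depth_threshold
    starts = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    return len(starts), times[starts]

# Toy light curve: one brightness sample every 30 minutes for 120 days,
# with a shallow 0.2 percent dip repeating every 20 days.
t = np.arange(0, 120, 0.5 / 24)          # time in days
flux = np.ones_like(t)                   # normalized brightness
flux[(t % 20.0) < 0.25] = 0.998          # short, shallow transits

n_dips, dip_times = count_transits(t, flux)
period = np.median(np.diff(dip_times))
print(n_dips, "dips found; estimated period ~", round(period, 1), "days")
# Three or more evenly spaced dips are what the pipeline needs before
# flagging a candidate; a four-month window could not show three
# transits of a true Earth analog on a one-year orbit.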
Habitable planets, in the meantime, could show up around stars fainter than our Sun, where the habitable, or “Goldilocks,” zone would be smaller and closer to the star, and planets in it would rack up transits more quickly.
Attention has been riveted on Wednesday’s data release since June, when Kepler scientists issued their first list, of some 300 stars suspected of harboring planets, but held back another 400 for further study. In the intervening months, Mr. Borucki said, some of those candidates have been eliminated, but hundreds more have been added that would otherwise have been reported in June this year.
One of the sequestered stars was a Sun-like star in the constellation Cygnus that went by the name of KOI 157, for Kepler Object of Interest. It first came to notice in the spring of 2009 when the astronomers saw that it seemed to have five candidate planets, four with nearly the same orbital periods, and in the same plane, like an old vinyl record, Dr. Lissauer recalled. Two of them came so close that every 50 days one of them would look as large as a full moon as seen from the other, Dr. Lissauer calculated.
“I got very interested in this system,” Dr. Lissauer said. “Five was the most we had around any target.” Moreover, the planets’ proximity to one another meant that they would interact gravitationally. In the fall, a sixth planet — the innermost — was found.
By measuring the slight variations in transit times caused by the gravitational interference of the inner five planets with one another, Dr. Lissauer and his colleagues were able to calculate the masses and densities of those planets. These confirmed they were so-called super-Earths, with masses ranging from two to 13 times that of the Earth. But they were also puffy, containing a mixture of rock and gas, rather than being pure rock and iron like another super-Earth, Kepler 10b, a hunk of lava announced last month at a meeting in Seattle.
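The first step of that transit-timing technique is easy to illustrate, assuming you already have measured mid-times for one planet's transits: fit a constant-period ephemeris and look at the leftover "observed minus calculated" residuals. The numbers below are made up for illustration, and turning such residuals into masses requires full dynamical modeling that this sketch does not attempt.

import numpy as np

# Hypothetical transit mid-times (days) for one planet. A lone planet on a
# fixed orbit transits like clockwork, t_n = t0 + n * P; gravitational tugs
# from neighboring planets advance or delay each transit slightly.
observed = np.array([0.02, 13.01, 25.97, 38.98, 52.03, 65.00])
n = np.arange(len(observed))

# Least-squares fit of the linear ephemeris t_n = t0 + n * P.
P, t0 = np.polyfit(n, observed, 1)

# "O - C" (observed minus calculated) residuals: the transit-timing
# variations whose size and pattern, fed into a dynamical model,
# constrain the masses of the interacting planets.
residuals = observed - (t0 + n * P)
print(f"Best-fit period: {P:.3f} days")
print("O-C residuals (minutes):", np.round(residuals * 24 * 60, 1))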
Dr. Lissauer said, “It suggests that most super-Earths may be more like Neptune than Earth-like.”
Alan Boss, a planetary theorist at the Carnegie Institution of Washington, said the Kepler 11 system, as it is now known, should keep theorists busy and off the streets for a long time. “This system,” he wrote in an e-mail message, “certainly belongs in the pantheon of exoplanet systems: six planets lined up in a plane pointing toward us, waiting patiently for billions of years for humankind to develop sufficient technical capabilities to detect them.”
Mr. Borucki said the growing ubiquity of small planets as revealed by Kepler was a welcome relief from the early days of exoplanet research, when most of the planets discovered were Jupiter-size giants hugging their stars in close orbits, leading theorists to speculate that smaller planets might be thrown away from those environs by gravitational forces or even dragged right into their stars.
“Those little guys are still there,” he said, “and we’re delighted to see them.”