A very long time ago in a faraway galaxy, a star blew up. When the flash of light finally reached Earth on October 6, 2013, nobody noticed. Not at first. Three hours of supernova photons streamed by before an old telescope perched on a mountain north of San Diego started snapping pics.
The 48-inch Samuel Oschin Telescope is a 60-year-old veteran of astronomical missions. Its current duty is to keep watch for transient astronomical events—things like gamma-ray bursts, gravitational lensing, and supernovae. So when it spied the flicker in the sky, it sent an electronic all-points bulletin to other telescopes around the world, which were able to capture, among other things, the supernova’s chemical signature.
The data astounded astronomers. This star, a red supergiant, ejected huge amounts of helium and hydrogen more than a year before fully exploding. That’s not how red supergiants were supposed to blow up. And the implications of that discovery, published today in Nature Physics, change what astronomers thought they knew about how spectacularly large stars die.
Run back the reel a few million years. The red supergiant (known posthumously as SN 2013fs) is about 10 to 15 times the mass of Earth’s sun, and it is running low on hydrogen and helium. But that doesn’t mean the fusion in its core just stops. Instead, like other dying red supergiants, it keeps burning heavier and heavier elements. Eventually, the superhot alchemy leaves behind only iron. “You don’t get any energy out of burning iron,” says Sean Couch, a theoretical astrophysicist at Michigan State University. “It just gets heavier and heavier, and eventually collapses under its own weight.”
Read more at Wired
Photo credit: PALOMAR/CALTECH