Higgs and the Amazing Technicolor Dreamcoat

Recently, I watched the documentary Particle Fever at Cine, a local theater here in Athens, GA. (The film is also streaming on Netflix.) It was a great look into the horde of people at CERN who got the Large Hadron Collider running. It also highlighted the discovery of the Higgs boson, the elementary particle in the Standard Model that gives everything mass. Before the discovery of the Higgs, there were two competing theories: the Standard Model and the Multiverse. The Standard Model expected the Higgs boson to have a mass of about 115 GeV, while the Multiverse theory expected a mass around 140 GeV. The LHC detected a signal at around 125 GeV, right smack dab in the middle. For reasons I don’t understand, the Standard Model won the battle, though many prominent scientists (Brian Greene, Stephen Hawking, Michio Kaku, and Neil deGrasse Tyson, to name a few) have supported the Multiverse hypothesis, and cashmoney prizes have been given to purveyors of the Multiverse theory (Alan Guth, Andrei Linde, and Alexei Starobinsky).


A simulated Higgs boson signal made by the collision of two proton beams.

Either way—whether you support the Standard Model or the Multiverse—everyone’s happy that the Higgs boson was found. That is, everyone except a team of researchers from the University of Southern Denmark. They’ve come to rain on the parade. In a new paper in Physical Review D, beautifully named “Technicolor Higgs boson in the light of LHC data”, they claim that the signal at 125 GeV could instead be the TC Higgs, the weird cousin of Higgs who was named after his uncle but everyone kind of forgets is there until Christmas, when they’re reminded of his tie-dye shirts, sock-filled sandals, and suspiciously bloodshot eyes. The TC Higgs (TC = technicolor) belongs to a model one step past the Standard Model that is actually much less interesting (to us non-particle-physicists) than the name would lead one to believe. It has something to do with electroweak gauge symmetry breaking and how that generates mass.

“The current data is not precise enough to determine exactly what the particle is. It could be a number of other known particles,” Mads Toudal Frandsen, the PI of the work, said in a press release.

This new analysis doesn’t debunk the Higgs finding; it just politely states that maybe the evidence isn’t as strong as we might think. The 125 GeV signal fits other theories, so maybe we shouldn’t be so quick to jump on the Higgs train. Frandsen hopes more data from the LHC will help distinguish between the Higgs and the TC Higgs. The LHC is set to resume research in early 2015, so we’ll have to be on the lookout for new happenings there. Whatever the LHC discovers, Peter Higgs and Francois Englert will be secure in the knowledge that their Nobel Prize comes with a no-takebacksies clause.

Big Ball Scars

Buckyballs are weird. They’re little soccer balls that collect electrons—one of the few materials used as electron acceptors in solar cells. They’re the largest objects to show particle–wave duality, and that’s quite a feat for sixty carbon atoms. If an alkali metal is shoved in the center, the whole thing acts metallic and, in some cases, superconducting. They even have a toy named after them. And we can’t forget that the 1996 Nobel Prize in Chemistry went to the discoverers of fullerenes. Needless to say (of course, I’m about to say it anyway), these large spherical molecules are a hot topic.

Some researchers are moving away from the “small” fullerenes, C60 and C70, to larger versions. But as the size grows, so do the problems. A recent report in ACS Nano by David Wales of the University of Cambridge details the defects that arise in larger configurations (sorry for the paywall link). This work is entirely theoretical, as causing defects in a specific arrangement is as difficult as (or maybe more difficult than) forming defect-free molecules. The molecule gets a “scar” when the pentagon–hexagon–pentagon structure is dislocated into a pentagon–heptagon–pentagon structure.


C860 and C1160 fullerenes with red scars.

Defects are unavoidable in large spheres, and their arrangement constitutes a Thomson problem: finding the configuration of mutually repulsive points on a sphere, one of eighteen unsolved mathematical problems on Smale’s list for the 21st century. The Thomson model was originally used to describe the configuration of electrons in an atom (and has since been abandoned after its failure to do so), but the underlying problem is now being applied to fullerenes.
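To get a feel for the Thomson problem, here’s a toy numerical sketch (my own illustration, not the method of the paper): scatter n unit charges randomly on a sphere and let their mutual Coulomb repulsion push them into a low-energy arrangement. For n = 4 they settle into a regular tetrahedron.

```python
import numpy as np

def thomson_energy(points):
    """Coulomb energy of unit charges: sum of 1/r over all distinct pairs."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)
    return (1.0 / dist[iu]).sum()

def relax_on_sphere(n, steps=3000, lr=0.01, seed=1):
    """Push n charges apart along Coulomb forces, reprojecting onto the
    unit sphere after every step (a crude constrained gradient descent)."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)                 # no self-force
        force = (diff / dist[..., None] ** 3).sum(axis=1)
        step = lr * force
        norms = np.linalg.norm(step, axis=1, keepdims=True)
        step /= np.maximum(norms / 0.1, 1.0)           # cap step length for stability
        p = p + step
        p /= np.linalg.norm(p, axis=1, keepdims=True)
    return p
```

For four charges the relaxed energy lands near 3.674, the known tetrahedral minimum; crank n up toward fullerene sizes and the defect bookkeeping in the paper becomes the hard, interesting part.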


Defects in different sized fullerenes.


The paper also looked at funnels, or outward-curving structures, and showed that defects can occur there, too. Interestingly, the fourth-to-last structure below (which is akin to a carbon nanotube) shows no defects (at least in the picture; the authors don’t discuss the absence of defects in the text).


Defects in curved surfaces.

So what does this mean? What can defects do? Well, they can disrupt aromaticity and conjugation, which changes the electronic structure of the molecule, diminishing conductivity and widening the band gap so that the molecules would be unsuitable for solar cells or other electronic applications. (That was a mouthful… fingerful?) But it’s not all bad. A paper from last year in the Journal of Physical Chemistry C (paywall again) states that defects in a graphene surface can lead to reactive sites. In other words, chemistry can happen. I don’t think it’s too far-fetched to say there would be a similar situation with fullerenes. After all, they’re just balled-up graphene.

Future Made of Virtual Insanity

A team from Utrecht University and the University of British Columbia has developed an amazing algorithm for “flexible muscle-based locomotion for bipedal creatures.” Their paper is available online (open-access) and they have a great video on Vimeo demonstrating their algorithm.

The “creatures” determine their own walking gait over repeated optimization cycles. There’s even a part in the video showing walkers from various cycles of the experiment. The first-cycle creature falls down almost immediately. As the cycles progress, the creatures walk a little longer and a little straighter before falling over, seemingly at random. Around cycle 1000 (or a few before), the creature finally gets the hang of walking and can move around without falling over.

The creatures (either humanoid or dinosaur-looking) aren’t given initial input on how to walk but are “driven entirely by simulated muscle-based actuation.” The simulated muscles aren’t even put in specific locations. The researchers start with an approximate structure and let the machine determine the best structure for walking. Essentially, the computer is evolving this creature, its muscles and its mechanism of walking, to make it stable.
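That evolve-and-keep-the-best loop can be sketched as a minimal evolution strategy (the paper itself uses a fancier optimizer on real muscle parameters; the eight-dimensional `walk_time` fitness below is purely a hypothetical stand-in for “seconds walked before falling”):

```python
import numpy as np

def evolve(fitness, dim, generations=500, pop=16, sigma=0.3, seed=0):
    """Minimal elitist (1+lambda) evolution strategy: sample offspring
    around the current best, keep any improvement, and slowly shrink
    the mutation step (much like the walkers steadying over cycles)."""
    rng = np.random.default_rng(seed)
    parent = rng.normal(size=dim)
    best_score = fitness(parent)
    for _ in range(generations):
        offspring = parent + sigma * rng.normal(size=(pop, dim))
        scores = np.array([fitness(o) for o in offspring])
        if scores.max() > best_score:
            parent, best_score = offspring[scores.argmax()], scores.max()
        sigma *= 0.99  # anneal exploration
    return parent

# Hypothetical fitness: pretend each vector encodes muscle routing and
# activation timings, with "walk time" peaking at some target vector.
target = np.linspace(-1.0, 1.0, 8)
walk_time = lambda x: -np.sum((x - target) ** 2)
best = evolve(walk_time, dim=8)
```

Early generations fall over immediately (random parameters, terrible fitness); hundreds of generations later the parameters have crept toward the optimum, which is exactly the progression the video shows.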

Not only can these creatures walk, they can run (or hop, in some cases), change direction, walk up and down small inclines, and, as with this unfortunate fellow, keep plodding along as boxes fly at him from all directions.


An apt metaphor for the human condition.

There are great possibilities for the future of this research: understanding how creatures evolved bipedal locomotion on Earth; simulating how creatures might evolve on another planet (if we find water on a planet in a Goldilocks zone, we could simulate what creatures living there would look like); making animations more realistic without the need for input from an actor; or, if we get really advanced, simulating an entire universe, creating simulated life that evolves and grows, maybe even watching as it develops a philosophy saying that it’s all just a simulation. It’s this kind of work that really makes you question our universe, but then makes you laugh when you see the guy above get pummeled by a giant box.

The Decline and Collapse of the Wavefunction Empire

Quantum mechanics is frustrating for everyone—theorists who don’t understand their own equations, experimentalists who have to jump around paradoxes to try to get a measurement, and students who just have no idea what’s happening. It also has the problem of possibly being wrong. With space and time and everything made of particles, QM believers have to accept the enigma of wavefunction collapse (when a spread of probable states turns into a single state). Does it collapse when we poke at it? Are we somehow controlling the fate of the universe (even if that fate involves only a single photon)?

Experimentalists have had a notoriously hard time understanding what the hell is going on, the main example being the famous double-slit experiment, where electrons act as both particles and waves. In wave form, electrons pass through both slits to create an interference pattern on the back wall. But when physicists tried to measure exactly which slit the electron passed through, the electron turned back into a particle and the interference pattern was lost. It looked like measuring changed the nature of the electron—that we changed the nature of the electron. Crazily enough, buckyballs, the large soccer-ball-shaped collections of carbon atoms, also show wave–particle duality.
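The math behind that vanishing pattern is simple enough to sketch: with both slits open the two path amplitudes add before squaring, producing fringes, while with which-path information the intensities add and the screen goes flat. A quick illustration (the wavelength and slit separation are illustrative numbers, and the single-slit envelope is ignored):

```python
import numpy as np

wavelength = 50e-12                      # illustrative electron de Broglie wavelength (m)
d = 1e-6                                 # illustrative slit separation (m)
theta = np.linspace(-1e-4, 1e-4, 2001)   # small angles across the screen

# Both slits open, no measurement: amplitudes add, then square -> fringes.
phase = np.pi * d * np.sin(theta) / wavelength
fringes = np.cos(phase) ** 2

# Which-path measured: the two single-slit intensities add -> no fringes.
no_fringes = np.full_like(theta, 0.5)
```

The `fringes` curve swings between bright maxima and dark minima; `no_fringes` is featureless. The entire mystery of the experiment lives in that one modeling choice of whether you add amplitudes or intensities.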

Now, in a recent Nature paper, Berkeley scientists (of course) have mapped the slow demise of a wavefunction, in the solid state no less. They put a circuit into a superposition (a collection of energy states, i.e., acting as a wave), shoved it in a box, and shot microwaves at it. The energy states alter the microwaves in subtle ways as they pass through the circuit—something the researchers can measure without collapsing the superposition. Over microseconds (horribly slow in quantum time), the researchers recorded a “collection of snapshots” as the circuit’s wavefunction collapsed. They go on to suggest this measuring-at-a-distance can be used to manipulate wavefunctions, calling it ‘quantum steering.’

Nature wrote a nice article detailing the experiments in terms of quantum mechanics. But people blew up the comments complaining that the research should have been described in terms of quantum field theory. QFT gets rid of the pesky particles and replaces them with fields (like turning a molecule of water into an entire ocean… kind of), completely negating any paradox of wave–particle duality. One commenter, Timothy Eichfeld, had a good explanation of what was happening:

It is not so much as ‘measurement’ as it is the disruption of the very fragile energy states holding the system in place. The ‘wavefunction’ does not know if a human or a frog or stray muon has interfered with its energy system – it does not have a state of being ‘watched’ or not, the issue is that it is unstable and at the top of it’s energy state. It will be knocked back to it’s lowest state depending on what vibration was absorbed by the interaction of the interference pattern on it’s energy signature.

Also in the comments is everyone pitching the book they’ve written on the subject, because, of course, that’s what comment sections on a prestigious scientific website are for.

Simplify the Universe

Particle interactions calculated with a single term, all done by hand?! That’s crazy. If you have no frame of reference for calculating particle interactions in quantum field theory, imagine going from thousands of puzzle pieces scattered everywhere to, suddenly, a single, unified picture without all the pesky pieces. All it takes is some new thinking and a little geometry.

In this new model, physicists describe the universe by an amplituhedron, a multidimensional geometric object. The volume of this object is equal to the scattering amplitude—the holy grail of particle physics—which physicists at the Large Hadron Collider use to describe particle interactions. In some amplituhedrons the amplitudes can be calculated directly. In others, diagrams called “twistors” are needed.

What’s more, they’ve found the solution to everything. The volume of an infinite-sided “master amplituhedron” represents the amplitudes of all physical processes. The italics are supposed to impress you. Interactions between a finite number of particles, what we humans normally consider, are contained on its various faces. Interesting to me, but probably not to anyone else, is that the master amplituhedron simplifies to a circle in 2D.


Amplituhedron representing an 8-gluon particle interaction, which normally needs ~500 pages of algebra.

The amplituhedron removes locality and unitarity. Particles that aren’t near each other in space or time can interact (what?), and the sum of the probabilities for all possible particle positions doesn’t have to be 1 (what?!). That works out for gravity, though, which explodes—yes, explodes—in equations with locality and unitarity. Connecting gravity to particle physics is a big deal—no one’s been able to do it yet. Jacob Bourjaily, one of the researchers, described this method as “a starting point to ultimately describing a quantum theory of gravity.”

Nima Arkani-Hamed, the lead author (the main man, the head honcho), gave a talk about amplituhedrons at the SUSY 2013 conference, which is posted online. Warning: the talk is very technical, but interesting nonetheless (if only to watch a man in shorts and a shirt two sizes too large give a spitfire professional talk). Although, in the talk he says amplituhedrons can be “explained to a smart junior high school student,” which left me feeling like a stupid graduate student.

Shut up and calculate!

Quantum mechanics is confusing. Little by little you wrap your head around the math, just going with the assumption that objects are simultaneously particles and waves, until you feel like maybe there’s something to all those wavefunctions and probabilities. Then you ask what it all means. The confusion rushes back as your professor shrugs.

In the 1920s, the founders of quantum mechanics gathered together in Copenhagen, Denmark to discuss what their math said about the physical universe. In the end, they decided there are fundamental limits to what we can know. They came up with the Copenhagen Interpretation, which, according to physicist David Mermin of Cornell University, means “shut up and calculate!”

Everyone accepted the Copenhagen Interpretation—if the inventors of the theory aren’t quite sure what it means, how are you supposed to know?! But a small group of scientists isn’t content with the ambiguity of quantum mechanics. With the right perspective, they argue, the meaning will become clear.

The Perimeter Institute for Theoretical Physics is hoping to find that clarity. As Christopher Fuchs, one of the Institute’s researchers, puts it, the right perspective will “write a story—literally a story, all in plain words—so compelling and so masterful in its imagery that the mathematics of quantum mechanics in all its exact technical detail will fall out as a matter of course.”

Fuchs proposes that the probabilities inherent in quantum mechanics come from the viewer, not from the object itself. He’s designed a new approach called QBism that, using a type of statistics called Bayesian inference, gets rid of the wavefunctions, amplitudes, and Hilbert-space operators—all necessary machinery in standard quantum mechanics—and replaces them with simple probabilities.
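Bayesian inference itself is nothing exotic: it’s just updating your odds when new information arrives. Here’s a toy sketch of the kind of agent-centered update QBism has in mind (the two “preparations” and their outcome statistics are made up for illustration):

```python
from fractions import Fraction

def bayes_update(prior, likelihoods, outcome):
    """Bayes' rule: posterior is proportional to prior times the
    likelihood of the outcome actually observed."""
    unnorm = {h: prior[h] * likelihoods[h][outcome] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Hypothetical: an agent holds a qubit from one of two preparations.
# Preparation A yields "up" 9 times in 10; B yields "up" half the time.
prior = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
likelihoods = {
    "A": {"up": Fraction(9, 10), "down": Fraction(1, 10)},
    "B": {"up": Fraction(1, 2), "down": Fraction(1, 2)},
}
posterior = bayes_update(prior, likelihoods, "up")  # A: 9/14, B: 5/14
```

Observing “up” shifts the agent’s belief toward preparation A. In QBism, that kind of personal-probability bookkeeping is the whole story; the quantum formalism is just a very constrained version of it.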

All in all it’s pretty exciting. I’ve been interested in quantum mechanics and what it really means since I was old enough to realize that we, as scientists, don’t really know everything about the universe. It will be a good day when we can explain the meaning of quantum mechanics to elementary school students in an interesting and understandable story. Maybe QBism will give us that story.

Probably not.

The abstract is the first thing anyone reads of a paper. And sometimes the only thing. Many people like to ask a question in their title, which they answer in the paper. This group, publishing in the Journal of Physics A: Mathematical and Theoretical, lets you know right away that their paper is probably not worth reading. At least you know they have a sense of humor. And, hey, a negative result is worth reporting (although chemistry journals don’t seem to think so).

Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?

Probably not.

The Next Generation (of Science Standards) to Boldly Go Where No One Has Gone Before

As I’m sure many people will point out today and in the coming weeks, the Next Generation Science Standards have been released. The plan sets forth standard teaching practices and subjects for K–12 students, which (unbeknownst to me) were previously unstandardized. Twenty-six states have agreed to follow the plan, including my current state of Georgia, but not my home state of Florida. The campaign was carried out entirely by the states with no federal aid—an accomplishment in its own right—and there is no mandate for the other twenty-four states to adopt the lesson plans.

As Bruce Alberts, Editor-in-Chief of Science and a proponent of the NGSS, says, “We must teach our students to do something in science class, not to memorize facts.”

Some conservatives are worried by the thought of teaching children about evolution and climate change, especially in Kansas and Texas. Last year, Barbara Cargill, the Republican chairwoman of Texas’ State Board of Education, said there was a “zero percent chance” of Texas adopting the plan. Other organizations, mostly science-based, are excited for it.

“This is revolutionary in many respects,” Michael Wysession, one of the scientists who helped with writing the standards, said. “First of all, it is incredible to have most states in the country adopting a single standard. Having each state do its own thing has been really detrimental to the science and engineering education of this country and this is a tremendous step forward.”

I, personally, am very excited about the new approach to teaching science. I have long thought that current methods weren’t reaching students. Even into college, I thought chemistry was boring because previously I had been given a list of facts and scientist names to memorize and regurgitate on a test. Hopefully, the standards will not only help understanding but boost interest as well. Now, if only there were standards in place for teaching mathematics, we’d regain our position at the forefront of science.

Goodnight, Sweet Prince

Today the Large Hadron Collider (LHC for short) will close its doors for two years of much-needed upgrades, upgrades that may open exploration into the existence of dark matter and extra dimensions.

The LHC represents everything science should be. Physicists and engineers from over 100 countries joined together to build the $9 billion particle collider with the sole purpose of answering fundamental questions about the universe. And, boy, did they. Even with a setback only nine days after opening–a magnet quench that took over a year to fix–the machine has detected new fundamental particles, the Higgs boson being the most important.

Physicists have been using the Standard Model since the 1970s to explain all sorts of natural phenomena, with the catch that no one has proven the model experimentally correct. The LHC tests fundamental aspects of the Standard Model, but doesn’t take a side on whether it is right or wrong. There are even some scientists out there who hope the LHC finds something new and unexpected to turn the physics community on its head. But last November, the LHC (tentatively) found what the majority of scientists had crossed their fingers for: the Higgs boson, the particle that gives everything mass.

It’s interesting to think that while we were building computers and walking on the Moon, no one knew why particles had mass, just that they did. With the Standard Model, scientists believe that the particles we know–electrons, quarks, etc.–get their mass from interaction with the Higgs field, and the amount of interaction with that field determines the mass; more interaction means a more massive particle.

But how do we know a Higgs field exists? If the field is given enough energy to “excite” it, a Higgs particle will pop into existence. Because the particle is so massive, it quickly decays–in fractions of fractions of a second–into streams of lighter particles, such as electrons and hadrons (composite particles built from quarks, like protons and neutrons). The LHC hopes to see these decay streams after smashing protons together at ultra-high energies. The decay streams will only be seen for a small fraction of collisions–1 possible Higgs boson per 10 billion collisions–and many decays need to be observed to guarantee a discovery. But with around 600 million collisions per second, the LHC should be able to see roughly one event every half hour.
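Taking those quoted figures at face value, the arithmetic is worth a quick sanity check (my own back-of-envelope, not CERN’s): 600 million collisions per second at one candidate per 10 billion collisions means a Higgs pops up roughly every 17 seconds, so a detected event only every half hour implies that just a small fraction of candidates decay through channels clean enough to reconstruct.

```python
# Back-of-envelope using the figures quoted above (not official CERN numbers).
collisions_per_sec = 600e6            # quoted collision rate
higgs_per_collision = 1 / 10e9        # "1 possible Higgs per 10 billion collisions"

# Time between Higgs candidates at full collision rate: about 17 seconds.
seconds_per_higgs = 1 / (collisions_per_sec * higgs_per_collision)

# If only one event is *seen* per half hour, the implied fraction of
# candidates with a cleanly reconstructable decay is about 1%.
implied_clean_fraction = seconds_per_higgs / 1800
```

If both quoted numbers hold, roughly 99% of produced Higgs bosons decay in ways the detectors can’t cleanly pick out of the background, which is why “many decays need to be observed” is doing so much work in that paragraph.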

Currently, scientists at the LHC think they have seen enough decays to confirm the existence of the Higgs boson, though it will take a while to analyze the massive amount of data coming from the collider. Now is the opportune time for the LHC to shut down and make upgrades. While we won’t hear of any new experiments for close to two years, there is still much to look forward to from the outcome of past runs.