What We Mean When We Say Evidence-Based Medicine


The idea of “evidence-based medicine” is surprisingly recent. Before its arrival, much of medicine was based on clinical experience. Doctors tried to figure out what worked by trial and error, and they passed their knowledge along to those who trained under them.

Photo

The benefits of evidence-based medicine, when properly applied, are obvious. We can use evidence from treatments to help people make better choices. Credit: Jean-Christophe Bott/European Pressphoto Agency

Many physicians, myself included, were first introduced to evidence-based medicine through David Sackett’s handbook, first published in 1997. The book taught me how to use test characteristics, like sensitivity and specificity, to interpret medical tests. It taught me how to understand absolute risk versus relative risk. It taught me the proper ways to use statistics in diagnosis and treatment, and in weighing benefits and harms.
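
Those lessons reduce to a short calculation. Here is a minimal sketch in Python, using made-up sensitivity, specificity and prevalence figures rather than numbers from any real test, of how the same positive result can mean very different things depending on how common a disease is.

```python
# Illustrative only: the sensitivity, specificity and prevalence figures below
# are hypothetical, not values for any real test.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive result, via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same hypothetical test (90% sensitive, 95% specific) in two settings:
ppv_rare = positive_predictive_value(0.90, 0.95, prevalence=0.01)
ppv_common = positive_predictive_value(0.90, 0.95, prevalence=0.20)
print(f"PPV when 1% of patients have the disease:  {ppv_rare:.0%}")    # roughly 15%
print(f"PPV when 20% of patients have the disease: {ppv_common:.0%}")  # roughly 82%
```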

The book also firmly established in my mind the importance of randomized controlled trials, and the great potential of meta-analyses, which pool individual trials for greater statistical power. This influence is apparent in what I write for The Upshot.

But evidence-based medicine is often described quite differently.

Many of its supporters say that using evidence-based medicine can address the problems of cost, quality and access that bedevil the health care system. If we all agree upon best practices — based on data and research — we can reduce unnecessary care, save money and push people into pathways that yield better results.

Critics of evidence-based medicine, many of them from within the practice of medicine, point to weak evidence behind many guidelines. Some believe that medicine is more of an “art” than a “science” and that limiting the practice to a cookbook approach removes focus from the individual patient.

Some of these critics (as well as many readers who comment on my articles) worry that guidelines line the pockets of pharmaceutical companies and radiologists by demanding more drugs and more scans. Others worry that evidence-based medicine makes it harder to get insurance companies to pay for needed care. Insurance companies worry that evidence-based recommendations put them on the hook for treatment with minimal proven value.

Everyone is a bit right here, and everyone is a bit wrong. This battle isn’t new; it has been going on for some time. It’s the old guard versus the new. It’s the patient versus the system. It’s freedom versus rationing. It’s even the individual physician versus the proclamations of a specialized elite.

Because of the tensions in that last conflict, this debate has become somewhat political.

The benefits of evidence-based medicine, when properly applied, are obvious. We can use test characteristics and results to make better diagnoses. We can use evidence from treatments to help people make better choices once diagnoses are made. We can devise research to give us the information we are lacking to improve lives. And, when we have enough studies available, we can look at them together to make widespread recommendations with more confidence than we otherwise could.

When evidence-based medicine is not properly applied, though, it not only undermines its reasons for existence, but it also can lead to harm. Guidelines — and there are many — are often promoted as “evidence-based” even though they rely on “evidence” that is ill suited to the recommendations drawn from it. Sometimes, these guidelines are used by vested interests to advance an agenda or control providers.

Further, too often we treat all evidence as equivalent. I’ve lost track of the number of times I’ve been told that “research” proves I’m wrong. Not all research is the same. A hierarchy of quality exists, and we have to be sure not to overreach.

There is a difference between statistical significance and clinical significance. Get a large enough cohort together, and you will achieve the former. That by itself does not ensure that the result achieves clinical significance and should alter clinical practice.
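
A toy simulation makes the distinction concrete. The sketch below uses invented numbers and assumes the numpy and scipy libraries; it is not drawn from any actual trial.

```python
# A toy illustration of statistical vs. clinical significance.
# Effect size, spread and sample size are all hypothetical; assumes numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2_000_000                                           # enormous cohort per arm
control = rng.normal(loc=120.0, scale=15.0, size=n)     # e.g. systolic blood pressure
treated = rng.normal(loc=119.9, scale=15.0, size=n)     # a 0.1-point "benefit"

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"difference in means: {treated.mean() - control.mean():.2f}")
print(f"p-value: {p_value:.1e}")
# With a cohort this large the p-value is tiny ("statistically significant"),
# yet a 0.1-point shift would change no one's clinical care.
```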

Finally, we have to recognize that even when good studies are done, with clinically significant results, we shouldn’t over-extrapolate the findings. Just because something worked in a particular population doesn’t mean we should do the same things to another group and say that we have evidence for it.

Years ago, Trisha Greenhalgh and colleagues wrote an article in The BMJ describing evidence-based medicine as “a movement in crisis.” It argued that the field’s focus has shifted too far from disease to risk. This point, more than any other, highlights the problem evidence-based medicine seems to have in the public sphere.

Too many articles, studies and announcements are quick to point out that something or other has been proved to be dangerous to our health, without a good explanation of the magnitude of that risk, or what we might reasonably do about it.

Big data, gene sequencing, artificial intelligence — all of these may provide us with lots of information on how we might be at risk for various diseases. What we lack is knowledge about what to do with what we might learn.

If evidence-based medicine is to live up to its potential, the focus should be on that side of the equation as well, instead of taking best guesses and calling them evidence-based. That habit, probably more than anything else, has made the term so widely mistrusted.

 

 

This article was originally published in The New York Times.

 

Scientists Are Designing Artisanal Proteins for Your Body

Our bodies make roughly 20,000 different kinds of proteins, from the collagen in our skin to the hemoglobin in our blood. Some take the shape of molecular sheets. Others are sculpted into fibers, boxes, tunnels, even scissors.

A protein’s particular shape enables it to do a particular job, whether ferrying oxygen through the body or helping to digest food.

Scientists have studied proteins for nearly two centuries, and over that time they’ve worked out how cells create them from simple building blocks. They have long dreamed of assembling those elements into new proteins not found in nature.

But they’ve been stumped by one great mystery: how the building blocks in a protein take their final shape. David Baker, 55, the director of the Institute for Protein Design at the University of Washington, has been investigating that enigma for a quarter-century.

Now, it looks as if he and his colleagues have cracked it. Thanks in part to crowdsourced computers and smartphones belonging to over a million volunteers, the scientists have figured out how to choose the building blocks required to create a protein that will take on the shape they want.

In a series of papers published this year, Dr. Baker and his colleagues unveiled the results of this work. They have produced thousands of different kinds of proteins, which assume the shape the scientists had predicted. Often those proteins are profoundly different from any found in nature.

This expertise has led to a profound scientific advance: cellular proteins designed by man, not by nature. “We can now build proteins from scratch from first principles to do what we want,” said Dr. Baker.

Photo

Dr. David Baker in his lab at the University of Washington, where scientists are learning how to create cellular proteins to perform a variety of tasks. Credit: Evan McGlinn for The New York Times

Scientists soon will be able to construct precise molecular tools for a vast range of tasks, he predicts. Already, his team has built proteins for purposes ranging from fighting flu viruses to breaking down gluten in food to detecting trace amounts of opioid drugs.

William DeGrado, a molecular biologist at the University of California, San Francisco, said the recent studies by Dr. Baker and his colleagues represent a milestone in this line of scientific inquiry. “In the 1980s, we dreamed about having such impressive outcomes,” he said.

Every protein in nature is encoded by a gene. With that stretch of DNA as its guide, a cell assembles a corresponding protein from building blocks known as amino acids.

Selecting from twenty or so different types, the cell builds a chain of amino acids. That chain may stretch dozens, hundreds or even thousands of units long. Once the cell finishes, the chain folds on itself, typically in just a few hundredths of a second.

Proteins fold because of the forces among their amino acids: parts of the protein chain are attracted to one another, while other parts are repelled. Some bonds between the amino acids yield easily under these forces; rigid bonds resist.

The combination of all these atomic forces makes each protein a staggering molecular puzzle. When Dr. Baker attended graduate school at the University of California, Berkeley, no one knew how to look at a chain of amino acids and predict the shape into which it would fold. Protein scientists referred to the enigma simply as “the folding problem.”

The folding problem left scientists in the Stone Age when it came to manipulating these important biological elements. They could only use proteins that they happened to find in nature, like early humans finding sharp rocks to cut meat from bones.

We’ve used proteins for thousands of years. Early cheese makers, for example, made milk curdle by adding a piece of calf stomach to it. The protein chymosin, produced in the stomach, turned liquid milk into a semisolid form.

Today scientists are still looking for ways to harness proteins. Some researchers are studying proteins in abalone shells in hopes of creating stronger body armor, for instance. Others are investigating spider silk for making parachute cords. Researchers also are experimenting with modest changes to natural proteins to see if tweaks let them do new things.

To Dr. Baker and many other protein scientists, however, this sort of tinkering has been deeply unsatisfying. The proteins found in nature represent only a minuscule fraction of the “protein universe” — all the proteins that could possibly be made with varying combinations of amino acids.

“When people want a new protein, they look around in nature for things that already exist,” Dr. Baker said. “There’s no design involved.”

Crowdsourced Discovery

Dr. Baker has an elfin face, a cheerful demeanor, hair that can verge on chaotic, and a penchant for wearing T-shirts to scientific presentations. But his appearance belies a relentless drive.

After graduating from Berkeley and joining the University of Washington, Dr. Baker joined the effort to solve the folding problem. He and his colleagues took advantage of the fact that natural proteins are somewhat similar to one another.

New proteins do not just pop into existence; they all evolve from ancestral proteins. Whenever scientists figured out the shape of a particular protein, they were able to make informed guesses about the shapes of related ones.

Scientists also relied on the fact that many proteins are made of similar parts. One common feature is a spiral stretch of amino acids called an alpha helix. Researchers learned how to recognize the series of amino acids that fold into these spirals.

Photo

Credit: John Hersey

In the late 1990s, the team at the University of Washington was writing individual pieces of software to study complex proteins. The lab decided to create a common language for all this code, so that researchers could pool their collective knowledge about proteins.

In 1998, they launched a platform called Rosetta, which scientists use to build virtual chains of amino acids and then compute the most likely form they will fold into.
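
Rosetta itself is a large, sophisticated package. The sketch below is not Rosetta; it is only a cartoon of the underlying idea, borrowed from the textbook “HP” lattice model: enumerate the possible shapes of a short chain, score each with a crude energy function and keep the lowest-energy one. The sequence and the scoring rule are teaching simplifications, not anything used in the real software.

```python
# A cartoon of structure prediction on a 2D lattice (the classic "HP model").
# Teaching toy only; the sequence and scoring are illustrative, not Rosetta's.
from itertools import product

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_folds(sequence):
    """Yield every self-avoiding placement of the chain on a square lattice."""
    n = len(sequence)
    for choices in product(range(4), repeat=n - 1):
        coords = [(0, 0)]
        ok = True
        for c in choices:
            dx, dy = MOVES[c]
            nxt = (coords[-1][0] + dx, coords[-1][1] + dy)
            if nxt in coords:          # the chain may not cross itself
                ok = False
                break
            coords.append(nxt)
        if ok:
            yield coords

def energy(sequence, coords):
    """Score -1 for each pair of hydrophobic ('H') residues that touch on the
    lattice without being neighbors along the chain."""
    contacts = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):
            if sequence[i] == sequence[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:
                    contacts += 1
    return -contacts

seq = "HPHPPHHPH"                      # a made-up 9-residue sequence
best = min(enumerate_folds(seq), key=lambda c: energy(seq, c))
print("lowest energy:", energy(seq, best))
print("conformation :", best)
```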

A community of protein scientists, known as the Rosetta Commons, grew around the platform. For the past twenty years, they’ve been improving the software on a daily basis and using it to better understand the shape of proteins — and how those shapes enable them to work.

In 2005, Dr. Baker launched a program called Rosetta@home, which recruited volunteers to donate processing time on their home computers and, eventually, Android phones. Over the past 12 years, 1,266,542 people have joined the Rosetta@home community.

Step by step, Rosetta grew more powerful and more sophisticated, and the scientists were able to use the crowdsourced processing power to simulate folding proteins in greater detail. Their predictions grew startlingly more accurate.

The researchers went beyond proteins that already exist to proteins with unnatural sequences. To see what these unnatural proteins looked like in real life, the scientists synthesized genes for them and plugged them into yeast cells, which then manufactured the lab’s creations.

“There are subtleties going on in naturally occurring proteins that we still don’t understand,” Dr. Baker said. “But we’ve mostly solved the folding problem.”

Proteins and Pandemics

These advances gave Dr. Baker’s team the confidence to take on an even bigger challenge: They began to design proteins from scratch for particular jobs. The researchers would start with a task they wanted a protein to do, and then figure out the string of amino acids that would fold the right way to get the job done.

In one of their experiments, they teamed up with Ian Wilson, a virologist at Scripps Research Institute, to devise a protein to fight the flu.

Dr. Wilson has been searching for ways to neutralize the infection, and his lab had identified one particularly promising target: a pocket on the surface of the virus. If scientists could make a protein that fit snugly in that pocket, it might prevent the virus from slipping into cells.

Dr. Baker’s team used Rosetta to design such a protein, narrowing their search to several thousand chains of amino acids that might do the job. They simulated the folding of each one, looking for the combinations that might fit into the viral niche.

The researchers then used engineered yeast to turn the semifinalists into real proteins. They turned the proteins loose on the flu viruses. Some grabbed onto the viruses better than others, and the researchers refined their molecular creations until they ended up with one they named HB1.6928.2.3.

To see how effective HB1.6928.2.3 was at stopping flu infections, they ran experiments on mice. They sprayed the protein into the noses of mice and then injected them with a heavy dose of influenza, which normally would be fatal.

But the protein provided 100 percent protection from death. It remains to be seen if HB1.6928.2.3 can prove its worth in human trials.

“It would be nice to have a front-line drug if a new pandemic was about to happen,” Dr. Wilson said.

Photo

In Dr. Baker’s office are models of complex proteins. The human body makes roughly 20,000, each suited to a different task. Credit: Evan McGlinn for The New York Times

HB1.6928.2.3 is just one of a number of proteins that Dr. Baker and his colleagues have designed and tested. They’ve also made a molecule that blocks the toxin that causes botulism, and one that can detect tiny amounts of the opioid fentanyl. Yet another protein may help people who can’t tolerate gluten by cutting apart gluten molecules in food.

Last week, Dr. Baker’s team presented one of its most ambitious projects: a protein shell that can carry genes.

The researchers designed proteins that assemble themselves like Legos, snapping together into a hollow sphere. In the process, they can also enclose genes and can carry that cargo safely for hours in the bloodstream of mice.

These shells bear some striking resemblances to viruses, although they lack the molecular wherewithal to invade cells. “We sometimes call them not-a-viruses,” Dr. Baker said.

A number of researchers are experimenting with viruses as a means for delivering genes through the body. These genes can reverse hereditary disorders; in other experiments, they show promise as a way to reprogram immune cells to fight cancer.

But as the product of billions of years of evolution, viruses often don’t perform well as gene mules. “If we build a delivery system from the ground up, it should work better,” Dr. Baker said.

Gary Nabel, chief scientific officer at Sanofi, said that the new research may lead to the invention of molecules we can’t yet imagine. “It’s a new territory, because you’re not modeling existing proteins,” he said.

For now, Dr. Baker and his colleagues can only make short-chained proteins. That’s due in part to the cost involved in making pieces of DNA to encode proteins.

But that technology is improving so quickly that the team is now testing longer, bigger proteins that might do more complex jobs — among them fighting cancer.

In cancer immunotherapy, the immune system recognizes cancer cells by the distinctive proteins on their surface. These treatments rely on antibodies, each of which can recognize only a single protein.

Dr. Baker wants to design proteins that trigger a response only after they lock onto several kinds of proteins on the surface of cancer cells at once. He suspects these molecules will be better able to recognize cancer cells while leaving healthy ones alone.

Essentially, he said, “we’re designing molecules that can do simple logic calculations.” Indeed, he hopes eventually to make molecular machines.

Our cells generate fuel with one such engine, a gigantic protein called ATP synthase, which acts like a kind of molecular waterwheel. As positively charged protons pour through a ring of amino acids, it spins a hundred times a second. ATP synthase harnesses that energy to build a fuel molecule called ATP.

It should be possible to build other such complex molecular machines as scientists learn more about how big proteins take shape, Dr. Baker said.

“There’s a lot of things that nature has come up with just by randomly bumbling around,” he said. “As we understand more and more of the basic principles, we ought to be able to do far better.”

 

This article was originally published in The New York Times.

 

Fact or old wives’ tale? A change in the weather can make bones and joints ache.

Fact or old wives’ tale? A change in the weather can make bones and joints ache. A new study has an answer: old wives’ tale.

Other studies have looked at whether an increase in humidity, rainfall or barometric pressure can bring on pain, but never with as much data as in this newest study, in BMJ. Researchers looked at medical records of 11,673,392 Medicare outpatient visits. Matching the dates of the visits to local weather reports, they found that 2,095,761 of them occurred on rainy days.

Using probability estimates, they predicted how many of those visits were for a condition related to joint or back pain.

 

After controlling for age, sex, race and various chronic conditions, including rheumatoid arthritis, they found that more visits for bone and joint pain happened on dry days than wet ones — 6.39 percent for dry and 6.35 percent for wet days, a difference so small as to have no clinical significance.

“The weather is not causing joint pain,” said the lead author, Anupam B. Jena, an associate professor of health care policy at Harvard. “But when it’s raining and you have joint pain you attribute it to the weather. When it’s sunny and you have joint pain, you don’t. People get upset when you say this.” He acknowledged a link still might be found with a larger and more detailed analysis.

This article was originally published in The New York Times.

A Vanderbilt neuroscientist has discovered an unusual but shockingly fruitful way to study our most enigmatic organ.

One day in June 2012, at São Paulo’s international airport, Suzana Herculano-Houzel hauled two heavy suitcases onto an X-ray-machine conveyor belt. As the luggage passed through the scanner, the customs agent’s eyes widened. The suitcases did not contain clothes, toiletries or any of the usual accouterments of travel. Instead, they were stuffed with more than two dozen curiously wrapped bundles, each enclosing an amorphous blob suspended in liquid. The agent asked Herculano-Houzel to open her bags, suspecting that she was trying to smuggle fresh cheese into the country; two people had been caught doing exactly that just moments before.

“It’s not cheese,” Herculano-Houzel said. “It’s only brains.”

She was a neuroscientist, she explained, and she had just returned from an unusual — but completely legal — research expedition in South Africa, where she collected brains from a variety of species: giraffes, lions, antelopes, mongooses, hyenas, wildebeests and desert rats. She was taking the organs, sealed in containers of antifreeze, back to her lab in Rio de Janeiro. The customs agents reviewed her extensive collection of permits and documentation, and they eventually let her pass with suitcases in tow.

In the last 12 years, Herculano-Houzel, now a researcher and professor at Vanderbilt University in Nashville, has acquired the brains of more than 130 species. She has brains from commonplace creatures — mice, squirrels, pigeons — and more exotic ones, like Goodfellow’s tree kangaroo and the Tasmanian devil. She has brains from bees and an African elephant. She prefers to obtain whole brains if possible, and she goes to great lengths to protect the organs during transport.

A brain is a precious thing, containing many of science’s greatest unsolved mysteries. What we don’t know about the brain still eclipses what we do. We don’t know how the brain generates consciousness. We aren’t sure why we sleep and dream. The precise causes of many common mental illnesses and neurological disorders elude us. What is the physical form of a memory? We have only inklings. We still haven’t cracked the neural code: that is, how networks of neurons use electrical and chemical signals to store and transmit information. Until very recently — until Herculano-Houzel published an important discovery in 2009 — we did not even know how many cells the human brain contained. We only thought we did.

Before Herculano-Houzel’s breakthrough, there was a dominant narrative about the human brain, repeated by scientists, textbooks and journalists. It went like this: Big brains are better than small brains because they have more neurons, and what is even more important than size is the brain-to-body ratio. The most intelligent animals have exceptionally large brains for their body size. Humans have a brain seven times bigger than you would expect given our overall size — an unrivaled ratio. So, the narrative goes, something must have happened in the course of human evolution to set the human brain apart, to swell its proportions far beyond what is typical for other animals, even for our clever great-ape and primate cousins. As a result, we became the bobbleheads of the animal kingdom, with craniums spacious enough to accommodate trillions of brain cells: 100 billion electrically active neurons and 10 to 50 times as many supporting cells, known as glia.

Photo

Suzana Herculano-Houzel holding a wildebeest brain in her lab at Vanderbilt University. Credit: Jeff Minton for The New York Times

By comparing brain anatomy across a large number of species, Herculano-Houzel has revealed that this narrative is seriously flawed. Not only has she upended numerous assumptions and myths about the brain and rewritten some of the most fundamental rules about how brains are constructed — she has also proposed one of the most cohesive and evidence-based frameworks for human brain evolution to date.

But her primary methods are quite different from others’ in her field. She doesn’t subject living brains to arrays of electrodes and scanners. She doesn’t divide brains into prosciutto-thin slices and carefully sandwich them between glass slides. She doesn’t seal brains in jars of formaldehyde for long-term storage. Instead, she demolishes them. Each organ she took such great care to protect on her trans-Atlantic journey was destined to be liquefied into a cloudy concoction she affectionately calls “brain soup” — the key to her groundbreaking technique for understanding what is arguably the most complex congregation of matter in the universe. In dismantling the brain, she has remade it.

The history of studying the brain is a history of learning how to perceive it, literally and figuratively. Just as technological advances have allowed us to better examine the moon, stars and planets, they have significantly improved our ability to chart and inspect the thick constellations of cells in our own heads. The prevailing metaphor for the brain has long been a piece of biological machinery, but our conception of that machine has evolved in parallel with our technological prowess. At first, the brain was viewed as the body’s coolant system, a hydraulic pump for “animal fluids.” Then it was a collection of self-winding springs or an “enchanted loom,” then a clock, an electromagnet, a telephone switchboard, a hologram and, most recently, a biological supercomputer.

Despite all the advances we’ve made, there are still many fundamental aspects of the brain that we do not understand at all. This is mainly because the brain is a many-layered mystery, demanding intense scrutiny at vastly different scales, from the molecular to the perceptual. But it’s also because neuroscience has sometimes neglected, rushed or botched what should be its most elementary tasks, chasing holy grails before establishing primary principles. Case in point: We are well into the 21st century, and we are only now getting an accurate census of the brain’s cellular building blocks.

In part because the scientific portrait of the brain remains so patchy, it has long been embellished with numerous myths and misconceptions. For example, there’s no truth to the idea that the brain is half android and half artist, with a left hemisphere dedicated to logic and analytical thinking and a right hemisphere for intuition and creativity. You don’t have a primitive reptilian brain tucked inside your more sophisticated mammalian tissues. You can’t increase brainpower by eating nuts, blueberries, fish and other so-called brain foods. Entire books have been written to counter such falsehoods.

Misinformation about the brain is not isolated to the general public; it is surprisingly prevalent in academia too. By the time Herculano-Houzel was old enough to pursue graduate studies in science, she had long been inoculated with a strong dose of skepticism. When she was growing up in Brazil, her parents emphasized that “it was a good thing to not take somebody’s word, no matter how respected they were,” she recalls, “and rather ask: ‘Why? How do you know that?’ ” It was not until she earned a Ph.D. in neuroscience in Europe and returned to Rio de Janeiro in 1999, however, that she confronted neuromythology head on.

Instead of pursuing postdoctoral studies — which she thought would be too intellectually restricting — she persuaded the city’s recently opened Museum of Life to offer her a job giving presentations on the brain to the public. One of her first projects was a survey regarding general beliefs about the brain: E.g., did consciousness depend on the brain? Did drugs physically alter the brain? She was shocked to learn that 60 percent of college-educated people in Rio de Janeiro believed that humans used only 10 percent of their brains — a longstanding fallacy. In truth, the brain is highly active across its entirety just about all the time, even when we are spacing out or sleeping. She couldn’t let it go. Where did such a prevalent falsehood come from? How did it spread?

She started looking for clues in research papers and popular science writing. In the foreword to the first edition of Dale Carnegie’s “How to Win Friends and Influence People,” the American psychologist William James is misquoted as declaring that “the average man develops only 10 percent of his latent mental ability.” In the ’30s and ’40s, another pioneering psychologist, Karl Lashley, discovered that he could scoop out large portions of a rat’s brain without seriously impairing its ability to solve a maze. Herculano-Houzel also recalled that early editions of the textbook “Principles of Neural Science,” along with countless studies, claimed that the human brain contained at least 10 times as many glial cells as neurons. Glia are now known to be every bit as important as neurons, facilitating electrical and chemical communication, clearing cellular detritus, protecting and healing injured brain cells and guiding the development of new neural circuits. But until the mid- to late 20th century, scientists mostly regarded glia as passive scaffolding for neurons. Perhaps the widely cited fact that glia outnumbered neurons by at least 10 to one helped cement the notion that only 10 percent of the brain really mattered. But where were the studies establishing the oft-repeated glia-to-neuron ratio?

After an exhaustive search, Herculano-Houzel concluded that there was no scientific basis for the claim. She and her collaborator Christopher von Bartheld, a professor at the University of Nevada School of Medicine, published a paper last year summing up their detective work. In the 1950s and ’60s, a few scientists proposed that glia were about 10 times as common as neurons, based on studies of small brain regions, ones that happened to have particularly high glia-to-neuron ratios. In a decades-long game of telephone, other researchers repeated these estimates, extrapolating them to the entire brain. Science journalists parroted the numbers. Soon this misconception spread to textbooks and educational websites run by the government and respected scientific organizations. Even the latest edition of “Principles of Neural Science” states that the brain as a whole contains “two to 10 times more glia than neurons.” The truth is that not a single study has ever demonstrated this. “I realized we didn’t know the first thing about what the human brain is made of, much less what other brains were made of, and how we compared,” Herculano-Houzel says.

So she decided to find out herself. For decades, the standard method for counting brain cells was stereology: slicing up the brain, tallying cells in thin sheets of tissue splayed on microscope slides and multiplying those numbers by the volume of the relevant region to get an estimate. Stereology is a laborious technique that works well for small, relatively uniform areas of the brain. But many species have brains that are simply too big, convoluted and multitudinous to yield to stereology. Using stereology to take a census of the human brain would require a daunting amount of time, resources and unerring precision.


In a study from the 1970s, Herculano-Houzel discovered a curious proposal for an alternative to stereology: Why not measure the total amount of DNA in a brain and divide by the average amount of DNA per cell? The problem with this method is that neurons are genetically diverse, the genome is a highly dynamic structure — continuously unraveling and reknitting itself to amplify or silence certain genes — and even small errors in measuring quantities of DNA could throw off the whole calculation. But it gave Herculano-Houzel a better idea: “Dissolve the brain, yes! But don’t count DNA. Count nuclei!” — the protein-rich envelopes that enclose every cell’s genome. Each cell has exactly one nucleus. “A nucleus is a nucleus, and you can see it,” she says. “There is no ambiguity there.”

By 2002, Herculano-Houzel had moved from the Museum of Life to a new science-communications job at the Federal University of Rio de Janeiro, where she also had access to lab space and the freedom to pursue research of her choice. She began experimenting with rat brains, freezing them in liquid nitrogen, then puréeing them with an immersion blender; her initial attempts sent chunks of crystallized neural tissue flying all around the lab. Next she tried pickling rodent brains in formaldehyde, which forms chemical bridges between proteins, strengthening the membranes of the nuclei. After cutting the toughened brains into little pieces, she mashed them up with an industrial-strength soap in a glass mortar and pestle. The process dissolved all biological matter except the nuclei, reducing a brain to several vials of free-floating nuclei suspended in liquid the color of unfiltered apple juice.

To distinguish between neurons and glia, Herculano-Houzel injected the vials with a chemical dye that would make all nuclei fluoresce blue under ultraviolet light, and then with another dye to make the nuclei of neurons glow red. After vigorously shaking each vial to evenly disperse the nuclei, she placed a droplet of brain soup on a microscope slide. When she peered through the eyepiece, the globular nuclei looked like Hubble photos of distant stars in the black velvet of space. Counting the number of neurons and glia in several samples from each vial, and multiplying by the total volume of liquid, gave Herculano-Houzel her final tallies. By reducing a brain, in all its daunting intricacy, to a homogeneous fluid, she was able to achieve something unprecedented. In less than a day, she accurately determined the total number of cells in an adult rat’s brain: 200 million neurons and 130 million glia.
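
The arithmetic behind those tallies is simple, even if the bench work is not. Here is a minimal sketch of the scaling step, with invented counts and volumes rather than measurements from any real sample.

```python
# The arithmetic behind the "brain soup" estimate, with invented numbers only.
# Count nuclei in a few small aliquots of known volume, scale to the whole
# suspension, and split the total by the fraction of nuclei that stain as neurons.

aliquot_volume_ml = 0.00001            # volume examined under the microscope per sample (assumed)
sample_counts = [152, 148, 161, 155]   # nuclei counted in each aliquot (hypothetical)
neuron_fraction = 0.61                 # fraction of nuclei glowing red with the neuron stain (assumed)
total_volume_ml = 40.0                 # total volume of the dissolved-brain suspension (assumed)

mean_count = sum(sample_counts) / len(sample_counts)
density_per_ml = mean_count / aliquot_volume_ml
total_nuclei = density_per_ml * total_volume_ml

neurons = total_nuclei * neuron_fraction
other_cells = total_nuclei - neurons
print(f"estimated total cells: {total_nuclei:,.0f}")
print(f"estimated neurons:     {neurons:,.0f}")
print(f"estimated other cells: {other_cells:,.0f}")
```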

In the early years of Herculano-Houzel’s research, especially once she graduated from rats to primates, she encountered substantial resistance from her peers. Here was a young, essentially unknown scientist from Brazil not only proposing a radically different way of studying the brain but also contradicting centuries of conventional wisdom. “At first I shared the same opinion as everyone else,” says Andrew Iwaniuk, an evolutionary neuroscientist at the University of Lethbridge in Alberta, Canada. “This is insane. This can’t possibly work. What do you mean you are blending an entire brain and coming up with the number of neurons?” As Herculano-Houzel’s data set expanded, however, reservations began to recede. In the last few years, several independent teams of scientists have validated the brain-soup technique with carefully controlled studies, winning the confidence of most researchers. “The technique works — no doubt about that,” Iwaniuk says. “It’s hundreds or thousands of times faster than using traditional methods. And that means we can rapidly compare so many different species and see what might make the human brain special — or not.”

Rat brains were just the beginning. “Once I realized I could actually do this,” Herculano-Houzel told me, “there was a whole world of questions out there just waiting to be examined.” Which is to say, there was a whole planet of brains waiting to be dissolved.

By 2016, Herculano-Houzel had migrated to Vanderbilt University. When we walked through the doors to her new lab, one of the first things I noticed was a row of four large white freezers covered with souvenir magnets: a toadstool-red crab with jiggling legs, the Loch Ness monster sporting a plaid bonnet and a bear chasing a human stick figure with the caption “Canadian fast food!” “That’s one of my airport pastimes — the gaudier the better,” Herculano-Houzel told me with a characteristically boisterous laugh. Her personality, much like her approach to science, is defined by exuberance. During our conversations, she punctuated her speech with vigorous head shakes and staccato guffaws, leaning halfway across the table when she really got excited. Unlike many of her peers, Herculano-Houzel does not shy away from a little showmanship; a TED Talk she gave has been viewed nearly two and a half million times. One neuroscientist I spoke to referred with mild disapproval to her “self-aggrandizement.”

She swung one of the freezers open, revealing shelves crowded with Tupperware boxes. Each container was labeled with a bit of masking tape inked with a numerical ID: Box 19, Box 6, Box 34. “What’s in here?” I asked. “Oh, all sorts,” she said. “About 200 different brains. Birds and mammals.” One particularly large brain sat in its plastic bin as casually as a sliced cantaloupe. As I leaned in for a closer look, its distinctive exterior came into view: a labyrinth of flesh, now sallow and cold, that once fizzed with electric current and pulsated with freshly pumped blood. “Here you have different carnivoran species,” she continued. “Lion, leopard, dogs, cats, raccoons. There are ostrich brains. A few primates. A bunch of giraffes — their spinal cords as well. Four meters’ worth of spinal cord.”

At this point, Herculano-Houzel has published studies on the brains of more than 80 species. The more species she has compared, the clearer it has become that much of the dogma about brains and their cellular components is simply wrong. First of all, a large brain does not necessarily have more neurons than a small one. She has found that some species have especially dense brains, packing more cells into the same volume of brain tissue as their spongier counterparts. As a rule, because their neurons are smaller on average, primate brains are much denser than other mammalian brains. Although rhesus monkeys have brains only slightly larger than those of capybaras, the planet’s largest rodents, the rhesus monkey has more than six times the number of neurons. Birds appear to have the densest brains of all, but their brains are not particularly large. An emu, one of the biggest birds alive today, has a brain that weighs about as much as an AA battery. Were there a bird with a brain the size of a grapefruit, however, it would probably rule the world.

The brain-soup technique further revealed that the human brain, contrary to the numbers frequently cited in textbooks and research papers, has 86 billion neurons and roughly the same number of glia — not 100 billion neurons and trillions of glia. And humans certainly do not have the most neurons: The African elephant has about three times as many, with a grand total of 257 billion. When Herculano-Houzel focused on the cerebral cortex, however — the brain’s wrinkled outermost layer — she discovered a staggering discrepancy. Humans have 16 billion cortical neurons. The next runners-up, orangutans and gorillas, have nine billion cortical neurons; chimpanzees have six billion. The elephant brain, despite being three times larger than our own, has only 5.6 billion neurons in its cerebral cortex. Humans seemed to possess the most cortical neurons — by far — of any species on earth.

A cross-section of a preserved human brain looks like a slice of gnarled squash, with an undulating cream-colored interior outlined by an intensely puckered gray rind. That rind — composed of layers of densely packed neurons and glia — is the cerebral cortex. Its deep grooves and ridges significantly increase its total surface area, providing more room for cells within the confines of the skull. All mammals have a cortex, but the extent to which the cortex is wrinkled depends on the species. Squirrels and rats have cortices as smooth as soft-serve, whereas human and dolphin brains look like heaps of udon noodles. Over the years, some researchers have proposed that the more corrugated the cortex, the more cells it contains, and the more intelligent the species. But no one had precise cell counts to back up those claims.


The cerebral cortex is the difference between impulse and insight, between reflex and reflection. It is essential for voluntary muscle control, sensory perceptions, abstract thinking, memory and language. Perhaps most profound, the cerebral cortex allows us to create and inhabit a simulation of the world as it is, was and might be; an inner theater that we can alter at will. “The cortex receives a copy of everything else that happens in the brain,” Herculano-Houzel says. “And this copy, while technically unnecessary, adds immense complexity and flexibility to our cognition. You can combine and compare information. You can start to find patterns and make predictions. The cortex liberates you from the present. It gives you the ability to look at yourself and think: This is what I am doing, but I could be doing something different.”

The sheer density of the human cortex dovetails with an emerging understanding of interspecies intelligence: It’s not that the human mind is fundamentally distinct from the minds of other primates and mammals, but rather that it is dialed up to 11. It’s a matter of scale, not substance. Many mental abilities once regarded as uniquely human — toolmaking, problem-solving, sophisticated communication, self-awareness — turn out to be far more widespread among animals than previously thought. Humans just manifest these talents to an unparalleled degree. Herculano-Houzel thinks the simplest explanation for this disparity is the fact that humans have nearly twice as many cortical neurons as any other species studied so far. How, then, did our species gain such a huge lead?

The standard explanation for our unrivaled intelligence is that humans bucked the evolutionary trends that restricted other animals. Somehow, perhaps because of a serendipitous genetic mutation millions of years ago, the human brain inflated far beyond the norm for a primate of our body size. But Herculano-Houzel’s careful measurements of dozens of primate species demonstrated that the human brain is not out of sync with the rest of primatekind. In both mass and number of cells, the brains of all primates, including humans, scale in a neat line from smallest to biggest species — with the exception of gorillas, orangutans and chimpanzees. The great apes, our closest evolutionary cousins, are the anomalies, with oddly shrunken brains considering their overall heft. While contemplating this incongruity, Herculano-Houzel remembered a book she read a few years earlier: “Catching Fire: How Cooking Made Us Human,” by the Harvard anthropologist Richard Wrangham.

Wrangham proposed that the mastery of fire profoundly altered the course of human evolution, to the extent that humans are “adapted to eating cooked food in the same essential way as cows are adapted to eating grass, or fleas to sucking blood.” Cooking neutralized toxic plant compounds, broke down proteins in meat and made all foods much easier to chew and digest, meaning we got many more calories from cooked foods than from their raw equivalents. Because our digestive systems no longer had to work as hard, they began to shrink; in parallel, our brains grew, nourished by all those extra calories. The human brain makes up only 2 percent of our body weight, yet it demands 20 percent of the energy we consume each day.

Herculano-Houzel realized that she could extend and modify this line of thought. In the wild, modern great apes spend about eight hours a day foraging just to meet their minimal caloric requirements, and they routinely lose weight when food is scarce. In the course of their evolutionary history, as they developed much larger bodies than their primate ancestors, with larger organs to match, their brains most likely hit a metabolic growth limit. Great apes could no longer obtain enough calories from raw plants to nourish brains that would be in proportion with their overall mass.

Herculano-Houzel tested this insight with math. Based on their body size, gorillas and orangutans should have brains at least as large as ours, with neuron counts to match. Knowing how much energy a neuron needs on average, however, and how much time an ape can spend foraging, Herculano-Houzel calculated that modern great apes are physiologically restricted to brains with about 30 billion neurons. There simply aren’t enough hours in the day, or enough calories in raw plants, to push them over that threshold. “That’s not something I thought about,” Wrangham says. “It’s an ingenious way of looking at things.”
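
The shape of that back-of-the-envelope calculation can be sketched as follows. Every number here is a placeholder chosen for illustration, not a value from Herculano-Houzel’s published work.

```python
# The shape of the metabolic-limit argument, with placeholder numbers only.
# (Herculano-Houzel's published estimates use carefully measured values;
# these round figures are chosen purely for illustration.)

kcal_per_hour_foraging = 250.0      # calories a great ape can gather per hour of foraging (assumed)
max_foraging_hours = 8.0            # hours per day available for feeding
kcal_per_day_body = 1800.0          # energy needed by the rest of the body (assumed)
kcal_per_billion_neurons = 6.0      # daily energy cost per billion neurons (assumed)

daily_intake = kcal_per_hour_foraging * max_foraging_hours
kcal_left_for_brain = daily_intake - kcal_per_day_body
max_neurons_billions = kcal_left_for_brain / kcal_per_billion_neurons

print(f"daily intake:            {daily_intake:.0f} kcal")
print(f"available for the brain: {kcal_left_for_brain:.0f} kcal")
print(f"neuron ceiling:          ~{max_neurons_billions:.0f} billion neurons")
```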

Cooking liberated our ancestors from this same physiological straitjacket and put us back on track to develop brains as large as expected for primates our size. And because primates have such dense brains, all that new brain mass rapidly added a huge number of neurons. It took 50 million years for primates as a group to evolve brains with around 30 billion neurons total. But in a mere 1.5 million years of evolution, the human brain gained an astounding 56 billion additional neurons. To use the metaphor of our time, cooking tripled the human brain’s processing power.

There is something almost comical about this revelation. For so long, we have struggled to keep the human brain perched on its pedestal. We have insisted that although we are the product of evolution just like any other animal, our evolutionary journey was special — that we inherited decently large brains from our ape ancestors and transformed them into the most formidable thinking machines on the planet. As it turns out, quite the opposite is true. The evolutionary path of the human brain is not one of inordinate growth, but rather a long-overdue game of catch-up.

Even if we now have more cortical neurons than any other species, the true significance of that discrepancy remains unclear. Consider that the elephant, which has only about a third as many cortical neurons as humans, is one of the smartest animals ever studied: It crafts tools, recognizes itself in the mirror and even seems to have some understanding of death. Likewise, the octopus — an invertebrate with no cerebral cortex, a meager 100 million neurons in its brain and 300 million more in its arms — is one of the most intelligent species in the ocean, capable of remembering individuals, opening complex puzzle boxes and escaping “escape-proof” tanks. Honeybees have minuscule brains, yet their talents for collaboration and communication exceed those of many more densely brained creatures. Then there are organisms like plants, which, despite having no neurons whatsoever, are exquisitely sensitive to their environments, adapting to changes in light and moisture, recognizing kin and eavesdropping on one another’s chemical alarm signals.

Ultimately, the brain-soup technique’s central strength — its reductionism — is also its weakness. By transforming a biological entity of unfathomable complexity into a small set of numbers, it enables science that was not previously possible; at the same time, it creates the temptation to exalt those numbers. In her book, “The Human Advantage,” Herculano-Houzel stresses the distinction between cognitive capacity and ability. We have about the same number of neurons as humans who lived 200,000 years ago, yet our respective abilities are vastly different. At least half of human intelligence derives not from biology but from culture — from the language, rituals and technology into which we are born. Perhaps that is also why parrots, dolphins and apes raised by scientists in intellectually demanding environments often develop a degree of intelligence not seen in their wild counterparts: Culture unlocks the brain’s latent potential.

For centuries, we have regarded the brain as a kind of machine: a ludicrously convoluted one, but a machine nonetheless. If we could only pick it apart, quantify and examine all its components, we could finally explain it. But even if we could count and classify every cell, molecule and atom, we would still lack a satisfying explanation of its remarkable behavior. The brain is more than a thing; it’s a system. So much of intelligence is neither within the brain nor in its environment, but vibrating through the space in between.

Bingeing on ice cream has physiological effects that can potentially strain the cardiovascular system

Q. Is there a cardiovascular difference between eating a pint of ice cream in one sitting versus eating it over a period of time, such as a week?

A. Bingeing on ice cream has physiological effects that can potentially strain the cardiovascular system, “but let’s be honest, tons of people have eaten a pint of ice cream without consequences, including myself,” said Dr. Martha Gulati, chief of cardiology at the University of Arizona College of Medicine and editor of the American College of Cardiology’s patient education website Cardiosmart.org.

Still, while most people will “go on with their life” after an ice cream binge, she said, “we worry about people who are at high risk for heart disease or who already have heart disease — they should be careful about eating a calorie-heavy meal” or indulging excessively in high-fat, sugar-laden desserts.

One study from 2004 of 209 heart attack patients in Israel determined that people were more likely to sustain a serious coronary event in the several hours after eating an unusually heavy meal. That study, however, relied on patients’ recollections of what they had eaten on various days, memories that can be skewed or inaccurate after a serious health scare. A later study that analyzed data from the Israeli national survey of acute coronary syndromes found patients rarely identified a big meal as the trigger for their heart attacks, linking fewer than 2 percent of heart attacks to overeating.

Still, fat-laden and sugary foods have an immediate physiological effect that can potentially increase the risk of heart attacks or strokes, “especially if you have heart disease or underlying coronary disease that hasn’t pronounced itself,” Dr. Gulati said.

Such foods lead to a surge in insulin and triglycerides, raise systolic blood pressure and heart rate, and cause blood platelets to become sticky and to clump, which can cause blockages in the small vessels of the heart and reduce blood flow to the heart. Those conditions could “eventually cascade to a heart attack,” if blood flow to the heart doesn’t improve, Dr. Gulati said.

Another good reason not to make a habit of eating a pint of ice cream in a single sitting is that over time it will lead to weight gain, which increases cardiovascular risk, she said, adding, “We advise ice cream in moderation.”

 

 

This article was originally published in The New York Times.

New Genome Scores Predict Breast Cancer Odds for Any Woman

The actress Angelina Jolie prompted droves of women to seek genetic testing after she revealed, in 2013, that a “faulty gene” called BRCA1 had given her an 87 percent chance of developing breast cancer.

In the face of those odds, Jolie had decided to have her breasts removed. “I chose not to keep my story private because there are many women who do not know that they might be living under the shadow of cancer,” the Oscar winner said.

The subsequent surge in women asking for DNA tests was dubbed the “Angelina effect.” Yet most never found out what they wanted to know. That’s because only 10 percent of women with a family history of breast cancer are ever found to have an inherited cancer gene.

Now Myriad Genetics, the Utah firm that tested Jolie’s genes, says it has started offering a new type of DNA test that could eventually tell any woman her risk of breast cancer.

The new test works very differently from older ones. Instead of checking notorious genes like BRCA1 and BRCA2, another gene whose variants are tightly linked to breast cancer, it scavenges for small clues distributed throughout a woman’s genome. Adding these together creates what is called a “polygenic” risk score. In some cases, women are learning that their potential risk of breast cancer is 60 percent or more.

“It’s like we’ve discovered another BRCA, but it is not one gene,” says Peter Kraft, an epidemiologist at Harvard University who is involved in genetic studies of breast cancer.

Scientists are calling polygenic risk scores a potential crystal ball. In addition to breast cancer, tests to predict who will get Alzheimer’s or suffer a heart attack are in development. Myriad is the first large company to market one, several doctors said.

The new predictions are the payoff of billions spent by the U.S. and other governments over the last decade on giant population studies that searched for the genetic causes of disease. By comparing people’s DNA, scientists began zeroing in on individual genetic letters—among the billions in a genome—that, statistically, appeared more often in those who got specific diseases like breast cancer.

“A poly-gene is a gene that acts in concert with other genes. It has a small effect, but when you put them all together, we found it did predict the risk of breast cancer,” says Myriad’s chief scientist, Jerry S. Lanchbury.

In its new test, Myriad combines 86 DNA variants with a person’s history, including information on how old a woman was at puberty. The results can be as strongly indicative of breast cancer risk as a mutation in the BRCA gene, but those odds will apply to many more women.
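
The mechanics of a polygenic score are simple to state, even if building a well-calibrated one is not. The sketch below is a generic illustration with invented variant names and effect sizes; it is not Myriad’s 86-variant panel or any published model.

```python
# A generic polygenic risk score, with invented variants and effect sizes.
# Not Myriad's test: real scores use per-variant odds ratios from large
# genome-wide association studies, plus careful calibration.
import math

# effect of carrying one copy of each risk allele, as a (hypothetical) odds ratio
variant_odds_ratios = {"rs_hypo_1": 1.10, "rs_hypo_2": 1.07, "rs_hypo_3": 0.94,
                       "rs_hypo_4": 1.15, "rs_hypo_5": 1.03}

# how many copies (0, 1 or 2) of each risk allele this hypothetical woman carries
genotype = {"rs_hypo_1": 2, "rs_hypo_2": 1, "rs_hypo_3": 0,
            "rs_hypo_4": 1, "rs_hypo_5": 2}

# the score is the sum of log odds ratios, weighted by allele count
score = sum(genotype[v] * math.log(odds) for v, odds in variant_odds_ratios.items())
relative_odds = math.exp(score)   # odds relative to someone carrying no risk alleles

baseline_lifetime_risk = 0.12     # roughly average lifetime breast cancer risk (approximate)
print(f"polygenic score (log-odds): {score:.3f}")
print(f"relative odds vs. baseline: {relative_odds:.2f}x")
print(f"rough lifetime risk:        {min(baseline_lifetime_risk * relative_odds, 1.0):.0%}")
```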

For now, Myriad’s risk score, which it began offering in September as part of its standard cancer test, is available only for women of European background who have a family history of cancer.

Extra research will be needed to build genetic predictors for blacks or Hispanics. Myriad says it is planning that work. A small company called Phenogen Sciences says it has a test it sells for $199 that’s valid for all races, although it is not yet widely used.

Ora Karp Gordon, a cancer doctor in Los Angeles, says the scores are helping patients she calls the “worried well” who don’t have a BRCA mutation but do have a frightening family history of cancer. For about one in five of these women, she says, the extra information determines whether they are really in a high-risk category or not. If they are, she can advise them to seek mammograms at a younger age and get MRI tests.

Gordon says the predictions aren’t yet certain enough to cause anyone to have her breasts removed, as Jolie did. She says one young patient, only 30, received an exceptionally high lifetime risk score of 72 percent. “It would be terrible if she chose to get preventive surgery,” says Gordon. That’s because until more follow-up research is done, the new scores can’t be taken at face value as someone’s “absolute” risk, she says.

The big question going forward is whether every woman should get the test. “That is where technology is going to push us, but I don’t think we have the infrastructure in medical care [yet] to give everyone a personalized and accurate risk assessment,” says Sara Pirzadeh-Miller, assistant director for cancer genetics at the University of Texas Southwestern Medical Center.

Risk scores can easily be calculated for anyone whose genome has been analyzed. That means it might not be long before direct-to-consumer testing companies get involved in predicting serious common diseases. It’s also becoming more common for newborns and even IVF embryos to be sequenced. At least one company says it plans to use risk scores to predict disease risk even before people are born.

 

 

This article was originally published in MIT News.

Scientists are vying for credit in the complex story of the gene-editing system

In the biological sciences today, many have probably heard of Crispr, the gene editing system that is transforming medicine. But fewer people are likely to know the story of Francisco J.M. Mojica, a professor at the University of Alicante in Spain, who coined the now famous name 16 years ago.

Crispr has sparked more than a half-billion dollars of investment and a contentious, ongoing legal fight over who controls the patent and intellectual property rights to the gene editing technology. Among the scientists involved, there’s another, equally important battle under way: for a place in Crispr history.

When it comes to science and technology, there has long been a tension between the desire to single out an individual inventor and the reality that breakthroughs usually result from an incremental, collective process. “Science likes origin stories,” says Kara Swanson, a law professor and historian of science at Northeastern University.

In the case of Crispr, Dr. Mojica isn’t leaving things to chance. He co-wrote two historical reviews and a chapter in an academic book that chronicle his own role, along with many other scientists, in identifying Crispr (which stands for clustered regularly interspaced short palindromic repeats) in bacteria and in the microorganisms called archaea. It took years of research for scientists to elucidate how Crispr serves as the immune system for these organisms. Only later did other scientists adapt and find a way to use the system for gene editing.

In the late 1980s and 1990s, Dr. Mojica devoted his attention to salt-loving archaea found in the marshes a short drive from Alicante, on the southeast coast of Spain. Studying the archaea’s genome, the young researcher noted clusters of regularly spaced repeats. Such repeats were also found in bacteria. He and other researchers set out to learn more about what function they might have.

As scientists published their results, different names and acronyms proliferated. Dr. Mojica’s favorite was one of his own: SRSR, for short regularly spaced repeats. A colleague working in a research group in the Netherlands, Ruud Jansen, was using a different scientific acronym, Spidr, for spacers interspersed direct repeats.

Feng Zhang of the Broad Institute participated in a panel discussion at the National Academy of Sciences international summit on the safety and ethics of human gene editing in December 2015, in Washington. PHOTO: SUSAN WALSH/ASSOCIATED PRESS

In a bid for greater cohesiveness in the field, Dr. Jansen suggested to Dr. Mojica that they propose a new name, something that everyone could agree to use in future papers. Trying to find an acronym that captured the system’s most salient features but still rolled off the tongue, Dr. Mojica came up with Crispr. He shared the idea with his wife, who told him that Crispr “might be a good name for a dog but perhaps not for a sequence,” he says. But Dr. Jansen was enthusiastic about what he called the “snappy” new moniker.

In today’s justifiable excitement over the myriad ways that the Crispr invention can be used—to delete, insert or edit DNA, to treat intractable diseases, and perhaps someday to edit deleterious genes from embryos—the humble early story of discovery, through years of basic research, can get overlooked. “There would be no gene editing without us,” says Dr. Mojica.

For scientists and their institutions, a lot rides on how the story is ultimately told. A decade after Crispr was named, other researchers showed how the system, and particularly the Cas9 enzyme that it produces, could be adapted for use in editing the DNA of plants, animals and humans. The Broad Institute of MIT and Harvard argues that a member of its faculty, Feng Zhang, and his team demonstrated how to use Crispr in this way. Jennifer Doudna of the University of California, Berkeley, her collaborator Emmanuelle Charpentier and their institutions also claim the invention. In the U.S., the Broad holds the patent, and a ruling by the U.S. patent board this past February affirmed the institute’s rights. The Berkeley group has filed an appeal in federal court that is pending.

The spoils already include royalties, venture-capital investment in spinoff companies, speaking engagements and prestigious honors. There is talk that someday one or more scientists could be recognized with a Nobel Prize. So it’s no surprise that multiple Crispr histories have emerged, each highlighting a particular angle.

In 2016, Eric Lander, the president and founding director of the Broad Institute, published a 7,500-word article in the journal Cell called “The Heroes of Crispr.” Dr. Lander says that he wanted to bring attention to the many players who advanced basic research into Crispr, but some critics thought that the piece diminished the accomplishments of the Broad’s key rivals and favored the role of Dr. Zhang. About the reaction to the piece, both positive and negative, Dr. Lander now says, “Different people may examine the same facts but see different perspectives.”

In June, Dr. Doudna and her former graduate student, Samuel H. Sternberg, published their own Crispr history, a book titled “A Crack in Creation.” The scientists say that they wanted to give an inside look at a transformative scientific discovery. “We viewed the writing of the book as part of a broader public engagement effort,” not related to the patent dispute, says Dr. Doudna.

Jennifer Doudna and a former graduate student co-wrote their own Crispr history. PHOTO: NICK OTTO FOR THE WASHINGTON POST/GETTY IMAGES

An advantage of writing your own history is the opportunity to offer a more nuanced account. In the patent ruling, the federal board had cited early statements by Dr. Doudna in which she seemed to express doubts about being able to use Crispr technology on animal and human DNA. In their own history, the scientists say they always expected it to work in these broader applications.

Nathaniel Comfort, a professor of the history of medicine at Johns Hopkins University, has written about the proliferation of Crispr histories. He says that today’s access to social media and a better understanding of the stories left out of institutional accounts have helped to generate “alternative narratives,” including those by secondary players or “‘losers’ in a battle.”

The controversy over Crispr comes at a time when scholars increasingly recognize what Graeme Gooday, professor of the history of science and technology at the University of Leeds and co-author of a book about patent law and inventions, calls “the pluralism of invention.” Patent disputes often challenge that concept. In the case of Alexander Graham Bell, his lawyers in the long-running patent battle over the telephone were eager to promote the idea of “a single invention by a single inventor at a single point in time,” says Christopher Beauchamp, a professor at Brooklyn Law School and the author of a book on the case.

Prof. Swanson of Northeastern says that an inventor’s victory in court often leads to histories stripped of the rich and varied contributions that resulted in the breakthrough. Others who had claims are typically “erased and dropped.”

It is too early to tell if something similar will happen when the Crispr patent battle is finally decided. The appeals court is expected to weigh in next year. Whatever the outcome regarding rights to the new technology, Dr. Mojica says that Crispr offers another lesson. Like history, he says, “Science often turns on serendipity.”

This article was originally published in The Wall Street Journal. Read the original article.

Sitting quietly for extended periods of time could be hurting your heart

Sitting quietly for extended periods of time could be hurting your heart, according to a surprising new study. It finds that the more people sit, the greater the likelihood that they will show signs of injury to their heart muscles.

We all have heard by now that sitting for hours on end is unhealthy, even if we also occasionally exercise. People who sit for more than about nine or 10 hours each day — a group that includes many of us who work in offices — are prone to developing diabetes, heart disease and other problems, and most of these risks remain relatively high, even if we exercise.

Excessive sitting also has been associated with heart failure, a condition in which the heart becomes progressively weaker and unable to pump enough blood to keep the rest of the body oxygenated and well. But how sitting, which seems to demand so little from the heart, could be linked to heart failure, a condition in which the heart cannot respond adequately to exertion, has been unclear.

So recently a group of cardiologists from around the world began to wonder about troponins.

Troponins are proteins produced by cardiac-muscle cells when they are hurt or dying. A heart attack releases a sudden tsunami of troponins into the bloodstream.

But even slightly elevated troponin levels, lower than those involved in heart attacks, are worrisome if they persist, most cardiologists believe. Chronically high troponin levels indicate that something is going wrong inside the heart muscle and that damage is occurring and accruing there. If the damage is not halted or slowed, it could eventually result in heart failure.

No research, however, had ever examined whether sitting was associated with high troponin levels.

So for the new study, published in Circulation, the researchers turned to existing data from the Dallas Heart Study, a large, ongoing examination of cardiac health among a group of ethnically diverse men and women, overseen by the University of Texas Southwestern Medical Center. The study’s participants had completed cardiac testing, given blood samples and health information and worn activity trackers for a week.

The researchers pulled information about more than 1,700 of these participants, excluding any who had heart disease or symptoms of heart failure, such as chest pain or shortness of breath.

They checked the men’s and women’s blood samples for troponins and the readouts from their activity trackers to see how much or little they had moved most days.

Then they made comparisons.

Many of the study participants turned out to be sitters, remaining sedentary for 10 hours or more on most days. Not surprisingly, those men and women rarely exercised.

Some of the men and women did work out, though, usually by walking. They were not exercising a lot, but the more exercise they undertook, the fewer hours they sat, on average.

And this physical activity, limited as it was, was associated with relatively normal levels of troponin. The people who moved the most tended to have lower amounts of troponin in their blood, although the statistical benefit was slight.

On the other hand, the people who sat for 10 hours or more tended to have above-average troponin levels in their blood. These levels were well below those indicative of a heart attack. But they were high enough to constitute “subclinical cardiac injury,” according to the study’s authors.

This relationship remained strong, even after the researchers controlled for other factors that could have influenced troponin levels, including age, gender, body mass index and cardiac function.
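To make “controlled for” concrete, here is a minimal sketch, assuming simulated data rather than the Dallas Heart Study itself: sedentary hours and the potential confounders enter a multiple regression together, so the coefficient on sitting reflects its association with troponin after age, sex, body mass index and the rest are held fixed. Every variable name and number below is illustrative.

```python
# Illustration of confounder adjustment by multiple regression, on simulated
# data. This is NOT the Dallas Heart Study analysis; numbers are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1700
df = pd.DataFrame({
    "sitting_hours": rng.uniform(4, 14, n),
    "age": rng.uniform(30, 65, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(28, 5, n),
})
# simulated outcome: troponin drifts up with sitting and age, plus noise
df["troponin"] = 0.3 * df["sitting_hours"] + 0.05 * df["age"] + rng.normal(0, 1, n)

# regress troponin on sitting while adjusting for the other covariates
model = smf.ols("troponin ~ sitting_hours + age + female + bmi", data=df).fit()
print(model.params["sitting_hours"])  # adjusted association with sitting
```

The adjusted coefficient answers the narrower question of whether sitting still tracks with troponin once the listed factors are accounted for; as the article notes, it cannot establish causation.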

Overall, sitting was more strongly associated with unhealthy troponin levels than exercise was with desirable amounts.

Of course, this was an observational study and can show only that sitting is linked to high troponin, not that it causes troponins to rise.

It also cannot explain how physical stillness might injure cardiac cells.

But the impacts are probably indirect, says Dr. James de Lemos, a cardiologist and professor at UT Southwestern Medical Center who oversaw the new study.

“Sedentary behavior is associated with obesity, insulin resistance and fat deposition in the heart, all of which can lead to injury to heart cells,” he says.

“The other side of the coin is what you are not doing while you are sitting,” he adds. You are not moving. Although this study found little benefit from exercise in terms of improving troponin levels, that result is probably related to how little movement people were doing, he says.

“Most of the cardiovascular literature, including other work from our group, suggests that both exercise and being less sedentary are important,” he says.

He and his colleagues are conducting a number of follow-up studies to look at whether sitting less, exercising more, or both affect troponin levels and the risk for subsequent heart failure, he says.

But for now and especially as we plan New Year’s resolutions, “we should consider that reducing sedentary behavior is an important part of a healthy lifestyle,” Dr. de Lemos says.

“Focus on both less sitting and more exercise,” he says. “Take the stairs. Park at the outside of the parking lot. Have walking or standing meetings.”

This article was originally published in The New York Times.  Read the original article.

Geneticists analyse DNA from preserved pup, more than 80 years after the last of its kind died.

The last known thylacine, a marsupial predator that once ranged from New Guinea to Tasmania, died on 7 September 1936 in a zoo in Hobart, Australia. The species’ complete genome, reported on 11 December in Nature Ecology and Evolution, offers clues to its decline and its uncanny resemblance to members of the distantly related dog family1.

“They were this bizarre and singular species. There was nothing else like them in the world at the time,” says Charles Feigin, an evolutionary developmental biologist at the University of Melbourne, Australia, who was involved in the sequencing effort. “They look just like a dog or wolf, but they’re a marsupial.”

People have been nothing but bad news for the thylacine (Thylacinus cynocephalus), commonly known as the Tasmanian tiger. The species’ range throughout Australasia shrivelled as early hunter-gatherers expanded across the region, and the introduction by humans of the dingo (Canis lupus dingo) to Australia several thousand years ago reduced numbers still further, leaving an isolated thylacine population clinging on only in Tasmania. European colonists in the nineteenth century saw the predators as a threat to their sheep, and paid a bounty of £1 per carcass. Thylacines were on the cusp of extinction in the wild when the rewards were ended in 1909, leading zoos to pay handsomely for the last few individuals.

Hundred-year-old pup

Geneticists sequenced DNA from a thylacine pup, specimen C5757, preserved in alcohol since it died in 1909. Credit: Benjamin Healley/Museums Victoria

Geneticists had previously sequenced the species’ mitochondrial genome — a short stretch of DNA that is maternally inherited — using hairs plucked from a thylacine stored at the Smithsonian Institution in Washington DC2. In the latest study, a team led by developmental geneticist Andrew Pask of the University of Melbourne obtained the much longer nuclear genome, by sampling tissue from a one-month-old thylacine that had been found in its mother’s pouch in 1909 and was preserved in alcohol.

The nuclear genome holds information about many more ancestors than a mitochondrial genome. The team saw a steep drop in genetic diversity suggesting that thylacine numbers began dwindling some 70,000–120,000 years ago, well before humans reached Australia. Similar patterns have been seen in the genome of the Tasmanian devil (Sarcophilus harrisii)3. Feigin suspects that a cooling climate shrank the habitats of both species, potentially making them more vulnerable to humans.

Although thylacines are not particularly closely related to the dog family, or canids — the two groups share a common ancestor that lived around 160 million years ago — the shapes of their heads are remarkably similar. This hints that both groups may have adapted similarly to their predatory lifestyles.

To test for such convergent evolution, Feigin and Pask’s team identified 81 protein-coding genes in which both canids and thylacines had acquired similar DNA changes, including some with roles in skull development. But none of the genes in which these changes occurred seemed to be evolving under natural selection in both lineages, so it’s unlikely that they were responsible for the species’ shared traits.
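As a toy illustration of that screening step, assuming invented sequences and none of the study’s real data: flag alignment positions where the thylacine and a canid share an amino acid that differs from an outgroup, the kind of shared, independently acquired change the team then examined with selection tests.

```python
# Toy screen for candidate convergent amino-acid changes. The sequences are
# invented; real analyses use genome-wide codon alignments and formal
# selection tests (e.g. dN/dS), not a three-sequence comparison like this.
thylacine = "MKTAYIAKQR"
canid     = "MKTAYLAKQR"
outgroup  = "MKSAYLGKQR"

convergent_sites = [
    i for i, (t, c, o) in enumerate(zip(thylacine, canid, outgroup))
    if t == c and t != o  # shared residue that differs from the outgroup
]
print(convergent_sites)  # candidate positions, here [2, 6]
```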

Instead of changes to the proteins themselves, the researchers propose that DNA that does not affect protein sequences — but instead influences how those proteins are expressed — underlies the long snouts and other features shared by the two groups.

“That’s a reasonable inference, given what we increasingly understand from development and the evolution of development,” says Sean B. Carroll, an evolutionary developmental biologist at the University of Wisconsin–Madison. New physical traits tend to arise when the expression of developmental pathways shared across animals is tweaked. Identifying the specific pathways and regulatory DNA sequences that shaped the thylacine’s skull will be difficult, Carroll says, because it requires identifying suspect thylacine DNA sequences and inserting them into another organism, such as a mouse or related marsupial, to see their effects.

Back from the dead

Michael Archer, a palaeontologist at the University of New South Wales in Sydney, sees the genome-sequencing effort as important progress towards his long-time vision of bringing the marsupial back from extinction. Advances in genome editing and reproductive biology, such as artificial wombs, provide a viable path to de-extinction, Archer says. “What they have done is provided the wherewithal to take the next giant step.”

But Beth Shapiro, an evolutionary geneticist at the University of California, Santa Cruz, whose team has sequenced the genomes of extinct animals such as the passenger pigeon, warns that genome projects cannot solve the problem of disappearing species.

“The thylacine is extinct, because we made it so. We cannot bring it back,” Shapiro says. “When I see the videos or images of the last thylacines, these are harsh reminders of the pressing need to develop technologies to stop other species from becoming extinct.”

This article was originally published in Nature. Read the original article.

Ticks Trapped in Amber Were Likely Sucking Dinosaur Blood

Paleontologists have found entombed in amber a 99-million-year-old tick grasping the feather of a dinosaur, providing the first direct evidence that the tiny pests drank dinosaur blood.

Immortalized in the golden gemstone, the bloodsucker’s last supper is remarkable because it is rare to find parasites with their hosts in the fossil record. The finding, which was published Tuesday, gives researchers tantalizing insight into the prehistoric diet of one of today’s most prevalent pests.

“This study provides the most compelling evidence to date for ticks feeding on feathered animals in the Cretaceous,” said Ryan C. McKellar, a paleontologist at the Royal Saskatchewan Museum in Canada who was not involved in the study. “It demonstrates just how much detail can be obtained from a few pieces of amber in the hands of the right researchers.”

Adult ticks, extant and preserved in ancient amber, compared to the tick nymph found attached to the dinosaur feather, above left. Scientists concluded that the tick nymph fed on a nanoraptor, a fledgling dinosaur no bigger than a hummingbird. Credit: E. Peñalver

David Grimaldi, an entomologist at the American Museum of Natural History and an author of the paper published in the journal Nature Communications, was inspecting a private collection of amber from northern Myanmar when he and his colleagues spotted the eight-legged stowaway.

“Holy moly this is cool,” he recounted thinking at the time. “This is the first time we’ve been able to find ticks directly associated with the dinosaur feathers.”

Upon further inspection, he and his colleagues concluded that the tick was a nymph, similar in size to a deer tick nymph, and that its host was most likely some sort of fledgling dinosaur no bigger than a hummingbird, which Dr. Grimaldi referred to as a “nanoraptor.” The parasites were most likely unwanted roommates living in the dinosaurs’ nests and sucking their blood.

“These nanoraptors were living in trees and fell into these great big blobs of oozing resin and were snagged,” he said. Trapped too were the ticks. “We’re looking at a microcosm here of life in the trees 100 million years ago in northern Myanmar.”

They determined that the host was more likely a nonavian dinosaur and not a modern bird based on molecular dating, which suggested the specimen was at least 25 million years older than modern birds.

The team also reported finding a few more ticks in amber, including two that were covered in microscopic hairs belonging to a beetle. The team traced the origins of the beetle hair to a particular type of insect known as a skin beetle, which today lives in nests and scavenges on molted feathers as well as shedded skin and hair. In prehistoric times they most likely bothered dinosaurs in their nests.

The beetle hair suggested that the ticks lived in the same nests as the skin beetles. It provided indirect evidence that the prehistoric ticks infested dinosaurs, according to Ricardo Pérez-de la Fuente, a paleobiologist at the Oxford University Museum of Natural History and an author on the paper.

They also found one tick that was engorged with blood, making it about eight times larger than its normal size. Dr. Pérez-de la Fuente said it was impossible to determine the host animal for that tick, and, alas, he added, there was no chance of performing any “Jurassic Park” shenanigans by extracting its stolen blood.

This article was originally published in The New York Times.  Read the original article.