Making anaesthesia safer by tracking brain activity

AROUND 1936 three neurologists at Harvard Medical School raided the medicine cabinet, filling their boots with morphine, barbiturates, ethers and even cobra venom. They applied those substances to (apparently) willing volunteers and cemented primitive electrodes to their scalps and earlobes. They also collared a drunk and wired him up. With pen and paper, they then recorded how the electrical signals in their volunteers’ brains changed as the drugs began to take hold.

This kind of gonzo science might meet a touch of resistance from the institutional review board if proposed today, but the work of Gibbs, Gibbs and Lennox still stands. The trio showed, without meaning to, that sedatives lower the activity of the brain through several clear stages, and that each stage is observable in that organ’s electrical readings. Their results have been refined over the years, of course, to the point that Emery Brown, a successor of theirs at Harvard, now believes statistical analysis of such electroencephalography (EEG) signals has become good enough to make anaesthesia safer and better. That, at least, is what he told the annual meeting of the American Association for the Advancement of Science.

The EEG of a conscious brain shows no striking features, just low-amplitude, seemingly uncorrelated fluctuations in the brain’s electric field. That is because the brain’s neurons are firing independently of one another as they go about the various tasks that render their owner conscious. Then, as the Harvard trio found, the oscillations smooth out as the patient goes under, deepening into a stark, uniform wave that repeats about ten times a second. The drug has tripped the neurons into singing from the same hymn sheet. Their unified song takes over from the cacophony of a conscious brain, and the patient is out.

That, Dr Brown believes, gives anaesthetists a better way to assess how deeply someone is under than measuring blood pressure and heart rate. He regularly uses brain waves clinically. In a recent operation, for example, he was able to administer a third of the normal dose of an anaesthetic called propofol to an 81-year-old cancer patient, monitoring her brain waves to ensure that she was deeply under at all times. Indeed, he thinks he may be able to automate the whole process, and has designed a machine which adjusts the dose in response to brainwave changes.
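The article does not spell out how such a machine would work, but the control idea is simple to sketch. Below is a minimal, hypothetical version in Python, assuming that the strength of the roughly ten-cycles-per-second signature described above can stand in for depth of anaesthesia; the frequency band, target power, gain and sampling rate are all illustrative assumptions, not clinical values.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs=250.0):
    """Power in the 8-12 Hz band that dominates the EEG of a deeply
    anaesthetised brain (band limits here are illustrative)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs) * 2)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return np.trapz(psd[band], freqs[band])

def adjust_infusion(current_rate, eeg, fs=250.0,
                    target_power=50.0, gain=0.01):
    """Toy proportional controller: raise the infusion rate when the
    anaesthetic signature is weaker than the target (patient too
    light), lower it when stronger. All numbers are hypothetical."""
    error = target_power - alpha_band_power(eeg, fs)
    return max(0.0, current_rate + gain * error)
```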

He also believes that the potential for using EEG to understand unconscious brainwaves goes beyond the operating table. Sleeping pills, for instance, do not so much aid sleep as sedate their recipient. Dr Brown thinks insomniacs might be guided into true sleep through a more precise examination of their brain activity, and the application of commensurate drugs.

Moreover, true successor to Gibbs, Gibbs and Lennox that he is, Dr Brown reveals his own gonzo side when he says his understanding of EEG readouts is such that he believes he could safely place someone into, and then retrieve him from, a “locked-in” state—one in which a person is fully aware of his surroundings, but incapable of any movement or action. When your correspondent offered himself as a test subject, only partially in jest, Dr Brown flashed an arch grin, before sombrely explaining that such an experiment would be beyond the tolerance of modern review boards, too.

This article was originally published in The Economist.

Food Tailored to Our Genes May Be on the Menu Soon

What if you could take a blood test to determine the best diet for you?

Right now most dietary guidelines are developed by looking at an average population. But not everyone responds to a given diet the same way. Some ethnic groups, for instance, are more prone than others to high blood pressure, abnormal cholesterol levels and excess body fat on certain diets.

New research raises the tantalizing possibility of creating personalized diets. The study, published in the journal Genetics, suggests that genes play a strong role in how our bodies respond to diets. Based on the results, the authors hope that someday people will be able to take a blood test to determine whether a given diet is likely to work for them.

“The idea that as long as you stick to a certain diet you’ll do well is probably not the complete story,” says David Threadgill, the study’s co-author and a professor in the departments of veterinary pathobiology and molecular and cellular medicine at Texas A&M University.

In the study, mice with different genes were fed popular human diets: a typical North American diet, high in refined carbohydrates and fat; a Mediterranean diet, high in fiber; a Japanese diet, which for mice consisted of rice and green tea extract; and a ketogenic diet, based on the diet eaten by the Maasai in Kenya, which is high in fat and lacks carbohydrates entirely.

Researchers collected data on measures of metabolic health, such as body-mass index, glucose regulation, cholesterol levels and liver function.

Among the mice tested, each genetic subgroup had a unique response, with certain diets working for some groups, but not others. The mice generally did worst on the American diet and best on the Japanese diet, just like an average group of people.

But what was really striking, Dr. Threadgill says, “is that every strain had a unique pattern where it was optimally healthy.”

For example, one strain, when put on the ketogenic diet, overate and then became obese and developed metabolic syndrome, while another strain also overate on the ketogenic diet, but didn’t gain weight. The mice in that second group, in effect, could eat whatever they wanted without getting heavy. Rather than conserving energy, their bodies burned it off by raising their body temperature.

Next, the researchers are trying to identify the genetic factors that allow one strain to overconsume without ill effects, and to pinpoint the specific genes that caused each mouse strain to respond the way it did. “That will then allow us to actually go into human populations and start looking at how individuals respond based on their genotype,” says Dr. Threadgill.

The scientists are also looking for biomarkers that could enable doctors to use a blood test to predict a diet’s effectiveness. Dr. Threadgill cautions, however, that scientists don’t know enough yet to use genetic testing to recommend a diet. He believes that is several years away.

“It may happen sooner,” he says, “but it’s going to require a more in-depth understanding.”

This article was originally published in The Wall Street Journal.

A New Regulatory Threat to Cancer Patients

The federal government is threatening to limit treatment options for doctors fighting cancer. A regulatory decision due Wednesday from the Centers for Medicare and Medicaid Services could undermine the care delivered to the more than 1.6 million Americans who are diagnosed with cancer each year.

At issue is whether reimbursements will be available to most physicians, hospitals and patients for a diagnostic technology known as next-generation sequencing. A cornerstone of the emerging field of precision medicine, NGS tests analyze molecular changes that occur in cancerous tumors and show up in biopsies.

To fight tumors, DNA-sequencing-based tests can determine how genes and mutations differ from one patient to the next. NGS tests enable oncologists to prescribe and administer customized, highly targeted drug therapies. The technology limits patients’ exposure to unnecessary toxic drugs and helps doctors make vital treatment decisions. Hundreds of thousands of cancer patients have already received NGS testing.

The proposed new CMS policy would abruptly change the way NGS testing is regulated and administered. It would drastically limit insurance coverage by requiring that tests be approved by the Food and Drug Administration. Current NGS tests are conducted at accredited clinical laboratories and premier academic medical centers under strict regulation. They are as accurate and reliable as FDA-approved testing. There is no evidence that restricting reimbursement to FDA-approved tests would improve care.

Under the proposed policy, only one of hundreds of laboratories that currently offer NGS testing would meet all the new reimbursement requirements. The policy would in effect force clinicians and institutions to send all NGS testing to a single vendor, Foundation Medicine.

This is unfair to cancer patients. The proposal would result in a monopoly, allowing price manipulations, decreasing quality, and potentially contributing to market failure. It would turn the entire genomic-testing industry upside-down. The FDA is already unable to keep up with advances in precision medicine. Restricting access to cutting-edge molecular testing would stifle growth in precision medicine at approved testing sites nationwide. The limits could prevent desperately needed innovation, setting back progress in genomic testing and oncology by at least a decade.

The CMS proposal is another example of faulty government regulation in health care, this time at the expense of cancer patients. This government intervention is more than a regulatory nuance in a reimbursement issue. It’s a matter of life and death.

This article was originally published in The Wall Street Journal.

The ramifications of a new type of gene

WHAT’S a gene? You might think biologists had worked that one out by now. But the question is more slippery than may at first appear. The conventional answer is something like, “a piece of DNA that encodes the structure of a particular protein”. Proteins so created run the body. Genes, meanwhile, are passed on in sperm and eggs to carry the whole process to the next generation.

None of this is false. But it is now clear that reality is more complex. Many genes, it transpires, do not encode proteins. Instead, they regulate which proteins are produced. These newly discovered genes are sources of small pieces of RNA, known as micro-RNAs. RNA is a molecule allied to DNA, and is produced when DNA is read by an enzyme called RNA polymerase. If the DNA is a protein-coding gene, the resulting RNA acts as a messenger, taking the protein’s plan to a place where proteins are made. Micro-RNAs regulate this process by binding to the messenger RNA, making it inactive. More micro-RNA means less of the protein in question, and vice versa.
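The direction of that relationship is easy to picture with a toy calculation. The Python sketch below uses invented rate constants and a deliberately crude model of micro-RNA binding; it illustrates only that more micro-RNA means less protein, as described above.

```python
# Toy model of the regulation described above: micro-RNA binds
# messenger RNA and inactivates it, so more micro-RNA means less
# protein. All constants are invented for illustration.
def protein_steady_state(micro_rna, mrna_total=100.0,
                         k_bind=0.05, k_synth=1.0, k_decay=0.1):
    # Messenger RNA left active after micro-RNA binding
    # (simple saturating inhibition, not a fitted model).
    active_mrna = mrna_total / (1.0 + k_bind * micro_rna)
    # At steady state, protein synthesis balances decay.
    return k_synth * active_mrna / k_decay

for mir in (0, 10, 50, 200):
    print(f"micro-RNA = {mir:3d} -> protein ~ {protein_steady_state(mir):6.1f}")
```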

Often, this regulation is in response to environmental stimuli such as stress. And sometimes, the responses acquired in this way seem to be passed down through the generations, in apparent defiance of conventional genetic theory. The best known example in people comes from the Netherlands, which suffered famine in 1944, at the end of the second world war. Children born of starved mothers were, as might be expected, smaller than usual. But the children of those children were also small. Experiments carried out on mice confirm these observations.

Stress city

In the case of mothers, it is now believed that this process, called intergenerational epigenesis, is caused by micro-RNAs from the parent getting into eggs as they form in a developing fetus. That makes sense. Eggs are large cells, with room to accommodate these extra molecules. But intergenerational epigenetic effects can pass down the male line as well. And how paternal micro-RNAs come to be in an egg is a mystery, for the sperm that would have to carry them there are tiny and have no spare room. Work by Jennifer Chan, a graduate student at the University of Pennsylvania, has, however, shed light on the process.

Ms Chan’s solution was described on February 16th by her research supervisor, Tracy Bale of the University of Maryland, at the annual meeting of the American Association for the Advancement of Science (AAAS), in Austin, Texas. The crucial insight behind her study was that micro-RNAs need not actually get inside sperm cells as they form. They could equally well be attached to sperm just before sexual intercourse. Ms Chan therefore concentrated her attentions on part of the male genital tract called the epididymis. This is where sperm mature. Cells lining the epididymis constantly discharge small, fluid-filled, membrane-bound bubbles called vesicles. When Ms Chan, working with mice, looked in detail at these vesicles, she found that they contained lots of micro-RNAs.

That was interesting. But she then went on to do an experiment. Mice are easily stressed. Simply putting new objects into their living space is enough to induce significant changes in their levels of stress hormones. Stress a male in this way and his offspring (of either sex) will react less to stress than do the offspring of unstressed males. That looks like intergenerational epigenesis. It also makes evolutionary sense, since it calibrates a mouse’s stress response to the stressfulness of the environment—which is likely to be the same as that of its father. To prove that this intergenerational effect was caused by epididymal micro-RNAs, Ms Chan collected these molecules and injected them into fertilised mouse eggs. Those eggs, as she had hypothesised they would, grew into less-stress-reactive adults.

This work is all in mice. But Dr Bale has now roped some men into the experiment, too—namely 25 male students who have provided regular semen samples in order that the micro-RNAs therein can be tracked and correlated with such stressful events as sitting exams. The results of this are yet to come in. But, with her mouse work alone, it looks as if Ms Chan has cracked an important part of the puzzle of intergenerational epigenesis.

Response to stress is not, however, the only thing in which micro-RNAs are implicated. They are also suspected of involvement in schizophrenia and bipolar disorder. To investigate this, a second speaker at the AAAS meeting, Paul Kenny, of the Icahn School of Medicine, in New York, also turned to mice.

The root of Dr Kenny’s suspicion was the discovery, post mortem, in the brains of patients who had been suffering from these conditions, of elevated levels of three micro-RNAs, called MiR206, MiR132 and MiR133b. He and his colleague Molly Heyer therefore looked at the role of these micro-RNAs in regulating brain cells called parvalbumin interneurons, which are thought to be involved in schizophrenia.

Picking one, MiR206, for closer examination, the two researchers created a mouse strain in which the gene for MiR206 was switched off in the parvalbumin interneurons. They then performed experiments to study the behaviour of these mice, assuming that switching the gene off might protect them against schizophrenia-like symptoms. Surprisingly, they found the opposite.

Their first experiment was to play the mice a sudden, loud noise. This will startle any creature, mouse or man. If the noise is preceded by a softer one, however, both humans and mice react far less when the loud noise comes. They are expecting it. But people with schizophrenia seem never to learn this expectation. And neither, to the researchers’ surprise, do mice with the MiR206 knockout.

The scary moment

For people, these observations are often explained by the fact that one symptom of schizophrenia is increased fear. And, in a second experiment, Dr Kenny and Dr Heyer showed, again contrary to expectation, that MiR206-knockouts were unusually fearful as well.

The researchers used a box which contained two lights, each positioned above a lever. First, a light would blink on and go off. Then, after a delay, both lights would come on. That was the signal for the mouse to press a lever. If the lever the mouse pressed was the one not under the initial light, the animal received some food. Drs Kenny and Heyer found that the knocked-out mice collected less food than did normal ones. But this was not because they were making mistakes. When they did press a lever, they picked the correct one as often as a normal mouse would. They simply pressed the levers less often, because they spent most of their time hiding in the corners of the box opposite the wall with the lights and levers. Again, they seemed abnormally afraid.

What all this means for the study of schizophrenia is unclear. It is possible that examination of the other two pertinent micro-RNAs may shed more light on the matter. More generally, though, both Dr Kenny’s work and Ms Chan’s are good examples of the fact that there is more to genes than was once believed.

This article was originally published in The Economist.

What We Know (and Don’t Know) About How to Lose Weight

The endless array of diets that claim to help you shed pounds tend to fall into two camps: low fat or low carbohydrate. Some companies even claim that genetics can tell us which diet is better for which people.

A rigorous recent study sought to settle the debate, and it had results to disappoint both camps. On the hopeful side, as The New York Times noted, people managed to lose weight no matter which of the two diets they followed.

The study is worth a closer look to see what it did and did not prove.

Researchers at Stanford University took more than 600 people (which is huge for a nutrition study) aged 18 to 50 who had a body mass index of 28 to 40 (25-30 is overweight, and 30 and over is obese). The study subjects had to be otherwise healthy. They couldn’t even be on statins, or drugs for Type 2 diabetes or hypertension, which might affect weight or energy expenditure. They were all randomly assigned to a healthful low-fat or a healthful low-carbohydrate diet and, of course, could not be blinded to which group they were in.

All participants attended 22 instructional sessions over one year in groups of about 17 people. The sessions were held weekly at first and were then spaced out so that they were monthly in the last six months. Everyone was encouraged to reduce intake of the avoided nutrient to 20 grams per day over the first eight weeks, then participants slowly added fats or carbohydrates back to their diets until they reached the lowest level of intake they believed could be sustained for the long haul.

Everyone was followed for a year (which is an eternity for a nutrition study). Everyone was encouraged to maximize vegetable intake; to minimize added sugar, refined flour and trans fat intake; and to focus on whole foods that were minimally processed. The subjects were also encouraged to cook at home as much as possible.

All the participants took a glucose tolerance test as a measurement of insulin sensitivity. Some believe that insulin resistance or sensitivity may affect not only how people respond to diets, but also how well they adhere to them. The participants were also genotyped, because some believe that certain genes will make people more sensitive to carbohydrates or fat with respect to weight gain. About 40 percent of participants had a low-fat genotype, and 30 percent had a low-carbohydrate genotype.

Data were gathered at the beginning of the study, at six months and at one year. At three unannounced times, researchers checked on participants to see how closely they were sticking to the instructions.

This was a phenomenally well-designed trial.

People did change their diets according to their group assignment. Those in the low-fat group consumed, on average, 29 percent of their calories from fats, versus 45 percent in the low-carbohydrate group. Those in the low-carbohydrate group consumed 30 percent of their calories from carbohydrates, versus 48 percent in the low-fat group.

They did not, however, lose meaningfully different amounts of weight. At 12 months, the low-carbohydrate group had lost, on average, just over 13 pounds, compared with more than 11.5 pounds in the low-fat group. The difference was not statistically significant.
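“Not statistically significant” is the verdict of an ordinary two-group comparison. As a hedged illustration, the Python sketch below runs such a test on simulated weight-loss data whose group means roughly match the reported averages; the spreads and group sizes are invented, not taken from the study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Simulated 12-month weight losses (pounds); the means roughly match
# the reported averages, the spreads and group sizes are invented.
low_carb = rng.normal(loc=13.2, scale=12.0, size=300)
low_fat = rng.normal(loc=11.7, scale=12.0, size=300)

t_stat, p_value = ttest_ind(low_carb, low_fat)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# With means this close and spreads this wide, p usually lands well
# above 0.05: no significant difference between the diets.
```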

Insulin sensitivity didn’t make a difference. People who secreted more or less insulin lost no more or less weight, in general, on either a low-fat or low-carbohydrate diet. Genetics didn’t make a difference either. People whose genes suggested they would do better on one diet or the other did no better on it.

In fact, when you look at how every single participant in this study fared on the diet to which he or she was assigned, it’s remarkable how both diets yielded an almost identical, curving range of responses — from lots of weight lost to a little gained. It wasn’t just the averages.

Some have taken this study to prove that avoiding processed foods, eating more whole foods, and cooking at home leads to weight loss. While I’d like that to be true — I have advocated this healthful approach in my Upshot article on food recommendations and in a recent book — that’s not what this study showed. Although that advice was given to all participants, there was no control group in which that advice was omitted, and so no conclusions can be made as to the efficacy of these instructions.

Others have taken this study as evidence debunking the idea that counting calories is the key to weight loss. While that wasn’t the main thrust of this study, nor the instructions given, participants did reduce their intake by an average of 500-600 calories a day (even if they didn’t count them). This study didn’t prove the unimportance of calories.

The researchers also asked everyone, not just those in the low-carb group, to avoid “added sugars.” Therefore, we can’t really say anything new about added sugars and weight loss.

What this study does show is that people who have staked a claim on one diet’s superiority over another don’t have as strong a case as they think. It’s hard to overstate how similarly these two diets performed, even at an individual level.

It also shows that the many people, and the many studies, suggesting that we can tell which diet is best for you based on genetics or on insulin levels might not be right either. Almost all of the studies that backed up such ideas were smaller, of shorter duration or less robust in design than this one. Granted, it’s still possible that some gene discovered in the future will make a difference, but those who think they’ve found it already might want to check their enthusiasm.

This study was focused mostly on people who were obese, so people looking to lose just a few pounds might benefit more from one diet or the other; we don’t know. It’s also worth noting that the people in this study received significant support on both diets, so the results seen here might not apply to those attempting to lose weight on their own.

You should be wary of those who tell you that they know what diet is best for you, or that there’s a test out there to tell you the same. Successful diets over the long haul are most likely ones that involve slow and steady changes. The simplest approach — and many have espoused it, including Jane Brody recently here at The Times — is to cut out processed foods, think about the calories you’re drinking, and try not to eat more than you intend to.

The bottom line is that the best diet for you is still the one you will stick to. No one knows better than you what that diet might be. You’ll most likely have to figure it out for yourself.

This article was originally published in The New York Times.

Powerful Antibiotics Found in Dirt

Many of us think of soil as lifeless dirt. But, in fact, soil is teeming with a rich array of life: microbial life. And some of those tiny, dirt-dwelling microorganisms—bacteria that produce antibiotic compounds that are highly toxic to other bacteria—may provide us with valuable leads for developing the new drugs we so urgently need to fight antibiotic-resistant infections.

Recently, NIH-funded researchers discovered a new class of antibiotics, called malacidins, by analyzing the DNA of the bacteria living in more than 2,000 soil samples, including many sent by citizen scientists living all across the United States [1]. While more work is needed before malacidins can be tried in humans, the compounds successfully killed several types of multidrug-resistant bacteria in laboratory tests. Most impressive was the ability of malacidins to wipe out methicillin-resistant Staphylococcus aureus (MRSA) skin infections in rats. Often referred to as a “super bug,” MRSA threatens the lives of tens of thousands of Americans each year [2].

It might seem strange that soil would be the place to look for the most promising new antibiotics. But bacterial species in soil have been locked in a continuous antibiotic arms race with one another for millennia. When it comes to the chemistry needed to produce highly effective antibiotics, they are the experts.

In fact, molecules derived from bacteria have been the major source for antibiotics that doctors now prescribe to help with certain infections. But scientists had all but given up on discovering any new antibiotic compounds from bacteria cultured in the lab. It seemed that source had been completely mined, and new searches were mostly coming up empty.

Time for a new approach! In the study reported in Nature Microbiology, researchers led by Sean Brady at The Rockefeller University, New York, took a different approach. They scoured DNA extracted from trillions of soil-dwelling bacteria, most of them collected by citizen scientists living all around the country and mailed to Brady and colleagues in plastic baggies.

While scouring all that DNA, the researchers looked for something quite specific: novel clusters of genes that looked structurally similar to those already known to be involved in making calcium-dependent antibiotics. These are antibiotics that attack bacteria only where calcium is present; bacteria use calcium to help them communicate, move, differentiate, and carry out other basic cellular functions.
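Known calcium-dependent antibiotics share a short calcium-binding motif (Asp-X-Asp-Gly), and the published screen keyed on DNA encoding it. The Python sketch below is a deliberately crude stand-in for that kind of search: the sequence fragments are hypothetical, and the real pipeline used degenerate primers and profile-based matching over assembled gene clusters, not a single regular expression.

```python
import re

# Flag translated sequences containing the Asp-X-Asp-Gly (D-x-D-G)
# calcium-binding motif shared by known calcium-dependent antibiotics.
MOTIF = re.compile(r"D.DG")

def looks_calcium_dependent(protein_seq: str) -> bool:
    return bool(MOTIF.search(protein_seq.upper()))

# Hypothetical translated fragments from soil-derived DNA.
fragments = {
    "sample_001": "MKTLDADGVLIERW",
    "sample_002": "MASQPLVNNGTRSE",
}
hits = [name for name, seq in fragments.items()
        if looks_calcium_dependent(seq)]
print(hits)  # ['sample_001']
```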

Brady and team didn’t have a lot of leads, as only a few calcium-dependent antibiotics are known. What’s more, each of these known antibiotics fights bacteria in a slightly different calcium-dependent way.

But the researchers soon hit pay dirt (sorry, you knew that pun was coming). About 75 percent of their soil samples contained bacteria carrying the kinds of genes the researchers were looking for, suggesting that there are many promising calcium-dependent antibiotics yet to be found.

They focused on a particular new family of antibiotics, which turned up in almost 20 percent of the sequenced bacterial samples. They called the new family malacidins (Latin for “killing the bad”). To recover all the genes needed to produce malacidins, the researchers went back to a sample of sandy desert soil that they knew to contain many malacidin-producing bacteria.

First, they isolated and cloned the DNA. Then they inserted the cloned DNA into the genome of Streptomyces albus, a bacterium that is especially good at producing molecules in the lab. Those laboratory microbes began churning out two similar malacidin molecules. Interestingly, those malacidins didn’t look quite like what the researchers had expected to find. But, as expected, antibiotic activity of these compounds did depend on calcium.

The researchers found that the antibiotic compounds killed many multidrug-resistant pathogens, including several different strains of Staph aureus. They were also successful in treating a Staph-infected skin wound on rats, without causing any apparent toxicity or damage to the animals’ own cells.

Malacidins attack an essential part of the bacterial cell wall in a way unlike that of other calcium-dependent antibiotics. This mechanism is not only unique, but also apparently difficult for other bacteria to circumvent. The researchers found that after 20 days of exposure to sublethal levels of malacidins, none of the tested lab bacteria showed any sign of becoming resistant to them.

Brady says that the next step is to tinker with the structure of the malacidins to see if the team can come up with an even more effective version of the molecule. They’re also continuing to explore other related compounds found in nature.

As promising as malacidins are, they are just the start. Brady is convinced the bacterial world contains a largely untapped reservoir of antibiotics that have yet to be discovered. With the sophisticated genomic, analytical, and other tools now available, many of them will soon be found. It looks as though some of the solutions to the growing problem of antibiotic resistance have been hiding, quite literally, right in our own backyards.

This article was originally published by the NIH.

Deep learning for biology

Four years ago, scientists from Google showed up on neuroscientist Steve Finkbeiner’s doorstep. The researchers were based at Google Accelerated Science, a research division in Mountain View, California, that aims to use Google technologies to speed scientific discovery. They were interested in applying ‘deep-learning’ approaches to the mountains of imaging data generated by Finkbeiner’s team at the Gladstone Institute of Neurological Disease in San Francisco, also in California.

Deep-learning algorithms take raw features from an extremely large, annotated data set, such as a collection of images or genomes, and use them to create a predictive tool based on patterns buried inside. Once trained, the algorithms can apply that training to analyse other data, sometimes from wildly different sources.

The technique can be used to “tackle really hard, tough, complicated problems, and be able to see structure in data — amounts of data that are just too big and too complex for the human brain to comprehend”, Finkbeiner says.

He and his team produce reams of data using a high-throughput imaging strategy known as robotic microscopy, which they had developed for studying brain cells. But the team couldn’t analyse the data as quickly as they were acquired, so Finkbeiner welcomed the opportunity to collaborate.

“I can’t honestly say at the time that I had a clear grasp of what questions might be addressed with deep learning, but I knew that we were generating data at about twice to three times the rate we could analyse it,” he says.

Today, those efforts are beginning to pay off. Finkbeiner’s team, with scientists at Google, trained a deep algorithm with two sets of cells, one artificially labelled to highlight features that scientists can’t normally see, the other unlabelled. When they later exposed the algorithm to images of unlabelled cells that it had never seen before, Finkbeiner says, “it was astonishingly good at predicting what the labels should be for those images”. A publication detailing that work is now in the press.
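The paper was still in the press when this article appeared, so no model details are given here. As a rough illustration of the setup described, here is a minimal sketch in Python and Keras, assuming paired training images (an unlabelled image as input, its artificially labelled counterpart as the target); the architecture, sizes and loss are illustrative guesses, not the published network.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal sketch of the training setup described above: learn to
# predict an artificially labelled image from an unlabelled one.
# Architecture and sizes are illustrative, not the published model.
def build_label_predictor(size=256):
    inp = layers.Input(shape=(size, size, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # One output channel per predicted fluorescent label.
    out = layers.Conv2D(1, 1, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, out)

model = build_label_predictor()
model.compile(optimizer="adam", loss="mse")
# model.fit(unlabelled_images, labelled_images, epochs=10)
```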

Finkbeiner’s success highlights how deep learning, one of the most promising branches of artificial intelligence (AI), is making inroads in biology. The algorithms are already infiltrating modern life in smartphones, smart speakers and self-driving cars. In biology, deep-learning algorithms dive into data in ways that humans can’t, detecting features that might otherwise be impossible to catch. Researchers are using the algorithms to classify cellular images, make genomic connections, advance drug discovery and even find links across different data types, from genomics and imaging to electronic medical records.

More than 440 articles on the bioRxiv preprint server discuss deep learning; PubMed lists more than 700 references in 2017. And the tools are on the cusp of becoming widely available to biologists and clinical researchers. But researchers face challenges in understanding just what these algorithms are doing, and ensuring that they don’t lead users astray.

Training smart algorithms

Deep-learning algorithms rely on neural networks, a computational model first proposed in the 1940s, in which layers of neuron-like nodes mimic how human brains analyse information. Until about five years ago, machine-learning algorithms based on neural networks relied on researchers to process the raw information into a more meaningful form before feeding it into the computational models, says Casey Greene, a computational biologist at the University of Pennsylvania in Philadelphia. But the explosion in the size of data sets — from sources such as smartphone snapshots or large-scale genomic sequencing — and algorithmic innovations have now made it possible for humans to take a step back. This advance in machine learning — the ‘deep’ part — forces the computers, not their human programmers, to find the meaningful relationships embedded in pixels and bases. And as the layers in the neural network filter and sort information, they also communicate with each other, allowing each layer to refine the output from the previous one.

Eventually, this process allows a trained algorithm to analyse a new image and correctly identify it as, for example, Charles Darwin or a diseased cell. But as researchers distance themselves from the algorithms, they can no longer control the classification process or even explain precisely what the software is doing. Although these deep-learning networks can be stunningly accurate at making predictions, Finkbeiner says, “it’s still challenging sometimes to figure out what it is the network sees that enables it to make such a good prediction”.

Still, many subdisciplines of biology, including imaging, are reaping the rewards of those predictions. A decade ago, software for automated biological-image analysis focused on measuring single parameters in a set of images. For example, in 2005, Anne Carpenter, a computational biologist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, released an open-source software package called CellProfiler to help biologists to quantitatively measure individual features: the number of fluorescent cells in a microscopy field, for example, or the length of a zebrafish.

But deep learning is allowing her team to go further. “We’ve been shifting towards measuring things that biologists don’t realize they want to measure out of images,” she says. Recording and combining visual features such as DNA staining, organelle texture and the quality of empty spaces in a cell can produce thousands of ‘features’, any one of which can reveal fresh insights. The current version of CellProfiler includes some deep-learning elements, and her team expects to add more-sophisticated deep-learning tools in the next year.

“Most people have a hard time wrapping their heads around this,” Carpenter says, “but there’s just as much information, in fact maybe more, in a single image of cells as there is in a transcriptomic analysis of a cell population.”

That type of processing allows Carpenter’s team to take a less supervised approach to translating cell images into disease-associated phenotypes — and to capitalize on it. Carpenter is a scientific adviser to Recursion Pharmaceuticals in Salt Lake City, Utah, which is using its deep-learning tools to target rare, single-gene disorders for drug development.

Mining genomic data

When it comes to deep learning, not just any data will do. The method often requires massive, well-annotated data sets. Imaging data provide a natural fit, but so, too, do genomic data.

One biotech firm that is using such data is Verily Life Sciences (formerly Google Life Sciences) in San Francisco. Researchers at Verily — a subsidiary of Google’s parent company, Alphabet — and Google have developed a deep-learning tool that identifies a common type of genetic variation, called single-nucleotide polymorphisms, more accurately than conventional tools. Called DeepVariant, the software translates genomic information into image-like representations, which are then analysed as images. Mark DePristo, who heads deep-learning-based genomic research at Google, expects DeepVariant to be particularly useful for researchers studying organisms outside the mainstream — those with low-quality reference genomes and high error rates in identifying genetic variants. Working with DeepVariant in plants, his colleague Ryan Poplin has achieved error rates closer to 2% than the more-typical 20% of other approaches.
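The general idea of turning a read pileup into an image-like array is easy to sketch. The Python snippet below does so under stated assumptions: two invented reads and just two channels, base identity and base quality. DeepVariant’s actual representation is richer; this only illustrates analysing genomic data as images.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pileup_tensor(reads, window=15):
    """Encode aligned reads around a candidate variant as an
    image-like tensor: one row per read, one column per position,
    channels for base identity and base quality. A rough sketch,
    not DeepVariant's actual encoding."""
    tensor = np.zeros((len(reads), window, 2), dtype=np.float32)
    for i, (bases, quals) in enumerate(reads):
        for j, (base, q) in enumerate(zip(bases[:window], quals[:window])):
            tensor[i, j, 0] = (BASES.get(base, 0) + 1) / 4.0  # base channel
            tensor[i, j, 1] = min(q, 40) / 40.0               # quality channel
    return tensor

# Two hypothetical reads: (bases, per-base qualities).
reads = [("ACGTACGTACGTACG", [30] * 15),
         ("ACGTACGAACGTACG", [25] * 15)]
print(pileup_tensor(reads).shape)  # (2, 15, 2)
```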

This article was originally published in Nature.

Wild primates threaten efforts to wipe out skin disease

Global health officials are intensifying efforts to eradicate yaws, a disfiguring skin disease that infects more than 64,000 people a year in 14 African and southeast Asian countries. But some critics say that the plans could fail, because they don’t take account of discoveries in the past few years that wild primate populations harbour the bacterial infection. That could complicate or foil eradication efforts, they say.

Public-health officials met in Geneva, Switzerland, on 29–30 January to discuss how to expand the eradication programme in 6 of the 14 countries in which yaws is endemic. But they did not discuss the part played by wild animals. “Even if this is not the main cause of re-emerging yaws nowadays, it would jeopardize global eradication,” says Sascha Knauf, who studies neglected tropical diseases at the Leibniz Institute for Primate Research in Göttingen, Germany.

Five years ago, the World Health Organization (WHO) committed to eradicating yaws by 2020, motivated in part by the discovery that it can be treated using an easy-to-administer oral antibiotic, azithromycin. It estimated that the initiative would cost at least US$100 million. At the time, public-health officials thought that the disease occurred only in humans. Eradicating a disease that affects only people is much easier than one that also occurs in pets and wild animals.

However, Knauf reported in 2011 and 2013 that gorillas, chimpanzees, baboons and smaller primates in several West and Central African countries were infected with the same bacterium that causes yaws (Treponema pallidum subsp. pertenue).

Primate problem

Epidemiologist Michael Marks at the London School of Hygiene and Tropical Medicine, who attended the workshop, says that the WHO has not addressed the threats posed by wild primate populations to the eradication of yaws. But he says that scientists have not yet shown that humans can catch the disease from primates. Even so, “it would be remiss not to pay attention to it.”

A similar problem arose in 2010, during the decades-long programme to eradicate Guinea worm disease, when public-health officials learned that dogs and possibly other animals can carry the parasite (Dracunculus medinensis). Health authorities have had to invest in a public information scheme and extra monitoring and treatment, and the disease has yet to be eradicated.

The WHO is still waiting for proof that animals are transmitting yaws to humans, says its medical officer for the disease, Kingsley Asiedu. In the meantime, Asiedu says, “We are not taking that into account, because there has not been proof of an epidemiological link between those yaws-like cases that have been found in primates and in humans.”

This article was originally published in Nature.

The Amazing Metabolism of Hummingbirds

Hummingbirds have long intrigued scientists. Their wings can beat 80 times a second. Their hearts can beat more than 1,000 times a minute. They live on nectar and can pack on 40 percent of their body weight in fat for migration.

But sometimes they are so lean that they live close to caloric bankruptcy. At such times, some hummingbirds could starve to death in their sleep because they cannot feed every half-hour or so. Instead they enter a state of torpor, with heartbeat and body temperature turned way down to diminish the need for food.

Kenneth C. Welch Jr. at the University of Toronto, Scarborough has studied the metabolism of hummingbirds for more than a decade. His most recent research, with Derrick J. E. Groom in his lab and other colleagues, examines size and energy efficiency in hummingbirds. By using data on oxygen consumption and wing beats to gauge how much energy hummingbirds take in and how much work they put out, the scientists found that during strenuous hovering flight, bigger hummingbirds use energy more efficiently than smaller ones. The research was published in Proceedings of the Royal Society B.

Excerpts from a telephone conversation with Dr. Welch have been edited for clarity and length.

Q. You manage to get hummingbirds to voluntarily put their heads in masks while they hover and feed. How in the world do you do that?

A. I learned to become a bit of the hummingbird whisperer. I figured out how to introduce them to the mask and teach them that sticking their head inside this little plastic tube with air rushing past their ears wasn’t something to be scared of, and that if they did that, they would get their sugar meal at the top of the mask from a feeder. And they said, “Very well. O.K. I’ll do this in order to get my food. And you can watch me while I do it.”

Q. And what are you watching for?

A. We’re measuring oxygen consumption rates. If I were to stick an elite Olympic athlete, a cross-country skier, onto a cycle ergometer and ask them to wear a mask and say, ‘O.K., go as hard as you can go, and I want to measure your peak metabolic rate,’ one of the ways we can quantify that is in oxygen consumption. So I can say, ‘This human athlete is consuming four milliliters of oxygen per gram of body weight per hour.’

A hummingbird can easily hit 40 milliliters of oxygen per gram per hour. And if I ask the hummingbird to do extra, if I give it a little bit of extra weight to wear, that can go up to well above 60. So, their tissues are using oxygen at rates that are many, many times what we can possibly achieve.

Q. And they need the oxygen to help them metabolize the relatively enormous amounts of sugar they are taking in to get the energy to power their muscles?

A. I did a calculation back in graduate school. It turns out that when they’re hovering around and foraging during the day, they’re pretty much exclusively burning the sugar that they’ve been eating in the last 30 minutes to an hour. And so, they have this incredible ability to move sugar through their system. And I did the calculation and said, ‘O.K., if I scale one of my hummingbirds up to adult male human size, my size, how much sugar would I need to drink per minute if I were theoretically a hovering hummingbird as big as I am?’ It turned out to be right around the amount of sugar that’s in a can of Coca-Cola per minute. I haven’t actually tried to do this. My doctor advised against it.
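The interview does not give Dr Welch’s inputs, but the back-of-envelope arithmetic is easy to reconstruct in Python. The per-gram sugar flux below is an illustrative assumption chosen to be consistent with his punchline; the masses and the can’s sugar content are round figures.

```python
# Scaling a hovering hummingbird's sugar burn up to human size.
sugar_flux_mg_per_g_min = 0.5    # assumed sugar burn while hovering
bird_mass_g = 3.0                # a small hummingbird
human_mass_g = 80_000.0          # ~80 kg adult male
coke_sugar_g = 39.0              # sugar in a 355 ml can of Coca-Cola

bird_rate_mg_min = sugar_flux_mg_per_g_min * bird_mass_g
human_rate_g_min = sugar_flux_mg_per_g_min * human_mass_g / 1000.0

print(f"Hummingbird: ~{bird_rate_mg_min:.1f} mg of sugar per minute")
print(f"Scaled to human size: ~{human_rate_g_min:.0f} g per minute, "
      f"about {human_rate_g_min / coke_sugar_g:.1f} cans of Coca-Cola")
```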

Q. So they move sugar through their system an awful lot faster than we do?

A. Unlike us, hummingbirds can use the glucose that they’re ingesting in nectar and can move it through their guts, through their circulatory system, and to their muscle cells so fast that they can essentially keep that pipeline going in real time.

You and I can’t do that. We can support some small portion of exercise with newly ingested glucose, about 30 percent. But what’s just as remarkable is that the diet of hummingbirds is nectar and that’s half glucose and half fructose.

Fructose is getting a lot of bad press these days because of high fructose corn syrup in the Western diet and its association with metabolic disease and obesity. We’re not good at using fructose at all. Hummingbirds can use that fructose at very high rates.

Q. How do they do it?

A. Birds and nectar-feeding bats have evolved the ability to enhance the flux of nutrients, such as small sugars like fructose and glucose, or amino acids, so as to absorb their food more effectively. And getting those nutrients to their tissues is enhanced because hummingbird muscles, hearts and blood vessels are so good. The hummingbird heart rate is high, and the heart pumps a great deal of blood per unit of time. They have lots and lots of capillaries that allow the blood to get up close and personal to their muscle cells.

And hummingbirds can apparently take up fructose in their cells, and we’re trying to figure out what enables that. We do know that there is a different form of glucose transporter that is a specialist at taking up fructose. In our muscle cells that transporter is barely present. But hummingbird muscle fibers tend to have a lot of this transporter. So, we think we’re a little closer to understanding how they can take up fructose so fast. But the story’s not yet complete. We need to do some follow up work in order to really confirm that.

Q. What about your recent work on whether big or small hummingbirds are more efficient at energy use?

A. Hummingbirds really do vary in size. Our Ruby-throated hummingbirds weigh less than a penny, 2.5 to 3 grams. Down in South and Central America, you can find some much larger hummingbird species. Up in the range of 10-12 grams. And then there’s one species, the giant hummingbird, that sits at about 18-20 grams.

In essence, larger hummingbirds have higher efficiency. They’re converting a greater proportion of their sugar energy into mechanical power to hover than our smallest hummingbirds.

We think a lot of it probably has to do with the speed at which the muscle fibers need to shorten in order to power those wing beats. Small hummingbirds beat their wings at a higher frequency than larger hummingbirds. It’s something they need to do to generate sufficient mechanical power. But there is evidence suggesting that the faster you make a muscle shorten, the less efficiently it does so.

And so, what we see is that the apparent efficiency of the smallest hummingbirds is down around 10 percent. If you go up to the larger hummingbirds they are — they’re up in the range of, you know, 30 percent.
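“Apparent efficiency” here means mechanical power out divided by metabolic power in, with metabolic power estimated from oxygen consumption. A minimal sketch in Python, assuming the standard calorimetric equivalent of about 20.1 joules per millilitre of oxygen; the example power figures are invented, not the paper’s measurements.

```python
# Apparent efficiency: mechanical power out / metabolic power in.
JOULES_PER_ML_O2 = 20.1  # standard calorimetric equivalent

def apparent_efficiency(mech_power_w, o2_ml_per_g_hr, mass_g):
    metabolic_power_w = o2_ml_per_g_hr * mass_g * JOULES_PER_ML_O2 / 3600.0
    return mech_power_w / metabolic_power_w

# Hypothetical numbers for a small (3 g) and a large (18 g) hummingbird.
print(f"small: {apparent_efficiency(0.07, 40, 3):.0%}")   # ~10%
print(f"large: {apparent_efficiency(0.90, 30, 18):.0%}")  # ~30%
```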

This article was originally published in The New York Times.

In a Cockroach Genome, ‘Little Mighty’ Secrets

The American cockroach is the largest common house cockroach, about the length of a AA battery. Also called the water bug, it can live for a week without its head. It eats just about anything, including feces, the glue on book bindings, and other cockroaches, dead or alive. It can fly short distances and run as fast as the human equivalent of 210 miles per hour, relative to its size.
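The “human equivalent” speed is a body-lengths-per-second scaling. The Python sketch below reconstructs that arithmetic with rough literature values for the insect’s top speed and body length (the article’s exact inputs are not given), which lands in the same ballpark as the quoted figure.

```python
# Speed "relative to size": convert body lengths per second into a
# human-scale speed. Inputs are rough values, not the article's own.
roach_speed_m_s = 1.5    # approximate top speed of an American cockroach
roach_length_m = 0.03    # ~3 cm body, antennae excluded
human_height_m = 1.8

lengths_per_s = roach_speed_m_s / roach_length_m
human_equiv_mph = lengths_per_s * human_height_m * 2.237  # m/s to mph
print(f"{lengths_per_s:.0f} body lengths/s ~ {human_equiv_mph:.0f} mph at human scale")
```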

All these feats and more are encoded in the American cockroach’s genome, its complete set of genetic instructions, which was sequenced by Chinese scientists and published on Tuesday in Nature Communications. It is the second-largest insect genome ever sequenced (the first belonging to a species of locust), and larger even than the human genome.

In China, the cockroach is often called “xiao qiang,” meaning “little mighty,” said Sheng Li, an entomology professor at South China Normal University in Guangzhou and lead author of the paper. “It’s a tiny pest, but has very strong vitality.”

His team found that groups of genes associated with sensory perception, detoxification, the immune system, growth and reproduction were all enlarged in the American cockroach, likely underpinning its scrappiness and ability to adapt to human environments.

Their study comes on the heels of the sequencing of the German cockroach genome, which was published in Nature Ecology & Evolution last month. While the German cockroach only inhabits human environments (particularly kitchens), the American cockroach flourishes in a wide range of habitats.

Both species, however, succeed worldwide as omnivorous scavengers, and are notoriously adept at dealing with the insecticides and other pest control methods we throw at them. They’re both frequent visitors to homes in the United States, though the German cockroach is slightly more common.

That generalist lifestyle is reflected in the species’ genomes, both of which are massive, said Coby Schal, an entomology professor at North Carolina State University and an author of the German cockroach study.

Consider, in comparison, more specialized insects like bedbugs or termites. Feasting exclusively on blood, bedbugs no longer need sugar receptors. Most termites, which live in the dark, are blind.

Cockroaches, on the other hand, need eyes, sugar receptors, ways to survive nasty environments — you name it. As a result, “cockroaches have to have a very large repertoire of proteins, and therefore a lot of genes,” Dr. Schal said.

In the American cockroach, Dr. Li and collaborators annotated thousands of genes, including more than 1,000 thought to help the insect detect chemical cues from the environment. Among these are more than 300 genes associated with perceiving bitter tastes, which could help them decide which foods are safe.

The scientists also interfered with more than 20 genes thought to be related to immunity, reproduction and development, and found that doing so had damaging effects on the cockroaches.

Genes such as these are promising targets for future pest control methods, said Xavier Bellés, a research professor at the Institute of Evolutionary Biology in Barcelona. Such methods are already being developed for agricultural pests.

In terms of basic biology, comparing the genomes of primitive cockroaches and termites — which evolved from cockroaches — will allow scientists to learn more about eusociality, a rare phenomenon in which organisms cooperate through sophisticated division of labor, said Tanya Dapkey, an entomologist at the University of Pennsylvania who was not involved in the new research. Termites evolved eusociality long before other insects like ants and bees.

For now, Dr. Li is following up on the American cockroach’s extraordinary healing capabilities: cut a leg off, and the insect will quickly regenerate it.

His team is identifying the proteins and pathways involved in this process, with the hope that they can be harnessed for medical treatments. Cockroach extract has long been used in traditional Chinese medicine to speed healing on cuts and burns.

“We’ve uncovered the secret of why people call it ‘xiao qiang,’” he said. “Now we want to know the secrets of Chinese medicine.”

This article was originally published in The New York Times.