A history of big-headedness

“HOW the human got his brain” is probably the most important “Just So” story that Rudyard Kipling never wrote. Kipling did not ignore people in his quirky take on evolution. Two of his tales describe the invention of the alphabet and the invention of letter-writing. But he took for granted the human brains behind these inventions, which are three times the size of those of humanity’s closest living relatives, the great apes, and are thus as characteristic of people as trunks are of elephants or humps are of camels.

This week, though, sees the publication of two studies which, added together, form an important paragraph in the story of the human brain. Both concern a version of a gene called NOTCH2, which has been known for some time to be involved in embryonic development. Both point to an event in the past which changed the activity of this gene in the evolutionary line that leads to modern people. And both are supported by experiments which suggest that the change in question is crucial to the emergence of the big brains which distinguish human beings from all other living animal species.

The two studies, which were carried out independently, are published in Cell. One was by a team led by David Haussler, a bioinformatician at the University of California, Santa Cruz. The other was directed by Pierre Vanderhaeghen, a developmental biologist at the Free University of Brussels, in Belgium.

Dr Haussler stumbled on his discovery while comparing the development of the brain’s cortex in human beings and in macaques, a type of monkey. He and his colleagues found in humans what appeared to be several previously undiscovered versions of NOTCH2, alongside the established one. The new genes, which they refer to as NOTCH2NLs, were absent from the macaques and—as a search of genetic databases showed—from all other living animals except chimpanzees and gorillas. In these two great apes there were two NOTCH2NL genes, but they seemed to be inactive. The difference between apes and humans is that in the human line one of these NOTCH2NLs has now become active, and has multiplied to create three versions, known as A, B and C.

Crucially, this ABC pattern is replicated in the DNA of two extinct species of human, Neanderthals and Denisovans. By looking at minor differences between the various NOTCH-related genes in the three human species and the two great apes, the researchers were able to estimate when the active NOTCH2NL arose: 3m-4m years ago. That is when, according to the fossil record, the craniums of mankind’s ancestors started expanding.

To follow up this discovery Dr Haussler created what are known as organoids (specifically, brainoids), which are in vitro replicas of developing brains, made in this case using mouse cells. He used these to test the effects of adding or deleting his newly discovered genes. In the absence of NOTCH2NL, the organoids developed normally. With it added, stem cells in the organoid which would otherwise have generated new neurons divided instead to create more stem cells. The result, when those stem cells did eventually turn into neurons, was more neurons than normal, and thus a bigger organoid. In effect, NOTCH2NL had generated a larger brain.

Encouraged by this discovery, Dr Haussler and his colleagues performed one further test, with the co-operation of real human beings. These were people with macro- or microcephaly (unusually large or small brains). After testing the DNA of each of these volunteers, the team found that NOTCH2NL, though present in people with larger than average brains, was absent from those whose brains were abnormally small—confirming the suspicion that it is involved in the hypertrophication of human brains.

Cogito ergo sum

Unlike Dr Haussler, who came across his initial result serendipitously, Dr Vanderhaeghen set out from the start to find genes that are unique to people, are directly responsible for creating new brain cells in the cortex, are active and are specifically working to encourage the development of stem cells into neurons. The needle that emerged from this haystack of demands was the same set of NOTCH2NLs that Dr Haussler’s team had lit upon. Seeking confirmation of the genes’ function, Dr Vanderhaeghen introduced them into mouse embryos and found that the number of stem cells in the embryos’ brains was thereby increased. He then repeated the experiment using stem cells taken from human fetuses and got the same results as Dr Haussler’s team had observed in their organoids. Sure enough, NOTCH2NLs encouraged stem cells to proliferate without turning into neurons, increasing the total number of neurons generated.

Taken together, these two studies suggest that NOTCH2NL has played a crucial role in the tale of “How the human got his brain”. They do not, however, answer the question of why this happened. Mutations occur all the time. It is improbable that this was the first occasion in history on which something like NOTCH2NL had arisen. For NOTCH2NL to have prospered in the way that it did, natural selection would have had to favour it. Big brains, in other words, must have been useful in the context in which the mutation occurred.

What that context was is unclear. Though it is hard for human beings to contemplate the idea that big brains could ever be undesirable, small-brained animals do perfectly well without them. And big brains are expensive to maintain. Some calculations suggest humans could not afford them calorifically without the invention of cooking—a process that liberates otherwise indigestible nutrients. Humans now dominate Earth, but that was not true for most of the 3m-4m years since active NOTCH2NL arose and brain hypertrophication began. Until 10,000 years or so ago, when agriculture was adopted, humans were rare.

The ultimate cause of human brain expansion thus remains unknown. Tool-making is one explanation. A more intriguing theory is that human brains are the equivalent of brightly coloured plumage in birds, permitting the sexes to show off to each other what good mates they would make. Yet another idea, the Machiavellian-intelligence hypothesis, is that big brains enable people to manipulate others to their own advantage—a trick that the invention of language would also assist. Nor need manipulation be malevolent. Collaboration is also a form of manipulation.

These ideas are not, of course, mutually exclusive. Any or all of them may be correct. Whether human beings are big-brained enough to decide between them and thus complete the missing “Just So” story remains to be seen.

This article was originally published in The Economist.

Artificial intelligence will improve medical treatments

FOUR years ago a woman in her early 30s was hit by a car in London. She needed emergency surgery to reduce the pressure on her brain. Her surgeon, Chris Mansi, remembers the operation going well. But she died, and Mr Mansi wanted to know why. He discovered that the problem had been a four-hour delay in getting her from the accident and emergency unit of the hospital where she was first brought, to the operating theatre in his own hospital. That, in turn, was the result of a delay in identifying, from medical scans of her head, that she had a large blood clot in her brain and was in need of immediate treatment. It is to try to avoid repetitions of this sort of delay that Mr Mansi has helped set up a firm called Viz.ai. The firm’s purpose is to use machine learning, a form of artificial intelligence (AI), to tell those patients who need urgent attention from those who may safely wait, by analysing scans of their brains made on admission.

That idea is one among myriad projects now under way with the aim of using machine learning to transform how doctors deal with patients. Though diverse in detail, these projects have a common aim. This is to get the right patient to the right doctor at the right time.

In Viz.ai’s case that is now happening. In February the firm received approval from regulators in the United States to sell its software for the detection, from brain scans, of strokes caused by a blockage in a large blood vessel. The technology is being introduced into hospitals in America’s “stroke belt”—the south-eastern part, in which strokes are unusually common. Erlanger Health System, in Tennessee, will turn on its Viz.ai system next week.

The potential benefits are great. As Tom Devlin, a stroke neurologist at Erlanger, observes, “We know we lose 2m brain cells every minute the clot is there.” Yet the two therapies that can transform outcomes—clot-busting drugs and an operation called a thrombectomy—are rarely used because, by the time a stroke is diagnosed and a surgical team assembled, too much of a patient’s brain has died. Viz.ai’s technology should improve outcomes by identifying urgent cases, alerting on-call specialists and sending them the scans directly.

The AIs have it

Another area ripe for AI’s assistance is oncology. In February 2017 Andre Esteva of Stanford University and his colleagues used a set of almost 130,000 images to train some artificial-intelligence software to classify skin lesions. So trained, and tested against the opinions of 21 qualified dermatologists, the software could identify both the most common type of skin cancer (keratinocyte carcinoma), and the deadliest type (malignant melanoma), as successfully as the professionals. That was impressive. But now, as described last month in a paper in the Annals of Oncology, there is an AI skin-cancer-detection system that can do better than most dermatologists. Holger Haenssle of the University of Heidelberg, in Germany, pitted an AI system against 58 dermatologists. The humans were able to identify 86.6% of skin cancers. The computer found 95%. It also misdiagnosed fewer benign moles as malignancies.
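
To make concrete what those percentages measure, here is a minimal sketch in Python. It is not code from either study: the counts are hypothetical, chosen only so that the computed rates match the figures reported above.

```python
# Hypothetical illustration of the headline accuracy figures. The counts
# below are invented so the rates match the article; nothing here comes
# from the Stanford or Heidelberg studies themselves.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual cancers that a reader correctly flags."""
    return true_positives / (true_positives + false_negatives)

# Assume 1,000 malignant lesions in the test set.
print(f"Dermatologists: {sensitivity(866, 134):.1%}")  # 86.6% of cancers found
print(f"AI system:      {sensitivity(950, 50):.1%}")   # 95.0% of cancers found
```

The study's other finding, fewer benign moles misdiagnosed as malignant, corresponds to the complementary measure, specificity, computed the same way from the benign lesions.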

There has been progress in the detection of breast cancer, too. Last month Kheiron Medical Technologies, a firm in London, received news that a study it had commissioned had concluded that its software exceeded the officially required performance standard for radiologists screening for the disease. The firm says it will submit this study for publication when it has received European approval to use the AI—which it expects to happen soon.

This development looks important. Breast screening has saved many lives, but it leaves much to be desired. Overdiagnosis and overtreatment are common. Conversely, tumours are sometimes missed. In many countries such problems have led to scans being checked routinely by a second radiologist, which improves accuracy but adds to workloads. At a minimum Kheiron’s system looks useful for a second opinion. As it improves, it may be able to grade women according to their risks of breast cancer and decide the best time for their next mammogram.

Efforts to use AI to improve diagnosis are under way in other parts of medicine, too. In eye disease, DeepMind, a London-based subsidiary of Alphabet, Google’s parent company, has an AI that screens retinal scans for conditions such as glaucoma, diabetic retinopathy and age-related macular degeneration. The firm is also working on mammography.

Heart disease is yet another field of interest. Researchers at Oxford University have been developing AIs intended to interpret echocardiograms, which are ultrasonic scans of the heart. Cardiologists looking at these scans are searching for signs of heart disease, but can miss them 20% of the time. That means patients will be sent home and may then go on to have a heart attack. The AI, however, can detect changes invisible to the eye and improve the accuracy of diagnosis. Ultromics, a firm in Oxford, is trying to commercialise the technology, which could be rolled out in Britain later this year.

There are also efforts to detect cardiac arrhythmias, particularly atrial fibrillation, which increase the risk of heart failure and strokes. Researchers at Stanford University, led by Andrew Ng, have shown that AI software can identify arrhythmias from an electrocardiogram (ECG) better than an expert. The group has joined forces with a firm that makes portable ECG devices and is helping Apple with a study looking at whether arrhythmias can be detected in the heart-rate data picked up by its smart watches. Meanwhile, in Paris, a firm called Cardiologs is also trying to design an AI intended to read ECGs.

Seeing ahead

Eric Topol, a cardiologist and digital-medicine researcher at the Scripps Research Institute, in San Diego, says that doctors and algorithms are comparable in accuracy in some areas, but computers have the advantage of speed. This combination of traits, he reckons, will lead to higher accuracy and productivity in health care.

Artificial intelligence might also make medicine more specific, by being able to draw distinctions that elude human observers. It may be able to grade cancers or instances of cardiac disease according to their risks—thus, for example, distinguishing those prostate cancers that will kill quickly, and therefore need treatment, from those that will not, and can probably be left untreated.

What medical AI will not do—at least not for a long time—is make human experts redundant in the fields it invades. Machine-learning systems work on a narrow range of tasks and will need close supervision for years to come. They are “black boxes”, in that doctors do not know exactly how they reach their decisions. And they are inclined to become biased if insufficient care is taken over what they are learning from. They will, though, take much of the drudgery and error out of diagnosis. And they will also help make sure that patients, whether being screened for cancer or taken from the scene of a car accident, are treated in time to be saved.

This article was originally published in The Economist.

The Dilemma of the Gluten-Free Diet

It’s not unusual for the symptoms to set in after Lee Graham eats: severe stomach pain and worse.

“It’s always sort of a game of Russian roulette when you go out to eat,” says Ms. Graham, executive director of the National Celiac Association, a Needham, Mass.-based nonprofit that advocates for people with celiac disease.

What’s frustrating for Ms. Graham is that this can happen even when she’s eating what is supposed to be a gluten-free meal.

A new study in the American Journal of Clinical Nutrition shows that eating truly gluten-free is nearly impossible, underscoring the need for better treatments for patients with celiac disease.

The good news: About half a dozen potential treatments are in the works, ranging from a vaccine to a capsule designed to regulate the gut. But most are at least a couple of years from entering the market. And celiac patients would still have to maintain a gluten-free diet, which is currently the only answer for the disease.

Experts say up to 1% of the global population has celiac disease, an autoimmune condition in which people develop an immune reaction to gluten. Gluten is a protein that appears in any food containing wheat, barley or rye. The immune reaction results in inflammation and damage to the lining of the small intestine, which can lead to medical complications, such as acute stomach pain and failure to absorb nutrients.

The odds of getting celiac disease in the U.S. have increased four- to fivefold over four decades, says Peter Green, director of the Celiac Disease Center at Columbia University Medical Center, but have leveled off in recent years.

“There has been this increased rate of diagnosis as well, but there’s still a lot of people with celiac disease who don’t know they have it,” Dr. Green says. He was an author on another study looking at the global prevalence of celiac disease published in 2017 in the journal Clinical Gastroenterology and Hepatology.

Dr. Green says it’s unclear what the lowest level of gluten is that causes intestinal damage in patients. “About 30% of people don’t get better on a gluten-free diet,” he says.

Manufacturers of gluten-free food go through the painstaking task of trying to ensure that there’s no gluten in products.

The American Journal of Clinical Nutrition study used data from three prior clinical trials to estimate how much gluten 246 celiac patients were ingesting. The gluten measurements were based on either a stool or urine sample.

The study found that on average patients were ingesting 200 to 250 milligrams of gluten a day, says Jack Syage, CEO of ImmunogenX, a Newport Beach, Calif.-based biotechnology company, and first author on the study. Someone without celiac disease eats about 7,500 to 10,000 milligrams of gluten a day.

The study didn’t look at where the gluten comes from. Dr. Syage says researchers guess much of it is inadvertently consumed when food is contaminated during processing or preparation.

Those with celiac disease typically need to limit exposure to under 100 milligrams, but the threshold can vary depending on a person’s sensitivity, says Joseph Murray, a professor of gastroenterology at the Mayo Clinic in Rochester, Minn.

The U.S. Food and Drug Administration requires that packaged foods labeled gluten-free contain fewer than 20 parts per million of gluten—the equivalent of 20 milligrams of gluten in one kilogram of food. Gluten Free Watchdog, a group that tests packaged gluten-free foods, has found that foods test at or above 20 parts per million of gluten about 4% of the time.
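
The threshold arithmetic is easy to make concrete. The sketch below assumes illustrative portion sizes (they are not from the study) and compares the resulting upper bounds with the exposure figures quoted above.

```python
# Worked example of the FDA limit: 20 parts per million is at most 20 mg
# of gluten per kilogram of food. Portion sizes here are assumptions for
# illustration, not data from the study.

FDA_LIMIT_PPM = 20  # mg of gluten per kg of food

def max_gluten_mg(food_grams: float, ppm: float = FDA_LIMIT_PPM) -> float:
    """Upper bound on gluten (mg) in a portion of compliant food."""
    return food_grams / 1000 * ppm

print(f"300 g meal:   at most {max_gluten_mg(300):.0f} mg gluten")
print(f"2 kg per day: at most {max_gluten_mg(2000):.0f} mg gluten")
# Even 2 kg a day of compliant food yields at most 40 mg, below the
# roughly 100 mg daily threshold cited above and far below the 200-250 mg
# a day the study estimates patients actually ingest, which points to
# contamination from other sources.
```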

Dr. Murray is working with ImmunogenX on developing an enzyme called latiglutenase, designed to be taken with meals to help patients digest gluten. ImmunogenX acquired the enzyme mixture from Alvine Pharmaceuticals in 2016 after a Phase 2 trial failed to demonstrate healing of the small intestine. The study showed improvement of symptoms for a subgroup of celiac patients. The current ImmunogenX trial will focus on the 20% of celiac patients who have persistent symptoms while following a gluten-free diet.

The company is launching a final Phase 2 clinical trial in a few months. If successful, it would have to do a Phase 3 trial before applying to the FDA for approval as a drug. The earliest a commercially available drug could hit the market is late 2020.

Innovate Biopharmaceuticals in Raleigh, N.C., expects to launch a Phase 3 trial later this year for larazotide acetate, a drug in capsule form that would be taken before every meal.

Jay Madan, founder and president of the company, says the drug helps regulate the leakiness of the gut. Results from Phase 2 clinical trials showed improvement in abdominal symptoms. The drug isn’t absorbed by the body and was well tolerated in more than 500 patients.

ImmusanT, a Cambridge, Mass.-based biotechnology company, will enter into a Phase 2 clinical trial of its vaccine, Nexvax2, this year, says Leslie Williams, president and CEO of the company.

The vaccine doesn’t prevent celiac disease but is a therapeutic treatment akin to an allergy shot, she says. It would require self-injecting weekly to maintain non-responsiveness to gluten.

“Our initial approach is to protect patients against inadvertent exposure to gluten,” Dr. Williams says. “Ultimately we will see if we can get them to reintroduce gluten, too.”

Researchers will present the results of Phase 2 clinical trials of the drug AMG 714 in June, says Francisco Leon, the co-founder and former CEO of Celimmune, which was recently acquired by Amgen.

Dr. Leon, who now works as a consultant to Amgen, says the drug is an antibody that blocks interleukin-15, a protein that stimulates the immune system in the gut, causing damage and gastrointestinal symptoms.

Researchers tested the drug in a pair of Phase 2 studies. One featured patients with refractory celiac disease type 2, the most severe form of celiac disease. It affects about 1 in 200 celiac patients and is considered a lymphoma of the gut. Doctors also tested the drug with a larger population of celiac disease patients.

Several other treatments are in earlier stages of development. Cour Pharmaceutical in Chicago and Takeda Pharmaceutical in Japan are using nanotechnology to try to reprogram the body’s immune system to enable patients to develop a tolerance to gluten and potentially reverse symptoms of the disease.

The companies launched a Phase 1 trial in February which they expect to complete in March 2019.

This article was originally published in The Wall Street Journal.

A Crispr Conundrum: How Cells Fend Off Gene Editing

Human cells resist gene editing by turning on defenses against cancer, ceasing reproduction and sometimes dying, two teams of scientists have found.

The findings, reported in the journal Nature Medicine, at first appeared to cast doubt on the viability of the most widely used form of gene editing, known as Crispr-Cas9 or simply Crispr, sending the stocks of some biotech companies into decline on Monday.

Crispr Therapeutics fell by 13 percent shortly after the scientists’ announcement. Intellia Therapeutics dipped, too, as did Editas Medicine. All three are developing medical treatments based on Crispr.

But the scientists who published the research say that Crispr remains a promising technology, if a bit more difficult than had been known.

“The reactions have been exaggerated,” said Jussi Taipale, a biochemist at the University of Cambridge and an author of one of two papers published Monday. The findings underscore the need for more research into the safety of Crispr, he said, but they don’t spell its doom.

“This is not something that should stop research on Crispr therapies,” he said. “I think it’s almost the other way — we should put more effort into such things.”

Crispr has stirred strong feelings ever since it came to light as a gene-editing technology five years ago. Already, it’s a mainstay in the scientific tool kit.

The possibilities have led to speculations about altering the human race and bringing extinct species back to life. Crispr’s pioneers have already won a slew of prizes, and titanic battles over patent rights to the technology have begun.

To edit genes with Crispr, scientists craft molecules that enter the nucleus of a cell. They zero in on a particular stretch of DNA and slice it out.

The cell then repairs the two loose ends. If scientists add another piece of DNA, the cell may stitch it into the place where the excised gene once sat.

Recently, Dr. Taipale and his colleagues set out to study cancer. They used Crispr to cut out genes from cancer cells to see which were essential to cancer’s aggressive growth.

For comparison, they also tried to remove genes from ordinary cells — in this case, a line of cells that originally came from a human retina. But while it was easy to cut genes from the cancer cells, the scientists did not succeed with the retinal cells.

Such failure isn’t unusual in the world of gene editing. But Dr. Taipale and his colleagues decided to spend some time figuring out exactly why they were failing.

They soon discovered that one gene, p53, was largely responsible for preventing Crispr from working.

p53 normally protects against cancer by preventing mutations from accumulating in cellular DNA. Mutations may arise when a cell tries to fix a break in its DNA strand. The process isn’t perfect, and the repair may be faulty, resulting in a mutation.

When cells sense that the strand has broken, the p53 gene may swing into action. It can stop a cell from making a new copy of its genes. Then the cell may simply stop multiplying, or it may die. This helps protect the body against cancer.

If a cell gets a mutation in the p53 gene itself, however, the cell loses the ability to police itself for faulty DNA. It’s no coincidence that many cancer cells carry disabled p53 genes.

Dr. Taipale and his colleagues engineered retinal cells to stop using p53 genes. Just as they had predicted, Crispr now worked much more effectively in these cells.

A team of scientists at the Novartis Institutes for Biomedical Research in Cambridge, Mass., got similar results with a different kind of cells, detailed in a paper also published Monday.

They set out to develop new versions of Crispr to edit the DNA in stem cells. They planned to turn the stem cells into neurons, enabling them to study brain diseases in Petri dishes.

Someday, they hope, it may become possible to use Crispr to create cell lines that can be implanted in the body to treat diseases.

When the Novartis team turned Crispr on stem cells, however, most of them died. The scientists found signs that Crispr had caused p53 to switch on, so they shut down the p53 gene in the stem cells.

Now many of the stem cells survived having their DNA edited.

The authors of both studies say their results raise some concerns about using Crispr to treat human disease.

For one thing, the anticancer defenses in human cells could make Crispr less efficient than researchers may have hoped.

One way to overcome this hurdle might be to put a temporary brake on p53. But then extra mutations may sneak into our DNA, perhaps leading to cancer.

Another concern: Sometimes cells spontaneously acquire a mutation that disables the p53 gene. If scientists use Crispr on a mix of cells, the ones with disabled p53 genes are more likely to be successfully edited.

But without p53, these edited cells would also be more prone to gaining dangerous mutations.

One way to eliminate this risk might be to screen engineered cells for mutant p53 genes. But Steven A. McCarroll, a geneticist at Harvard University, warned that Crispr might select for other risky mutations.

“These are important papers, since they remind everyone that genome editing isn’t magic,” said Jacob E. Corn, scientific director of the Innovative Genomics Institute in Berkeley, Calif.

Crispr doesn’t simply rewrite DNA like a word processing program, Dr. Corn said. Instead, it breaks DNA and coaxes cells to put it back together. And some cells may not tolerate such changes.

While Dr. Corn said that rigorous tests for safety were essential, he doubted that the new studies pointed to a cancer risk from Crispr.

The particular kinds of cells that were studied in the two new papers may be unusually sensitive to gene editing. Dr. Corn said he and his colleagues have not found similar problems in their own research on bone marrow cells.

“We have all been looking for the possibility of cancer,” he said. “I don’t think that this is a warning for therapies.”

“We should definitely be cautious,” said George Church, a geneticist at Harvard and a founding scientific adviser at Editas.

He suspected that p53’s behavior would not translate into any real risk of cancer, but “it’s a valid concern.”

And those concerns may be moot in a few years. The problem with Crispr is that it breaks DNA strands. But Dr. Church and other researchers are now investigating ways of editing DNA without breaking it.

“We’re going to have a whole new generation of molecules that have nothing to do with Crispr,” he said. “The stock market isn’t a reflection of the future.”

Hot Heads: Why Mammals Need R.E.M. Sleep

On a December evening in 1951, Eugene Aserinsky, a physiologist at the University of Chicago, placed electrodes on the scalp of his 8-year-old son, Armond, before putting him to bed. Then the scientist retired to another room to watch a row of pens quiver across a rolling sheet of paper, recording the electrical activity in the boy’s facial muscles.

Hours later, the pens started to swing wildly. To judge from the chart, it seemed as if Armond were awake, his eyes darting about the room. But when Aserinsky looked in on him, his son was fast asleep.

Aserinsky had discovered R.E.M. sleep.

Eventually he and other researchers learned that during this state, the brain shifts from low-frequency to high-frequency electrical waves, like those produced in waking hours. When Aserinsky woke his subjects from R.E.M. sleep, they often reported vivid dreams.

Almost all mammals experience R.E.M. sleep, but even today researchers debate why it exists. On Thursday, a team of American and Russian researchers reported that fur seals may provide an important clue.

While they swim, fur seals switch off R.E.M. sleep entirely. It returns when they come back to land — a pattern never seen before.

Jerome M. Siegel, a sleep expert at the University of California, Los Angeles, and a co-author of the new study published Thursday in Current Biology, said that the seals provide evidence that our brains switch to R.E.M. sleep from time to time to generate heat in our skulls.

“R.E.M. sleep is like shivering for the brain,” he said.

Many scientists have argued that our brains require R.E.M. sleep each night to function properly. One clue comes from experiments in which researchers deprive rats of R.E.M. sleep for a few days.

As soon as the rats can sleep normally again, they experience a “rebound,” spending more time each night in R.E.M. — as if they need to catch up.

Some studies have suggested that the brain needs R.E.M. sleep to keep its metabolism in balance. Rats deprived of R.E.M. will eat more, and yet they also will lose weight.

This disruption can be lethal. “If you deprive rats of R.E.M. sleep, they’ll die in two weeks,” said Dr. Siegel.

But other findings have raised doubts about the importance of R.E.M. Certain types of antidepressant drugs reduce R.E.M. sleep in users, for example, without evidence of harm.

R.E.M. isn’t even essential for dreaming. Researchers have found that people also dream during periods of so-called slow-wave sleep.

Some of the most puzzling evidence about R.E.M. sleep has come from the sea.

In the 1970s, a Russian biologist named Lev M. Mukhametov placed electrodes on the heads of dolphins. He discovered that they can put one side of the brain to sleep as they swim while the other side remains alert. Then they can switch, putting the other hemisphere to sleep.

But as hard as Dr. Mukhametov and his colleagues looked, they never found a dolphin in R.E.M. sleep.

In the 1990s, Dr. Siegel and Dr. Mukhametov started collaborating on studies of relatives of dolphins and found the hemisphere-switching sleep pattern in other species, such as gray whales.

More recently, the scientists wondered what they might find if they looked at a species between the two ends of the spectrum: a mammal that regularly slept both at sea and on land.

The researchers decided to study four fur seals. The animals spend weeks or months swimming in the ocean, but they come on land to mate and rear their young.

Oleg I. Lyamin, a neuroscientist who splits his time between U.C.L.A. and the Severtsov Institute of Ecology and Evolution in Moscow, implanted electrodes in the seals and strapped data recorders to their backs.

The fur seals lived in a pool where they could swim around or haul themselves onto a dry platform. After two days of recordings, the researchers took away the platform.

For up to two weeks, the seals could only swim in the pool. Then the researchers put the platform back, allowing the fur seals to doze out of the water again.

On the platform, the researchers found, the fur seals slept much as land mammals do. Their entire brains slipped into slow-wave sleep, interrupted from time to time by periods of R.E.M.

But when the seals had to sleep in the water, the brain patterns resembled those of dolphins. Only one hemisphere of their brain slept at a time. What’s more, the fur seals experienced almost no R.E.M. sleep.

“The R.E.M. sleep pretty much goes to zero and stays there as long as they’re in the water,” said Dr. Siegel.

When the seals got back on the platform, ordinary R.E.M. sleep returned. Their long spell of R.E.M.-free sleep did them no apparent harm, and they didn’t experience any R.E.M.-sleep rebound.

The results undermine the idea that R.E.M. sleep is essential to mammals, like food and water, Dr. Siegel said. In fact, the earlier studies on R.E.M. deprivation might not have been as compelling as they once seemed.

In those earlier studies, researchers kept animals from going into R.E.M. sleep by waking them up. “In some experiments, they wake up the animals a thousand times a day,” Dr. Siegel said.

Fur seals in various states of repose on the Pribilof Islands in Alaska. When they sleep in the water, their brain patterns resemble those of dolphins, but when they sleep on land, R.E.M. sleep returns. Credit: Enrique R. Aguirre Aves, via Getty Images

The stress of being awakened over and over could have done the animals harm, rather than just the lack of R.E.M. sleep in particular.

A more telling clue about R.E.M. sleep can be found in human behavior, Dr. Siegel thinks. When people wake up on their own, they tend to move out of R.E.M. sleep and become alert. Those awakened from slow-wave sleep are groggy and disoriented.

Dr. Siegel and his colleagues propose that the brain cools during slow-wave sleep. To keep the brain from getting too cold, however, the brain periodically unleashes a torrent of activity. Oxygen-rich blood flows into the brain to fuel the activity, warming the brain in the process.

“It keeps the brain temperature within a functional limit by cycling on and off the same way your heater in your house might do at night,” Dr. Siegel said.
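
Dr. Siegel's thermostat analogy maps onto a textbook on/off ("bang-bang") controller. The toy simulation below is purely illustrative; the temperatures and rates are arbitrary assumptions, not measured values from the study.

```python
# Toy model of the proposed REM "furnace": slow-wave sleep cools the brain,
# REM warms it, and REM switches on and off at fixed bounds. All numbers
# are arbitrary assumptions for illustration.

def brain_temp_after(hours: int = 8, temp: float = 37.0,
                     low: float = 36.5, high: float = 37.0) -> float:
    rem = False
    for _ in range(hours * 60):          # simulate in one-minute steps
        temp += 0.02 if rem else -0.01   # REM warms; slow-wave sleep cools
        if temp <= low:
            rem = True                   # too cool: switch REM on
        elif temp >= high:
            rem = False                  # warm enough: switch REM off
    return temp

print(f"temperature after a night's sleep: {brain_temp_after():.2f} C")
```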

This article was originally published in The New York Times.

Secrets of the Y Chromosome

In advance of Father’s Day, let’s take a moment to sort out the differences and similarities between “Dad jeans” and “Dad genes.”

Dad jeans are articles of sex-specific leisure clothing, long mocked for being comfy, dumpy and elastic-waisted but lately reinvented as a fashion trend, suitable for male bodies of all shapes and ages.

Dad genes are particles on the sex-specific Y chromosome, long mocked for being a stunted clump of mostly useless nucleic waste but lately revealed as man’s fastest friend, essential to the health of male bodies and brains no matter the age.

Yes, dear fathers and others born with the appurtenances generally designated male. We live in exciting times, and that includes novel insights into the sole chromosomal distinction between you and the women now prowling the aisles at the hardware store. (“Didn’t he say he could use a new bow saw? Or some halogen light bulbs?”)

Researchers have discovered that, contrary to longstanding assumptions, the Y chromosome is not limited to a handful of masculine tasks, like specifying male body parts in a developing embryo or replenishing the sperm supply in an adult man.

New evidence indicates that the Y chromosome participates in an array of essential, general-interest tasks in men, like stanching cancerous growth, keeping arteries clear and blocking the buildup of amyloid plaque in the brain.

As a sizable percentage of men age, their blood and other body cells begin to spontaneously jettison copies of the Y chromosome, sometimes quickly, sometimes slowly. That unfortunate act of chromosomal decluttering appears to put the men at a heightened risk of Alzheimer’s disease, leukemia and other disorders.

“I’m quite certain,” said Lars Forsberg, an associate professor of medical genetics at Uppsala University in Sweden, “that the loss of the Y chromosome with age explains a very large proportion of the increased mortality in men, compared to women.”

Other researchers are tracing the evolution of the Y chromosome and comparing the version found in modern men with those of our close relatives, both living and extinct.

Takeaway A: We can drop the man-equals-caveman caricature. Although human DNA has been found to contain vestiges of our dalliances with Neanderthals from about 50,000 years ago, none of those genomic imprints are on the human Y chromosome.

By the look of it, something specific to the Neanderthal Y chromosome ultimately proved inimical to human health and survival, and so any trace of the Neanderthal Y chromosome was ejected from the human gene pool like a poorly matched kidney.

The immune system analogy may be particularly apt. Fernando Mendez, a geneticist, and his colleague Carlos Bustamante of Stanford University reported that one of the notable differences between the human and Neanderthal Y chromosomes lies in a gene linked to transplant rejection.

Whatever the reason for the purification of the human Y over time, women’s equivalent X chromosome does not appear to have been similarly cleansed, with the result that women on average may be slightly more Neanderthal than men, which could explain our comparative fondness for animal print shoes.

Yes, but aperçu B: Hang on to the gorilla suit. From a global genomic perspective, our closest living relative is the chimpanzee, followed by the gorilla. When it comes to the Y chromosome, however, humans look considerably more Magilla than Bonzo.

Kateryna Makova, director of the Center for Medical Genomics at Penn State University, and her colleagues recently determined that if you line up a man’s Y chromosome with a chimpanzee’s, only about 70 percent of the two spans will stick together. Align a human Y with a gorilla’s, and 83 percent of the paired chromosomes will comfortably conjoin.

Looking at nine distinct sets of genes that have been identified on the human Y chromosome, Dr. Makova said, “eight of them are shared with the gorilla, while only six gene families are shared with the chimpanzee. It’s very surprising.”

The researchers propose that the observed patterns could be the result of mating practices. Among gorillas, fertile females generally mate with one male at a time — the local silverback. Women, too, are mostly, though by no means unerringly, monogamous.

By contrast, female chimpanzees mate wildly and promiscuously during each ovulatory cycle. As a rule, female promiscuity promotes sperm competition among males, and because the Y chromosome oversees sperm production, Dr. Makova said, the chimpanzee Y is likely evolving at hyperspeed to keep up.

David Page of the Whitehead Institute in Cambridge, Mass., a world authority on the male sex chromosome who could well be called the Y Guy, believes the Y and the X “each deserve a full novel of their own.”

Whether in the double-X format that specifies a female fetus, or the X and Y pair found in males, the sex chromosomes stand apart from the other 22 normal chromosome pairs, or autosomes, that constitute the complete human genome and that are stuffed into nearly every cell nucleus of the body.

That tendency toward molecular aloofness led to the initial designation of the female chromosome as “X,” for strange or unknown; the Y was simply named for the next letter in the alphabet.

The Y chromosome is a true chromosomal outlier, holding a fraction of the number of genes found on all the other chromosomes, including the X. Its genetic impoverishment is a legacy of its role in sex determination.

Among our pre-mammalian forebears, an offspring’s sex was dictated as it is today in crocodiles and turtles: not by genetics, but by temperature.

Among turtles, if an egg develops in warm conditions, the embryo turns female. If it’s cooler outside, the embryo becomes male.

But with the rise of internal gestation and its uniform weather conditions, embryos needed another clue for sex development. That demand led to the evolution of the male sex-determination gene, called SRY, and the related need to keep the male and female genetic programs segregated.

As a result, the Y chromosome on which SRY was located could no longer freely recombine and swap its pieces with its corresponding X chromosome, as the other chromosomal pairs do to freshen things up whenever a new egg or sperm cell is created.

Lacking the standard repair system of chromosomal recombination, genes on the Y chromosome began to decay and were eventually tossed out or reassigned to other chromosomes.

“The erection of ‘trade barriers’ allowed X and Y to follow divergent paths,” Dr. Page said. “The X chromosome could continue to recombine with another X chromosome in the making of eggs, but the Y chromosome followed an isolationist strategy, which led to its rapid decline.”

It’s not total isolationism: The tips of the X and Y chromosome can still swap pieces, but most of Y is off limits to trans-chromosomal barter and amendments.

“There’s a striking loneliness to the Y chromosome,” said George Vassiliou of the Wellcome Trust Sanger Institute and Cambridge University.

Nevertheless, the Y still has powers to divulge. After speculation in the 1990s that the Y chromosome was still shrinking and might someday vanish altogether — leaving who knows what sex determination protocol in its wake — scientists are now confident the chromosomal attrition has ended.

“It’s dynamic but stable,” said Melissa Wilson Sayres, who studies sex chromosomes at Arizona State University. “It may lose a gene or two, but it may also gain sequences. It’s not a dead end.”

Moreover, new research indicates that the Y chromosome can patch up some internal problems without benefit of free trade and recombination with the X — by shuffling around duplicate copies of genes on its own lonely span.

The Y also holds a host of genes that have yet to be fully appreciated or understood.

Dr. Vassiliou and his colleagues reported last month on a Y-specific gene called UTY that protects against leukemia in mice and likely performs a similar role in men. The chromosome more generally is committed to its bearer’s health and persistence.

Dr. Forsberg of Uppsala University and his colleague Jan Dumanski have published a series of papers about the phenomenon called L.O.Y., or loss-of-Y, in which men’s blood and other cells mysteriously start shedding their Y chromosomes with age.

Smoking hastens the depletion of the Y chromosome in men’s blood cells, the researchers have found. Men with a high percentage of Y-free cells — 10 percent or more — are at a heightened risk of dying in the near future, compared with similarly aged men whose cells have hung onto their Y’s.

And men with Alzheimer’s disease are more likely to be L.O.Y. men than are their non-demented cohorts.

The researchers propose that a weakening of the immune system may explain the many perils of L.O.Y. When white blood cells that serve as immune sentries lose their Y chromosome, Dr. Dumanski said, their surveillance skills falter.

They fail to clean up messes on arterial walls or to spot cancer cells in need of destruction. They allow plaques and tangles to accrete in the brain.

Dr. Dumanski admitted that the association between the loss of Y and disease has yet to be definitively proved, and that much remains to be understood about what’s driving the chromosomal loss in blood cells and how it might be stopped.

“It’s still early, and there’s still a lot of skepticism,” he said. “It will take a couple more years before the idea is widely accepted, but we are quite convinced ourselves that we are right.”

At which he laughed — and admitted his extreme self-confidence could well be the result of the Y chromosome that made him a man.

This article was originally published in The New York Times.

The Search for Cancer Treatment Beyond Mutant-Hunting

On my way to a meeting on cancer and personalized medicine a few weeks ago, I found myself thinking, improbably, of the Saul Steinberg New Yorker cover illustration “View From Ninth Avenue.” Steinberg’s drawing (yes, you’ve seen it — in undergraduate dorm rooms, in subway ads) depicts a mental map of the world viewed through the eyes of a typical New Yorker. We’re somewhere on Ninth Avenue, looking out toward the water. Tenth Avenue looms large, thrumming with pedestrians and traffic. The Hudson is a band of gray-blue. But the rest of the world is gone — irrelevant, inconsequential, specks of sesame falling off a bagel. Kansas City, Chicago, Las Vegas and Los Angeles are blips on the horizon. There’s a strip of water denoting the Pacific Ocean, and faraway blobs of rising land: Japan, China, Russia. The whole thing is a wry joke on self-obsession and navel gazing: A New Yorker’s world begins and ends in New York.

In the mid-2000s, it felt to me, at times, as if cancer medicine were viewing the world from its own Ninth Avenue. Our collective vision was dominated by genomics — by the newfound capacity to sequence the genomes of cells (a “genome” refers to the complete set of genetic material present in an organism or a cell). Cancer, of course, is typically a disease caused by mutant genes that drive abnormal cellular growth (other features of cellular physiology, like the cell’s metabolism and survival, are also affected). By identifying the mutant genes in cancer cells, the logic ran, we would devise new ways of killing the cells. And because the exact set of mutations was unique to an individual patient — one woman’s breast cancer might have mutations in 12 genes, while another breast cancer might have mutations in a different set of 16 — we would “personalize” cancer medicine to that patient, thereby vastly increasing the effectiveness of therapy.

This kind of thinking had an exhilarating track record. In the 2000s, a medicine called Herceptin was shown to be effective for women with breast cancer, but only if the cancer cells carried a genetic aberration in a gene called HER-2. Another drug, Gleevec, worked only if the tumor cells had a mutant gene called BCR-ABL, or a mutation in a gene called c-kit. In many of our genome-obsessed minds, the problem of cancer had become reduced to a rather simple, scalable algorithm: find the mutations in a patient, and match those mutations with a medicine. All the other variables — the cellular environment within which the cancer cell was inescapably lodged, the metabolic and hormonal milieu that surrounded the cancer or, for that matter, the human body that was wrapped around it — might as well have been irrelevant blobs receding in the distance: Japan, China, Russia.

To bring the promise of mutation-directed therapies to life, researchers began two kinds of trials. The first was called a “basket trial,” in which different forms of cancer (e.g., lung, breast and stomach) containing the same mutations were treated with the same drug — in essence, lumping genetically similar cancers into the same “basket.” The obverse of the basket trial was an “umbrella trial.” Here, one kind of cancer — say, lung cancer or melanoma — was divided into different subtypes based on genetic mutations, and each subtype was targeted by a different medicine. Under a seemingly common umbrella — lung cancer, say — genetically distinct tumors would be treated with therapeutically distinct drugs.
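
The two designs are easy to tell apart when sketched as grouping logic. The patient records and drug labels below are hypothetical; the mutation names are borrowed from the trials discussed next (vemurafenib targets mutant BRAF, and BATTLE-2 focused on K-ras).

```python
# Schematic of basket vs. umbrella trials with made-up patient records.
# Drug labels and records are hypothetical; the mutation names are real
# genes mentioned in connection with the trials described in the article.

patients = [
    {"cancer": "lung",    "mutation": "BRAF"},
    {"cancer": "colon",   "mutation": "BRAF"},
    {"cancer": "thyroid", "mutation": "BRAF"},
    {"cancer": "lung",    "mutation": "KRAS"},
]

# Basket: one shared mutation gathers many cancer types onto one drug.
basket = [p for p in patients if p["mutation"] == "BRAF"]

# Umbrella: one cancer type fans out to different drugs by mutation.
arms = {"BRAF": "drug A", "KRAS": "drug B"}  # hypothetical treatment arms
umbrella = [(p["cancer"], p["mutation"], arms[p["mutation"]])
            for p in patients if p["cancer"] == "lung"]

print(f"basket trial enrolls {len(basket)} patients across cancer types")
print(f"umbrella arms for lung cancer: {umbrella}")
```

In this framing, the BATTLE-2 study described below was an umbrella: lung-cancer patients fanned out to four drug combinations according to their genotypes.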

Basket trials worked — somewhat. In one landmark study published in 2015, 122 patients with several different types of cancer — lung, colon, thyroid — were found to have the same mutation in common, and thus treated with the same drug, vemurafenib. The drug worked in some cancers — there was a 42 percent response rate in lung cancer — but not at all in others: Colon cancers had a 0 percent response rate. More recent basket trials with newer drugs have demonstrated striking, even durable, response rates, although the mutations targeted by the drugs are relatively rare across all human cancers.

And the umbrella trials? The record here was also mixed — and, to some, disappointing. In the so-called BATTLE-2 study, patients with lung cancer were divided into different groups based on gene sequencing, and each group was treated with four different drug combinations. The hope was that patients with tumors that contained a mutant version of a gene called K-ras would be uniquely susceptible to one particular drug combination (preclinical data, gathered in mice, suggested that this combination would be potent in these patients). But the laborious strategy deployed in this study — biopsying the tumor, sequencing it and then dividing the patients into mutation-guided treatments — provided few novel therapeutic inroads. In general, patients carrying mutations in the K-ras gene, a key driver of cancer growth, did not survive longer when given the combined drug therapy. “Ultimately,” one reviewer commented, “the trial failed to identify any new promising treatments.” Sequencing, it seemed, had made us none the wiser about treatment.

The disappointments of these early studies fueled public criticisms of precision medicine. Perhaps we had been seduced by the technology of gene sequencing — by the sheer wizardry of being able to look at a cancer’s genetic core and the irresistible desire to pierce that core with targeted drugs. “We biomedical scientists are addicted to data, like alcoholics are addicted to cheap booze,” Michael Yaffe, a cancer biologist from M.I.T., wrote in the journal Science Signaling. “As in the old joke about the drunk looking under the lamppost for his lost wallet, biomedical scientists tend to look under the sequencing lamppost where the ‘light is brightest’ — that is, where the most data can be obtained as quickly as possible. Like data junkies, we continue to look to genome sequencing when the really clinically useful information may lie someplace else.”

It’s that vision of “someplace else” — a view of the world beyond Ninth Avenue — that oncologists and patients are now seeking. Mutations within a cancer cell certainly carry information about its physiology — its propensity for growth, its vulnerabilities, its potential to cause lethal disease — but there’s a world of information beyond mutations. To grow and flourish within its human host, the cancer cell must co-opt dozens, or even thousands, of nonmutant genes to its purpose — turning these genes “on” and “off,” like a pathological commander who has hijacked a ship and is now using all its normal gears and levers to take a new, malignant course. And the cell must live in a particular context within its host — dodging the immune system, colonizing some tissues and not others, metastasizing to very particular sites: bones but not kidneys for some cancers; liver but not the adjacent spleen for others. What if the “really clinically useful information” lies within these domains — in the networks of normal genes co-opted by cancer cells, in the mechanisms by which they engage with their host’s immune system or in the metabolic inputs that a cell needs to integrate in order to grow?

At the annual meeting of the American Society of Clinical Oncology (ASCO) in Chicago this year, it was this altered — and more expansive — vision of precision cancer medicine that was on display. Perhaps the most significant among the presented studies was a very large clinical trial that identified breast cancers that were unlikely to benefit from chemotherapy based on information carried by patterns of gene expression — not single gene mutations — in cancer cells. By identifying tumors that carry these “safer” genetic fingerprints, the study hopes to reduce the use of toxic, expensive — and ineffective — chemo for tens of thousands of women every year. This, too, is precision medicine: Our capacity to find women who should not be lumped into the basket of standard chemotherapy must rank among one of the most worthwhile goals of personalized cancer therapy. Other teams at ASCO reported responses to new generations of drugs that enable the immune system to attack certain cancers, beginning an intensive search for biological markers on cancer cells that predict which tumors are likely to respond (hint: It may not be a single gene mutation).

The point is that precision medicine is not just precision mutant-hunting. It may be decidedly low-tech and may apply to conditions other than cancer. In orthopedics, precision medicine might involve finding an anatomical variant in some shoulders that have sustained fractures, say, that predicts that conventional shoulder surgery will not succeed for those patients. It might invoke gene sequencing again — but this time with computational algorithms that use combinations of genes to predict outcomes (Does A plus B without C predict a response to a drug?). Or it might skip gene sequencing altogether: In my own laboratory, a postdoctoral researcher is trying to grow cancer cells from individual patients in the form of tiny “organoids” — three-dimensional cellular structures that recapitulate living tumors — and testing thousands of drugs to find ones that might work in these organoids before deploying them on patients.

These strategies must, of course, be tested in randomized clinical trials to see if they provide benefit. Can they be deployed at reasonable costs? Will the benefits have an impact on a public scale? But the reinvention of cancer therapy needs time, patience and diligence — and, yes, skepticism. By narrowing our definition of precision medicine too much, we almost narrowed our ambition to deliver precise, thoughtful therapy — or, at times, no therapy — to our patients. It would be a shame to view cancer through such narrow lenses again.

This article was originally published in The New York Times.

Mammals Go Nocturnal in Bid to Avoid Humans

Humans, it turns out, can annoy more than just one another. In fact, some animal populations are escaping their Homo sapiens cohabitants by sleeping more during the day, a new study finds.

Mammals across the globe are becoming increasingly nocturnal to avoid humans’ expanding presence, according to the study, published Thursday in Science magazine. The findings show that humans’ presence alone can cause animals across continents — including coyotes, elephants and tigers — to alter their sleep schedules.

“We’re just beginning to scratch the surface on how these behavioral changes are affecting entire ecosystems,” said Kaitlyn Gaynor, an ecologist and graduate student in environmental science at the University of California, Berkeley, who led the study.

Previous research has found that mammals went from being nocturnal to being active during both day and night about 65.8 million years ago, roughly 200,000 years after most dinosaurs went extinct. “Species for millions of years have been adapting to diurnal activity, but now we’re driving them back into the night and may be driving natural selection,” Ms. Gaynor said in an interview.

The researchers compiled data from 76 studies of 62 species living on six continents in reaching their conclusions. On average, human disruption is making these animals 1.36 times more nocturnal, according to the study.

“For example,” it says, “an animal that typically split its activity evenly between the day and night would increase its proportion of nocturnal activity to 68 percent of total activity near human disturbance.”
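
The arithmetic behind that example is a simple ratio: the study's 1.36 figure multiplies an animal's baseline nocturnal share of activity. A minimal sketch, illustrative only and not the authors' code:

```python
# The study's measure is a ratio of the nocturnal share of activity near
# humans to the nocturnal share in undisturbed conditions. This reproduces
# the paper's worked example; the function itself is just an illustration.

def nocturnal_share_near_humans(baseline_share: float,
                                ratio: float = 1.36) -> float:
    """Nocturnal share of total activity under human disturbance."""
    return baseline_share * ratio

# An animal that splits its activity evenly between day and night:
print(f"{nocturnal_share_near_humans(0.50):.0%} of activity moves to night")  # 68%
```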

In California’s Santa Cruz mountains, for example, coyotes are opting to sleep more during the day in response to recreational human activities such as hiking and bicycling. As a result, coyotes are eating more nocturnal prey, whose waking hours match up more closely with theirs. Recent research such as this was used to provide data for the new study, Ms. Gaynor said.

Thousands of miles from these night-walking coyotes, tigers living in Nepal at the base of the Himalayas are making similar lifestyle decisions. To avoid contact with humans traversing their favorite forest trails, tigers are increasingly walking the same paths during the night instead, said Neil Carter, who researched this tiger population and co-authored the new study.

This isn’t necessarily a bad thing. “The optimist in me is saying there’s a pathway for coexistence here in an otherwise challenging landscape,” said Dr. Carter, an assistant professor at Boise State University. “What we don’t know is how that might negatively affect tigers.”

Future research might strive to show how these animals’ diets, reproductive patterns and mating behavior are being affected by humans, he said.

Humans do not necessarily need to exhibit violent or blatantly destructive behavior to evoke this fear response in animals; often, our simple presence is enough, Ms. Gaynor said. Her own research in Mozambique showed that elephants that typically ate human-grown crops, like maize, were avoiding areas that humans inhabit during the day, but came out after sundown in full force.

Ms. Gaynor said improving technology, such as infrared cameras that can capture vivid images of animals at night and GPS collars that track their whereabouts, has helped to document the trend toward nocturnal existence.

“Working on this study reminds me that we aren’t alone on this planet,” she said. “Being mindful of the ways our activities are shaping the animals’ habitat will enable coexistence.”

This article was originally published in The New York Times.

The Dangers of Belly Fat

If you do nothing else today to protect your health, consider taking an honest measurement of your waist. Stand up straight, exhale (no sucking in that gut!) and with a soft tape measure record your girth an inch or two above your hip bones.

The result has far greater implications than any concerns you might have about how you look or how your clothes fit. In general, if your waist measures 35 or more inches for women or 40 or more inches for men, chances are you’re harboring a potentially dangerous amount of abdominal fat.
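Those cutoffs are easy to encode as a rough self-check. A minimal sketch — the 35-inch and 40-inch thresholds are the only values taken from the text, and the function itself is purely illustrative, not medical advice or a validated risk model:

```python
# Toy screening check using the waist-circumference cutoffs quoted above
# (35 inches for women, 40 inches for men). Illustrative only.

def waist_flags_risk(waist_inches: float, sex: str) -> bool:
    """Return True if the waist measurement meets or exceeds the cutoff."""
    cutoff = 35.0 if sex == "female" else 40.0
    return waist_inches >= cutoff

print(waist_flags_risk(36.5, "female"))  # True: at or above 35 inches
print(waist_flags_risk(38.0, "male"))    # False: below 40 inches
```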

Subcutaneous fat that lurks beneath the skin as “love handles” or padding on the thighs, buttocks or upper arms may be cosmetically challenging, but it is otherwise harmless. However, the deeper belly fat — the visceral fat that accumulates around abdominal organs — is metabolically active and has been strongly linked to a host of serious disease risks, including heart disease, cancer and dementia.

You don’t even have to be overweight or obese to face these hazards if you harbor excess fat inside your abdomen. Even people of normal weight can accumulate harmful amounts of hidden fat beneath the abdominal wall. Furthermore, this is not fat you can shed simply by toning up abdominal muscles with exercises like situps. Weight loss through a wholesome diet and exercise — activities like walking and strength-training — is the only surefire way to get rid of it.

Until midlife, men usually harbor a greater percentage of visceral fat than women do, but the pattern typically reverses as women pass through menopause. Few women seem to escape a midlife waistline expansion as body fat redistributes and visceral fat pushes out our bellies. Even though in my eighth decade I weigh less than I did at age 13, my waist is many inches bigger.

Here’s why visceral fat cells are so important to your well-being. Unlike subcutaneous fat, visceral fat is essentially an endocrine organ that secretes hormones and a host of other chemicals linked to diseases that commonly afflict older adults. One such substance, retinol-binding protein 4 (RBP4), was found in a 16-year study of nurses to increase the risk of developing coronary heart disease. This hazard most likely results from the protein’s harmful effects on insulin resistance, the precursor to Type 2 diabetes, and on development of the metabolic syndrome, a complex of cardiac risk factors.

The Million Women Study conducted in Britain demonstrated a direct link between the development of coronary heart disease and an increase in waist circumference over a 20-year period. Even when other coronary risk factors were taken into account, the chances of developing heart disease were doubled among the women with the largest waists. Every additional two inches in the women’s waist size raised their risk by 10 percent.

Cancer risk is also raised by belly fat. The chances of getting colorectal cancer were nearly doubled among postmenopausal women who had accumulated visceral fat, a Korean study found. Breast cancer risk increases as well. In a study of more than 3,000 premenopausal and postmenopausal women in Mumbai, India, those whose waists were nearly as big as their hips faced a three- to four-times greater risk of receiving a breast cancer diagnosis than normal-weight women.

A Dutch study published last year linked both total body fat and abdominal fat to a raised risk of breast cancer. When the women in the study lost weight — about 12 pounds on average — changes in biomarkers for breast cancer, like estrogen, leptin and inflammatory proteins, indicated a reduction in breast cancer risk.

Given that two-thirds of American women are overweight or obese, weight loss may well be the single best weapon for lowering the high incidence of breast cancer in this country.

Perhaps most important with regard to the toll on individuals, families and the health care system is the link between abdominal obesity and risk of developing dementia decades later. A study of 6,583 members of Kaiser Permanente of Northern California who were followed for an average of 36 years found that those with the greatest amount of abdominal obesity in midlife were nearly three times more likely to develop dementia three decades later than those with the least abdominal fat.

Having a large abdomen raised dementia risk in the women even if they were of normal weight overall and lacked other health risks related to dementia like heart disease, stroke and diabetes.

Among other medical problems linked to abdominal fat are insulin resistance and the risk of Type 2 diabetes, compromised lung function and migraine headaches. Even asthma risk is raised by being overweight and especially by abdominal obesity, a study of 88,000 California teachers found.

Over all, according to findings among more than 350,000 European men and women published in The New England Journal of Medicine, having a large waist can nearly double one’s risk of dying prematurely even if overall body weight is normal.

All of which raises the question: How best to shed abdominal fat and, even more important, how to avoid accumulating it in the first place?

Chances are you’ve periodically seen ads on the internet for seemingly magical ways to reduce belly fat. Before you throw good money after bad, let it be said that no pill or potion has been scientifically shown to dissolve abdominal fat. You have to work at it. And that means avoiding or drastically limiting certain substances in your diet, controlling overall caloric intake and engaging in exercise that burns calories.

Perhaps the worst offender is sugar — all forms and especially fructose, which makes up half of sucrose and 55 percent of high-fructose corn syrup. One of the best ways to reduce your sugar intake is to stop drinking sodas and other sweet drinks, including fruit juices. Limiting alcohol, which may suppress fat-burning and add nutritionally empty calories, and avoiding refined carbohydrates like white bread and white rice are also helpful.

Make sure your diet contains adequate amounts of protein and dietary fiber, including vegetables, beans and peas, and whole grains.

Get enough sleep — at least seven hours a night. In a study of 68,000 women followed for 16 years, those who slept five hours or less were a third more likely to gain 32 pounds.

Finally, move more. In a major national study, inactivity was more closely linked to weight gain and abdominal obesity than caloric intake.

This article was originally published in The New York Times.