A Bid to Solve California’s Housing Crisis Could Redraw How Cities Grow

Scott Wiener, the California state senator representing San Francisco, has a pretty good idea for how to save the world. In fact, sitting in a coffee shop in his city’s Financial District, Wiener seems downright perplexed that anyone would be against it. Here’s the idea: Build more housing.

So, with his fellow senator Nancy Skinner, he authored a bill, SB 827, that overrides some local zoning—moving policies that had been in the hands of cities under the authority of the state government—to allow medium-sized multistory and multiunit buildings near transit stops.

Lots of urbanists and housing activists believe the bill will shift California cities into a denser, transit-oriented, multi-use future. But an unlikely coalition has emerged in opposition: homeowners who don’t want their neighborhoods to change and advocates for the lower-income people of color who almost always get hurt by gentrification.

This isn’t some dry policy fight. The mayor of Berkeley called the bill “a declaration of war against our neighborhoods.” A Los Angeles City Council member said it will make the residential areas he represents in LA’s tony Westside “look like Dubai.” A community organizer in LA wrote that Wiener is a “real estate industry puppet” who supports gentrification and displacement, and compared SB 827 to President Andrew Jackson’s Indian Removal Act.

Housing costs are crushing American cities, perhaps nowhere as severely as in California. It’s catastrophic: home prices are 2.5 times the median elsewhere in the country; rents are sky-high; the population is growing, but construction of places to live is not; poor people are getting pushed out; and homelessness is severe and on the rise.

Wiener says his fix can, over time, address all that without worsening the state’s drumbeat of evictions. And it'll do even more: “If you want to limit carbon and reduce congestion on freeways, the way you do that is by building a lot more housing near public transportation,” he says. “You get less driving, less carbon emissions, less sprawl so you can protect open spaces and farmland, and healthier families.”

It might even work.

Wiener came to San Francisco in the late 1990s, just in time to see the first dot-com boom turn the city into the center of the world and wreak centrifugal havoc, pushing longtime residents out and housing costs up.

As a community activist and then a politician, Wiener saw the other side of the problem. It’s really hard to get anything built in San Francisco. Booms, critical to the state economy because of the tax money they dump into state treasuries, don’t benefit cities the same way. Unemployment falls to nothing, but housing costs rise. The poorest people get forced out by gentrifying newcomers. The current boom, Wiener says, “has caused lasting damage to the culture and diversity of our city.”


Wiener has been full of ideas to counteract that. He’s behind a bill to make net neutrality a state law and another to let bars stay open until 4 am. (“Great cities have great nightlife,” he says.) He got a bill passed to force California cities to live up to their unenforced commitments to build new housing. And now he’s saying that within walking distance of mass transit, housing shouldn’t be single-family, suburban style. It should be tall, like 45 feet or up to 85, depending on how wide the street is.

The goal, Wiener says, isn't Hong Kong–style high-rises. It's what housing advocates call the “missing middle”: things like side-by-side duplexes, eight-unit apartment buildings, six-story buildings—building forms even San Francisco built plenty of in the early 20th century. Typically these are wood-frame construction, cheaper to build than luxury steel-and-glass high-rises.

If cities don’t build those housing units, other places will. “People first look for cheaper housing as far away from their jobs as they can that is still a reasonably feasible commute,” says Ethan Elkind, director of the climate program at UC Berkeley Law School’s Center for Law, Energy, and the Environment. “When we push people into areas like Phoenix and Houston, we see the climate impacts, from flooding to sprawl, with people in these high-polluting areas where they don’t necessarily even want to be.”

Denser urban cores, it so happens, are more environmentally responsible. Downtowns have lower per-capita carbon emissions than suburban and rural areas. A household in the heart of Wiener’s district has an average carbon footprint of about 31 tons of CO2 per year. In downtown Phoenix, it’s 34. In suburban Phoenix, it’s 82.

Thanks to global warming, the San Francisco Bay is full of rising seawater. Like Florida and New York, the region faces a future of chronic floods. It also faces fire: Seasonal wildfires like the ones that scorched huge swaths of California this year (including the biggest one in state history) begin at the wildland-urban interface, where human beings build near nature, like in the hills of the East Bay. Spread between mountains and the ocean, Southern California faces similar boundary conditions.

These regions can’t build outward; they have to build inward and upward. After all, one of the fundamental functions of a city is to serve as a bulwark against disaster.

"What you have are two strips of land on both sides of the Bay that are flat, high enough above sea-level rise, and not as prone to fire," says Kim-Mai Cutler, an urbanist and a partner at Initialized Capital. "Longer term, the safest and probably most inclusive way to handle the region's growth is missing-middle or more dense housing along transit lines." (Like many of the people I talked to, Cutler stresses that she's in the "support, if amend" camp on SB 827—pending tenant protections, controls on demolitions, and some way to deal with affordable housing.)

But economics and the law don't accommodate those pressures. Strapped California cities accrue more tax benefits from commercial development than from residential. (As American retail crumbles, “commercial” increasingly means office space and hotels.) Eventually, that pushes out everyone but the richest rich and the poorest poor. “We have offices in cities elsewhere in the US,” says Jeremy Stoppelman, CEO of Yelp and one of 120 signatories to a letter supporting Wiener’s bill. “As someone who lives in California, I’d love to allocate as many positions to San Francisco as possible, but I have to look at performance and retention.”

Yimbys—the “yes in my backyard” supporters of efforts like Wiener’s—slough off aesthetic concerns about “neighborhood character,” sightlines, or the shadows cast by taller buildings. At best, they’ll say, that’s old-people whinging. At worst, this apparent concern for architecture and planning is cover for redlining, keeping affluent neighborhoods closed to young people, lower-income people, and people of color. “It’s areas that have the land values to support multifamily development but don’t want newcomers and more density,” Elkind says. “They’re happy to accept all the benefits of new transit—the property value and benefits it gives them at taxpayer expense—but when it comes to providing housing around those transit networks they consistently say no.”

So the Yimbys instead want more housing to deal with population growth, more transit, more infrastructure, more everything. More city.

Some of the Nimbyism—“not in my backyard” (or, even worse, Banana, as in “build absolutely nothing anywhere near anything”)—that Wiener encounters argues that building new housing doesn’t reduce housing prices, because it attracts even more upper-income people. That doesn’t seem true—Seattle’s recent home-building binge apparently lowered rents, for example. Some opponents, like the California Sierra Club, argue that allowing increased density near transit might quash people’s willingness to pay for any new light rail lines at all.

To be fair, not everyone sees value in denser, more urban cities. You might think that having a place to get a coffee and drop off dry cleaning on the way to a bus stop or train is the best, but some city dwellers don't want to see changes like new four- to eight-story apartment buildings. They bring parking difficulties, traffic, and more crowds.


Because of a state law called Proposition 13 and its follow-ons, Californians pay property tax based on the value of their home when they bought it—not on real-world increases in its value caused by, let’s say, a new subway nearby or a neighborhood suddenly turning “hot.” Now, changes to residential neighborhoods potentially lower the value of the houses there. Maybe young people keen on biking to work want apartments and light rail. But not so much for families with three kids to drop at two different sports practices, or someone who’s lived in the same house for 50 years who can’t easily move away because they’d face a steep increase in property taxes—again, thanks to Prop 13.

To be really fair, though, putative improvements to cities have often benefited the rich at the expense of people who live there—especially people of color. Some of the opposition to Wiener’s SB 827 and the ideas behind it comes from a real concern for displacement, racism, and classism. It’s already happening. Retail stretches of hair salons and dry cleaners at area-appropriate price points begin to give way to the Four Riders of the Gentrification Apocalypse: bike shops, yoga studios, artisanal tchotchke stores, and third-wave coffee.

The history of urban change in the United States is full of examples of low-income neighborhoods getting erased by capitalists in the name of renewal and modernization. Boston’s West End, Chavez Ravine in Los Angeles, and San Francisco’s Western Addition all used to be vibrant (low-income) communities.

Urban renewal in the mid-20th century didn’t emphasize density or climate change, of course. It was about “blight”: in a literal sense, because of the health issues all poor communities face, but also (as the writer Alexis Madrigal has discussed) as a metaphorical term covering failing infrastructure and economic collapse. But the end was the same.

So the interests of the rich and powerful align here with the interests of disenfranchised people of color—which should be great! Except they’re aligned against the young, new migrants, and the middle class.

Right now it’s hard to tell the players without a scorecard. “The Nimby movement for years stifled development and higher-density projects under the guise of ‘developers are greedy,’” Stoppelman says. “It’s becoming rapidly apparent to lots of people that, in fact, the Nimbys are greedy, and they benefit dramatically from the housing shortage.”

For his part, Wiener doesn’t believe that new housing will ruin neighborhoods and displace poor people. And, he says, people in well-to-do areas are co-opting that argument to protect their own interests. “It makes me nuts when I see wealthy Nimby homeowners in Marin and elsewhere suddenly becoming defenders of low-income people of color,” Wiener says. “These are communities that fought tooth and nail to keep low-income people out.”

Still, he knows the bill needs work. California already gives a bonus to developers for higher density and mixed-income development, and in 2016 Los Angeles passed a law to do more of the same. “It’s very important that this bill not undermine those incentives,” says Sam Tepperman-Gelfant, an attorney with Public Advocates who works on low-income housing issues. “Giving developers of 100-percent market-rate housing the same or greater benefits awarded to mixed-income developments could really undermine the mixed-income development.”

One risk is that SB 827 could increase the speculative value of land near transit. That would give landlords an incentive to tear down cheaper rental housing and build luxury condominiums. Worse, the death-spiral outcome ousts low-income people who live next to a transit station and replaces them with upper-income people, who use the available transit less often, leading to the demise of that transit. On the other hand, inclusionary housing requirements that force developers to subsidize low-income units sometimes scare developers off altogether—as may be happening in Portland, Oregon, for example.

“The rhetoric and tone in the debate has gotten extremely heated,” Tepperman-Gelfant says. The solution: Making sure people in any potentially affected neighborhood, not just richies from the hills, are at the negotiating table. “If we’re going to get good solutions for low-income people of color, they need to be involved in shaping the policy.”

Wiener knows the negotiations aren’t over. Far from it. “I don’t pretend the bill will be in its pristine form by the end,” he says. “And it’s not by any stretch of the imagination guaranteed to pass.”

Cities change. That’s their nature. If California has to add 100,000 houses a year for the foreseeable future, someone’s going to have to goose that change along. Maybe it’ll be the guy trying to keep San Francisco bars open late. “I’m a progressive urbanist, and I embrace cities,” Wiener says. “A city’s character is not just the physicality of a neighborhood. It’s about who lives there.” A city underwater, on fire, with no young people, no families, no people of color, and restricted to only the richest rich and the poorest poor—that’s not a city at all.

Life in the City

Read more: https://www.wired.com/story/scott-weiner-california-housing-bill-cities/

Scientists Know How You’ll Respond to Nuclear War, and They Have a Plan

It will start with a flash of light brighter than any words of any human language can describe. When the bomb hits, its thermal radiation, released in just 300 hundred-millionths of a second, will heat up the air over K Street to about 18 million degrees Fahrenheit. It will be so bright that it will bleach out the photochemicals in the retinas of anyone looking at it, causing people as far away as Bethesda and Andrews Air Force Base to go instantly, if temporarily, blind. In a second, thousands of car accidents will pile up on every road and highway in a 15-mile radius around the city, making many impassable.

That’s what scientists know for sure about what would happen if Washington, DC, were hit by a nuke. But few know what the people—those who don’t die in the blast or the immediate fallout—will do. Will they riot? Flee? Panic? Chris Barrett, though, he knows.

When the computer scientist began his career at Los Alamos National Laboratory, the birthplace of the atomic bomb, the Cold War was trudging into its fifth decade. It was 1987, still four years before the collapse of the Soviet Union. Researchers had made projections of the blast radius and fallout blooms that would result from a 10-kiloton bomb landing in the nation’s capital, but they mostly calculated the immediate death toll. Those projections weren’t used for much in the way of planning for rescue and recovery, because back then, the most likely scenario was mutually assured destruction.

But in the decades since, the world has changed. Nuclear threats come not from world powers but from rogue nation states and terrorist organizations. The US now has a $40 billion missile interception system; total annihilation is not presupposed.

The science of prediction has changed a lot, too. Now, researchers like Barrett, who directs the Biocomplexity Institute of Virginia Tech, have access to an unprecedented level of data from more than 40 different sources, including smartphones, satellites, remote sensors, and census surveys. They can use it to model synthetic populations of the whole city of DC—and make these unfortunate, imaginary people experience a hypothetical blast over and over again.

That knowledge isn’t simply theoretical: The Department of Defense is using Barrett’s simulations—projecting the behavior of survivors in the 36 hours post-disaster—to form emergency response strategies they hope will make the best of the worst possible situation.

You can think of Barrett’s system as a series of virtualized representation layers. On the bottom is a series of datasets that describe the physical landscape of DC—buildings, roads, the electrical grid, water lines, hospital systems. On top of that is dynamic data, like how traffic flows around the city, surges in electrical usage, and telecommunications bandwidth. Then there’s the synthetic human population. The makeup of these e-peeps is determined by census information, mobility surveys, tourism statistics, social media networks, and smartphone data, which is calibrated down to a single city block.

So say you’re a parent in a two-person working household with two kids under the age of 10 living on the corner of First and Adams Streets. The synthetic family that lives at that address inside the simulation may not travel to the actual office or school or daycare buildings that your family visits every day, but somewhere on your block a family of four will do something similar at similar times of day. “They’re not you, they’re not me, they’re people in aggregate,” Barrett says. “But it’s just like the block you live in; same family structures, same activity structures, everything.”

Fusing together the 40-plus databases to get this single snapshot requires tremendous computing power. Blowing it all up with a hypothetical nuclear bomb and watching things unfold for 36 hours takes exponentially more. When Barrett’s group at Virginia Tech simulated what would happen if the populations exhibited six different kinds of behaviors—like healthcare-seeking vs. shelter-seeking—it took more than a day to run and produced 250 terabytes of data. And that was taking advantage of the institute’s new 8,600-core cluster, recently donated by NASA. Last year, the Defense Threat Reduction Agency awarded them $27 million to speed up the pace of their analysis, so it could be run in something closer to real time.

The system takes advantage of existing destruction models, ones that have been well-characterized for decades. So simulating the first 10 or so minutes after impact doesn’t chew up much in the way of CPUs. By that time, successive waves of heat and radiation and compressed air and geomagnetic surge will have barreled through every building within five miles of 1600 Pennsylvania Avenue. These powerful pulses will have winked out the electrical grid, crippled computers, disabled phones, burned thread patterns into human flesh, imploded lungs, perforated eardrums, collapsed residences, and made shrapnel of every window in the greater metro area. Some 90,000 people will be dead; nearly everyone else will be injured. And the nuclear fallout will be just beginning.

That’s where Barrett’s simulations really start to get interesting. In addition to information about where they live and what they do, each synthetic Washingtonite is also assigned a number of characteristics following the initial blast—how healthy they are, how mobile, what time they made their last phone call, whether they can receive an emergency broadcast. And most important, what actions they’ll take.

These are based on historical studies of how humans behave in disasters. Even if people are told to shelter in place until help arrives, for example, they’ll usually only follow those orders if they can communicate with family members. They’re also more likely to go toward a disaster area than away from it—either to search for family members or help those in need. Barrett says he learned that most keenly in seeing how people responded in the hours after 9/11.

Inside the model, each artificial citizen can track family members’ health states; this knowledge is updated whenever they either successfully place a call or meet them in person. The simulation runs like an unfathomably gnarled decision tree. The model asks each agent a series of questions over and over as time moves forward: Is your household together? If so, go to the closest evacuation location. If not, call all household members. That gets paired with the likelihood that the avatar’s phone is working at that moment, that their family members are still alive, and that they haven’t accumulated so much radiation that they’re too sick to move. And on and on and on until the 36-hour clock runs out.
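The loop described above can be sketched as a toy agent-based model. Everything here (the Agent class, the 30-percent phone-availability rate, the 50-percent call-success rate, the two-person household) is an illustrative assumption, not the institute's actual code:

```python
import random

random.seed(42)  # deterministic run for reproducibility

class Agent:
    """A synthetic resident working through the post-blast decision tree."""

    def __init__(self, household):
        self.household = household            # ids of family members
        self.contacted = set()                # members confirmed safe
        self.phone_works = random.random() < 0.3   # assumed 30% of phones survive
        self.evacuated = False

    def step(self):
        """One tick: if the household is accounted for, evacuate; else keep calling."""
        if self.evacuated:
            return
        if self.contacted >= set(self.household):
            self.evacuated = True             # household together: head to evacuation site
        elif self.phone_works:
            missing = set(self.household) - self.contacted
            member = next(iter(missing))
            if random.random() < 0.5:         # assumed 50% chance a call gets through
                self.contacted.add(member)

agents = [Agent(household=[1, 2]) for _ in range(1000)]
for hour in range(36):                        # the 36-hour simulation window
    for a in agents:
        a.step()

print(sum(a.evacuated for a in agents), "of", len(agents), "agents evacuated")
```

The real model layers in radiation exposure, mobility, and the state of the communication network; the point here is the shape of the loop, in which every agent re-evaluates a small decision tree at every tick.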

Then Barrett’s team can run experiments to see how different behaviors result in different mortality rates. The thing that leads to the worst outcomes? If people miss or disregard messages that tell them to delay their evacuation, they may be exposed to more of the fallout—the residual radioactive dust and ash that “falls out” of the atmosphere. About 25,000 more people die if everyone tries to be a hero, encountering lethal levels of radiation when they approach within a mile of ground zero.

Those scenarios give clues about how the government might minimize lethal behaviors and encourage other kinds. Like dropping in temporary cell phone communication networks or broadcasting them from drones. “If phones can work even marginally, then people are empowered with information to make better choices,” Barrett says. Then they'll be part of the solution rather than a problem to be managed. “Survivors can provide first-hand accounts of conditions on the ground—they can become human sensors.”

Not everyone is convinced that massive simulations are the best basis for formulating national policy. Lee Clarke, a sociologist at Rutgers who studies calamities, calls these sorts of preparedness plans "fantasy documents," designed to give the public a sense of comfort, but not much else. "They pretend that really catastrophic events can be controlled," he says, "when the truth of the matter is, we know that either we can't control it or there's no way to know."

Maybe not, but someone still has to try. For the next five years, Barrett’s team will be using its high-throughput modeling system to help the Defense Threat Reduction Agency grapple not just with nuclear bombs but with infectious disease epidemics and natural disasters too. That means they’re updating the system to respond in real time to whatever data they slot in. But when it comes to atomic attacks, they’re hoping to stick to planning.

Going Nuclear

Read more: https://www.wired.com/story/scientists-know-how-youll-respond-to-nuclear-warand-they-have-a-plan/

Meet the Company Trying to Democratize Clinical Trials With AI

A decade ago, Pablo Graiver was working as a VP at Kayak, the online airfare aggregator, when he sat down to dinner with an old friend—a heart surgeon from his home country of Argentina. The talk turned to how tech was doing more to save folks a few bucks on a flight to Rome than to save people’s lives. The biggest problem in healthcare? “Clinical trials,” she said. “They’re a disaster.”

Right now, the US has exactly 19,816 clinical trials open and ready to recruit patients—trials of promising new therapeutics to fight everything from HIV to cancer to Alzheimer’s. About 18,000 of them will get stuck on the tarmac because they won’t get enough people enrolled. And a third of those will never get off the ground at all, for the same reason.

So where are all the patients? Well, the vast majority of them either don’t know the trials exist, or don’t know they can participate. Since 2000, the government has kept details of every clinical drug trial in a national registry, but it’s a nightmare for the average human to navigate. So most pharma companies use recruitment firms to painstakingly comb through patient medical records and find people who might be a good fit—geographically, genetically, and generationally. Each patient hunt is basically a one-off: imagine if, every time you wanted to fly somewhere, you had to search the websites of United, Delta, American, Frontier, Alaska, and Southwest one at a time. And then do the same thing for hotels. (Man, the early aughts were bleak, weren’t they?)

Graiver’s new company, Antidote, does for clinical trials what Kayak and Orbitz and Priceline did for travel. It gives that painful patient matching problem an e-commerce solution. “Fundamentally, it’s just a question of structuring information,” says Graiver. “Which is something the tech world is great at. I was shocked no one had done it already.”

The information that most needed help was something called inclusion/exclusion criteria. It’s what makes a patient eligible to enroll (or not) in a trial: things like age, sex, prior treatment regimes, and current health status. When drugmakers submit new trial details to ClinicalTrials.gov, most of it gets entered as structured data, the kind of thing you enter in a drop-down menu. But eligibility criteria gets entered in a free text field, where you can write whatever you want. That lack of structure means a machine can’t read it—unless it’s been properly trained.

That’s what Antidote does. Graiver’s company started by amassing thousands of clinical studies from ClinicalTrials.gov and the World Health Organization, and they hired clinical experts to manually standardize all that free-wheeling trial jargon into structured language a search engine could understand. Then they trained it to categorize and identify studies using that language.

If you search for adult onset diabetes, it will know to pull up trials for Type 2 diabetes, and diabetes mellitus 2, and T2DM—since they’re all ways to describe the same disease. Called TrialReach at the time, the company proceeded slowly, focusing first only on diabetes and Alzheimer’s studies.
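That kind of normalization can be sketched as a lookup from variant names to one canonical condition. The synonym table below is a toy assumption; the real vocabulary, built by clinical experts, is far larger:

```python
from typing import Optional

# Toy synonym table: variant disease names -> canonical condition.
# Illustrative only, not Antidote's actual vocabulary.
SYNONYMS = {
    "adult onset diabetes": "type 2 diabetes",
    "diabetes mellitus 2": "type 2 diabetes",
    "t2dm": "type 2 diabetes",
    "type 2 diabetes": "type 2 diabetes",
}

def canonical_condition(query: str) -> Optional[str]:
    """Normalize a free-text query to a canonical condition name, if known."""
    return SYNONYMS.get(query.strip().lower())

print(canonical_condition("Adult Onset Diabetes"))  # type 2 diabetes
print(canonical_condition("T2DM"))                  # type 2 diabetes
```

Every variant resolves to the same canonical string, so a search for any of them can pull up the same set of trials.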

Then in 2015, Graiver’s platform got a big boost from big pharma. For two years prior, Novartis, Pfizer, and Eli Lilly had worked together to organize their trial data to be machine-readable. But as they looked to expand the consortium, the three pharma giants realized a need for a more neutral host organization. So they gave the tech to Graiver. Today, three years and a new name later, Antidote has annotated more than 14,000 trials—about 50 percent of what’s listed on ClinicalTrials.gov—spanning 726 conditions.

The result of all this data structuring is that Antidote can take a number (say, 50) and return studies that say something like this: “Ages Eligible for Study: Child, Adult, Senior” but not studies like this: “Ages Eligible for Study: 75 years and older.” And the interface is pretty slick. You type in your condition and where you live, then choose your age and sex. For a 50-year-old woman living in St. Louis, Missouri, with lung cancer, 617 trials pop up. On the next screen, Antidote asks how far you’d be willing to travel; within 20 miles, the trial options narrow to 69. If you know what kind of mutation is causing your lung cancer, Antidote can winnow the number down even further. At this point, you could print out a list of the trials, take them to your oncologist, and discuss your options.
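Once the criteria are structured, the matching itself reduces to simple comparisons. A minimal sketch, with made-up trial records and a hypothetical miles_away field standing in for the distance calculation:

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """A clinical trial with structured (machine-readable) eligibility fields."""
    name: str
    min_age: int
    max_age: int          # a large number stands in for "and older"
    miles_away: float     # hypothetical precomputed distance from the patient

# Made-up example records, not real trials.
TRIALS = [
    Trial("Trial A", 18, 120, 12.0),   # "Ages Eligible: Adult, Senior"
    Trial("Trial B", 75, 120, 5.0),    # "Ages Eligible: 75 years and older"
    Trial("Trial C", 18, 64, 35.0),    # eligible age, but too far away
]

def matches(trial: Trial, age: int, max_miles: float) -> bool:
    """Structured criteria make eligibility a pair of range checks."""
    return trial.min_age <= age <= trial.max_age and trial.miles_away <= max_miles

hits = [t.name for t in TRIALS if matches(t, age=50, max_miles=20)]
print(hits)  # ['Trial A']
```

This is exactly the kind of query that is impossible against a free-text field like "75 years and older" and trivial once the criteria are structured.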

Or, you can click on any trials you’re interested in, register your email with Antidote, and they’ll send you contact information for the trial organizers, along with next steps. They’ll also keep you updated on any new trials for which you might be a match.

The service is totally free for patients, who can find it on their own or through a widget on websites for patient organizations. Through 231 of those partnerships, including with the American Kidney Fund, Muscular Dystrophy Association, and Lung Cancer Alliance, Antidote says it reaches more than 15 million people per month. On the website of JDRF—the leading Type 1 Diabetes research fund in the world—27,863 people have searched for a trial using the Antidote widget since it launched in 2016. That’s more than in the previous 10 years combined using JDRF’s existing search tool.

“It makes it less of a wild goose chase for patients,” says Esther Schorr, COO of PatientPower, an online cancer news site and Antidote partner. Surveys of their 30,000-member community have shown an uptick in trial enrollment since the widget went up about a year ago. “There’s just so much information for the common man or woman to get through. Technology can really make a patient’s journey easier.”

It’s also making things easier (and cheaper) for drugmakers. Antidote makes money chiefly by selling limited access to this user database to the world’s biggest pharma companies and clinical research institutions, helping them to fill their own trials.1 When you enter your email address, you’re consenting not just to having your personal information shared with the sponsor of a particular trial, but to having your deidentified data shared with third parties.

Antidote maintains that it still keeps up some kind of a firewall; pharma companies can’t just contact you out of the blue—they have to place a request through Antidote, which you can accept or deny. But the broad consent language in the company’s privacy policy gives Antidote a lot of latitude in how it can use your name, age, sex, location, and any other details you provide about your medical condition.

It’s a tradeoff between privacy and care that many patients are confronting these days. Like the seniors filling their homes and wardrobes with IoT-enabled sensors to keep track of their movement and heart rates. Or the record number of Americans letting companies mine their DNA, so they can know if they’re at higher risk for genetic diseases like Alzheimer’s or cancer. For Antidote’s users, the promise of a cure—however distant—is well worth the risk.

1. Correction appended 01/30/18, 5:40 pm EST: This story was changed to clarify how Antidote earns revenues by providing clinical trial sponsors access to eligible patients.

Read more: https://www.wired.com/story/meet-the-company-trying-to-democratize-clinical-trials-with-ai/

The WIRED Guide to Climate Change

The world is busted. For decades, scientists have carefully accumulated data that confirms what we hoped wasn’t true: The greenhouse gas emissions that have steadily spewed from cars and planes and factories, the technologies that powered a massive period of economic growth, came at an enormous cost to the planet’s health. Today, we know that absent any change in our behavior, the average global temperature will rise as much as 4 degrees Celsius by the end of the century. Global sea levels will rise by up to 6 feet. Along with those shifts will come radical changes in weather patterns around the globe, leaving coastal communities and equatorial regions forever changed—and potentially uninhabitable.

Strike that. We are already seeing the effects of a dramatically changed climate, from extended wildfire seasons to worsening storm surges. Now, true, any individual weather anomaly is unlikely to be solely the result of industrial emissions, and maybe your particular part of the world has been spared so far. But that’s little solace when the historical trends are so terrifyingly real. (Oh, and while it used to take mathematicians months to calculate how the odds of specific extreme weather events were affected by humans, they’ve knocked that data-crunching time down to weeks.)

Thankfully, it seems most of the world’s nation-states are beyond quibbling over the if of climate change—they’re moving rapidly onto the what now? The 2015 Paris climate agreement marked a turning point in the conversation about planetary pragmatics. Renewable energy in the form of wind and solar is actually becoming competitive with fossil fuels. And the world’s biggest cities are driving sustainable policy choices in a way that rivals the contributions of some countries. Scientists and policymakers are also beginning to explore a whole range of last-ditch efforts—we’re talking some serious sci-fi stuff here—to deliberately, directly manipulate the environment. To keep the climate livable, we may need to prepare for a new era of geoengineering.

How This Global Climate Shift Got Started

If we want to go all the way back to the beginning, we could take you to the Industrial Revolution—the point after which climate scientists start to see a global shift in temperature and atmospheric carbon dioxide levels. In the late 1700s, as coal-fired factories started churning out steel and textiles, the United States and other industrializing nations began pumping out their byproducts. Coal is a carbon-rich fuel, so when it combusts with oxygen, it produces heat along with another byproduct: carbon dioxide. Other carbon-based fuels, like natural gas, do the same in different proportions.

When those emissions entered the atmosphere, they acted like an insulating blanket, preventing the sun’s heat from escaping into space. Over the course of history, atmospheric carbon dioxide levels have varied—a lot. Models of ancient climate activity, hundreds of millions of years back, put carbon dioxide levels as high as several thousand parts per million. In the past half-million years or so, they’ve fluctuated between about 180 and 300 parts per million. But they haven't fluctuated this fast. Today, atmospheric CO2 is at 407 ppm—roughly one and a half times as high as it was just two centuries ago. And we know for certain that the extra greenhouse gas is from humans; analysis of the carbon isotopes in the atmosphere shows that the majority of the extra CO2 comes from fossil fuels.
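The arithmetic behind that “one and a half times” claim is easy to check; here’s a minimal sketch, assuming the widely used ~280 ppm pre-industrial baseline (the baseline figure is an assumption, not stated in the article):

```python
# Ratio of today's atmospheric CO2 to the pre-industrial level.
# 407 ppm is the article's figure; 280 ppm is a standard
# pre-industrial estimate (an assumption, not from the article).
preindustrial_ppm = 280
today_ppm = 407
ratio = today_ppm / preindustrial_ppm
print(f"{ratio:.2f}x")  # prints "1.45x" -- roughly one and a half times
```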

Radiation from the sun hits the Earth’s atmosphere. Some of it travels down to warm the Earth’s surface (A), while some of it bounces right back into space (B). Some of the energy, though, is absorbed by molecules of greenhouse gases—carbon dioxide, water, methane, and nitrous oxide—that prevent it from escaping (C). Over time, the trapped energy contributes to global warming.

The result: extreme weather. There’s global warming, of course; the Earth’s average temperature has increased 1.1 degrees Celsius since the late 19th century. But it goes further. As oceans absorb heat and polar ice sheets melt, hurricane seasons become more severe as warm water from the oceans kicks warm, moist air into the atmosphere. Sea levels rise—about 8 inches in the past century. Critically, the rate of these changes is increasing.

By the Numbers

1.9 million

The number of homes in the US that could end up underwater if sea levels rise 6 feet by 2100, as models suggest. The Miami area would be particularly devastated: Nearly 33,000 homes would end up underwater, at a total loss of $16 billion.

13.2 percent

The decline in Arctic sea ice per decade since 1980. Melting sea and land ice feeds a warming spiral: Because ice is white, it reflects sunlight back into space, while the darker land and water it exposes absorb more of the sun’s energy.

2,625 feet

The decrease in thickness of Alaska’s Muir Glacier between 1941 and 2004. During that same period, the front of the glacier retreated 7 miles.

All of this has led 97 percent of climate scientists to agree that warming trends are very likely the result of human activity. That accumulating body of research prompted the United Nations to establish the Intergovernmental Panel on Climate Change in 1988; the panel has since issued five assessment reports documenting all the available scientific, technical, and economic information on climate change. The fourth report, in 2007, was the first to clearly state that the climate was unequivocally warming—and that human-created greenhouse gases were very likely to blame.

Just because the panel came to a consensus doesn’t mean everyone else did, though. In 2009, climate scientists had their own WikiLeaks scandal, when climate deniers released a trove of emails from scientists, including the one behind the famous 1999 “hockey stick” graph, which showed an upturn in global temperature after the Industrial Revolution clearly sharper than the many warmings and coolings the Earth had seen before. Excerpts taken out of context from those emails appeared to show that researcher, Michael Mann, conspiring to statistically manipulate his data. Placed back in context, they showed no such thing.

Political controversy, motivated by the financial incentives of the fossil fuel industry, has continued to call into question scientists’ consensus on the data supporting human-caused climate change. But in 2015, the world’s leaders appeared to transcend those squabbles. On December 12, after two weeks of deliberations at the 21st United Nations Conference on Climate Change in Le Bourget, France, 195 countries agreed on the language in what’s known as the Paris agreement. The goal is to keep the average global temperature increase below 2 degrees Celsius above pre-Industrial levels, and as close to 1.5 degrees as possible. The agreement works by having each country submit a commitment to reduce emissions and collectively bear the economic burden of a shift away from fossil fuels—while acknowledging that developing nations would be denied a certain amount of growth if they had to give up cheap energy.

On November 4, 2016, the Paris agreement officially entered into force, just four days before Donald Trump would be elected president of the United States on a campaign promise to pull out of the agreement. And on June 1, 2017, Trump made good on that promise, saying that “the United States will withdraw from the Paris climate accord, but begin negotiations to reenter either the Paris accord or a really entirely new transaction on terms that are fair to the United States, its businesses, its workers, its people, its taxpayers.” Technically, the United States remains in the agreement until 2020, which is the earliest Trump can officially withdraw.

What's Next for Climate Change

The good news is, the global community is pretty united on the risks of climate change. The science is getting good enough to link specific extreme events—anomalous hurricanes, extreme flooding events—directly to human-caused climate change, and that’s making it easier to build a case for dramatic action to stem the damage. But what should those actions be?

The most obvious solution to climate change woes is a dramatic shift away from fossil fuels and toward renewable energies: solar, wind, geothermal, and (deep breath) nuclear. And we’re making solid progress, growing our renewable electricity generation about 2.8 percent every year worldwide.
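Growth at a steady percentage compounds, so a quick sketch shows how long 2.8 percent annual growth would take to double renewable generation (the doubling-time framing is ours, not the article’s):

```python
import math

# Years for renewable generation to double at a steady 2.8 percent
# annual growth rate (the article's worldwide figure).
annual_growth = 0.028
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_years, 1))  # about 25 years per doubling
```

At that pace, renewables double roughly four times a century—solid, but slow against the emissions timelines the Paris agreement contemplates.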

But there’s an increasing understanding that even if every country that originally signed up for the agreement meets every single one of their stated goals, the Earth is still set to experience some dramatic changes. Some even argue that we’ve passed the tipping point; even if we stopped emitting today, we’d still see dramatic effects. And that means we need to start preparing for a different kind of climate future—primarily, in the way we build. The floods will come, forcing us to make new rules governing building. An ever-lengthening wildfire season will discourage building along the wildland-urban interface. And people will stream in from regions made uninhabitable by drought or heat or flooding, forcing other countries to adapt their immigration policies to a new class of refugees.

All of those changes will cost money. That was one of the primary motivators for the Paris agreement: Switching away from cheap fossil fuels means that businesses and companies are going to need to take a financial hit to ensure a profitable, livable future. Which is why many of the solutions to climate change have nothing to do with climate science, per se: They have to do with economics.

By the Numbers

407.6 parts per million

The concentration of CO2 in the lowest layer of our planet’s atmosphere. Compare that to 380 ppm just a decade ago.

75 percent

The portion of humanity that could face deadly heat waves by 2100 if major cuts to CO2 emissions are not made. By the middle of this century, the American South could see a tripling of days per year that hit 95 degrees.

2 degrees Celsius

The goal for maximum global temperature rise from pre-Industrial levels, as outlined in the Paris agreement. Unfortunately, a study published last summer determined that the chance of hitting that goal by 2100 is a mere 5 percent. The reality is, the rise could be as much as 4.9 degrees Celsius.

1.5 degrees Fahrenheit

The rise in sea surface temperature between 1901 and 2015. Warming seas are a particular problem for coral: When stressed, corals expel the photosynthetic algae they rely on to extract energy from sunlight, bleaching and often dying.

Socially conscious investors, for their part, are making a difference by holding businesses to account for their impacts on the climate—and the ways in which climate change will impact their business. Last year, a collective of small-scale pension systems forced Occidental Petroleum, one of the country’s largest oil companies, to disclose climate risk in its shareholder prospectus; ExxonMobil caved to pressure in December 2017. Places with large endowments, like universities, are facing political pressure to divest from the fossil fuel industry.

These are all indirect ways of holding the fossil fuel industry accountable for the financial toll it takes on the Earth with every gigaton of greenhouse gases emitted. But there are more direct ways to make it pay up, too. After reporting by InsideClimate News revealed in 2015 that ExxonMobil had long known about the risks of climate change, attorneys general in multiple states began investigating whether the company violated consumer or investor protection statutes. The city of San Francisco is suing the five largest publicly held producers of fossil fuels to get them to pay for infrastructure to protect against rising sea levels. New York City followed with a similar suit.

Let’s say those suits succeed, and at-risk cities get some help making the massive infrastructure updates necessary to protect their coastline investments. After doing everything we can to reduce further carbon emissions and protect life and property from the dangers of a changing climate, it still won’t be enough to keep global temperatures from rising beyond that 2-degree-Celsius tipping point. So that’s when humanity goes into proactive mode, potentially unleashing a controversial set of experimental technologies into the atmosphere. This is geoengineering: Removing carbon dioxide and reducing heat through, let’s say, experimental means. Like salt-spraying ships, and supersized space mirrors.

One of the great hopes of the IPCC’s latest report is that we can pull carbon dioxide directly out of the atmosphere and store it underground through a process called bioenergy with carbon capture and storage. But that technology doesn’t exist yet. Another strategy attempts to reduce heat by injecting sulfate particles into the atmosphere, reflecting solar radiation back into space—but that could trigger too much global cooling. Put mildly, most of the propositions for geoengineering are underdeveloped. The drive to complete those ideas will depend on the success of global cutbacks in the decades to come.

Learn More

  • The Dirty Secret of the World’s Plan to Avert Climate Disaster: When the United Nations’ Intergovernmental Panel on Climate Change issued its fifth assessment report in 2014, it laid out 116 scenarios for keeping average global temperature rise under 2 degrees Celsius. The tricky thing is, 101 of them rely on a carbon dioxide-sucking technology that doesn’t exist yet.

  • Renewables Aren’t Enough. Clean Coal Is the Future: The world can’t wean itself off of coal in an instant—so before transitioning to fully renewable fuels, capturing and storing the carbon emitted from coal plants will be critical to meeting the Paris agreement goals. In this 2014 feature, Charles C. Mann visits GreenGen, a billion-dollar Chinese facility that’s one of the most consequential efforts to realize that technology, extracting CO2 from a coal-fired power plant to store it underground.

  • Nations Be Damned, the World's Cities Can Take a Big Bite Out of Emissions: At the C40 Mayors Summit, leaders from around the world meet to discuss how their cities (more than 40, now) can fight climate change. If every city with a population over 100,000 stepped up, they could account for 40 percent of the reductions required to hit the Paris climate goals.

  • The US Flirts With Geoengineering to Stymie Climate Change: Geoengineering solutions to climate change—doing stuff like spraying sulfate particles into the atmosphere to keep temperature down—could have catastrophic side effects. Which is why we need more research before considering them. One congressman introduced a bill that would set the National Academies of Science to the task.

  • The World Needs Drastic Action to Meet Paris Climate Goals: WIRED science reporter Nick Stockton traveled to Paris at the end of 2015 to see the negotiations that led to the signing of the global climate agreement. He came away invigorated but daunted by the challenge of converting all the industries represented—from agriculture to transportation to concrete—away from fossil fuels. Here’s what needs to be done.

  • Climate Change Causes Extreme Weather—But Not All of It: Scientists know that accumulated CO2 means higher temperatures, longer dry spells, and bigger storms. But ask them whether global warming caused a Midwest heatwave, the California drought, or a New York hurricane, and they’ll explain ad nauseam how hard it is to untangle whether any single weather event is due to natural variation or climate change. Hard, but not impossible.

This guide was last updated on January 31, 2018.

Enjoyed this deep dive? Check out more WIRED Guides.

Read more: https://www.wired.com/story/guide-climate-change/

How the Religious Freedom Division Threatens LGBT Health and Science

When Marci Bowers consults with her patients, no subject is off limits. A transgender ob/gyn and gynecologic surgeon in Burlingame, California, she knows how important it is that patients feel comfortable sharing their sexual orientation and gender identity with their doctor, trust and honesty being essential to providing the best medical care. But Bowers knows firsthand that the medical setting can be a challenging place for patients to be candid. That for LGBT people, it can even be dangerous.

"I know from talking with patients that they're often denied services, not just for surgery and hormone therapy, but basic medical care," Bowers says. "I've had patients show up in an emergency room who were denied treatment because they were transgender."

Experiences like these are what make the creation of a new "Conscience and Religious Freedom" division within the US Department of Health and Human Services so troubling. Announced last week by acting secretary of HHS Eric Hargan, the division's stated purpose is to protect health care providers who refuse to provide services that contradict their moral or religious beliefs—services that include, according to the division's new website, "abortion and assisted suicide."

But the division's loose language could leave room for physicians to provide substandard care to LGBT patients—or abstain from treating them altogether. Indeed, in a statement to WIRED, an HHS spokesperson said the department would not interpret prohibitions on sex discrimination in health care to cover gender identity, citing its adherence to a 2016 court order that excluded transgender people from certain anti-discrimination protections.

That's obviously bad for the health and wellbeing of LGBT people, who may feel less comfortable sharing their sexual orientation or gender identity going forward—but it's bad for science, medicine, and policy, as well.

At its core, the new HHS office threatens data and understanding. Collecting facts and figures on sexual orientation and gender identity fills valuable gaps in the medical community's comprehension of LGBT patients and their public health needs, and progress on that front has accelerated in recent years. "Gathering these details has tremendous potential to improve care for LGBT people," says psychologist Ed Callahan, who in 2015 helped orchestrate the addition of fields for sexual orientation and gender identity—aka "SO/GI"—to electronic health records at UC Davis, the first academic system in the country to do so. The more data doctors and policymakers have on LGBT people, the better they can understand the institutional hurdles, social challenges, and public health risks they face as sexual minorities.

The creation of the new HHS division is but the latest development in an ongoing battle over whether and how that data is collected. As of this year, the Office of the National Coordinator of Health Information Technology requires outpatient clinics to use software that collects SO/GI information if they receive federal incentive payments for using government-certified electronic health care records. The Bureau of Primary Health Care requires health centers to report the sexual orientation and gender identity of their patients. And the Centers for Disease Control and Prevention and the Centers for Medicare & Medicaid Services continue to encourage data collection on SO/GI.

"There’s actually been a lot of good work happening at the Veterans Health Administration," says Sean Cahill, director of health policy research at the Fenway Institute, a Boston-based center for research, training, and policy development on LGBT-related health issues. Since 2012, the VA has encouraged the collection of SO/GI data and issued directives that ensure respectful, equitable, culturally competent care for LGBT veterans. And by the end of Obama's presidency, the number of federal surveys and studies measuring sexual orientation had increased to 12, seven of which also measured gender identity or transgender status. "So the good news is that the shift to gathering these data has been underway for several years, and does continue," Cahill says.

But data collection has slowed under the Trump administration. In the past 13 months, surveys collecting data on participation in Older Americans Act-funded programs and Administration for Community Living-supported disability services have removed questions pertaining to sexual orientation and gender identity. In the same time span, numerous political maneuvers have sown uncertainty and distrust throughout the LGBT community. A July 2017 directive from President Trump attempted to ban transgender people from enlisting in the military, and in December policy analysts were presented with a list of banned words—including "transgender"—not to be used in official CDC budget documents.

In short: Under the Trump administration, the country is simultaneously collecting less data and promoting conditions that leave LGBT patients wary of their healthcare providers. "These patients already face significant obstacles to accessing medical care, and I fear implementation of these measures will only make these obstacles worse," says Stanley Vance, a pediatrician at University of California San Francisco and an expert in the care of gender nonconforming youth. "I also worry that these measures will be an institutionalized form of discrimination against patients who have been identified as a sexual minority or transgender who freely come out to their providers or through information previously entered in electronic medical records."

Even when physicians don’t overtly discriminate against gay and transgender patients, negative health care experiences are routine. Many physicians simply don't think to consider a patient's SO/GI—information they can use to not only respect their patients, but screen them for family rejection, which studies show increases the risk for depression, suicide, and high-risk sexual behaviors. Failing to acknowledge a patient's SO/GI can compound the ill effects of social stigma and inaccessibility to care like hormone therapy or gender affirmation surgery. "Across the board, LGBT patients are the group least likely to come back for further care," Callahan says. "And that often happens because of ways they are dismissed as not existing."

Of course, the reality is that LGBT people do exist, they're entitled to equitable services and care, and they deserve to be counted—sometimes literally. "It really shouldn't be political, you know? It shouldn't be a partisan issue," Cahill says. "It's about science and data and providing quality care to all patients."

Read more: https://www.wired.com/story/how-the-religious-freedom-division-threatens-lgbt-healthand-science/

Spread of breast cancer linked to compound in asparagus and other foods

Using drugs or diet to reduce levels of asparagine may benefit patients, say researchers

Read more: https://www.theguardian.com/science/2018/feb/07/cutting-asparagus-could-prevent-spread-of-breast-cancer-study-shows

The Second Coming of Ultrasound

Before Pierre Curie met the chemist Marie Sklodowska; before they married and she took his name; before he abandoned his physics work and moved into her laboratory on Rue Lhomond where they would discover the radioactive elements polonium and radium, Curie discovered something called piezoelectricity. Some materials, he found—like quartz and certain kinds of salts and ceramics—build up an electric charge when you squeeze them. Sure, it’s no nuclear power. But thanks to piezoelectricity, US troops could locate enemy submarines during World War I. Thousands of expectant parents could see their baby’s face for the first time. And one day soon, it may be how doctors cure disease.

Ultrasound, as you may have figured out by now, runs on piezoelectricity. Applying voltage to a piezoelectric crystal makes it vibrate, sending out a sound wave. When the echo that bounces back is converted into electrical signals, you get an image of, say, a fetus, or a submarine. But in the last few years, the lo-fi tech has reinvented itself in some weird new ways.
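That echo-to-image step boils down to time-of-flight arithmetic. Here’s a minimal sketch, assuming the 1,540 m/s average speed of sound in soft tissue that medical scanners conventionally use (a standard imaging assumption, not a figure from this article):

```python
SPEED_OF_SOUND_TISSUE_M_S = 1540.0  # standard soft-tissue assumption

def echo_depth_cm(round_trip_seconds: float) -> float:
    """Depth of a reflector, given the round-trip time of its echo."""
    # The pulse travels out and back, so halve the total path length.
    meters = SPEED_OF_SOUND_TISSUE_M_S * round_trip_seconds / 2
    return meters * 100  # convert to centimeters

# A 65-microsecond echo corresponds to a reflector about 5 cm deep.
print(round(echo_depth_cm(65e-6), 1))
```

Timing echoes from thousands of such pulses, fired in a sweep, is what builds up the familiar grainy image.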

Researchers are fitting people’s heads with ultrasound-emitting helmets to treat tremors and Alzheimer’s. They’re using it to remotely activate cancer-fighting immune cells. Startups are designing swallowable capsules and ultrasonically vibrating enemas to shoot drugs into the bloodstream. One company is even using the shockwaves to heal wounds—stuff Curie never could have even imagined.

So how did this 100-year-old technology learn some new tricks? With the help of modern-day medical imaging, and lots and lots of bubbles.

Bubbles are what brought Tao Sun from Nanjing, China to California as an exchange student in 2011, and eventually to the Focused Ultrasound Lab at Brigham and Women’s Hospital and Harvard Medical School. The 27-year-old electrical engineering grad student studies a particular kind of bubble—the gas-filled microbubbles that technicians use to bump up contrast in grainy ultrasound images. Passing ultrasonic waves compress the bubbles’ gas cores, resulting in a stronger echo that pops out against tissue. “We’re starting to realize they can be much more versatile,” says Sun. “We can chemically design their shells to alter their physical properties, load them with tissue-seeking markers, even attach drugs to them.”

Nearly two decades ago, scientists discovered that those microbubbles could do something else: They could shake loose the blood-brain barrier. This impassable membrane is why neurological conditions like epilepsy, Alzheimer’s, and Parkinson’s are so hard to treat: 98 percent of drugs simply can’t get to the brain. But if you station a battalion of microbubbles at the barrier and hit them with a focused beam of ultrasound, the tiny orbs begin to oscillate. They grow and grow until they reach the critical size of 8 microns, and then, like some Grey Wizard magic, the blood-brain barrier opens—and for a few hours, any drugs that happen to be in the bloodstream can also slip in. Things like chemo drugs, or anti-seizure medications.

This is both super cool and not a little bit scary. Too much pressure and those bubbles can implode violently, irreversibly damaging the barrier.

That’s where Sun comes in. Last year he developed a device that could listen in on the bubbles and tell how stable they were. If he eavesdropped while playing with the ultrasound input, he could find a sweet spot where the barrier opens and the bubbles don’t burst. In November, Sun’s team successfully tested the approach in rats and mice, publishing their results in Proceedings of the National Academy of Sciences.

“In the longer term we want to make this into something that doesn’t require a super complicated device, something idiot-proof that can be used in any doctor’s office,” says Nathan McDannold, co-author on Sun’s paper and director of the Focused Ultrasound Lab. He discovered ultrasonic blood-brain barrier disruption, along with biomedical physicist Kullervo Hynynen, who is leading the world’s first clinical trial evaluating its usefulness for Alzheimer’s patients at the Sunnybrook Research Institute in Toronto. Current technology requires patients to don special ultrasound helmets and hop in an MRI machine, to ensure the sonic beams go to the right place. For the treatment to gain any widespread traction, it’ll have to become as portable as the ultrasound carts wheeled around hospitals today.

More recently, scientists have realized that the blood-brain barrier isn’t the only tissue that could benefit from ultrasound and microbubbles. The colon, for instance, is pretty terrible at absorbing the most common drugs for treating Crohn’s disease, ulcerative colitis, and other inflammatory bowel diseases. So they’re often delivered via enemas—which, inconveniently, need to be left in for hours.

But if you send ultrasound waves through the colon, you could shorten that process to minutes. In 2015, pioneering MIT engineer Robert Langer and then-PhD student Carl Schoellhammer showed that mice treated with mesalamine and one second of ultrasound every day for two weeks were cured of their colitis symptoms. The method also worked to deliver insulin, a far larger molecule, into pigs.

Since then, the duo has continued to develop the technology within a start-up called Suono Bio, which is supported by MIT’s tech accelerator, The Engine. The company intends to submit its tech for FDA approval in humans sometime later this year.

Ultrasound sends pressure waves through liquid in the body, creating bubble-filled jets that can propel microscopic drug droplets like these into surrounding tissues.
Suono Bio

Instead of injecting manufactured microbubbles, Suono Bio uses ultrasound to make them in the wilds of the gut. They act like jets, propelling whatever is in the liquid into nearby tissues. In addition to its backdoor approach, Suono is also working on an ultrasound-emitting capsule that could work in the stomach for things like insulin, which is too fragile to be orally administered (hence all the needle sticks). But Schoellhammer says they have yet to find a limit on the kinds of molecules they can force into the bloodstream using ultrasound.

“We’ve done small molecules, we’ve done biologics, we’ve tried DNA, naked RNA, we’ve even tried Crispr,” he says. “As superficial as it may sound, it all just works.”

Earlier this year, Schoellhammer and his colleagues used ultrasound to deliver a scrap of RNA that was designed to silence production of a protein called tumor necrosis factor in mice with colitis. (And yes, this involved designing 20mm-long ultrasound wands to fit in their rectums). Seven days later, levels of the inflammatory protein had decreased sevenfold and symptoms had dissipated.

Now, without human data, it’s a little premature to say that ultrasound is a cure-all for the delivery problems facing gene therapies using Crispr and RNA silencing. But these early animal studies do offer some insights into how the tech might be used to treat genetic conditions in specific tissues.

Even more intriguing though, is the possibility of using ultrasound to remotely control genetically-engineered cells. That’s what new research led by Peter Yingxiao Wang, a bioengineer at UC San Diego, promises to do. The latest craze in oncology is designing the T-cells of your immune system to better target and kill cancer cells. But so far no one has found a way to go after solid tumors without having the T-cells also attack healthy tissue. Being able to turn on T-cells near a tumor but nowhere else would solve that.

Wang’s team took a big step in that direction last week, publishing a paper that showed how you could convert an ultrasonic signal into a genetic one. The secret? More microbubbles.

This time, they coupled the bubbles to proteins on the surface of a specially designed T-cell. Every time an ultrasonic wave passed by, the bubble would expand and shrink, opening and closing the protein, letting calcium ions flow into the cell. The calcium would eventually trigger the T-cell to make a set of genetically encoded receptors, directing it to attack the tumor.

“Now we’re working on figuring out the detection piece,” says Wang. “Adding another receptor so that we’ll know when they’ve accumulated at the tumor site, then we’ll use ultrasound to turn them on.”

In death, Pierre Curie was quickly eclipsed by Marie; she went on to win another Nobel, this time in chemistry. The discovery for which she had become so famous—radiation—would eventually take her life, though it would save the lives of so many cancer patients in the decades to follow. As ultrasound’s second act unfolds, perhaps her husband’s first great discovery will do the same.

Read more: https://www.wired.com/story/the-second-coming-of-ultrasound/

A Family's Race to Cure a Daughter's Genetic Disease

One July afternoon last summer, Matt Wilsey distributed small plastic tubes to 60 people gathered in a Palo Alto, California, hotel. Most of them had traveled thousands of miles to be here; now, each popped the top off a barcoded tube, spat in about half a teaspoon of saliva, and closed the tube. Some massaged their cheeks to produce enough spit to fill the tubes. Others couldn’t spit, so a technician rolled individual cotton swabs along the insides of their cheeks, harvesting their skin cells—and the valuable DNA inside.

One of the donors was Asger Vigeholm, a Danish business developer who had traveled from Copenhagen to be here, in a nondescript lobby at the Palo Alto Hilton. Wilsey is not a doctor, and Vigeholm is not his patient. But they are united in a unique medical pursuit.

Wilsey’s daughter, Grace, was one of the first children ever diagnosed with NGLY1 deficiency. It’s a genetic illness defined by a huge range of physical and mental disabilities: muscle weakness, liver problems, speech deficiencies, seizures. In 2016, Vigeholm’s son, Bertram, became the first child known to die from complications of the disease. Early one morning, as Bertram, age four, slept nestled between his parents, a respiratory infection claimed his life, leaving Vigeholm and his wife, Henriette, to mourn with their first son, Viktor. He, too, has NGLY1 deficiency.

Grace and her mother, Kristen Wilsey.
The night before the spit party, Vigeholm and Wilsey had gathered with members of 16 other families, eating pizza and drinking beer on the hotel patio as they got to know each other. All of them were related to one of the fewer than 50 children living in the world with NGLY1 deficiency. And all of them had been invited by the Wilseys—Matt and his wife Kristen, who in 2014 launched the Grace Science Foundation to study the disease.

These families had met through an online support group, but this was the first time they had all come together in real life. Over the next few days in California, every family member would contribute his or her DNA and other biological samples to scientists researching the disease. On Friday and Saturday, 15 of these scientists described their contributions to the foundation; some studied the NGLY1 gene in tiny worms or flies, while others were copying NGLY1 deficient patients’ cells to examine how they behaved in the lab. Nobody knows what makes a single genetic mutation morph into all the symptoms Grace experiences. But the families and scientists were there to find out—and maybe even find a treatment for the disease.

That kind of cure has proved elusive. When scientists sequenced the first human genome in 2000, geneticist Francis Collins, a leader of the Human Genome Project that accomplished the feat, declared that it would lead to a “complete transformation in therapeutic medicine” by 2020. But the human genome turned out to be far more complex than scientists had anticipated. Most disorders, it’s now clear, are caused by a complicated mix of genetic faults and environmental factors.

And even when a disease is caused by a defect in just one gene, like NGLY1 deficiency, fixing that defect is anything but simple. Scientists have tried for 30 years to perfect gene therapy, a method for replacing defective copies of genes with corrected ones. The first attempts used modified viruses to insert corrected genes into patients’ genomes. The idea appeared elegant on paper, but the first US gene therapy to treat an inherited disease—for blindness—was approved just last year. Now scientists are testing methods such as Crispr, which offers a far more precise way to edit DNA, to replace flawed genes with error-free ones.

Certainly, the genetics revolution has made single-mutation diseases easier to identify; there are roughly 7,000, with dozens of new ones discovered each year. But if it’s hard to find a treatment for common genetic diseases, it’s all but impossible for the very rare ones. There’s no incentive for established companies to study them; the potential market is so small that a cure will never be profitable.

Which is where the Wilseys—and the rest of the NGLY1 families—come in. Like a growing number of groups affected by rare genetic diseases, they’re leapfrogging pharmaceutical companies’ incentive structures, funding and organizing their own research in search of a cure. And they’re trying many of the same approaches that Silicon Valley entrepreneurs have used for decades.

At 10:30 on a recent Monday morning, Grace is in Spanish class. The delicate 8-year-old with wavy brown hair twisted back into a ponytail sits in her activity chair—a maneuverable kid-sized wheelchair. Her teacher passes out rectangular pieces of paper, instructing the students to make name tags.

Grace grabs her paper and chews it. Her aide gently takes the paper from Grace’s mouth and puts it on Grace’s desk. The aide produces a plastic baggie of giant-sized crayons shaped like cylindrical blocks; they’re easier for Grace to hold than the standard Crayolas that her public school classmates are using.

Grace’s NGLY1 deficiency keeps her from speaking.


At her school, a therapist helps her communicate.


The other kids have written their names and are now decorating their name tags.

“Are we allowed to draw zombies for the decorations?” one boy asks, as Grace mouths her crayons through the baggie.

Grace’s aide selects a blue crayon, puts it in Grace’s hand, and closes her hand over Grace’s. She guides Grace’s hand, drawing letters on the paper: “G-R-A-C-E.”

Grace lives with profound mental and physical disabilities. After she was born in 2009, her bewildering list of symptoms—weak muscles, difficulty eating, failure to thrive, liver damage, dry eyes, poor sleep—confounded every doctor she encountered. Grace didn’t toddle until she was three and still needs help using the toilet. She doesn’t speak and, like an infant, still grabs anything within arm’s reach and chews on it.

Her father wants to help her. The grandson of a prominent San Francisco philanthropist and a successful technology executive, Matt Wilsey graduated from Stanford, where he became friends with a fellow undergraduate who would one day be Grace’s godmother: Chelsea Clinton. Wilsey went on to work in the Clinton White House, on George W. Bush’s presidential campaign, and in the Pentagon.

But it was his return to Silicon Valley that really prepared Wilsey for the challenge of his life. He worked in business development for startups, where he built small companies into multimillion-dollar firms. He negotiated a key deal between online retailer Zazzle and Disney, and later cofounded the online payments company Cardspring, where he brokered a pivotal deal with First Data, the largest payment processor in the world. He was chief revenue officer at Cardspring when four-year-old Grace was diagnosed as one of the first patients with NGLY1 deficiency in 2013—and when he learned there was no cure.

At the time, scientists knew that the NGLY1 gene makes a protein called N-glycanase. But they had no idea how mistakes in the NGLY1 gene caused the bewildering array of symptoms seen in Grace and other kids with NGLY1 deficiency.

Wilsey’s experience solving technology problems spurred him to ask scientists, doctors, venture capitalists, and other families what he could do to help Grace. Most advised him to start a foundation—a place to collect money for research that might lead to a cure for NGLY1 deficiency.

As many as 30 percent of families who turn to genetic sequencing receive a diagnosis. But most rare diseases are new to science and medicine, and therefore largely untreatable. More than 250 small foundations are trying to fill this gap by sponsoring rare disease research. They’re funding scientists to make animals with the same genetic defects as their children so they can test potential cures. They’re getting patients’ genomes sequenced and sharing the results with hackers, crowdsourcing analysis of their data from geeks around the world. They’re making bespoke cancer treatments and starting for-profit businesses to work on finding cures for the diseases that affect them.

“Start a foundation for NGLY1 research, get it up and running, and then move on with your life,” a friend told Wilsey.

Wilsey heeded part of that advice but turned the rest of it on its head.

In 2014, Wilsey left Cardspring just before it was acquired by Twitter and started the Grace Science Foundation to fund research into NGLY1 deficiency. The foundation has committed $7 million to research since then, most of it raised from the Wilseys’ personal network.

Many other families with sick loved ones have started foundations, and some have succeeded. In 1991, for instance, a Texas boy named Ryan Dant was diagnosed with a fatal muscle-wasting disease called mucopolysaccharidosis type 1. His parents raised money to support an academic researcher who was working on a cure for MPS1; a company agreed to develop the drug, which became the first approved treatment for the disease in 2003.

But unlike Dant, Grace had a completely new disease. Nobody was researching it. So Wilsey began cold-calling dozens of scientists, hoping to convince them to take a look at NGLY1 deficiency; if they agreed to meet, Wilsey read up on how their research might help his daughter. Eventually he recruited more than 100 leading scientists, including Nobel Prize-winning biologist Shinya Yamanaka and Carolyn Bertozzi, to figure out what was so important about N-glycanase. He knew that science was unpredictable and so distributed Grace Science’s funding through about 30 grants worth an average of $135,000 apiece.

Two years later, one line of his massively parallel attack paid off.

Matt Wilsey, Grace’s father.


Bertozzi, a world-leading chemist, studies enzymes that add and remove sugars from other proteins, fine-tuning their activity. N-glycanase does just that, ripping sugars off other proteins. Our cells are not packed with the white, sweet stuff that you add to your coffee. But tiny sugar molecules, similar in structure to table sugar, can attach themselves to proteins inside cells, acting like labels that tell the cell what to do with those proteins.

Scientists thought that N-glycanase’s main role was to help recycle defective proteins, but many other enzymes are also involved in this process. Nobody understood why the loss of N-glycanase had such drastic impacts on NGLY1 kids.

In 2016, Bertozzi had an idea. She thought N-glycanase might be more than just a bit player in the cell’s waste management system, so she decided to check whether it interacts with another protein that turns on the proteasome, the recycling machine within each of our cells.

This protein is nicknamed Nerf, after its abbreviation, Nrf1. But fresh-made Nerf comes with a sugar attached to its end, and as long as that sugar sticks, Nerf doesn’t work. Some other protein has to chop the sugar off to turn on Nerf and activate the cellular recycling service.

Think of Nerf’s sugar like the pin in a grenade: You have to remove the pin—or in this case, the sugar—to explode the grenade and break down faulty proteins.

But nobody knew what protein was pulling the pin out of Nerf. Bertozzi wondered if N-glycanase might be doing that job.

To find out, she first tested cells from mice and humans with and without working copies of the NGLY1 gene. The cells without NGLY1 weren’t able to remove Nerf’s sugar, but those with the enzyme did so easily. If Bertozzi added N-glycanase enzymes to cells without NGLY1, the cells began chopping off Nerf’s sugar just as they were supposed to: solid evidence, she thought, that N-glycanase and Nerf work together. N-glycanase pulls the pin (the sugar) out of the grenade (the Nerf protein) to trigger the explosion (boom).

The finding opened new doors for NGLY1 disease research. It gave scientists the first real clue about how NGLY1 deficiency affects patients’ bodies: by profoundly disabling their ability to degrade cellular junk via the proteasome.

As it turns out, the proteasome is also involved in a whole host of other diseases, such as cancer and brain disorders, that are far more common than NGLY1 deficiency. Wilsey immediately grasped the business implications: He had taken a moon shot, but he’d discovered something that could get him to Mars. Pharmaceutical companies had declined to work on NGLY1 deficiency because they couldn’t make money from a drug for such a rare disease. But Bertozzi had now linked NGLY1 deficiency to cancer and maladies such as Parkinson’s disease, through the proteasome—and cancer drugs are among the most profitable medicines.

Suddenly, Wilsey realized that he could invent a new business model for rare diseases. Work on rare diseases, he could argue, could also enable therapies for more common—and therefore profitable—conditions.

In early 2017, Wilsey put together a slide deck—the same kind he’d used to convince investors to fund his tech startups. Only this time, he wanted to start a biotechnology company focused on curing diseases linked to NGLY1. Others had done this before, such as John Crowley, who started a small biotechnology company that developed the first treatment for Pompe disease, which two of his children have. But few have been able to link their rare diseases to broader medical interests in the way that Wilsey hoped to.

He decided to build a company that makes treatments for both rare and common diseases involving NGLY1. Curing NGLY1 disease would be to this company as search is to Google—the big problem it was trying to solve, its reason for existence. Treating cancer would be like Google’s targeted advertising—the revenue stream that would help the company get there.

But his idea had its skeptics, Wilsey’s friends among them.

One, a biotechnology investor named Kush Parmar, told Wilsey about some major obstacles to developing a treatment for NGLY1 deficiency. Wilsey was thinking of using approaches such as gene therapy to deliver corrected NGLY1 genes into kids, or enzyme replacement therapy, to infuse kids with the N-glycanase enzyme they couldn’t make on their own.

But NGLY1 deficiency seems particularly damaging to cells in the brain and central nervous system, Parmar pointed out—places that are notoriously inaccessible to drugs. It’s hard to cure a disease if you can’t deliver the treatment to the right place.

Other friends warned Wilsey that most biotech startups fail. And even if his did succeed as a company, it might not achieve the goals that he wanted it to. Ken Drazan, president of the cancer diagnostics company Grail, is on the board of directors of Wilsey’s foundation. Drazan warned Wilsey that his company might be pulled away from NGLY1 deficiency. “If you take people’s capital, then you have to be open to wherever that product development takes you,” Drazan said.

But Wilsey did have some things going for him. Biotechnology companies have become interested of late in studying rare diseases—ones like the type of blindness for which the gene therapy was approved last year. If these treatments represent true cures, they can command a very high price.

Still, the newly approved gene therapy for blindness may be used in 6,000 people, 100 times more than could be helped by an NGLY1 deficiency cure. Wilsey asked dozens of biotechnology and pharmaceutical companies if they would work on NGLY1 deficiency. Only one, Takeda, Japan’s largest drug company, agreed to conduct substantial early-stage research on the illness. Others turned him down flat.

If no one else was going to develop a drug to treat NGLY1 deficiency, Wilsey decided, he might as well try. “We have one shot at this,” he says. “Especially if your science is good enough, why not go for it?”

“Matt was showing classic entrepreneurial tendencies,” says Dan Levy, the vice president for small business at Facebook, who has known Wilsey since they rushed the same Stanford fraternity in the 1990s. “You have to suspend a little bit of disbelief, because everything is stacked against you.”

At 11 am, Grace sits in a classroom with a speech therapist. Though Grace doesn’t speak, she’s learning to use her “talker,” a tablet-sized device with icons that help her communicate. Grace grabs her talker and presses the icons for “play” and “music,” then presses a button to make her talker read the words out loud.

The "talker" used for Grace’s therapy.


“OK, play music,” her therapist says, starting up a nearby iPad.

Grace watches an Elmo video on the iPad for a few moments, her forehead crinkled in concentration, her huge brown eyes a carbon copy of her dad’s. Then Grace stops the video and searches for another song.

Suddenly, her therapist slides the iPad out of Grace’s reach.

“You want ‘Slippery Fish,’” her therapist says. “I want you to tell me that.”

Grace turns to her talker: “Play music,” she types again.

The therapist attempts one more time to help Grace say more clearly which particular song she wants. Instead, Grace selects the symbols for two new words.

“Feel mad,” Grace’s talker declares.

Grace working with a therapist in one of their therapy rooms.


There’s no denying how frustrating it can be for Grace to rely on other people to do everything for her, and how hard her family works to meet her constant needs.

Matt and Kristen can provide the therapy, equipment, medicines, and around-the-clock supervision that Grace needs to have a stable life. But that is not enough—not for Grace, who wants "Slippery Fish," nor for her parents, who want a cure.

So last summer, Wilsey raised money to bring the Vigeholms and the other NGLY1 families to Palo Alto, where they met with Grace’s doctors and the Grace Science Foundation researchers. One Japanese scientist, Takayuki Kamei, was overjoyed to meet two of the NGLY1 deficiency patients: “I say hello to their cells every morning,” he told their parents.

And because all of these families also want a cure, each also donated blood, skin, spit, stool, and urine to the world’s first NGLY1 deficiency biobank. In four days, scientists collected more NGLY1 deficiency data than had been collected in the entire five years since the disease was discovered. These patient samples, now stored at Stanford University and at Rutgers University, have been divvied up into more than 5,000 individual samples that will be distributed to academic and company researchers who wish to work on NGLY1 deficiency.

That same month, Wilsey closed a seed round of $7 million to start Grace Science LLC. His main backer, a veteran private equity investor, prefers not to be named. Like many in Silicon Valley, he’s recently become attracted to health care by the promise of a so-called “double bottom line”: the potential both to make money and to do good by saving lives.

Wilsey is chief executive of the company and heavily involved in its scientific strategy. He’s looking for a head scientist with experience in gene therapy and in enzyme replacement therapy, which Mark Dant and John Crowley used to treat their sick children. Gene therapy now seems poised to take off after years of false starts; candidate cures for blood and nervous system disorders are speeding through clinical trials, and companies that use Crispr have raised more than $1 billion.

Wilsey doesn’t know which of these strategies, if any, will save Grace. But he hopes his company will find an NGLY1 deficiency cure within five years. The oldest known NGLY1-deficient patient is in her 20s, but because nobody was looking for these patients until now, it’s impossible to know how many others—like Bertram—didn’t make it that long.

“We don’t know what Grace’s lifespan is,” Wilsey says. “We’re always waiting for the other shoe to drop.”

But at 3 pm on this one November day, that doesn’t seem to matter.

School’s out, and Grace is seated atop a light chestnut horse named Ned. Five staff members lead Grace through a session of equine therapy. Holding herself upright on Ned’s back helps Grace develop better core strength and coordination.

Grace on her horse.


Grace and Ned walk under a canopy of oak trees. Her face is serene, her usually restless legs still as Ned paces through late-afternoon sunshine. With a little grace, there may be a cure for her yet.

Read more: https://www.wired.com/story/a-familys-race-to-cure-a-daughters-genetic-disease/

How the sushi boom is fuelling tapeworm infections

As eating raw fish has become more popular, gruesome tapeworm tales have emerged. But how worried should sashimi lovers be and how else might we become infected?

Read more: https://www.theguardian.com/world/2018/jan/22/how-the-sushi-boom-is-fuelling-tapeworm-infections

Why No Gadget Can Prove How Stoned You Are

If you’ve spent time with marijuana—any time at all, really—you know that the high can be rather unpredictable. It depends on the strain, its level of THC and hundreds of other compounds, and the interaction between all these elements. Oh, and how much you ate that day. And how you took the cannabis. And the position of the North Star at the moment of ingestion.

OK, maybe not that last one. But as medical and recreational marijuana use spreads across the United States, how on Earth can law enforcement tell if someone they’ve pulled over is too high to be driving, given all these factors? Marijuana is such a confounding drug that scientists and law enforcement are struggling to create an objective standard for marijuana intoxication. (Also, I’ll say this early and only once: For the love of Pete, do not under any circumstances drive stoned.)

Sure, the cops can take you back to the station and draw a blood sample and determine exactly how much THC is in your system. “It's not a problem of accurately measuring it,” says Marilyn Huestis, coauthor of a new review paper in Trends in Molecular Medicine about cannabis intoxication. “We can accurately measure cannabinoids in blood and urine and sweat and oral fluid. It's interpretation that is the more difficult problem.”

You see, different people handle marijuana differently. It depends on your genetics, for one. And how often you consume cannabis, because if you take it enough, you can develop a tolerance to it. A dose of cannabis that may knock amateurs on their butts could have zero effect on seasoned users—patients who use marijuana consistently to treat pain, for instance.

The issue is that THC—what’s thought to be the primary psychoactive compound in marijuana—interacts with the human body in a fundamentally different way than alcohol. “Alcohol is a water-loving, hydrophilic compound,” says Huestis. “Whereas THC is a very fat-loving compound. It's a hydrophobic compound. It goes and stays in the tissues.” The molecule can linger for up to a month, while alcohol clears out right quick.

But while THC may hang around in tissues, it starts diminishing in the blood quickly—really quickly. “It's 74 percent in the first 30 minutes, and 90 percent by 1.4 hours,” says Huestis. “And the reason that's important is because in the US, the average time to get blood drawn [after arrest] is between 1.4 and 4 hours.” By the time you get to the station to get your blood taken, there may not be much THC left to find. (THC tends to linger longer in the brain because it’s fatty in there. That’s why the effects of marijuana can last longer than THC is detectable in breath or blood.)

So law enforcement can measure THC, sure enough, but not always immediately. And they’re fully aware that marijuana intoxication is an entirely different beast than drunk driving. “How a drug affects someone might depend on the person, how they used the drug, the type of drug (e.g., for cannabis, you can have varying levels of THC between different products), and how often they use the drug,” California Highway Patrol spokesperson Mike Martis writes in an email to WIRED.

Accordingly, in California, where recreational marijuana just became legal, the CHP relies on other observable measurements of intoxication. If an officer does field sobriety tests like the classic walk-and-turn maneuver, and suspects someone may be under the influence of drugs, they can request a specialist called a drug recognition evaluator. The DRE administers additional field sobriety tests—analyzing the suspect’s eyes and blood pressure to try to figure out what drug may be in play.

The CHP says it’s also evaluating the use of oral fluid screening gadgets to assist in these drug investigations. (Which devices exactly, the CHP declines to say.) “However, we want to ensure any technology we use is reliable and accurate before using it out in the field and as evidence in a criminal proceeding,” says Martis.

Another option would be to test a suspect’s breath with a breathalyzer for THC, which startups like Hound Labs are chasing. While THC sticks around in tissues, it’s no longer present in your breath after about two or three hours. So if a breathalyzer picks up THC, that would suggest the stuff isn’t lingering from a joint smoked last night, but one smoked before the driver got in a car.

This could be an objective measurement of the presence of THC, but not much more. “We are not measuring impairment, and I want to be really clear about that,” says Mike Lynn, CEO of Hound Labs. “Our breathalyzer is going to provide objective data that potentially confirms what the officer already thinks.” That is, if the driver was doing 25 in a 40 zone and they blow positive for THC, evidence points to them being stoned.

But you might argue that even using THC to confirm inebriation goes too far. The root of the problem isn’t really about measuring THC, it’s about understanding the galaxy of active compounds in cannabis and their effects on the human body. “If you want to gauge intoxication, pull the driver out and have him drive a simulator on an iPad,” says Kevin McKernan, chief scientific officer at Medicinal Genomics, which does genetic testing of cannabis. “That'll tell ya. The chemistry is too fraught with problems in terms of people's individual genetics and their tolerance levels.”

Scientists are just beginning to understand the dozens of other compounds in cannabis. CBD, for instance, may dampen the psychoactive effects of THC. So what happens if you get dragged into court after testing positive for THC, but the marijuana you consumed was also a high-CBD strain?

“It significantly compounds your argument in court with that one,” says Jeff Raber, CEO of the Werc Shop, a cannabis lab. “‘I saw this much THC, you’re intoxicated.’ ‘Really? Well, I also had twice as much CBD; doesn’t that cancel it out?’ ‘I don’t know, when did you take that CBD? Did you take it afterwards, did you take it before?’

“If you go through all this effort and spend all the time and money and drag people through court and spend taxpayer dollars, we shouldn't be in there with tons of question marks,” Raber says.

But maybe one day marijuana roadside testing won’t really matter. “I really think we're probably going to see automated cars before we're going to see this problem solved in a scientific sense,” says Raber. Don’t hold your breath, then, for a magical device that tells you you’re stoned.

Read more: https://www.wired.com/story/why-no-gadget-can-prove-how-stoned-you-are/