On April 25, at 10:50 a.m. local time, a white helium balloon ascended from Wanaka, New Zealand, and lifted Angela Olinto’s hopes into the stratosphere. The football stadium-size NASA balloon, now floating 20 miles above the Earth, carries a one-ton detector that Olinto helped design and see off the ground. Every moonless night for the next few months, it will peer out at the dark curve of the Earth, hunting for the fluorescent streaks of mystery particles called “ultrahigh-energy cosmic rays” crashing into the sky. The Extreme Universe Space Observatory Super Pressure Balloon (EUSO-SPB) experiment will be the first ever to record the ultraviolet light from these rare events by looking down at the atmosphere instead of up. The wider field of view will allow it to detect the streaks at a faster rate than previous, ground-based experiments, which Olinto hopes will be the key to finally figuring out the particles’ origin.
Olinto, the leader of the seven-country EUSO-SPB experiment, is a professor of astrophysics at the University of Chicago. She grew up in Brazil and recalls that during her “beach days in Rio” she often wondered about nature. Over the 40 years since she was 16, Olinto said, she has remained captivated by the combined power of mathematics and experiments to explain the universe. “Many people think of physics as hard; I find it so elegant, and so simple compared to literature, which is really amazing, but it’s so varied that it’s infinite,” she said. “We have four forces of nature, and everything can be done mathematically. Nobody’s opinions matter, which I like very much!”
Olinto has spent the last 22 years theorizing about ultrahigh-energy cosmic rays. Composed of single protons or heavier atomic nuclei, they pack within quantum proportions as much energy as baseballs or bowling balls, and hurtle through space many millions of times more energetically than particles at the Large Hadron Collider, the world’s most powerful accelerator. “They’re so energetic that theorists like me have a hard time coming up with something in nature that could reach those energies,” Olinto said. “If we didn’t observe these cosmic rays, we wouldn’t believe they actually would be produced.”
Olinto and her collaborators have proposed that ultrahigh-energy cosmic rays could be emitted by newly born, rapidly rotating neutron stars, called “pulsars.” She calls these “the little guys,” since their main competitors are “the big guys”: the supermassive black holes that churn at the centers of active galaxies. But no one knows which theory is right, or if it’s something else entirely. Ultrahigh-energy cosmic rays pepper Earth so sparsely and haphazardly — their paths skewed by the galaxy’s magnetic field — that they leave few clues about their origin. In recent years, a hazy “hot spot” of the particles coming from a region in the Northern sky seems to be showing up in data collected by the Telescope Array in Utah. But this potential clue has only compounded the puzzle: Somehow, the alleged hot spot doesn’t spill over at all into the field of view of the much larger and more powerful Pierre Auger Observatory in Argentina.
To find out the origin of ultrahigh-energy cosmic rays, Olinto and her colleagues need enough data to produce a map of where in the sky the particles come from — a map that can be compared with the locations of known cosmological objects. “In the cosmic ray world, the big dream is to point,” she said during an interview at a January meeting of the American Physical Society in Washington, D.C.
She sees the current balloon flight as a necessary next step. If successful, it will serve as a proof of principle for future space-based ultrahigh-energy cosmic-ray experiments, such as her proposed satellite detector, Poemma (Probe of Extreme Multi-Messenger Astrophysics). While in New Zealand in late March preparing for the balloon launch, Olinto received the good news from NASA that Poemma had been selected for further study.
Olinto wants answers, and she has an ambitious timeline for getting them. An edited and condensed version of our conversations in Washington and on a phone call to New Zealand follows.
QUANTA MAGAZINE: What was your path to astrophysics and ultrahigh-energy cosmic rays?
ANGELA OLINTO: I was really interested in the basic workings of nature: Why three families of quarks? What is the unified theory of everything? But I realized how many easier questions we have in astrophysics: that you could actually take a lifetime and go answer them. Graduate school at MIT showed me the way to astrophysics — how it can be an amazing route to many questions, including how the universe looks, how it functions, and even particle physics questions. I didn’t plan to study ultrahigh-energy cosmic rays; but at every step it was, “OK, it looks promising.”
How long have you been trying to answer this particular question?
In 1995, we had a study group at Fermilab for ultrahigh-energy cosmic rays, because the AGASA (Akeno Giant Air Shower Array) experiment was seeing these amazing events that were so energetic that the particles broke a predicted energy limit known as the “GZK cutoff.” I was studying magnetic fields at the time, and so Jim Cronin, who just passed away last year in August — he was a brilliant man, charismatic, full of energy, lovely man — he asked that I explain what we know about cosmic magnetic fields. At that time the answer was not very much, but I gave him what we did know. And because he invited me I got to learn what he was up to. And I thought, wow, this is pretty interesting.
Later you helped plan and run Pierre Auger, an array of detectors spread across 3,000 square kilometers of Argentinian grassland. Did you actually go around and persuade farmers to let you put detectors on their land?
Not me; it was the Argentinian team who did the amazing job of talking to everybody. The American team helped build a planetarium and a school in that area, so we did interact with them, but not directly on negotiations over land. In Argentina it was like this: You get a big fraction of folks who are very excited and part of it from the beginning. Gradually you got through the big landowners. But eventually we had a couple who were really not interested. So we had two regions in the middle of the array that were empty of the detectors for quite some time, and then we finally closed it.
Space is much easier in that sense; it’s one instrument and no one owns the atmosphere. On the other hand, the nice thing about having all the farmers involved is that Malargüe, the city in Argentina that has had the detectors deployed, has changed completely. The students are much more connected to the world and speak English. Some are coming to the U.S. for undergraduate and even graduate school eventually. It’s been a major transformation for a small town where nobody went to college before. So that was pretty amazing. It took a huge outreach effort and a lot of time, but this was very important, because we needed them to let us in.
Why is space the next step?
To go the next step on the ground — to get 30,000 square kilometers instrumented — is something I tried to do, but it’s really difficult. It’s hard enough with 3,000; it was crazy to begin with, but we did it. To get to the next order of magnitude seems really difficult. On the other hand, going to space you can see 100 times more volume of air in the same minute. And then we can increase by orders of magnitude the ability to see ultrahigh-energy cosmic rays, see where they are coming from, how they are produced, what objects can reach these kinds of energies.
What will we learn from EUSO-SPB?
We will not have enough data to revolutionize our understanding at this point, but we will show how it can be done from space. The work we do with the balloon is really in preparation for something like Poemma, our proposed satellite experiment. We plan to have two telescopes free-flying and communicating with each other, and by recording cosmic-ray events with both of them we should be able to also reconstruct the direction and composition very precisely.
Speaking of Poemma, do you still teach a class called Cosmology for Poets?
We don’t call it that anymore, but yes. What it entails is teaching nonscience majors what we know about the history of the universe: what we’ve learned and why we think it is the way it is, how we measure things and how our scientific understanding of the history of the universe is now pretty interesting. First, we have a story that works brilliantly, and second, we have all kinds of puzzles like dark matter and dark energy that are yet to be understood. So it gives the sense of the huge progress since I started looking at this. It’s unbelievable; in my lifetime it’s changed completely, and mostly due to amazing detections and observations.
One thing I try to do in this course is to mix in some art. I tell them to go to a museum and choose an object or art piece that tells you something about the universe — that connects to what we talked about in class. And here my goal is to just make them dream a bit free from all the boundaries of science. In science there’s right and wrong, but in art there are no easy right and wrong answers. I want them to see if they can have a personal attachment to the story I told them. And I think art helps me do that.
You’ve said that when you left Brazil for MIT at 21, you were suffering from a serious muscle disease called polymyositis, which also recurred in 2006. Did those experiences contribute to your drive to push the field forward?
I think this helps me not get worked up about small stuff. There are always many reasons to give up when working on high-risk research. I see some colleagues who get worked up about things that I’m like, whatever, let’s just keep going. And I think that attitude to minimize things that are not that big has to do with being close to death. Being that close, it’s like, well, everything is positive. I’m very much a positive person and most of the time say, let’s keep pushing. I think having a question that is not answered that is well posed is a very good incentive to keep moving.
Between the “big guys” and the “little guys” — black holes versus pulsating neutron stars — what’s your bet for which ones produce ultrahigh-energy cosmic rays?
I think it’s 50-50 at this point — both can do it and there’s no showstopper on either side — but I root always for the underdog. It looks like ultrahigh-energy cosmic rays have a heavier composition, which helps the neutron star case, since we had heavy elements in our neutron star models from the beginning. However, it’s possible that supermassive black holes do the job, too, and basically folks just imagine that the bigger the better, so the supermassive black holes are usually a little bit ahead. It could be somewhere in the middle: intermediate-mass black holes. Or ultrahigh-energy cosmic rays could be related to other interesting phenomena, like fast radio bursts, or something that we don’t know anything about.
When do you think we’ll know for sure?
You know how when you climb the mountain — I rarely look at where I’m going. I look at the next two steps. I know I’m going to the top but I don’t look at the top, because it’s difficult to do small steps when the road is really long. So I don’t try to predict exactly. But I would imagine — we have a decadal survey process, so that takes quite some time, and then we have another decade — so let’s say, in the 2030s we should know the answer.
Our Insights questions this month were based on the vagaries of the modern calendar and that eternal question about any specified date: “What day of the week is that?” Our first two questions concerned the frequency of Friday the 13th, a day some consider unlucky.
The year 2017 began with a Friday the 13th in January, and another one is due in October. What are the maximum and minimum numbers of Friday the 13th’s that there can be in a Gregorian calendar year?
This question can, of course, be solved by brute force methods, but can you find an easy way to answer it that you could conceivably even do in your head?
The maximum number of Friday the 13th’s in a year is three, and the minimum is one, as Cameron Eggins explains. Here’s a way to think about it: If two given months start on the same day of the week, then the 13th of each month will also fall on the same day of the week. So all we need to do is figure out which months start on the same day of the week, for both nonleap and leap years.
Let’s map the days of the week to the numbers 0 through 6, and assign the base number 0 for January. We can now find the offsets to this base number for each subsequent month by casting out complete weeks. Basically, we perform modulo arithmetic: Take the number of days in the month, divide by 7 and add the remainder to the previous month’s number, and reduce the sum to a number below 7, if necessary, to get the number for the next month. January has 31 days, which is four complete weeks plus three days, so February’s base number is 0 + 3, which is 3. Continuing this way, we obtain the numbers 0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5, which give the offsets for the 12 months in a nonleap year. The number 3 occurs three times, for February, March and November, and this is the maximum number of occurrences of all the numbers. If any one of these three months has a Friday the 13th, so will the other two months. Hence three is the maximum number of Friday the 13th’s that it is possible to have in a nonleap year. Now notice that each of the seven numbers occurs at least once, which means that all starting days of the week are represented, so you cannot avoid having at least one Friday the 13th in a nonleap year. Doing the same procedure for leap years does not change our maximum and minimum, which remain 3 and 1 respectively.
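As a sanity check, the offset derivation above can be reproduced in a few lines of Python (a sketch; the variable names are mine):

```python
from collections import Counter

# Recompute the month offsets by casting out whole weeks, as described above.
month_days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # nonleap year

offsets = [0]                      # January is the base, 0
for days in month_days[:-1]:
    offsets.append((offsets[-1] + days) % 7)

print(offsets)                     # [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5]

counts = Counter(offsets)
print(max(counts.values()))        # 3: at most three months share a start day
print(len(counts))                 # 7: every weekday starts some month
```

Changing February to 29 days and rerunning gives the leap-year offsets, with the same maximum of 3 and all seven residues still present.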
Incidentally, if you memorize this string of numbers that represent the months — 0 3 3 6 1 4 6 2 5 0 3 5 — you can figure out the day of the week for any date using simple addition and casting out 7s. Let’s take Oct. 13, 2017. Map the weekdays to the digits 0 through 6 such that 0 = Sunday and 6 = Saturday. Add the number of years since 2001 to the number of leap years since 2001: 16 + 4 = 20 = 6 (mod 7). Now add the date and the offset number for the month, giving 6 + 13 + 0 = 19 = 5, which is a Friday. You can do it pretty quickly in your head with some practice. For dates in the 1900s use the number of years plus number of leap years since 1900.
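The mental recipe can be sketched as a small Python function (the names are my own; it covers nonleap years in the 2000s, matching the column’s example, and a leap year’s January and February would need one less leap day counted):

```python
OFFSETS = [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 5]   # month offsets, nonleap year
WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]

def weekday(year, month, day):
    """Day of the week via the column's recipe, for nonleap years in the 2000s."""
    years = year - 2001            # number of years since 2001
    leaps = years // 4             # leap years since 2001 (2004, 2008, ...)
    return WEEKDAYS[(years + leaps + day + OFFSETS[month - 1]) % 7]

print(weekday(2017, 10, 13))  # Friday
```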
Suppose that instead of being spooked by Friday the 13th, you consider it to be your lucky day, and you want to maximize the number of Friday the 13th’s in a year. You are allowed to tamper with the monthly distribution of days in a normal nonleap year in the following way: You can take away one day from any month of the year and add it to any other. For instance, you could, like Robin Hood, rob the day-rich December of one day, reducing it to 30 days, and bump up February’s quota to 29 days. Or you could, like a kleptocrat, decree that January has 32 days while poor February has just 27. What’s the maximum number of Friday the 13th’s you could create in this way in a single year? What if you could do the above procedure for two pairs of months, without using any month twice?
The answers to the two questions are 4 and 5.
You can solve this by inspecting the string of numbers we obtained above: 0 3 3 6 1 4 6 2 5 0 3 5. There are already three 3s, so it is simplest to try to maximize them. Notice that there is a 4 and a 2, and we can convert both to 3s by performing the day-borrowing procedure on adjacent months. By taking a day from May and giving it to June, we can surgically alter June’s 4 to a 3, thus adding a fourth 3 and potentially creating a fourth Friday the 13th. Similarly, by borrowing a day from August and giving it to July, we can change August’s 2 to a fifth 3, without changing any of the other offsets. So our string of offsets is now 0 3 3 6 1 3 6 3 5 0 3 5, which includes five 3s. This means that if February has a Friday the 13th, so will March, June, August and November. Lucky you!
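The same offset routine confirms the tampered calendar (a sketch; the modified month lengths reflect the two swaps described above: May gives a day to June, and August gives a day to July):

```python
from collections import Counter

def offsets(month_days):
    """Month start-day offsets mod 7, with January as the base 0."""
    out = [0]
    for d in month_days[:-1]:
        out.append((out[-1] + d) % 7)
    return out

# Nonleap year after the swaps: May 30, June 31, July 32, August 30.
tampered = [31, 28, 31, 30, 30, 31, 32, 30, 30, 31, 30, 31]
offs = offsets(tampered)
print(offs)                          # [0, 3, 3, 6, 1, 3, 6, 3, 5, 0, 3, 5]
print(max(Counter(offs).values()))   # 5: five months can share a Friday the 13th
```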
Pete Winkler pointed out that the 13th is in fact more likely to be a Friday than any other day of the week, something, he said, that was proved by a 13-year-old! Just knowing this fact, it is possible to conclude that there is an integral number of weeks in a time period of 400 years. Do you see how?
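Both facts can be checked numerically with Python’s standard `datetime` module; this sketch tallies the weekday of every 13th across one full 400-year Gregorian cycle:

```python
from datetime import date

# The Gregorian calendar repeats every 400 years: 146,097 days,
# which is an exact number of weeks (20,871).
days = 400 * 365 + 97            # 97 leap days per 400-year cycle
print(days % 7)                  # 0

# Tally which weekday each of the 4,800 month-13ths falls on.
counts = [0] * 7                 # Monday..Sunday, datetime's convention
for year in range(2001, 2401):
    for month in range(1, 13):
        counts[date(year, month, 13).weekday()] += 1

print(counts[4], max(counts))    # Friday (index 4) is the unique maximum
```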
On the subject of days and dates, there is something special about the 12th of March, the 9th of May and the 11th of July that requires a global perspective to appreciate.
Each of these dates has a “twin” date with which it shares a special property. Can you figure out what it is? Note: There are some other pairs of twin dates (how many?) that have a similar property, but the three that are mentioned above (with their twin dates) possess it to a degree that is ahead of the others by leaps and bounds.
This question was correctly answered by amrith raghavan:
“The 12th of March, the 9th of May and the 11th of July share the property that they fall on the same day of the week even if the month and day of the dates are switched. That is, these dates would fall on the same day of the week if written the American way (mm/dd/yyyy) or the British way (dd/mm/yyyy).”
Yes, indeed! These dates fall on the same day of the week whether they are written in the globally more common European format (DD-MM-YYYY) or in the American format (MM-DD-YYYY). These three pairs of dates (March 12 and December 3, May 9 and September 5, and July 11 and November 7) are unique in that they fall on the same day in both nonleap and leap years.
There are six other pairs of dates that share this property, but three of them work only in nonleap years (01-07/07-01, 01-11/11-01 and 02-08/08-02) and three only in leap years (01-06/06-01, 02-03/03-02 and 02-12/12-02). Several years ago, I constructed a science fiction adventure based on this fact.
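A brute-force sketch, using Python’s `datetime` with 2017 as a sample nonleap year and 2016 as a leap year, recovers all nine pairs:

```python
from datetime import date

def twins(year):
    """Pairs (m, d), m < d <= 12, whose swapped date falls on the same weekday."""
    return {(m, d)
            for m in range(1, 13) for d in range(m + 1, 13)
            if date(year, m, d).weekday() == date(year, d, m).weekday()}

both = twins(2017) & twins(2016)       # pairs that work in nonleap AND leap years
print(sorted(both))                    # [(3, 12), (5, 9), (7, 11)]
print(sorted(twins(2017) - both))      # nonleap-only: [(1, 7), (1, 11), (2, 8)]
print(sorted(twins(2016) - both))      # leap-only: [(1, 6), (2, 3), (2, 12)]
```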
In the Insights column, we also discussed calendar reform. In response to one suggestion that involves having 13 months of 28 days each, with one or two additional extracalendar holidays, I commented that such a system does not preserve quarters of three months and asked, “Can you think of a way that preserves weeks of 7 days, has months of about 30 days (let’s say 30 plus or minus no more than 5 days), preserves equal quarters and equal seasons, and does not need a new calendar every year?” However, after reading the comment by David Prentiss, I agree that there is no need to try to preserve three-month quarters, as doing so would require the insertion of extracalendar days four times in the year. Instead, it is much easier to redefine the quarter to be exactly 13 weeks (three 28-day months plus one week, a very small adjustment). The beauty of the 13-month calendar, as David Prentiss stated, is that it avoids mid-year insertions by putting all adjustments (which are limited to just one or two days) at the year’s end. So I rescind the question: I think there is no doubt that the 13-month calendar is the most logical, and we should move to it. Of course, there is little chance of that happening in the near future, if ever. Calendar reform faces a great deal more entrenched resistance than just from triskaidekaphobes!
Finally, I asked for readers’ views on what the base year for a universal human calendar should be. Michael Ahern suggested 1969, when human beings first landed on the moon. That’s definitely a good candidate. But for me, no other choice can come close to the year when Charles Darwin published what the philosopher Daniel Dennett has called “the greatest idea that anyone ever had” — the theory of evolution by natural selection. For the first time in our history, we as a species glimpsed our true origins. The year 1859 marks the emergence of our species from its intellectual childhood, from the realm of magic and fantasy into the world of rational thought.
As usual, it was not easy to decide the winner of the Quanta T-shirt this month. I’ve decided to give it to amrith raghavan, based on his answer to Question 3. Considering how he ended his comment, I hope he is sober now! Cameron Eggins just misses out. See you next month for new insights.
In a recent work (Burgarth et al 2014, Nat. Commun. 5, 5173), it was shown that a series of frequent measurements can project the dynamics of a quantum system onto a subspace in which the dynamics can be more complex. In this subspace, even full controllability can be achieved, although the controllability over the system before the projection is very poor, since the control Hamiltonians commute with each other. We can also think of the opposite: any Hamiltonians of a quantum system, which are in general noncommutative with each other, can be made commutative by embedding them in an extended Hilbert space, so that the dynamics in the extended space become trivial and simple. This idea of making noncommutative Hamiltonians commutative is called ‘Hamiltonian purification.’ The original noncommutative Hamiltonians are recovered by projecting the system back onto the original Hilbert space through frequent measurements. Here, we generalise this idea to open-system dynamics…
At a conference in Maine during the summer of 2008, the biochemist David Sabatini stood before an audience of his peers, prepared to dazzle them with a preview of unpublished results emerging from his lab at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. The presentation did not go over well. His group was studying mTOR, a cellular enzyme he and colleagues had discovered more than a decade earlier. Among other things, they had tried to find out where mTOR aggregates inside cells, since this seemed likely to help explain the enzyme’s remarkable but mysterious influence over diverse cellular growth processes. Sabatini proudly projected a slide with the team’s findings, showing the enzyme arrayed along the surface of the organelles called lysosomes.
The audience was dubious. “People literally got up and said, ‘David, that’s the trash bin of the cell. It doesn’t make sense. Why decorate the outside of a trash can?’” Sabatini recalled.
Over the nine years since Sabatini’s talk, lysosomes have won more respect. Research continues to show that lysosomes transcend the trash can role, acting as crucial advisers to the nucleus in its job of genetic regulation. That leap in status was obvious at the fourth Gordon Research Conference on Lysosomal Diseases, held March 5-10 in Barga, Italy. The lysosome was also celebrated in a paper that appeared last October in the Annual Review of Cell and Developmental Biology, “The Lysosome as a Regulatory Hub.” Its authors, the San Francisco Bay Area researchers Rushika Perera and Roberto Zoncu, observed that recent studies have “raised the status of the lysosome from a catabolic dead end to a key signaling node, with far-reaching implications for our understanding of the logic of metabolic regulation both in health and in disease.”
In this loftier reckoning of lysosomes, the organelles deftly integrate metabolic information from throughout the cell and communicate it to the nucleus. Like snooping garbage collectors who learn the secrets of all the homeowners on their route, lysosomes gain a uniquely informed perspective on a cell’s status by picking through its molecular discards. And some of the finely tuned genetic controls of the nucleus would possibly be pilotless without them.
Lysosomes first drew attention in the 1950s, when the Belgian biochemist Christian de Duve stumbled across the saclike intracellular structures while trying to purify a protein found in rat livers. He named the previously unknown sacs after the Greek for “digestive body” because their contents were highly acidic and filled with enzymes that break down virtually any biomolecule that’s set before them. De Duve received a Nobel Prize for his discovery in 1974, but biologists were unenthusiastic about the organelle. Researchers nicknamed the lysosome “the recycle bin of the cell, or the trash can — nothing interesting,” said Zoncu, a biochemist at the University of California, Berkeley.
It wasn’t that lysosomes didn’t seem important — waste disposal systems inevitably are. They are responsible for digesting a cell’s damaged, malformed, superfluous or otherwise undesirable proteins and organelles, along with excess sugars and fats. When genetic defects cause lysosomes to make too little of any of the 60 or more enzymes associated with them, waste products pile up inside cells and cause lysosomal storage diseases, such as Tay-Sachs, Niemann-Pick and other disorders. Moreover, as a series of experiments led by Yoshinori Ohsumi (first at the University of Tokyo, then at Japan’s National Institute for Basic Biology) demonstrated in the 1990s, lysosomes are also instrumental in the vital process of autophagy, which allows cells to cannibalize their own organelles for resources in times of need and to combat the effects of illness and aging. That work brought Ohsumi a Nobel Prize in 2016 — a second Nobel to be awarded for work involving the cell’s lowly trash can.
But in the 1980s, when Andrea Ballabio, the founder of the Telethon Institute of Genetics and Medicine in Naples, was starting out in biological research, studies of the lysosome focused almost exclusively on what goes on inside it. He recalls the field as heavily and narrowly disease driven: Lysosome investigators purified enzymes that were deficient or dysfunctional in specific lysosomal storage disorders.
Ballabio had become interested in the lysosome while studying a particular kind of lysosomal storage disease. Multiple sulfatase deficiency causes scaly skin, stiff joints, seizures and developmental delays. The symptoms arise from mutations in a gene that, as Ballabio’s group discovered, is essential for the activation of a group of enzymes called sulfatases, many of which are lysosomal.
That discovery by Ballabio’s team, along with other studies of rare lysosomal disorders, convinced Ballabio that cells must have a system to boost lysosomal activity and a way to start making more lysosomes as cellular trash piles up. To do this, “you need to control the function of many different genes,” Ballabio said. He set out to find the master regulators that do this.
In 2009, his team reported that it had found an important one. They called it “transcription factor EB,” or TFEB. In the cell’s nucleus, TFEB binds to DNA sequences in many lysosomal genes and controls the rate at which they make proteins.
Precisely how TFEB’s activity could reflect the cell’s needs for lysosomes so comprehensively, however, was still unknown. But an answer would soon emerge from work that, at least initially, had nothing to do with lysosomes.
A Seat of Signaling
When Roberto Zoncu arrived as a postdoc at Sabatini’s lab at the Whitehead Institute in 2008, lysosomes were not uppermost in his mind. The lab’s focus was (and in many ways still is) on the enzyme that Sabatini had discovered in mammalian cells in 1994 and dubbed mechanistic target of rapamycin (mTOR). Implicated in aging and a slew of diseases including cancer and diabetes, mTOR signals cells to grow and divide under a surprisingly wide variety of circumstances. “One of the big motivating questions for us has been: How does that happen?” Sabatini said. “How does mTOR manage to sense so many things, integrate those signals and drive growth?”
A critical clue came when the team tracked the protein’s movements within cells. When cells were bathed in amino acid-free media, mTOR seemed to spread evenly throughout the cytoplasm. But if the media contained amino acids, within minutes mTOR moved into distinct clusters at specific locations inside the cell, shepherded there by other proteins called Rag GTPases. The enzymatic activity of mTOR depended on its reaching those locations, but the proteins that guided it there did not appear to turn it on. “We were stuck,” Sabatini said.
Zoncu therefore set out to learn what was special about where the mTOR protein was going in response to amino acids. In a key experiment, he stained cells with pairs of fluorescent antibodies: a red one that bound mTOR and a green one designed to bind to a protein associated with a different organelle in each round of the experiment. He then examined the cells under the microscope, looking for where the green and red fluorescent tags overlapped. This would indicate what else was located in the spots where the mTOR clustered.
Scanning the slide that stained for mitochondria — a potential target of huge metabolic importance — Zoncu found no overlap. He moved on to the slide for the next organelle, and the next. Still no overlap. “I almost lost hope,” Zoncu recalled.
Then came the lysosome slide. “All of a sudden, everything matched perfectly,” he said. The red mTOR staining and the green staining for lysosomal marker LAMP2 overlapped 100 percent.
Revving Up the Recycler
Those results added further support to the data Sabatini reported at the Maine conference in 2008 to his underwhelmed audience. But even Zoncu acknowledges that skepticism might have been warranted. Lysosomes, he says, could still “have just been a landing pad” — a convenient place for mTOR to touch down during activation.
Yet later experiments suggested otherwise. When Zoncu extracted lysosomes from cells and loaded them with amino acids, he saw that the more amino acids they carried, the more mTOR clustered on their surface and became active. (The enzyme mTOR forms two protein complexes in the cell; mTOR complex 1 [mTORC1] is the one found on lysosomes.) Those experiments, published in 2011, show that mTORC1 responds to the lysosomal contents, Sabatini says — as though the lysosomes tell mTORC1 about the amino acids they hold and mTORC1 adjusts its behavior accordingly.
Widening the Cellular Conversation
When Ballabio’s lab and Sabatini’s learned of one another’s results and joined forces, they soon worked out how the mTORC1 and TFEB pieces of the puzzle fit together, publishing the solution in 2012. In a healthy, well-fed cell, lysosomes have a cornucopia of proteins to break down to their amino acid components, and those amino acids work with proteins on the lysosome surface to anchor mTORC1 and activate it. The mTORC1 in turn keeps cytoplasmic TFEB out of the nucleus. When a cell becomes starved or stressed, mTORC1 drops away from the lysosome and TFEB is freed to bind its targets on the nuclear DNA. Acting as a master sensor of lysosomal function, TFEB turns on genes for more lysosomal enzymes.
Starvation is not the only stressor that unleashes TFEB from lysosomes. Ballabio and his colleagues showed recently that TFEB can zip to the nucleus to help cells handle other stressful scenarios. According to Ballabio, several groups have shown that administering TFEB as a gene therapy in mice can help to alleviate symptoms of lysosomal storage diseases, diet-induced obesity and diabetes, and neurodegenerative conditions akin to Alzheimer’s disease and Parkinson’s disease.
Early in March, while chairing the Gordon Conference on lysosomes, Ballabio reflected over the phone to Quanta about how dramatically the field had changed. The field was once overwhelmingly about lysosomal storage diseases; now, he says, disease investigators mingle freely with those doing basic research. And the focus on deficiencies inside lysosomes has shifted to the lysosomal membrane and the ways in which it enlists TFEB, mTOR and roughly 200 other identified proteins in a conversation with the rest of the cell.
Cancer is one stubborn condition that might yield to a better understanding of lysosomes. Because cancer cells need plenty of nutrients to grow, “they have to reprogram or rewire their stomachs — their lysosomes — to take in and process a lot of food,” said Perera, a cancer biologist at the University of California, San Francisco.
She and Zoncu are jointly exploring the differences between lysosomes of malignant and normal cells. In addition, they are working to identify surface proteins on the lysosome that let nutrients escape to the cytoplasm for tumor cells to use. Such proteins could be exploited as portals for introducing toxins or drugs that cancer cells would collect in their lysosomes, to their detriment.
Another intriguing possibility comes from Ralph A. Nixon, a neuroscientist at New York University Langone Medical Center who also spoke at the lysosome conference. Experiments in his lab and others’ have linked failing lysosomes to cellular aging, reduced longevity and a range of neurodegenerative disorders. In 2015 Nixon and colleagues showed that certain gene mutations linked to Alzheimer’s disease disable a proton pump that maintains the pH of lysosomes. That change in acidity, which alters the balance of ions and metabolites that migrate out of lysosomes, can impair a cell’s metabolism.
Neurons may be uniquely vulnerable to this kind of low-level lysosomal disruption, Nixon says, and that could explain why lysosomal disorders so often have neurological consequences. If future drug interventions could correct problems with the lysosomal proton pump, or with other disruptions of lysosome function, it might be possible to stave off some of the neurodegenerative effects of Alzheimer’s disease or other conditions. Several drug compounds that modify lysosomal function — either by increasing rates of autophagy, or by raising rates of lysosome creation, or both — have shown promise against neurological conditions in mice, Nixon says.
Lysosomes may once have seemed like garbage bins that were “boring on the outside,” Nixon says, but they are increasingly appreciated as regulated signal platforms crucial to cellular health. And as perspectives on the lysosome change, views of the associated biology shift, too. Perera notes that cancer researchers have long wanted to know more about the signals that let malignant cells grow and multiply nonstop, and about how the cells co-opt nutrients. The new view of lysosomes, she says, reveals that these are “all different aspects of the same problem.”