NASA Television to Broadcast Next Space Station Crew Launch, Docking
A multinational crew, including NASA astronaut Jessica Meir and the first space traveler from the United Arab Emirates (UAE), is scheduled to launch to the International Space Station Wednesday, Sept. 25. NASA Television and the agency’s website will provide live coverage of the crew’s launch and arrival.
Source: NASA Breaking news http://www.nasa.gov/press-release/nasa-television-to-broadcast-next-space-station-crew-launch-docking-0
NASA, Australian Space Agency to Sign Joint Statement at NASA Headquarters
Media are invited to a joint signing ceremony between NASA and the Australian Space Agency at 9 a.m. EDT Saturday, Sept. 21, at NASA Headquarters in Washington.
Source: NASA Breaking news http://www.nasa.gov/press-release/nasa-australian-space-agency-to-sign-joint-statement-at-nasa-headquarters
The Technologies Giving Rise to the Spatial Web
How each of us sees the world is about to change dramatically.
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.
Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with graphics processing units (GPUs)—electronic circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain networks can now distribute GPU processing power, and blockchains dedicated specifically to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to boost manufacturer productivity and improve the employee experience.
(1) Convergence with AI
In initial applications, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA) — whose member airlines carry 82 percent of global air traffic — has implemented industrial tech company Atheer’s AR headsets in cargo management. And with barely any delay, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success rates, Boeing brought Skylight’s smart AR glasses to the runway, now used in the manufacturing of hundreds of airplanes. Sure enough—the aerospace giant has now seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to equip technicians with “x-ray” vision, allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart glasses in warehouses across the world. Motivated by the pilot group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing the reflected light to generate 3D maps of the environment. Stereoscopic imaging, using two lenses, is also commonly used for depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance between nearby objects and the headset wearer is then measured by how far the phase of the return beam has shifted relative to the original signal. This phase difference reveals the location of each object within the field of view, enabling accurate hand-tracking and surface reconstruction.
With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easily mass-produced, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
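The phase-to-distance arithmetic behind ToF sensing is simple enough to sketch. In the illustration below, the 20 MHz modulation frequency is a hypothetical value chosen for readability, not a published HoloLens 2 specification:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by the phase shift of the modulated return beam.
    The light travels out and back, hence the factor of 2 folded into 4*pi."""
    return (C * phase_shift_rad) / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz: float) -> float:
    """Beyond this distance the phase wraps past 2*pi and ranges alias;
    'phased' ToF systems mix several frequencies to resolve the wrap."""
    return C / (2 * mod_freq_hz)

f_mod = 20e6  # hypothetical 20 MHz modulation frequency
print(round(ambiguity_range(f_mod), 2))            # 7.49 m of unambiguous range
print(round(tof_distance(math.pi / 2, f_mod), 2))  # quarter-cycle shift -> 1.87 m
```

Note that a single frequency gives only about 7.5 meters of unambiguous range here, which is one reason practical sensors combine multiple modulation frequencies.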
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background—mimicking natural visual perception and concentrating computing power on the area that needs it most.
Gaze tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues—even staring at an object for more than three seconds—will activate commands instantaneously.
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (input from an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration and intuitive hand and eye controls, and to deliver an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built specifically on the Ethereum blockchain for holographic media, and currently in beta testing, this network is set to revolutionize AR deployment accessibility.
Eric Schmidt, a former Alphabet chairman and an investor in Otoy’s network, has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed for cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
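The "hashed into blocks" mechanic can be illustrated generically. The sketch below shows plain hash-chaining—each block commits to its payload and to its predecessor's hash—and is not Scanetchain's or NEM's actual block format; the scan records are made up for illustration:

```python
import hashlib
import json

def make_block(transactions: list, prev_hash: str) -> dict:
    """Build a block whose hash covers both its transactions and the
    previous block's hash, so tampering with any earlier record
    invalidates every hash that follows it."""
    payload_hash = hashlib.sha256(
        json.dumps(transactions, sort_keys=True).encode()).hexdigest()
    header = {"prev_hash": prev_hash, "payload_hash": payload_hash}
    block_hash = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {**header, "transactions": transactions, "hash": block_hash}

genesis = make_block([{"user": "alice", "item": "scan:sneaker-123"}], "0" * 64)
nxt = make_block([{"user": "bob", "item": "scan:lamp-987"}], genesis["hash"])

assert nxt["prev_hash"] == genesis["hash"]  # the chain link
```

Changing even one field in the genesis block's transactions would yield a different hash and break the next block's `prev_hash` link—the property that makes such records tamper-evident.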
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.
Source: Singularity Hub
How a Centuries-Old Sculpting Method Is Helping 3D Print Organs With Blood Vessels
Blood vessels are the lifeline of any organ.
The dense web of channels, spread across tissues like a spider web, allow oxygen and nutrients to reach the deepest cores of our hearts, brains, and lungs. Without a viable blood supply, tissues rot from the inside. For any attempt at 3D printing viable organs, scientists have to tackle the problem of embedding millions of delicate blood vessels throughout their creation.
It’s a hideously hard problem. Although blood vessels generally resemble tree-like branches, their distribution, quantity, size, and specific structure vastly differ between people. So far, the easiest approach is to wash out cells from donated organs and repopulate the structure with recipient cells—a method that lowers immunorejection after transplant. Unfortunately, this approach still requires donor organs, and with 20 people in the US dying every day while waiting for an organ transplant, it’s not a great solution.
This week, a team from Harvard University took a stab at the impossible. Rather than printing an entire organ, they took a Lego-block-like approach, making organ building blocks (OBBs) with a remarkably high density of patient cells, and assembled the blocks into a “living” environment. From there, they injected a “sacrificial ink” into the proto-tissue. Upon gentle heating, the tissue matrix cures while the ink melts away—leaving a dense, interconnected 3D network of channels for blood to run through.
As a proof of concept, the team printed heart tissue using the strategy. Once the blocks fused, the lab-made chunk of heart could beat in synchrony and remained healthy for at least a week.
The technology, SWIFT (an eyebrow-raising backronym of “sacrificial writing into functional tissue”), is a creative push into a new generation of 3D biofabrication. Although OBBs have been around, the team explained, little attention was previously paid to putting the Lego pieces together with blood vessels.
“This is an entirely new paradigm for tissue fabrication,” said study author Dr. Mark Skylar-Scott. The focus is on vessels, which will support 3D printed living tissue that may eventually be used to repair damaged parts of a natural body, or even replace entire human organs with lab-grown versions, he added.
“[It’s] beautiful work,” commented tissue engineer Dr. Jordan Miller at Rice University, who was not involved in the study.
A Wild Mashup
SWIFT straddles two wildly diverse fields across centuries: organoids and 15th-century lost-wax casting.
You’ve heard of organoids. Often dubbed mini-organs, these lentil-sized blobs of tissue remarkably mimic particular aspects of entire organs—brain organoids, for example, show the characteristic nerve cell types and firing patterns of a preemie baby’s brain. The cellular inhabitants that make up organoids are what especially caught the team’s attention: most are grown from induced pluripotent stem cells (iPSCs), often skin cells “de-aged” so that they can develop into almost any cell type with a little chemical prodding.
Because organoids are built from a patient’s own cells, they’re completely compatible with the host from an immune standpoint. That particular strength is what made them attractive here: organoids, the team reasoned, make the “ideal” OBBs—or Lego pieces—to biomanufacture patient- and organ-specific tissues with all the desired properties.
For example, the team explained, organoids are packed with a high density of cells, which is usually hard to achieve with traditional 3D tissue printing. Under the right conditions, they also develop similarly to real organs in terms of cellular composition and microarchitecture to support function…for about a year. Without a blood vessel network, all organoids die.
Here’s where lost-wax technique comes in.
First, a very brief explainer. Throughout the Renaissance, the majority of Italian sculptors used the technique to fabricate bronze statues. In the simplest method, a statuette is first modeled in beeswax and covered in potter’s clay. Once dried, the assembly is heated—the clay is “fired” into ceramic, and the wax melts and flows away (hence, “lost”). Once cooled, the entire project is a hollow ceramic mold into which the artist can pour molten metal.
Now, replace beeswax with “sacrificial bio-ink,” and that’s pretty much how SWIFT carves out its intricate tunnels of blood vessels.
The fabrication process has two main steps. The team first grew hundreds of thousands of proto-organoids inside culture dishes. These tiny blobs are so small they don’t yet need to be churned inside a bioreactor, but they’re mightily packed with roughly 200 million cells per milliliter—about a fifth of a teaspoon. These make up the technique’s building blocks, or OBBs.
Next, roughly 400,000 OBBs are mixed with a dense, gel-like liquid with the consistency of mayonnaise at a low temperature. The liquid is filled with collagen, the protein that keeps our skin elastic, and synthetic analogues. The OBBs are now suspended inside the gel-like matrix, which is “ideally suited for creating vascular channels,” the team said. Altogether, the organoids and gel are compacted to a density similar to human tissue, making up the raw material for further sculpting.
Now the fun second step. Using a 3D printer, the team moved a tiny nozzle containing both harmless red ink and gelatin into the mixture, depositing both in a pre-programmed manner. In this way, the team was able to “draw” intricate branch-like patterns into the organoid-gel mixture. Similar to squeezing frosting out of a bag, the team was able to adjust the diameter of the gelatin ink by nearly two-fold, mimicking the usual structure of blood vessels—thick main channels that increasingly become tinier.
Once the network was fully printed, the team gently heated the mixture to body temperature. The matrix stiffens, and the gelatin ink—acting like Jell-O left in the sun too long—melts and is washed away. What remains is a network of OBBs, or organoids, linked by a vascular structure that can now be filled with blood.
As a proof of concept, the team went straight for the heart—cardiac tissue, that is. They repeated the steps using heart-derived cells, and kept the resulting chunk of heart—a little bigger than half an inch—inside a chamber filled with a nutritious, oxygen-rich bath.
Within a week, individual organoids embedded inside the gel fused together into a collective: the tissue was able to contract almost 50 percent better than immediately after printing, and the beating rhythm synchronized, suggesting that the lab-grown tissue had further matured.
The tissue even reacted similarly to a normal heart. When the team infused a drug that increases heart rate into the printed vessels, the tissue doubled its “heartbeat.” Similarly, drugs that normally decrease heart muscle contraction also worked on the mini-heart. As a final demo, the team printed a chunk of heart tissue with a branch of the coronary artery—a major blood vessel that normally wraps around the heart.
The new study is hardly the first try at printing organs with blood vessels. Miller, for example, biomanufactured a hydrogel that mimicked a lung air sac earlier this May. Layer by layer, the precise anatomy of the lung-mimicking structure is constructed with liquid hydrogel, and solidified using light.
The new study stands out in its sheer creativity. By combining organoids with an ancient sculpture technique, the team was able to pack far more cells into the resulting structure, while tapping into the natural mini-organization that stems from organoids. The results aren’t just promising for printing larger, more intricate human organs with a blood supply—they could also help inform organoid research, which has struggled to keep the pseudo-organs alive.
The team is planning to transplant their SWIFT tissue into animals to further examine their function and health. But to the team, the main goal is to finally bring 3D-printed organs to people desperately on the transplant waiting list.
“Our method opens new avenues for creating personalized organ-specific tissues with embedded vascular channels for therapeutic applications,” they said.
Source: Singularity Hub
NASA Awards $2.3 Million in Fellowships to US Universities for Aviation, Planetary, Space Research
NASA has awarded fellowships to 14 minority-serving institutions through its Minority University Research and Education Project (MUREP) and five majority institutions through its Aeronautics Research Mission Directorate (ARMD), all totaling $2.3 million, to support graduate student research.
Source: NASA Breaking news http://www.nasa.gov/press-release/nasa-awards-23-million-in-fellowships-to-us-universities-for-aviation-planetary-space
5 Areas We Should Invest in Now to Survive Climate Change Later
Even if the world manages to keep to the Paris Agreement goal of limiting global mean temperatures to 2°C above pre-industrial levels, climate change is coming. The best way to protect ourselves from its effects is to drastically cut our emissions by deploying renewables, electrification, and energy-efficiency measures. But we’ll also need to adapt to the changes that are coming.
Doing so will save money, and it will save lives. That’s the message of a new report from the Global Commission on Adaptation, led by Ban Ki-moon, Bill Gates, and World Bank CEO Kristalina Georgieva. The report estimates that investing $1.8 trillion worldwide in five areas of climate adaptation over the next decade—for scale, roughly what humans spend on efforts to kill each other every year—will yield $7.1 trillion in benefits.
Beyond cold, hard economics, though, the number of lives that could be improved or saved by adaptation is immense. Over this century, sea level rise and storm surges could force millions from coastal homes; another hundred million people in the developing world could be pushed into poverty as crop yields stall; water security will be threatened for more than a billion people across the world; and extreme weather events will disproportionately impact the poorest and most vulnerable.
Here are the ways we can alleviate that impact.
1. Early Warning Systems
Early warning systems for extreme weather events such as cyclones, droughts, floods, heat waves, and wildfires need to be improved.
A striking example of the effectiveness of these systems is found in Bangladesh. In 1970, the Bhola cyclone struck the low-lying nation with devastating results: at least 300,000 were killed. Since then, Bangladesh has launched a Cyclone Preparedness Program, constructed thousands of shelters, and invested in an early warning system. When Cyclone Mora hit in 2017, the Bangladeshi authorities evacuated hundreds of thousands of people, resulting in a death toll of around 10.
While funding emergency services and having rapid response available for natural disasters is crucial, planning for disasters in advance is also important. Advances in climate and weather models allow for scenario planning; finance, information, and other resources can be directed to the communities that are most likely to be hit.
2. Build Infrastructure for a Warmer World
Infrastructure requires investment. In the US alone, trillions of dollars may need to be spent just in order to preserve existing infrastructure. In developing nations, vast building projects and rapid urbanization are constantly accelerating.
If we want to tackle the causes and effects of climate change, this needs to be done well. Building houses to high energy-efficiency standards can prevent billions of tons of carbon dioxide from being emitted into the atmosphere. Similarly, building infrastructure that takes into account the likely effects of climate change—warmer temperatures and more extreme weather events—reduces the probability of large-scale failures.
Bridges, coastal airports, and ports must be resilient to flooding and sea level rise. As the world electrifies, power lines and power plants—often forced offline by heat waves—must be made ready. When infrastructure fails, particularly after natural disasters, food or medical shortages can follow; much of the economic damage arises from these knock-on effects. The report estimates that $4 trillion in savings and benefits could arise from careful infrastructure planning. Yet only five of the 35 OECD nations have changed their regulatory standards to account for these climate risks.
We have a choice: lock in infrastructure that’s vulnerable to a changing climate and that contributes to the problem by wasting energy, or build and repair our infrastructure with climate in mind.
3. Invest in Food Security
Climate change is already reducing crop yields in vulnerable regions like sub-Saharan Africa. Higher temperatures reduce water availability and allow pests and diseases to spread across new regions. Extreme weather events can destroy crops and prevent food from being distributed properly. All of this occurs against the backdrop of ever-increasing population and demand for food—as much as 50 percent by 2050.
The scientific innovations of the first Green Revolution headed off the most dire predictions about food availability in the 1950s and 60s, boosting agricultural productivity. Now investment needs to take place in new strains of climate-resistant crops. Diversifying strains and diets will help improve resilience to pests, diseases, and changing climates—and farms with more diverse sources of income are less likely to experience extreme poverty when climate shocks hit. Better agricultural technology and training needs to be shared with the small farmers in developing nations who live off the land.
Agriculture is at the forefront of climate vulnerability, and also plays its part in contributing to emissions, as rainforests are cleared for cattle and rice paddies emit methane. It’s crucial that better land management, informed by scientific and technological developments, form part of the solution to climate change. Engagement and investment in this now will save lives.
4. Invest in Water Security
In vulnerable regions, the length and severity of droughts are expected to grow under climate change. At the same time, flooding can jeopardize supplies of clean water. Competition for water between regions can fuel conflicts.
Rejuvenating the drainage basins that supply rivers and cities—by restoring the wetlands and forests that are crucial to preventing runoff—is important, as are planning for droughts and ensuring that reserves exist. But as with energy, much can be achieved by being smarter in the way we use our resources. Wastewater treatment and desalination can reclaim water that’s not usable today. Authorities in large cities should allocate water to the most urgent uses and repair leaky infrastructure to preserve supply. And since 70 percent of freshwater is used in agriculture, improving irrigation techniques and planting crop varieties that require less water can both help.
5. Restore Crucial Ecosystems
We might like to think that our technological prowess has made us masters of the natural world, and when you live in a large city, it can often feel like this is the case. But natural processes evolve on huge scales and often provide critical services to people.
Mangrove forests, which thrive in coastal regions, are an excellent example. They protect low-lying coastal communities from storm surges, acting as natural flood defenses. They lock in carbon dioxide—up to ten times more than other terrestrial ecosystems. They provide natural habitats for many rare species. But 35 percent of the world’s mangroves have already been destroyed. Careful management of these and other vital ecosystems is necessary to help us adapt to climate change.
We have many of the tools we need to adapt to a changing climate. But what’s worth emphasizing is that all of these adaptation tasks are worth completing anyway. Billions of lives could be improved by taking action that preserves natural ecosystems, enhances food and water security, protects us from natural disasters, and ensures resilient infrastructure for the coming decades.
In an increasingly interconnected world, these are everyone’s problems. There are few better ways to spend money; as the report makes clear, investing now will save trillions in the future. What are we waiting for?
Source: Singularity Hub:
The First Evidence That Drugs Could Turn Back the Clock on Our Biological Age
After decades of research, here it is: the first promising evidence in humans, albeit imperfect and early, that a cocktail of three drugs is enough to reverse the epigenetic clock—a measure of someone’s biological age and health.
The results came as a surprise to even the research team, who originally designed the trial for something a little less dazzling: to look at human growth hormone’s effects on the thymus, the cradle of the body’s immune system that deteriorates with age.
“Maintained immune function is seen in centenarians,” and thymus function is linked to all-cause mortality, explained study author Dr. Gregory Fahy at Intervene Immune, based in Los Angeles, California. “So we were hoping to use a year of growth hormone to maintain thymus function in middle-aged men, right before the tissue’s functions take a nosedive,” he said.
Yet something gnawed at the back of his mind. To combat the side effects of growth hormone, which include dangerously elevated blood sugar levels, the team added two drugs as a countermeasure. One is DHEA, a hormone secreted by the adrenal gland. The other, metformin, might spark immediate recognition: based on preclinical research, it’s one of the most promising anti-aging drugs in the longevity pipeline. All three drugs have been linked to slowing the aging process in the lab.
What if the three-drug combination didn’t just work on the immune system? What if it could actually induce measurable anti-aging effects in humans?
Before wrapping up the study, Fahy decided to call up Dr. Steve Horvath at the University of California, Los Angeles. The “watcher” of epigenetic clocks, Horvath has spent his career developing measures of a person’s biological age, which differs from the number of candles on your birthday cake but better reflects your “true” age. Taking the drug cocktail for one year shaved an average of 2.5 years off the participants’ biological age, while showing signs of immune rejuvenation.
While not a massive change, the results caught the team off guard. “I’d expected to see slowing down of the clock, but not a reversal,” said Horvath. “That felt kind of futuristic.”
It’s not to say we’ve “cured” aging—far from it. But after decades of research in flies, worms, and rodents, this trial in humans, however small and imperfect in control measures, offers hope.
The Hallmarks of Aging
Measuring a person’s “true” age is surprisingly difficult. Thanks to genetics and lifestyle, a population of 60-year-olds, for example, exhibits a wide spread in health and mental status. Compared to chronological age, biological age better correlates with general health, mental abilities, risk of age-related disease, and death. Yet because aging gradually deteriorates the entire body, scientists have struggled to find the best markers.
In 2013, several research groups pooled their ideas to piece together the hallmarks of aging. Here’s a taste: genomic instability; the shrinking of telomeres, the protective endcaps on the chromosomes that house our genes; protein metabolism gone awry; the breakdown of mitochondria, the cells’ energy producers; depleted stem cells; and senescent “zombie” cells running rampant.
A combination of markers may form the best “clock” that measures true age. But when it comes to any single measure, one stood out: epigenetic alterations.
Stay with me. The epigenome controls how genes get turned into proteins, and subsequently, tissues, organs, and the whole body. It’s made up of chemical marks that tag onto the genetic sequence itself, like light switches on every gene lamp. Different marks control whether a gene is turned on or off—methyl groups, for example, shut it off—and the pattern of these tags changes drastically as you age.
For the past few years, Horvath and others screened hundreds of positions on DNA from sample cells to see how often those places have a methyl group. By feeding these epigenetic data into algorithms, the teams have uncovered several mathematical clocks that can remarkably estimate a cell’s—and a person’s—true biological age.
“The greatest hope is that this clock measures the output of a process that really does relate to aging—even causes aging,” said Horvath.
An Immune Restoration
The new study’s initial focus wasn’t epigenetic clocks; rather, it was the immune system. The thymus, a tiny gland nestled between the lungs and the breastbone, nurtures white blood cells to full function so they can combat infections and cancers. The thymus is critical for maintaining the immune system, but it’s fragile. It begins to shrink after puberty and fills with fatty deposits, which correlates with all sorts of immune troubles.
Nearly 16 years ago, when Fahy was 46 years old, he reviewed promising studies using growth hormone to restore thymus function in animals and grew convinced he had found a way to restore the organ’s function. With striking commitment, he dosed himself with growth hormone and DHEA for a month, and found signs of regeneration in his own thymus.
The new TRIM (Thymus Regeneration, Immunorestoration and Insulin Mitigation) trial built on Fahy’s self-experimentation. The study recruited nine white men aged between 51 and 65 years old, and dosed them with the three-drug combo for a year: growth hormone for restoration, and DHEA and metformin to combat high blood sugar. The latter two were also partly chosen for their promising anti-aging effects in animals.
During the trial, the team regularly took blood samples to analyze immune cell counts, and used medical imaging to check the composition of the participants’ thymuses. With age, the numbers of different immune cell types change, with potentially detrimental effects. At the end of the trial, not only were those changes reversed, but the participants’ thymuses also showed fewer signs of fat—the fatty deposits had been replaced by healthy, regenerated tissue.
A Surprising Rewind
Studying epigenetic clocks came as an afterthought. After the trial was completed, Fahy reached out to Horvath to take a second look at the data.
The results came as a surprise to both. Using four different epigenetic clocks, Horvath measured the biological age of each participant. Every single time he found that the clock had rewound: the participants’ epigenetic age was, on average, 1.5 years younger than when they first entered the trial. Rather than aging, they had a Benjamin Button moment. What’s more, at nine months of treatment, the de-aging effect seemed to accelerate—that is, the longer they took the drugs, the faster their epigenetic clocks seemed to rewind. The effects lasted for at least six months after they stopped taking the drugs.
Because the results were so consistent and lasting, Horvath is optimistic it’s not a fluke, even with the small sample size. De-aging effects aside, other measures also proved promising: one of the most dangerous side effects of rejuvenation is cancer, characterized by “immortal” cells. Although the study only looked at prostate cancer, a key risk for men in this age group, it found no biomarkers hinting at a dangerous turn.
The study is hardly the final word on rejuvenation in humans. Because the study is so small and “not very well controlled,” said Dr. Wolfgang Wagner at the University of Aachen in Germany, who was not involved in the study, “the results are not rock solid.”
More importantly, the authors don’t know how the drugs are working. Their main idea is that they’re acting on the same molecular pathways as restricting calories, which is also a strong de-aging intervention in animals. In addition, epigenetic age isn’t synonymous with biological age, though it’s a tight approximation in terms of age-related health risks.
Regardless, the results are promising. Intervene Immune is now planning a larger trial with a more diverse population, including different age and ethnic groups and women, to further gauge efficacy.
As the authors concluded: “This is to our knowledge the first report of an increase, based on an epigenetic age estimator, in predicted human lifespan by means of a currently accessible aging intervention.” It won’t be the last.
Source: Singularity Hub:
NASA Opens Accreditation for Launch of Mission to Explore Ionosphere NASA has opened media accreditation for the launch of its Ionospheric Connection Explorer (ICON) mission, targeted to be air-launched over the Atlantic Ocean on a Northrop Grumman Pegasus XL rocket Wednesday, Oct. 9. Source: NASA Breaking news http://www.nasa.gov/press-release/nasa-opens-accreditation-for-launch-of-mission-to-explore-ionosphere
MIT Future of Work Report: We Shouldn’t Worry About Quantity of Jobs, But Quality
Robots aren’t going to take everyone’s jobs, but technology has already reshaped the world of work in ways that are creating clear winners and losers. And it will continue to do so without intervention, says the first report of MIT’s Task Force on the Work of the Future.
The supergroup of MIT academics was set up by MIT President Rafael Reif in early 2018 to investigate how emerging technologies will impact employment and devise strategies to steer developments in a positive direction. And the headline finding from their first publication is that it’s not the quantity of jobs we should be worried about, but the quality.
Widespread press reports of a looming “employment apocalypse” brought on by AI and automation are probably wide of the mark, according to the authors. Shrinking workforces as developed countries age, and persistent limitations in what machines can do, mean we’re unlikely to face a shortage of jobs.
But while unemployment is historically low, recent decades have seen a polarization of the workforce as the number of both high- and low-skilled jobs has grown at the expense of middle-skilled ones, driving growing income inequality and depriving the non-college-educated of viable careers.
This is at least partly attributable to the growth of digital technology and automation, the report notes, which are rendering obsolete many middle-skilled jobs based around routine work like assembly lines and administrative support.
That leaves workers to either pursue high-skilled jobs that require deep knowledge and creativity, or settle for low-paid jobs that rely on skills—like manual dexterity or interpersonal communication—that are still beyond machines, but generic to most humans and therefore not valued by employers. And the growth of emerging technology like AI and robotics is only likely to exacerbate the problem.
This isn’t the first report to note this trend. The World Bank’s 2016 World Development Report noted how technology is causing a “hollowing out” of labor markets. But the MIT report goes further in saying that the cause isn’t simply technology, but the institutions and policies we’ve built around it.
The motivation for introducing new technology is broadly assumed to be to increase productivity, but the authors note a rarely-acknowledged fact: “Not all innovations that raise productivity displace workers, and not all innovations that displace workers substantially raise productivity.”
Examples of the former include computer-aided design software that makes engineers and architects more productive, while examples of the latter include self-service checkouts and automated customer support that replace human workers, often at the expense of a worse customer experience.
While the report notes that companies have increasingly adopted the language of technology augmenting labor, in reality this has only really benefited high-skilled workers. For lower-skilled jobs the motivation is primarily labor cost savings, which highlights the other major force shaping technology’s impact on employment: shareholder capitalism.
The authors note that up until the 1980s, increasing productivity resulted in wage growth across the economic spectrum, but since then average wage growth has failed to keep pace and gains have dramatically skewed towards the top earners.
The report shies away from directly linking this trend to the birth of Reaganomics (something others have been happy to do), but it notes that American veneration of the shareholder as the primary stakeholder in a business and tax policies that incentivize investment in capital rather than labor have exacerbated the negative impacts technology can have on employment.
That means the current focus on re-skilling workers to thrive in the new economy is a necessary, but not sufficient, solution to the disruptive impact technology is having on work, the authors say.
Alongside significant investment in education, fiscal policies need to be re-balanced away from subsidizing investment in physical capital and towards boosting investment in human capital, the authors write, and workers need to have a greater say in corporate decision-making.
The authors point to other developed economies where productivity growth, income growth, and equality haven’t become so disconnected thanks to investments in worker skills, social safety nets, and incentives to invest in human capital. Whether such a radical reshaping of US economic policy is achievable in today’s political climate remains to be seen, but the authors conclude with a call to arms.
“The failure of the US labor market to deliver broadly shared prosperity despite rising productivity is not an inevitable byproduct of current technologies or free markets,” they write. “We can and should do better.”
Source: Singularity Hub:
Brad Pitt to Speak with NASA Astronaut on Space Station about Artemis Program As NASA prepares to send the first woman and next man to the Moon by 2024 under the Artemis program, Brad Pitt is playing an astronaut in his latest film, and now the actor will have the opportunity to discuss what it’s truly like to live and work in space with a NASA crew member living aboard the International Space Station. Source: NASA Breaking news http://www.nasa.gov/press-release/brad-pitt-to-speak-with-nasa-astronaut-on-space-station-about-artemis-program