Greta Thunberg’s voice speaks just as loud as her words
“Do you think they hear us? We’ll make them hear us!” This was the rallying call Greta Thunberg gave to 250,000 people in New York’s packed streets, and to the millions worldwide taking part in the largest climate protest in history. In political chambers and on the streets, her cutting and inspiring words have awakened countless people to the climate and ecological emergencies. But beyond the words themselves, her voice contains a message that’s just as powerful.
The sound of Thunberg’s voice has become as distinctive as her bluntly precise rhetoric and diminutive figure, as evidenced by her recent spoken-word monologue on “The 1975”, the opening track of indie rock band The 1975’s album Notes on a Conditional Form.
And in making her voice heard, its unique characteristics tell their own important story.
The sonic fingerprint
The human voice is like a fingerprint made of sound waves. Unique to each speaker, its composition is determined by an array of factors ranging from the size, age and sex of the speaker’s body to their emotional state and social background.
As a result, when a person vocalises, the voice connects the individual to the collective. When we speak, we carry and communicate our own personal identity to the community we enter. When we listen, we’re listening not only to the meaning of the words spoken but also the non-linguistic information communicated by the speaker’s sonic fingerprint.
The most salient characteristic of Thunberg’s vocal fingerprint is perhaps her age. Just as Malala Yousafzai’s youthful tones gave her drive for female education such sway, Thunberg’s voice is a clear reminder of her 16 years – and by extension, the adolescence of the thousands of school strikers she has galvanised. She frequently frames the climate crisis as a generational conflict between the adults who are exacerbating the problem and the children who will pay the price. It is the youth in her voice, over and above her chastising words, that makes this role reversal so powerful.
The political landscape surrounding the climate and ecological crises is constantly changing. So is the human voice. How she alters her approach to public speaking as her voice shifts into adulthood could be important if she is to keep having an impact – she will only sound like a student for so long.
A voice through a crowd
The wide reach of Thunberg’s public speeches also turns an issue too often expressed in faceless statistics and global trends into a human one.
On paper, it would be easy to write off Thunberg’s words as just one opinion among many. But as well as physical characteristics, our voices communicate our emotions. Thunberg’s terse and sombre monotone makes tangible the emotional significance of the deepening crisis facing her generation.
In embodying the issue of climate breakdown, Thunberg’s voice also makes it personal. When she speaks, we are reminded that she is one individual – and that her actions alone inspired hundreds of thousands to join her. As the title of her recent collection of speeches says, No One Is Too Small to Make a Difference.
Thunberg and her followers argue for systems change, but they do so as a chorus of individual voices. The quarter-million strong unified chant of Thunberg’s name at New York’s climate strikes reminds us that when individuals are empowered and brought together, they can each play an important part in tackling climate and ecological breakdown.
The voice is not just a vehicle for language. The unique sounds of every human voice tell their own story alongside the words they carry. For Thunberg, this is the story of a generation let down and determined to effect change – whether leaders like it or not.
As the movement she started continues to gain momentum, this message will underpin everything she and her followers say. Whether people are listening is another story.
Damien Pollard receives funding from the Arts and Humanities Research Council.
How each of us sees the world is about to change dramatically.
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.
Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with graphics processing units (GPUs)—electric circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and improve the employee experience.
(1) Convergence with AI
In initial application, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA) — whose airlines supply 82 percent of air travel — has implemented industrial tech company Atheer’s AR headsets in cargo management. In short order, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success rates, Boeing brought Skylight’s smart AR glasses to the runway, now used in the manufacturing of hundreds of airplanes. Sure enough—the aerospace giant has now seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing the reflected light to generate 3D maps of the environment. Stereoscopic imaging, which compares the views from two lenses, has also been commonly used for depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In phased ToF sensing, the HoloLens 2 fires numerous infrared lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance between nearby objects and the headset wearer is then measured by how far the phase of the returning light has shifted from that of the original signal. This phase difference reveals the location of each object within the field of view, which enables accurate hand-tracking and surface reconstruction.
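The phase-to-distance arithmetic behind this can be sketched in a few lines. This is a minimal illustration of the general continuous-wave ToF principle, not the HoloLens 2’s actual implementation; the 20 MHz modulation frequency used below is an assumed, illustrative value.

```python
import math

C = 299_792_458  # speed of light, m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by a phase shift in a continuous-wave ToF sensor.

    The emitted light is amplitude-modulated at mod_freq_hz; the return
    beam arrives shifted by phase_shift_rad. The light travels to the
    object and back, hence the extra factor of 2 in the denominator.
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Beyond this distance the phase wraps past 2*pi and aliases."""
    return C / (2 * mod_freq_hz)

# An illustrative 20 MHz modulation frequency gives ~7.5 m of
# unambiguous range; a half-cycle phase shift lands mid-range.
print(round(unambiguous_range(20e6), 2))        # → 7.49
print(round(tof_distance(math.pi, 20e6), 2))    # → 3.75
```

The trade-off the sketch exposes is real: raising the modulation frequency improves depth resolution but shrinks the unambiguous range, which is why some phased ToF systems combine several frequencies.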
With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easily mass-produced, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus while peripheral regions fall into a softer background, mimicking natural visual perception and concentrating computing power on the area that needs it most.
Gaze-tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will activate commands instantaneously.
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, where biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG headband. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (captured by an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built on the Ethereum blockchain specifically for holographic media, and currently in beta testing, this network is set to revolutionize AR deployment accessibility.
Former Alphabet executive chairman Eric Schmidt (an investor in Otoy’s network) has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed in cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.
People are calling it the fourth industrial revolution, or “Industry 4.0”. The first industrial revolution used steam power to mechanise production. The second used electric power to mass-produce goods, while the third introduced computers to automate production. The fourth is happening now: disruptive technologies, including the internet of things, virtual reality, robotics and artificial intelligence, are changing the way we interact, work and live. Highly automated, intelligent systems promise to transform people’s lives and even call into question the very role of humans.
What will all this mean for climate change? The answer is complicated. These innovations have the potential to significantly reduce greenhouse gas emissions and to provide unprecedented levels of insight and data to help mitigate climate change. But without proper consideration, mass automation could be bad news, increasing consumption and emissions.
To consider what mass automation might mean for our environmental impact, I want to look at two sectors where human work has already been largely replaced by machinery: agriculture and cars.
Cars for all
At the beginning of the 20th century, cars were a plaything of the rich, out of reach of the average person. But that was before Henry Ford perfected the assembly line concept and rapidly came to control nearly half of the American automobile market.
Before Ford, cars were an artisan product, individually built by hand by teams of skilled craftsmen. Once one car was completed the team could start work on the next. Ford reconfigured this process, with multiple stations working on specific assembly processes, with each car moving from one manufacturing process to the next in order of assembly.
Today, car manufacturing is largely fully automated, with human teams replaced by robotic workers. Robots and other technologies of industry 4.0 enable more efficient energy management in factories. And better data means better managed supply chains. This has allowed manufacturers to reduce waste and emissions across the entire lifecycle of products such as cars – from the initial metals and minerals, through to the energy used to transport products to market.
Farming has a huge environmental impact
Much like Ford’s cars, developments in mechanisation – tractors, combine harvesters and so on – have allowed more food to be produced with less labour. Despite this, with the world population and demand for food rapidly rising, agriculture is responsible for increasing greenhouse gas emissions and an enormous share of environmental degradation. It is vital we find ways to further improve efficiency and reduce the emissions from our food production.
But, as with cars, agriculture will fundamentally change with the advent of mass automation and smart technologies. Robots are already replacing human labour across a range of agricultural tasks from watering to pest control or harvesting. Even tractors could eventually become autonomous. Fully automated, vertical farms are being built, maximising space and production efficiency. These and various other innovations and emerging technologies including off-grid renewable energy systems all promise to produce food more efficiently, reducing emissions.
The ‘rebound effect’
These examples might suggest that such technological developments will reduce emissions and help the environment. After all, robots can build cars and grow food more efficiently than humans, right?
The issue is that while there has been a significant improvement in energy and resource efficiency, there has not been an absolute reduction in environmental impact. In fact, overall environmental impact is generally increasing. Some commentators even argue that improvements in technology have actually driven an increase in consumption, a phenomenon commonly referred to as the “rebound effect”.
Similarly, automated processes and huge industrial farms have meant more food can be produced more efficiently. However, cheaper food and increasing average wealth are increasing consumption of high impact foods such as red meat, which is likely to have significant consequences for climate change and biodiversity.
So, yes, increasing automation and smart technologies do promise sweeping changes to society, with the potential to liberate human populations from the mundane. If managed carefully this technological revolution has the potential to provide significant environmental benefit. But that is a big if. Automation will not necessarily deliver a positive outcome for sustainability – we need to manage our consumption, even as the latest technological revolution races ahead of us.
Laurie Wright is affiliated with the American Center for Life Cycle Assessment (ACLCA) and the Forum for Sustainability through Life Cycle Innovation (FSLCI).
Alcohol advice for pregnant women – a lost opportunity to communicate new guidelines
Pregnant women in the UK are now officially advised to consume no alcohol at all. These guidelines, from the Chief Medical Officer (CMO), were issued in January 2016, and replaced a previous recommendation that women should limit themselves to one or two units of alcohol, once or twice per week, and not get drunk.
But how widely were these more recent guidelines promoted, and how well did such an important message get passed on to mothers-to-be? In our recent study, we examined awareness and implementation among midwives. Our findings show that more than three years after the guidelines were published, only 58% of midwives said they were aware of them.
There was also variation in what midwives thought the content of the CMO guidelines was. While most cited abstinence, some 19% believed the recommendation to be the same limit of units as before – a limit set in National Institute for Health and Care Excellence (NICE) guidelines.
Midwives told us that NICE guidelines are commonly used to inform their work. Since those were not updated to align with the government’s alcohol guidelines until December 2018, it is perhaps unsurprising that midwives gave mixed responses.
Nonetheless, we found that 97% of midwives said they advised all women to abstain from alcohol at the first antenatal appointment, which usually takes place during the first ten weeks of pregnancy. In subsequent appointments, however, only around two fifths of midwives always or usually advised women to abstain.
This might reflect what midwives told us in interviews about taking time to build a trusting relationship with women, and that bringing up the subject of alcohol later on was felt to be a good strategy.
Our survey also showed that after qualifying, midwives received little – if any – follow up training on alcohol. Nor is alcohol included in annual training updates, unlike smoking, which is.
This is an important gap, given the changes in guidelines over time. And it reflects what seems to be a missed opportunity to ensure that alcohol is prioritised within governmental maternal health policy.
In recent years, the public health role of midwives has become vastly more extensive. It covers a range of topics including antenatal screening, immunisation, mental health and risks like smoking, alcohol and drug use.
Yet we found that while midwives acknowledged that public health is an important part of their role, they often feel limited by time constraints during antenatal appointments for women with uncomplicated pregnancies.
So, did the government succeed in introducing its updated CMO guidelines in 2016? Our study suggests not, likely because the NICE guidelines remained unchanged for a substantial amount of time after the update.
This does not mean that midwives did not advise abstinence as the guidelines recommended. But in order to ensure that a workforce that is under increasing pressure are aware of changes to guidelines, better communication is required.
The problem of lack of knowledge about drinking guidelines is not isolated to health professionals, of course. A study of the general population showed that while 71% were aware that general alcohol guidelines had been updated in 2016, only 8% knew what they were (a maximum of 14 units per week for both men and women spread over three days or more).
Updating drinking guidelines in line with the latest research is a good idea. It can help people change their behaviours and reduce harm. But those updates are futile unless they are accompanied by investment in supporting health professionals to share the latest information with the people in their care.
Lisa Schölin has received funding from the Institute for Alcohol Studies. Lisa Schölin currently works as a temporary policy analyst at Foundation for Alcohol Research and Education.
Lesley Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Antidepressants work, but just not how scientists thought they worked
Most clinical trials of antidepressants were done decades ago in people with severe depression recruited from specialist mental health services. Yet most people who take these drugs have mild to moderate depression. We wanted to know whether a common antidepressant called sertraline works for this group. We found that, indeed, it does work, but differently from how we expected.
Prescriptions for antidepressants have risen substantially in wealthy countries over the past two decades (the rate has doubled in the last ten years), and this has led to concerns that they are being over-prescribed. The vast majority of antidepressants are prescribed by GPs for patients with mild to moderate symptoms of anxiety or depression, even if these patients don’t have enough symptoms for a clinical diagnosis of depression or anxiety.
Our new study, published in The Lancet Psychiatry, investigated the effectiveness of sertraline in primary care patients with symptoms of depression, ranging from mild to severe. We did not set any severity criteria because we wanted a sample similar to the people who receive antidepressants now.
Sertraline is a selective serotonin reuptake inhibitor (SSRI) – drugs that increase levels of a chemical called serotonin in the brain – and one of the most commonly prescribed drugs for depression and anxiety.
Our study recruited more than 650 people aged 18-74 from 179 GP surgeries in England. They had all reported symptoms of depression to their doctor – such as low mood, loss of pleasure, difficulty concentrating and sleep problems – and were seeking treatment. We randomly allocated patients to one of two groups: they either received sertraline for 12 weeks or they received a placebo that was identical to the sertraline pill.
Neither researchers nor patients knew which group people were allocated to. This type of “double-blind study” helps to reduce bias. After the study started, we collected data from participants at regular intervals: two, six and 12 weeks after they started the trial.
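The allocation step can be sketched in a few lines of code. This is a simplified, hypothetical illustration of blinded 1:1 randomisation, not the trial’s actual procedure: real trials typically use block randomisation to keep arm sizes balanced, and the allocation list is held by a third party so neither patients nor researchers can see it until unblinding.

```python
import random

def allocate(participant_ids, seed=2019):
    """Blinded 1:1 random allocation (a simplified sketch).

    Each participant ID is independently assigned to the sertraline
    arm or the identical-looking placebo arm. A fixed seed makes the
    allocation reproducible for auditing.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(["sertraline", "placebo"])
            for pid in participant_ids}

# Roughly 650 participants, as in the study described above.
allocation = allocate(range(650))
print(len(allocation))  # → 650
```

Because both arms receive visually identical pills and only the sealed allocation list links IDs to treatments, outcome data collected at two, six and 12 weeks can be analysed without assessor bias.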
The results surprised us. Our theory was that depressive symptoms would be improved on sertraline by six weeks, but we found no evidence this was the case. Any effect on depressive symptoms happened later and was smaller and less convincing. In contrast, sertraline led to an early reduction in anxiety symptoms several weeks before any improvement in depressive symptoms.
Most people with depressive symptoms also have anxiety symptoms, and it would be unusual for someone to have depressive symptoms but no anxiety symptoms. By reducing anxiety symptoms, the antidepressant made people with depression feel better. Those taking the antidepressant were twice as likely to say they felt better compared with those taking the placebo. Even if depressive symptoms take longer to respond, early effects on anxiety lead to improvements in a person’s quality of life.
There has been a long-running debate about whether antidepressants help people with mild symptoms. Our study included people with a wide range of severity, and we saw no evidence that the effect of the antidepressant was smaller in those with mild or moderate symptoms. Our study alone cannot rule out the possibility that sertraline is less effective in people with mild symptoms, but on this evidence it seems to benefit a wider group of people than previously believed.
Our findings support the continued prescription of sertraline and other similar antidepressants for people with depressive symptoms. As with any medication, the benefits have to be set against any side effects and the possibility of withdrawal symptoms when coming off the drug.
This study was carried out in the UK, which has a strong primary care system. But there are similarities between the behaviour of doctors and patients in all wealthy countries, whatever the health system. For example, the increase in antidepressant prescription has occurred in all wealthy countries. We think our findings also apply to other countries.
Antidepressants are one of the most commonly prescribed drugs in the world, but we are still developing our understanding of how they work. Overall, this study is reassuring. On average, people who are receiving antidepressants in UK primary care are benefiting, even if the benefit is more for anxiety symptoms than depressive symptoms.
Glyn Lewis has received grant funding for research relevant to antidepressants and the study described in this article was funded by UK NIHR. He has also received funds for providing expert advice on litigation involving antidepressant withdrawal symptoms.
Gemma Lewis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Declaring vaccine hesitancy one of the ten biggest health threats in 2019 is unhelpful
The World Health Organisation (WHO) recently declared vaccine hesitancy one of the ten biggest threats to global health in 2019, along with air pollution and climate change. The declaration followed several measles outbreaks in Europe and the US, but most cases were in a country where the health system had broken down: Ukraine.
Nothing suggests that these outbreaks were caused by the few who declined a measles vaccine. A substantial proportion of cases occurred in people who had been vaccinated – so the outbreaks were mainly the result of broken healthcare systems and vaccine failure rather than vaccine hesitancy.
But the WHO declaration provides extra motivation for the health authorities in many countries that now mandate or consider mandating vaccines. The rhetoric is well known: vaccines work, the science is settled, vaccine-hesitant parents are uninformed or misguided victims of the social media platforms where crooks spread fake science.
It is taken as a given that vaccines are similarly and uniformly beneficial – aside from rare side effects – and no sane person would question that. But are vaccines similarly and uniformly beneficial?
There is no doubt that vaccines can induce immunological “memory” against their target disease. And, at the population level, this reduces the risk of getting the target disease, at least for a period.
With smallpox, the vaccine led to the eradication of a devastating disease that killed around 30% of those infected. We are close to eradicating two other serious infections: polio and measles.
Until around 50 years ago, polio infected almost everybody. And although only a small proportion of those infected developed clinical disease, it was still a major cause of paralysis. Measles, although seldom dangerous in wealthy areas, can be deadly in crowded, poor ones. Thanks to vaccines, these two infections are now close to being eradicated.
Overall health effects
But we don’t have a lot of evidence about the overall health effects of vaccines. The assumption has long been that vaccines protect against their target infection and nothing else, so nobody studied their overall health effects. These were simply assumed to be proportionally beneficial. For instance, if a measles vaccine is 90% effective and measles causes 10% of all deaths, then introducing the measles vaccine should reduce overall mortality by 9%. If the DTP vaccine protects against diphtheria, tetanus and pertussis – three potentially deadly diseases – then it should reduce overall mortality correspondingly.
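The proportional-benefit assumption described above is just a product of two proportions; a minimal sketch, using the illustrative numbers from the text:

```python
# Proportional-benefit assumption: if a vaccine prevents a given fraction of
# deaths from its target disease, the assumed overall mortality reduction is
# the product of the two proportions. Numbers are the article's illustration.
vaccine_efficacy = 0.90      # vaccine prevents 90% of target-disease deaths
share_of_all_deaths = 0.10   # target disease causes 10% of all deaths

expected_overall_reduction = vaccine_efficacy * share_of_all_deaths
print(f"{expected_overall_reduction:.0%}")  # prints "9%"
```

This is exactly the assumption the author argues has gone untested: it treats deaths from the target disease as the vaccine’s only effect on mortality.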
None of the currently used vaccines were tested in randomised trials to document that they were overall beneficial before being introduced. And once a vaccine is recommended, it is almost impossible to study it in randomised trials because most ethical committees would not allow researchers to deprive a child of a recommended vaccine.
We do not have the evidence for every vaccine that would let us tell vaccine-hesitant parents it is of overall benefit for their child to receive each one. Rather, we have to acknowledge that there are things about vaccines that have not been investigated very well.
Most vaccine-hesitant parents I have come across are concerned that vaccines have not been investigated for their overall health effects. Telling them that the science is settled, stigmatising them for their hesitancy and mandating vaccines is an inadequate response that will only increase popular opposition and hesitancy.
A good starting point for the new conversation we need to have with vaccine-hesitant parents is to stop talking about vaccines in the plural and to discuss them individually. They are, after all, as different as drugs. And just as it would not make sense to say that “drugs work”, it makes little sense to state that “vaccines work”.
There is considerable evidence that live vaccines, such as the measles vaccine, have beneficial effects on overall health – reducing the risk of measles and other infections, and thereby the risk of dying. But we must admit that we do not have the same kind of evidence for other vaccines.
As health professionals, we can give people advice along the lines of “If it was my child, I would…” – but given the lack of evidence, we should not judge parents who choose not to vaccinate. And we should not mandate vaccines.
It would be wonderful to eradicate measles, but that can be achieved with a vaccination coverage of 95% – the point at which herd immunity is achieved. And it is still only a small percentage of the population that does not want to vaccinate – so if we vaccinate those who want to vaccinate, then eradication is within reach, without shaming or forcing vaccine-hesitant parents. If we manage to eradicate measles, we may want to continue the vaccination for its beneficial non-specific effects.
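The 95% coverage figure quoted above follows from the standard herd-immunity threshold for a simple epidemic model, 1 − 1/R0. A short sketch, noting that the R0 values used here are common estimates for measles and an assumption on my part, not figures from the article:

```python
# Herd immunity threshold in a simple epidemic model: the fraction of the
# population that must be immune so that each case infects fewer than one
# other person on average. R0 is the basic reproduction number.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# Measles R0 is commonly estimated at roughly 12-18 (assumed values).
for r0 in (12, 15, 18):
    print(f"R0={r0}: {herd_immunity_threshold(r0):.0%}")
# prints:
# R0=12: 92%
# R0=15: 93%
# R0=18: 94%
```

With measles’ unusually high R0, the threshold lands in the low-to-mid 90s, which is why roughly 95% coverage is the figure usually cited for elimination.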
Regarding other vaccines, where evidence for overall benefit is missing, we need randomised trials of their effect on overall health to provide the safety evidence that parents rightly request. Rather than making vaccine hesitancy a top-ten threat, the WHO should make it a top-ten priority to follow up on its 2014 decision to further investigate the overall health effects of vaccines.
Christine Stabell Benn receives funding from various public and private research support foundations, none of which have any influence on her work.
My role as a university lecturer means that I am committed to fostering better lives and opportunities for each generation. I am also a parent, so when I hear the request from young people, including my students and my child, to stand with them, I am naturally inclined to give their case fair consideration.
But I am a scientific researcher, too. The first, core demand of the striking students, led by 16-year-old Swedish climate activist Greta Thunberg, is to “unite behind the science”. How could I not recognise the significance of this demand in a communication landscape too often dominated by short-term sensationalism, rather than the core challenges facing society and the living planet?
But there’s a deeper, more fundamental reason to support the global strike for climate, grounded in my own field of political and ecological economics.
My research focuses on how, if at all, we can create an economy that achieves human well-being while avoiding damage to the environment. The current prospect is not good. No country yet meets most of its citizens’ needs at a sustainable level of resource use.
But my research also shows that it may be possible to do this and more. We have the capability to meet basic needs and achieve high levels of human well-being at modest levels of energy use. Beyond this modest amount, there is no reliable relationship between energy use and well-being. In many cases, additional energy use can even harm human health and well-being, through air pollution, climate impacts, road accidents and lack of exercise.
A rapid, radical reduction in energy demand could perhaps fulfil both goals of addressing climate breakdown and enabling our students and children to live good lives: what Kate Raworth calls living within the “doughnut”. So why is this option not debated and put forward through an ambitious policy agenda?
A different future
The answer is both simple and profound. My research area remains marginal, and its results neglected, because to accept it would require a fundamental transformation of the prevailing economic philosophy. We would need to pay less attention to growth and profit as the measures of prosperity, and replace them with sufficiency and equity – a fair division of resources to provide what is sufficient for well-being and not more. After centuries of entrenchment, that’s no easy feat.
The production, pricing, and consumption of goods and services are not simply driven by the natural balancing of supply and demand. The economy is best understood as a social and political arena. In this arena, highly productive industries invest heavily in advertising to artificially grow consumption. As my upcoming research shows, they coalesce in aligned mega-sectors, such as the automotive, road-building and real estate industries, all of which wield outsize political influence, and have a vested interest in trapping consumers in car-intensive, road-intensive, suburban housing.
The paradox of high resource use that results in little or no human benefits has its roots in the very structure of our political economy, and the industries that are some of its most important mainstays. Transforming this structure means challenging these sectors, and finding ways to counter their excessive influence in our democracies.
This is why we must support the students’ strike this Friday, and every Friday for the foreseeable future. Significant change will not come into being without protests and solidarity movements that rigorously question unacceptable modes of living and politics. It is time for all of us to wake politicians, businesses, and institutions up to the immense task of transforming our societies.
Julia K. Steinberger receives funding from the Leverhulme Trust.
How sleep makes the brain forget things – new research on mice
What a nuisance is a faulty memory. How many times have you forgotten where you parked the car? A few years ago, probably as a sign that my retirement was overdue, I spent literally half a day trying to find my car at a major New York airport. Fortunately, I am not alone. When people find out I am an expert on memory, the first thing they ask me is normally whether I can help them be less forgetful.
Indeed, excessive forgetting is a major problem, but “normal” forgetting is actually necessary. After all, it is more crucial to remember what is important right now than to remember everything. There’s no point in remembering the phone number of the house you lived in 10 years ago – that may in fact block your memory for your current phone number.
But exactly how the brain forgets unnecessary memories has long been unclear. Now a beautiful and rather exhaustive series of studies, just published in Science, offers a clue.
Research does indeed show that, in order to remember what is important, we need to forget what isn’t important. This can happen at two levels in the brain, a “cleaning” of irrelevant information as we retain and consolidate our memories, and a “blocking” of irrelevant information when we try to retrieve a memory. The positive effect on memory of blocking irrelevant information has been known since the 1950s.
The new study, which was carried out in mice, seems to finally reveal the secret mechanism of forgetting during the retention of memory. The authors attribute forgetting to the activation of specific “melanin-concentrating hormone” (MCH) neurons located in the brain’s hypothalamus, a region involved in releasing hormones. MCH neurons are involved in regulating sleep – in particular, in the shift between the two main sleep stages, NREM and REM (REM sleep is commonly associated with dreaming).
The authors demonstrate that forgetting happens only during retention (not when we encode or retrieve memories), and that sleep is the period when MCH neurons clean the memory of irrelevant clutter. They obtained these results by injecting chemicals into the brains of mice to inhibit these very neurons. Remarkably, the mice performed better on two specific memory tasks as a result – recognising new objects and a fear-conditioning test (which involves making associations between stimuli and their adverse consequences).
What’s more, when the researchers completely removed these neurons from the brain, the mice’s memory also improved over the long term. Conversely, boosting the activity of these neurons hindered the mice’s memory performance. The researchers therefore argue that this neuronal process may one day be targeted to treat memory problems.
This finding, if confirmed by other studies, represents a major breakthrough in understanding a fundamental memory mechanism. The methodology is rigorous and the results convincing. There are some caveats, though. How can we be sure that these neurons are specifically involved in cleaning out irrelevant information, rather than simply impairing memory performance?
It seems that MCH neurons, when activated, just impair memory – and not necessarily to useful effect. This is important: the results do not say much about the positive role of forgetting during retention. In addition, whose memory are we talking about here? Mouse memory – and necessarily so, given the highly invasive nature of most of the reported experiments. While animal models are indispensable for memory studies, it is too early to extend these findings to human memory.
For example, in humans the role of sleep in memory is still unclear. Also, forgetting occurs during retrieval of memories too and that is not explained by this new research.
Nevertheless, the new study does show for the first time that MCH neurons are strongly involved in making memory worse. That said, while we are on an exciting track thanks to this research, it is highly unlikely that we can improve human memory for a parked car by simply inhibiting a few neurons.
Giuliana Mazzoni has received funding from Wellcome, the Leverhulme Trust, the ESRC, SSHRC (Canada), the Portuguese Science Foundation and the British Academy.
Source: The Conversation: Technology http://theconversation.com/how-sleep-makes-the-brain-forget-things-new-research-on-mice-123636