In an effort to overcome the limitations of random spin-glass benchmarks for quantum annealers, focus has shifted to carefully crafted gadget-based problems whose logical structure typically has a planar topology. Recent experiments on these gadget problems using a commercially available quantum annealer have demonstrated impressive performance compared with a selection of commonly used classical optimisation heuristics. Here, we show that efficient classical optimisation techniques, such as minimum-weight perfect matching, can solve these gadget problems exactly and in polynomial time. We present approaches to mitigating this shortcoming of commonly used benchmark problems based on planar logical topologies.
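As a rough illustration of the polynomial-time primitive referred to above, the sketch below runs a minimum-weight perfect matching on a tiny weighted graph using networkx's blossom-based matching routine. The graph and weights are invented for illustration, and this is not the paper's actual reduction from gadget problems to matching.

```python
# A minimal sketch of minimum-weight perfect matching via networkx's
# blossom-based max_weight_matching (negate weights to minimise).
# The graph and weights below are invented for illustration only.
import networkx as nx

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (0, 3, 1.0), (0, 2, 5.0)]

G = nx.Graph()
for u, v, w in edges:
    G.add_edge(u, v, weight=-w)  # negate: max matching on -w == min on w

# maxcardinality=True forces a perfect matching when one exists.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))  # [(0, 3), (1, 2)]
```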
The investigation of the ultimate limits imposed by quantum mechanics on amplification represents an important topic both on a fundamental level and from the perspective of potential applications. We discuss here a novel regime for bosonic linear amplifiers—besides phase-insensitive and phase-sensitive amplification—which we term phase-mixing amplification. Furthermore, we show that phase-mixing amplification can be realised in a cavity optomechanical setup, constituted by a mechanical resonator dispersively coupled to an optomechanical cavity asymmetrically driven around both mechanical sidebands. While, in general, this amplifier is phase-mixing, for a suitable choice of parameters, the amplifier proposed here operates as a phase-sensitive amplifier. We show that both configurations allow amplification with an added noise below the quantum limit of (phase-insensitive) amplification in a parameter range compatible with current experiments in microwave circuit optomechanics.
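For reference, the quantum limit invoked here is the standard Caves bound on phase-insensitive linear amplification, which can be stated as:

```latex
% Standard quantum limit for a phase-insensitive linear amplifier
% with photon-number gain G: the input-referred added noise obeys
\[
  n_{\mathrm{add}} \;\geq\; \frac{1}{2}\left|1 - \frac{1}{G}\right|,
\]
% which approaches half a quantum of added noise for large gain G.
```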
We analyse the time evolution of a two-level system prepared in a superposition of its ground state and radiatively unstable excited state. We show that by choosing appropriate means of detection of the radiated field, we can steer the evolution of the emitter and herald its preparation in the fully excited state. We determine the probability for the occurrence of this ‘excitation during the decay’ of a remote emitter.
Decades after Enrico Fermi uttered his famous words – “Where is everybody?” – the Paradox that bears his name still haunts us. Despite repeated attempts to locate radio signals coming from space and our ongoing efforts to find visible indications of alien civilizations in distant star systems, the search for extra-terrestrial intelligence (SETI) has yet to produce anything substantive.
But as history has taught us, failure has a way of stimulating new and interesting ideas. For example, in a recently published paper, Dr. Duncan H. Forgan of St. Andrews University proposed that extra-terrestrial civilizations could be communicating with each other by creating artificial transits of their respective stars. This sort of “galactic internet” could be how advanced species are attempting to signal us right now.
Forgan’s paper, “Exoplanet Transits as the Foundation of an Interstellar Communications Network”, was recently published online. In addition to being a research fellow at the School of Physics and Astronomy and the Scottish Universities Physics Alliance at the University of St Andrews (Scotland’s oldest academic institution), he is also a member of the St Andrews Centre for Exoplanet Science.
The paper begins by addressing the two fundamental problems associated with interstellar communication – timing and energy consumption. When it comes to things like radio transmissions, the amount of energy that would be needed to transmit a coherent message over interstellar distances is prohibitive. Optical communications (i.e. lasers) need less energy, but spotting them would require incredibly precise timing.
As such, neither method would be particularly reliable for establishing an interstellar communications system. Taking a cue from humanity’s recent exoplanet-hunting efforts, Forgan argues that a method in which transits in front of a star serve as the basis of communication would solve both problems. The reason for this is largely that the Transit Method is currently one of the most popular and reliable ways of detecting exoplanets.
By monitoring a star for periodic dips in brightness, which are caused by a planet or object passing between the observer and the star, astronomers are able to determine if the star has a system of planets. The method is also useful for determining the presence and composition of atmospheres around exoplanets (a rough sense of the size of these brightness dips follows Forgan’s quote below). As Forgan indicates in the paper, this method could therefore be used as a means of signalling other civilizations:
“An ETI ’A’ can communicate with ETI ’B’ if B is observing transiting planets in A’s star system, either by building structures to produce artificial transits observable by B, or by emitting signals at B during transit, at significantly lower energy consumption than typical electromagnetic transmission schemes.”
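To give a rough sense of the photometric signal underlying the transit method, the fractional dip in brightness is approximately the ratio of the planet’s and star’s disc areas. A minimal sketch (the radii are standard values; the helper function is our own):

```python
# Sketch: a transit dims the star by roughly the ratio of disc areas,
# depth ~ (R_planet / R_star)**2, ignoring limb darkening.
R_SUN_KM = 696_000.0
R_JUPITER_KM = 69_911.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional drop in brightness for a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.2%}")  # ~1.0%
print(f"Earth-size planet:   {transit_depth(R_EARTH_KM):.4%}")    # ~0.0084%
```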
In short, Forgan argued that within the Galactic Habitable Zone (GHZ) – the region of the Milky Way in which life is most likely to develop – species may find that the best way to communicate with each other is by creating artificial megastructures to transit their star. These transits, which other civilizations will be looking for (just as we look for exoplanets!), will lead them to conclude that an advanced civilization exists in another star system.
He even offers estimates on how often such transmissions could be made. As he put it:
“A message with a path of 20 kpc (the diameter of the GHZ) has a total travel time at lightspeed of just under 0.06 Myr. If we assume a relatively short timescale on which both ETIs remain in the transit zone of 100,000 years (which is approaching the timescale on which both secular evolution of planetary orbits and the star’s orbit become important), then a total of 30 exchanges can be made. This of course does not forbid a continuing conversation by other means.”
If this is starting to sound familiar, that’s probably because this is precisely what some theorists say is happening around KIC 8462852 (aka. Tabby’s Star). Back in May of 2015, astronomers noticed that the star had been undergoing considerable drops in brightness in the past few years. This behavior confounded natural explanations, which led some to argue that it could be the result of an alien megastructure passing in front of the star.
According to Forgan, such a possibility is hardly far-fetched, and would actually be a relatively economical means of communicating with other advanced species. Using graph theory, he estimated that civilizations within the GHZ could establish a fully connected network within a million years, where all civilizations are in communication with each other (either directly or via intermediate civilizations).
Not only would this network require far less energy to transmit data, but the range of any signal would be limited only by the extent of these civilizations themselves. Beyond saving energy and having greater range (assuming intermediate civilizations are able to pass messages along), this method presents other advantages. For one, a high level of technological sophistication would be required to detect exoplanet transits in the first place.
In other words, civilizations would need to reach a certain level of development before they could hope to join the network. This would prevent any unfortunate “cultural contamination”, where less-advanced civilizations learned about the existence of aliens before they were ready. Second, once acquired, the transit network signals would be extremely predictable, with each transmission corresponding to a known orbital period.
That being said, there are some downsides that Forgan was sure to acknowledge. For starters, the periodicity of these signals would be a double-edged sword, as signals could only be sent if and when the receiver begins to see the transit. And while a megastructure could be moved to alter the transit period, this poses problems in terms of synchronizing transmission and reception.
Addressing the limitations of the analysis, Forgan also acknowledges that the study relies on fixed stellar orbits. The orbits of stars are known to change over time, with stars passing in and out of the GHZ regularly on cosmic timescales. In addition, there is also the issue of how such a network would differ between denser regions in the galaxy – i.e. globular clusters – and areas populated by field stars. Binary stars are also not considered in the analysis.
In addition, planetary orbits are known to change over time, due to perturbations caused by neighboring planets, companion stars, or close encounters with passing stars. As a result, the visibility of transiting planets can vary even more over cosmic timescales. Last, but not least, the study assumes that civilizations have a natural lifespan of about a billion years, which is not based in any concrete knowledge.
However, these considerations do not alter the overall conclusions reached by Forgan. Making allowances for the dynamic nature of stars and planets, and assuming that civilizations exist for only 1 million years, Forgan maintains that the creation of an interstellar network of this kind is still mathematically feasible. On top of that, an artificial object could continue to signal other species long after a civilization has gone extinct.
Addressing the Fermi Paradox, Forgan concludes that this sort of communication would take a long time to detect. As he summarizes in the paper (bold added for emphasis):
“I find that at any instant, only a few civilizations are correctly aligned to communicate via transits. However, we should expect the true network to be cumulative, where a “handshake” connection at any time guarantees connection in the future via e.g. electromagnetic signals. In all our simulations, the cumulative network connects all civilizations together in a complete network. If civilizations share knowledge of their network connections, the network can be fully complete on timescales of order a hundred thousand years. Once established, this network can connect any two civilizations either directly, or via intermediate civilizations, with a path much less than the dimensions of the GHZ.”
In short, the reason we haven’t heard from or found evidence of ETI could be an issue of timing. Or, it could be that we simply didn’t realize we were being communicated with. While such an analysis is subject to guesswork and perhaps a few anthropocentric assumptions, it is certainly fascinating because of the possibilities it presents. It also offers us a potential tool in the search for extra-terrestrial intelligence, one in which we are already engaged.
And last, but not least, it offers a potential resolution to the Fermi Paradox, one which we may have already stumbled upon and are simply not yet aware of. For all we know, the observed drops in brightness coming from Tabby’s Star are evidence of an alien civilization (possibly an extinct one). Of course, the key word here is “possibly”, as no evidence exists that could confirm this.
The possibilities raised by this paper are also exciting given that exoplanet-hunting is expected to ramp up in the coming years. With the deployment of next-generation missions like the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite (TESS), we can expect to learn a great deal more about star systems both near and far.
Will we find more examples of unexplained drops in brightness? Who knows? The point is, if we do (and can’t find a natural cause for them), we have a possible explanation. Maybe it’s the neighbors inviting us to “log on”!
Further Reading: arXiv
When AlphaBay, the world’s largest dark web bazaar, went offline two weeks ago, it threw the darknet into chaos as its buyers and sellers scrambled to find new venues. What those dark web users didn’t—and couldn’t—know: that chaos was planned. Dutch authorities had already seized Hansa, another major dark web market, the previous month. For weeks, they operated it as usual, quietly logging the user names, passwords, and activities of its visitors, including a massive influx of AlphaBay refugees.
On Thursday, Europol and the US Department of Justice jointly announced the fruits of the largest-ever sting operation against the dark web’s black markets, including the seizure of AlphaBay, a market Europol estimates generated more than a billion dollars in sales of drugs, stolen data, and other illegal goods over its three years online. While AlphaBay’s closure had previously been reported as an FBI operation, the agency has now confirmed that takedown, while Europol also revealed details of its tightly coordinated Hansa takeover.
With Hansa also shuttered as of Thursday, the dark web looks substantially diminished from just a few short weeks ago—and its denizens shaken by law enforcement’s deep intrusion into their underground economy.
“This is likely one of the most important criminal cases of the year,” attorney general Jeff Sessions said in a press conference Thursday morning. “Make no mistake, the forces of law and justice face a new challenge from the criminals and transnational criminal organizations who think they can commit their crimes with impunity by ‘going dark.’ This case, pursued by dedicated agents and prosecutors, says you are not safe. You cannot hide. We will find you, dismantle your organization and network. And we will prosecute you.”
So far, neither Europol nor the Department of Justice has named any of the administrators, sellers, or customers from either Hansa or AlphaBay that they plan to indict. The FBI and DEA had sought the extradition from Thailand of one AlphaBay administrator, Canadian Alexandre Cazes, after identifying him in an operation they called Bayonet. But Cazes was found hanged in a Bangkok jail cell last week in an apparent suicide.
Still, expect plenty of prosecutions to emerge from the double takedown of Hansa and AlphaBay, given the amount of information Dutch police could have swept up in the period after AlphaBay’s closure.
“They flocked to Hansa in their droves,” said Europol director Rob Wainwright. “We recorded an eight-times increase in the number of new users on Hansa immediately following the takedown of AlphaBay.” The influx was so large, in fact, that Hansa put up a notice just last week that it was no longer accepting new registrations, a mysterious development given that Dutch police controlled it at the time.
That surveillance means that law enforcement likely now has identifying details on an untold number of dark web sellers—and particularly buyers. Europol claims that it gathered 10,000 postal addresses of Hansa customers, and tens of thousands of their messages, from the operation, at least some of which were likely AlphaBay customers who had migrated to the site in recent weeks. Though customers on dark web sites are advised to encrypt their delivery addresses so that only the seller of the purchased contraband can read them, many don’t, leaving a trail of breadcrumbs to their homes for law enforcement when it seizes a site’s servers.
It’s still unclear how global law enforcement penetrated Hansa, given that its operators hid the location of its servers, administrators, and users with anonymity software like Tor and I2P. The FBI didn’t respond to WIRED’s request for more information, and Europol declined to comment beyond its press statement. But an indictment against AlphaBay’s Cazes filed Wednesday includes the detail that in 2014, Cazes’ personal email, “Pimp_alex_91@hotmail.com”, was inexplicably included in a welcome message to new users. That led investigators to his PayPal account and a front company, EBX Technologies. On July 5th, Thai police, along with FBI and DEA agents, searched Cazes’ home in Bangkok and found his laptop unencrypted and logged into the AlphaBay site. (They also found a document on the laptop tracking Cazes’ net worth, which it estimated at $23 million.)
Despite the size of the sites, the takedowns should by no means end the dark web’s vibrant trade in drugs, which researchers at Carnegie Mellon estimated in 2015 to generate revenue in the hundreds of millions of dollars annually. After AlphaBay’s shutdown, many of its users also flocked to another site known as Dream Market, which is likely the second-largest marketplace, ahead of Hansa. Now Dream Market will no doubt take more refugees from Hansa, to become the dark web’s reigning bazaar of the moment.
But the fallout of the AlphaBay and Hansa takedowns may eventually be felt there as well. Vendors who flee those sites for Dream Market may already be compromised by law enforcement, and if arrested, could potentially give up the addresses of any new Dream Market customers, too.
“We know that removing top criminals from the infrastructure is not a long-term fix. There’s always a new player waiting in the wings, ready to fill those shoes,” acting FBI director Andrew McCabe said in Thursday’s press conference. “It’s like demolishing a building. Hacking away at individual walls and beams only does so much. But using federal statutes to prosecute these individuals is akin to blowing up the foundation with dynamite…With the weight of this kind of operation, the organization crumbles.”
Dark web users, meanwhile, were rattled by the sting, advising each other to change their passwords as soon as possible, and spreading paranoid warnings of a possible “backdoor” into dark net markets. “Looks like I’ll be sober for a while. Not trusting any markets ATM,” wrote one user on Reddit’s dark web market forum.
But don’t expect even this law enforcement victory to permanently damage the dark web’s black market business. After all, takedowns like the seizure of the Silk Road in 2013, and the so-called Operation Onymous in 2014, which ended half a dozen top darknet sites, took chunks nearly as large out of the darknet market infrastructure. Each time business rebounded, as users again went in search of anonymous, online contraband sales. “LE may have won this battle, but they will NEVER win the war on drugs,” noted one poster on Reddit’s darknet market forum. “For as long as drugs are illegal the DNMs will thrive.”
Prime numbers are the central characters in mathematics, the indivisible elements from which all other numbers are constructed. Around 300 B.C., Euclid proved that there are infinitely many of them. Millennia later, in the late 19th century, mathematicians refined Euclid’s result and proved that the number of primes between 1 and some large number, x, is roughly x/log(x).
This estimate, called the prime number theorem, has only been proved to hold when x is really big, which raises the question — does it hold over shorter intervals?
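As a quick numerical illustration of the estimate (the cutoffs below are arbitrary), a simple sieve shows the ratio of the true prime count to x/log(x) drifting slowly toward 1:

```python
# Compare the prime-counting function pi(x) with the x/log(x) estimate.
import math

def prime_count(n: int) -> int:
    """Count primes <= n with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    estimate = x / math.log(x)
    print(x, prime_count(x), round(estimate), round(prime_count(x) / estimate, 3))
# e.g. pi(10^6) = 78498 against an estimate of ~72382:
# the ratio approaches 1, but slowly.
```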
Kaisa Matomäki, 32, grew up in the small town of Nakkila in western Finland, where she was a math star from an early age. She left home to attend a boarding school that specialized in math instruction, and as a senior she won first prize in a national mathematics competition. When she began serious mathematical research as a graduate student, she was drawn to prime numbers and, in particular, to questions about how they behave on smaller scales.
In 1896 mathematicians proved that roughly half of all numbers have an even number of prime factors and half have an odd number. In 2014 Matomäki, now a professor at the University of Turku in Finland, and her frequent collaborator, Maksym Radziwill of McGill University, proved that this statement also holds when you look at prime factors over short intervals. The methods they developed to accomplish this have been adopted by other prominent mathematicians, leading to a number of important results in the study of primes. For these achievements, Matomäki and Radziwill shared the 2016 SASTRA Ramanujan Prize, one of the most prestigious awards for young researchers in number theory.
But Matomäki’s results have only deepened the mystery surrounding primes. As she explained to Quanta Magazine, mathematicians had long assumed that a proof about even and odd numbers of prime factors over short intervals would lead inexorably to a proof about the prime number theorem over short intervals. Yet, when Matomäki achieved a proof of the first question, she found that a proof of the latter one moved even further out of reach — establishing once again that primes won’t be easily cornered.
In a pair of phone conversations, Quanta Magazine caught up with Matomäki to ask about the study of primes and the methods behind her breakthroughs. An edited and condensed version of our conversations follows.
Your work has dealt with two prominent related problems about prime numbers. Could you explain them?
One of the most fundamental theorems in analytic number theory is the prime number theorem, which says that the number of primes up to x is about x/log(x).
This is known to be equivalent to the fact that roughly half of the numbers up to x have an even number of prime factors and half of the numbers have an odd number of prime factors. It’s not obvious that the two are equivalent, but it’s known that they are equivalent because of facts related to the zeros of the Riemann zeta function.
So these two problems have been known to be equivalent for a long time. Where does your work on them begin?
I’ve been interested in questions about prime numbers and the prime number theorem, and that’s really related to this thing on the number of even and odd prime factors. So what Maksym Radziwill and I have studied is the local distribution of this thing. So even if you take a short sample of numbers, then typically about half of them have an even number of prime factors and half of them have an odd number of prime factors. This doesn’t only work from 1 to x, it also works in almost all very short segments.
Let’s talk about the methods you used to prove something about these very short segments — a way of working with something called multiplicative functions. These are a class of functions in which f(m x n) is the same thing as f(m) x f(n). Why are these functions interesting?
One can use them, for instance, to study these numbers that have an even number of prime factors or an odd number of prime factors. There is a multiplicative function that takes the value –1 if n has an odd number of prime factors and the value +1 if n has an even number of prime factors. This is sort of the most important example of a multiplicative function.
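The function described here is the Liouville function, λ(n) = (−1)^Ω(n), where Ω(n) counts prime factors with multiplicity. A small sketch (with an arbitrary cutoff) shows the near fifty-fifty split she describes:

```python
# Sketch: the Liouville function lambda(n) = (-1)**Omega(n), where
# Omega(n) counts the prime factors of n with multiplicity.
def liouville(n: int) -> int:
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:  # strip out the factor d as often as it divides n
            n //= d
            count += 1
        d += 1
    if n > 1:              # whatever remains is a prime factor
        count += 1
    return -1 if count % 2 else 1

N = 10**5
plus = sum(1 for n in range(1, N + 1) if liouville(n) == 1)
print(plus, N - plus)  # roughly half +1 (even) and half -1 (odd)
```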
And you take these multiplicative functions and do a “decomposition” on them. What does that mean?
We take out a small prime factor from the number n. It’s a bit difficult to explain over the phone. We are looking at a typical number, n, in our desired interval, and we notice the number n has a small prime factor, and then we consider this small prime factor separately. So we write it as p x m where p is a small prime factor.
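A toy version of that factor-extraction step, writing n = p × m with p the smallest prime factor of n (purely illustrative; the actual argument works with averages of multiplicative functions, not single integers):

```python
# Toy sketch of the decomposition n = p * m, with p the smallest
# prime factor of n.
def smallest_prime_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

n = 9282
p = smallest_prime_factor(n)
m = n // p
print(f"{n} = {p} x {m}")  # 9282 = 2 x 4641
```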
Over the 20th century, life expectancy in the UK increased from 46 to 76 years. But since 2010, that rate of increase has all but ground to a halt. At any other point in living memory this finding would have caused serious concern. In the age of austerity and accusations of “social murder”, it is politically explosive. But can budget cuts really explain the trend? And can we do anything to boost life expectancy?
Calculating life expectancy is simple. Consider a hundred babies. If half die before their first year and the other half live to the age of 70, then average life expectancy is 35. If the healthcare system saves 40 of those newborns (who also live to 70), then life expectancy jumps to 63. Decreased childhood mortality underlies most of the increase in life expectancy we’ve seen since 1900. Life expectancy is also independent of maximum human lifespan – currently estimated to be about 125 years. As it is highly unusual for anyone to actually live this long, maximum lifespan cannot be responsible for the stall.
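The arithmetic of that example is easy to check; a minimal sketch:

```python
# Sketch of the worked example: life expectancy is just the mean age
# at death across a cohort of 100 babies.
def life_expectancy(ages_at_death: list[float]) -> float:
    return sum(ages_at_death) / len(ages_at_death)

cohort = [0] * 50 + [70] * 50  # half die in infancy, half live to 70
print(life_expectancy(cohort))  # 35.0

saved = [0] * 10 + [70] * 90    # healthcare saves 40 of those newborns
print(life_expectancy(saved))   # 63.0
```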
Austerity is shorthand for a package of measures combining national deficit reduction through cuts to public spending with a balanced budget. The result has been an overall reduction in total public spending of about 3% in real terms. Some areas of expenditure, such as the National Health Service (NHS), are theoretically exempt from cuts (but are nevertheless affected by inflation), meaning other areas have received proportionately deeper cuts. One such area is social care.
Clearly, budget cuts could reduce life expectancy, but the relationship is not straightforward. It is possible to have high or rising life expectancy during austerity, as is the case in Japan. Similarly, you can have rising life expectancy despite high levels of inequality – this was the case in Britain from 1900-1950.
Cause and effect
One significant cause of static or declining life expectancy can be immediately dismissed. There have been no major “mortality shocks” of the kind that caused declines in the past, such as the Spanish Flu (1919) and the Second World War (1939-45). If something like this were happening in Britain, we would have noticed.
Is increased childhood mortality to blame? Given the pressure on services and its past impact on life expectancy, it could plausibly play a role. Fortunately, 2016 UK infant mortality is at a historic low of about 0.36%, compared with 15% in 1900.
In the US, declining life expectancy has been linked to increased levels of obesity in the population. The UK has also gained weight but the rate of change is relatively slight – in the last decade, the proportion of Britons who were either overweight or obese rose from 60.5% to 62.9%. This is most marked in the population aged 55 to 84 and might be influencing life expectancy.
But perhaps the largest contributor to the shift in life expectancy trends is the dramatic increase in the number of older people in the population. From 2008 to 2015, the number of people aged over 90 increased from 657 to 854 per 100,000 people, and the total number of centenarians increased from 10,400 to 14,570.
Ageing drives the development of multiple diseases and conditions. Unfortunately, by the age of 85, almost nobody is free of functional impairment. The 85+ are typically frail and rely on others to look after them. Over the austerity period, mortality in England and Wales has increased markedly. For example, in 2015 there were approximately 530,000 deaths, an increase of 5.6% on 2014. These 30,000 excess deaths occurred among the elderly. About 10,000 happened in January, suggesting flu and pneumonia could have been the cause. But there were also elevated death rates year round, implying a broader failure of care.
That means it can be argued that the oldest people are highly vulnerable and that excess deaths among them, from underfunded services, are a major cause of the stalled life expectancy increases. In fact, austerity has unmasked the unhealthy ageing of our population.
What can we do?
Supporting the vulnerable is a moral imperative. But the costs of doing so will increase dramatically unless we can improve the health of the elderly. This in turn requires knowledge of what actually kills them. Many of the excess deaths during the austerity years were attributed to dementia, but this is partly due to changes in the software used to record deaths and financial incentives to identify cognitive impairment. In reality, assigning a sole cause of death in an older person is difficult because most of their multiple conditions are potentially lethal. It could be said that the oldest die of old age.
Multiple causes of death result from just a few “ageing mechanisms”. That means that targeting these could improve health in a number of ways. For example, we know that drugs such as mTOR inhibitors (for example, everolimus) improve vaccination responses in older people. They also delay cognitive impairment in animal models and improve many markers of health. These compounds are available clinically and, with the proper trials, could rapidly be deployed.
The science of ageing is yielding many other potential routes to better health for older people, too. Some interventions are simple and cheap, for example there is increasing evidence that supplementation with the steroid hormone dehydroepiandrosterone (DHEA) improves the ageing immune system and could potentially benefit some hip fracture patients.
Targeting ageing could be a game changer. Computer models have shown that a modest increase in healthy human lifespan, based on laboratory data from other species, dramatically reduces the number of functionally impaired older people. The researchers calculated that this would save the US alone $7 trillion over 50 years, concluding that we must therefore prioritise ageing research.
This is not a hopeless search for cash: we can increase healthy life expectancy and lower care costs at the same time. What we need most is political vision and will. Both are currently in short supply.
Unless we turn this trend around and convert ageing research into treatments for older people then “social murder” truly will have taken place, with the bodies buried in the small print of life expectancy statistics.
How many lions does it take to kill a lamb? The answer isn’t as straightforward as you might think. Not, at least, according to game theory.
Game theory is a branch of maths that studies and predicts decision-making. It often involves creating hypothetical scenarios, or “games”, whereby a number of individuals called “players” or “agents” can choose from a defined set of actions according to a series of rules. Each action will have a “pay-off” and the aim is usually to find the maximum pay-off for each player in order to work out how they would likely behave.
This method has been used in a wide variety of subjects, including economics, biology, politics and psychology, and to help explain behaviour in auctions, voting and market competition. But game theory, thanks to its nature, has also given rise to some entertaining brain teasers.
One of the less famous of these puzzles involves working out how players will compete over resources, in this case hungry lions and a tasty lamb. A group of lions live on an island covered in grass but with no other animals. The lions are identical, perfectly rational and aware that all the others are rational. They are also aware that all the other lions are aware that all the others are rational, and so on. This mutual awareness is what’s referred to as “common knowledge”. It makes sure that no lion would take a chance or try to outsmart the others.
Naturally, the lions are extremely hungry but they do not attempt to fight each other because they are identical in physical strength and so would inevitably all end up dead. As they are all perfectly rational, each lion prefers a hungry life to a certain death. With no alternative, they can survive by eating an essentially unlimited supply of grass, but they would all prefer to consume something meatier.
One day, a lamb miraculously appears on the island. What an unfortunate creature it seems. Yet it actually has a chance of surviving this hell, depending on the number of lions (represented by the letter N). If any lion consumes the defenceless lamb, it will become too full to defend itself from the other lions.
Assuming that the lions cannot share, the challenge is to work out whether or not the lamb will survive depending on the value of N. Or, to put it another way, what is the best course of action for each lion – to eat the lamb or not eat the lamb – depending on how many others there are in the group.
This type of game theory problem, where you need to find a solution for a general value of N (where N is a positive whole number), is a good way of testing game theorists’ logic and of demonstrating how backward induction works. Logical induction involves using evidence to form a conclusion that is probably true. Backward induction is a way of finding a well-defined answer to a problem by going back, step-by-step, to the very basic case, which can be solved by a simple logical argument.
In the lions game, the basic case would be N=1. If there was only one hungry lion on the island it would not hesitate to eat the lamb, since there are no other lions to compete with it.
Now let’s see what happens in the case of N=2. Both lions conclude that if one of them eats the lamb and becomes too full to defend itself, it would be eaten by the other lion. As a result, neither of the two would attempt to eat the lamb and all three animals would live happily together eating grass on the island (if living a life solely dependent on the rationality of two hungry lions can be called happy).
For N=3, if any one of the lions eats the lamb (effectively becoming a defenceless lamb itself), it would reduce the game to the same scenario as for N=2, in which neither of the remaining lions will attempt to consume the newly defenceless lion. So the lion closest to the actual lamb eats it, and three lions remain on the island without attempting to murder each other.
And for N=4, if any of the lions eat the lamb, it would reduce the game to the N=3 scenario, which would mean that the lion that ate the lamb would end up being eaten itself. As none of the lions want that to happen, they leave the lamb alone.
Essentially, the outcome of the game is decided by the action of the lion closest to the lamb. For each integer N, the lion realises that eating the lamb would reduce the game to the case of N-1. If the N-1 case results in the survival of the lamb, the closest lion eats it. Otherwise, all the lions let the lamb live. So, following the logic back to the base case every time, we can conclude that the lamb will always be eaten when N is an odd number and will survive when N is an even number.
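The whole argument can be captured in a few lines of code; here is a minimal sketch of the backward induction (the function names are our own):

```python
# Backward induction for the lions-and-lamb game: lamb_survives(n)
# asks whether the lamb lives with n perfectly rational lions.
from functools import lru_cache

@lru_cache(maxsize=None)
def lamb_survives(n: int) -> bool:
    if n == 1:
        return False  # a lone lion eats the lamb with impunity
    # The nearest lion eats only if, in the resulting (n-1)-lion game,
    # it would itself survive as the new "lamb".
    eating_is_safe = lamb_survives(n - 1)
    return not eating_is_safe

for n in range(1, 7):
    print(n, "lamb survives" if lamb_survives(n) else "lamb is eaten")
# Prints "lamb is eaten" for odd n and "lamb survives" for even n.
```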
The first meeting of the Trump administration’s new advisory committee on election integrity consisted mainly of voter-fraud fear-mongering. As he opened the event, President Trump wondered aloud whether states that have refused to comply with the committee’s massive request for voter data (because it violates state law) have something to hide. “What are they worried about?” he asked. “There’s something, there always is.”
In their opening statements, secretaries of state and election commissioners from across the country all too happily offered up possibilities, raising specters of noncitizens registering to vote, voters being registered in multiple states, and people casting votes on behalf of the deceased. Hans von Spakovsky, a committee member and senior legal fellow at the right-leaning Heritage Foundation, pointed to his organization’s database of 1,071 documented cases of voter fraud over the last several decades, neglecting to mention that this figure constitutes just 0.0008 percent of the people who voted in the 2016 election alone. Together, they painted a picture of a pervasive and insidious threat to free and fair elections, despite the mountains of research showing that actual voter fraud is scarce.
But amid all the conjecture came one nugget of actual truth, offered by Judge Alan King of Jefferson County, Alabama. Not only did Judge King, one of the committee’s few Democrats, state that he’d never seen a single instance of voter fraud in all his years as head of elections in Jefferson County, he was also the lone member of the committee to use his opening remarks to raise the critically important issue of outdated voting technology. Unlike phantom zombie voters, that issue poses a real, and well-documented, threat to people’s voting rights.
“These voting machines are outdated. There’s no money there. Counties don’t have money. States don’t have money. We need money,” King said. “We can discuss a lot of things about voting, but … unless the technology is keeping up with voting, then we’re not using our time very wisely in my opinion.”
As King noted, much of the country’s voting technology is a decade or more old, purchased after the Help America Vote Act sent $2 billion to the states to upgrade election equipment in the wake of the hanging chads that made a hash of the 2000 contest between Al Gore and George W. Bush. Many states haven’t upgraded since; a 2015 study by New York University Law School’s Brennan Center for Justice found that 43 states planned to use voting systems more than 10 years old in the 2016 election.
That makes these tools especially vulnerable to attack, because the software that runs them—including Windows XP—is often no longer supported.
Not only that, but as aging systems break down, cash-strapped states and counties struggle to replace them. With fewer functioning voting machines in place, already long voting lines are likely to grow, which has been proven to dissuade people from voting.
What’s more troubling for anyone interested in equal voting rights is that not all communities share the burden of this old technology equally. In 2012, black voters waited in line twice as long as white voters. And the Brennan Center’s research shows that districts that planned to invest in new voting technology had a higher per capita income than districts that didn’t.
“Whatever the cost is, it lands disproportionately more on some people than others, and that’s unfair,” Charles Stewart, a professor of political science at MIT who studies voting lines, told WIRED last fall.
To ensure more secure elections, the committee might also consider requiring states to audit their elections. Right now, few safeguards exist to ensure that electronic voting machines accurately record votes from paper ballots. Auditing these results against physical ballots would go a long way toward assuring the accuracy of vote tallies. At least, that could help states that actually use paper ballots, another critical recommendation the election integrity committee could make, were it truly serious about ensuring election integrity.
Judge King noted that Jefferson County was able to upgrade its voting technology “to the tune of $3.1 million” in time for last year’s election. But such local investments aren’t possible in every county. Which is where a federal committee on election integrity might actually come in handy. If the committee wants to have a real impact on securing the sanctity of every vote, then investing in voting systems that actually work properly would be a mighty fine place to start.
On July 14th, 2015, the New Horizons mission made history when it became the first spacecraft to conduct a flyby of Pluto and its moons. In the course of making its way through this system, the probe gathered volumes of data on Pluto and its many satellites using a sophisticated suite of instruments. These included the first detailed images of what Pluto and its largest moon (Charon) look like up close.
And while scientists are still analyzing the volumes of data that the probe has sent home (and probably will be for years to come), the New Horizons mission team has given us plenty of discoveries to mull over in the meantime. For instance, using the many images taken by the mission, they recently created a series of high-quality, highly-detailed global maps of Pluto and Charon.
The maps were created thanks to the plethora of images that were taken by New Horizons’ Long-Range Reconnaissance Imager (LORRI) and its Multispectral Visible Imaging Camera (MVIC). Whereas LORRI is a telescopic camera that was responsible for obtaining encounter and high-resolution geologic data of Pluto at long distances, the MVIC is an optical and infrared instrument that is part of the Ralph instrument – the main imaging device of the probe.
The Principal Investigator (PI) for the LORRI instrument is Andy Cheng, and it is operated from Johns Hopkins University Applied Physics Laboratory (JHUAPL) in Laurel, Maryland. Alan Stern is the PI for the MVIC and Ralph instruments, which are operated from the Southwest Research Institute (SwRI) in San Antonio, Texas. And as you can plainly see, the maps are quite detailed and eye-popping!
Dr. Stern, who is also the PI of the New Horizons mission, commented on the release of the maps in a recent NASA press statement. As he stated, they are just the latest example of what New Horizons accomplished during its historic mission:
“The complexity of the Pluto system — from its geology to its satellite system to its atmosphere — has been beyond our wildest imagination. Everywhere we turn are new mysteries. These new maps from the landmark exploration of Pluto by NASA’s New Horizons mission in 2015 will help unravel these mysteries and are for everyone to enjoy.”
And these were not the only treats to come from the New Horizons team in recent days. In addition, the mission scientists used actual New Horizons data and digital elevation models to create flyover movies that show what it would be like to pass over Pluto and Charon. These videos offer a new perspective on the system and showcase the many unusual features that were discovered on both bodies.
The video of the Pluto flyover (shown above) begins over the highlands that are located to the southwest of Sputnik Planitia – the nitrogen ice basin that measures some 1,050 by 800 km (650 by 500 mi) in size. These plains constitute the western lobe of the feature known as Tombaugh Regio, the heart-shaped region that is named after the man who discovered Pluto in 1930 – Clyde Tombaugh.
The flyover also passes by cratered terrain of Cthulhu Macula before moving north past the highlands of Voyager Terra. It then turns south towards the pitted region known as Pioneer Terra before concluding over Tartarus Dorsa, a mountainous region that also contains bowl-shaped ice and snow features called penitentes (which are found on Earth and are formed by erosion).
The flyover video of Charon begins over the hemisphere that the New Horizons mission saw during its closest approach to the moon. The view then descends over Serenity Chasma, the wide and deep canyon that is named after the ship from the sci-fi series Firefly. This feature is part of the vast equatorial belt of chasms on Charon, one of the longest canyon systems in the Solar System – 1,800 km (1,100 mi) long and 7.5 km (4.5 mi) deep.
The view then moves north, passing over the Dorothy Gale crater and the dark polar region known as Mordor Macula (appropriately named after the domain of the Dark Lord Sauron in The Lord of the Rings). The video then turns south to fly over the northern terrain known as Oz Terra before finishing over the equatorial plains of Vulcan Planum and the mountain of Clarke Montes.
These videos were color-enhanced in order to bring out the surface details, and the topographic relief was exaggerated by a factor of two to three to emphasize the topography of Pluto and its largest moon. Digital mapping and rendering of these videos was performed by Paul Schenk and John Blackwell of the Lunar and Planetary Institute (LPI) in Houston.
It may be many years before another mission is able to travel to the Trans-Neptunian region and the Kuiper Belt. As a result, the maps, videos, and images taken by the New Horizons mission may be the last glimpse some of us get of the Pluto system. Luckily, the New Horizons mission has provided scientists and the general public with enough information to keep them busy and fascinated for years!
Further Reading: NASA