Everyday entropy: secret addiction


Americans’ collective interest in spying is not a security interest.  Nor is our interest in keeping our own secrets.  A nation of 300 million people can neither be hurt nor protected by anything that is small or weak enough to be kept secret.  The fantasy of a secret message holding ultimate power – the apple, the ring, the curse – is so old and entrenched that it is impossible to say whether it is a cultural artefact or a cognitive instinct.  The addiction to information, particularly the idea that secret information is somehow more true or more powerful than common knowledge, transcends any rational basis or discussion.  At first, of course, secrets are powerful, but only up to the constraints of thermodynamics and transmission. No secret will reverse entropy or stay secret after it is put into practice. Nevertheless, from alchemy to religion to science, throughout history and across cultures, the idea persists that there is some hidden knowledge that ties it all together: a secret that transcends entropy to bring the unlimited possibilities of information into a physical form.

The thing is that there is no secret. Momentum and information are conserved absolutely.  If you have a decent map of the configuration and momentum of parts in your environment, and an understanding of how it might evolve, information can do nothing more for you.  Entropy may behave mysteriously at times, but it can’t be made into a secret, and it can’t be kept a secret.  Magic shows work because the magician knows something about the configuration of his or her space that the audience doesn’t know, but magic only works where the magician is in control of the information.  Once the information is out, if anyone is allowed to investigate the actual entropy of the trick, the show is over.

The 2007–2008 financial collapse worked the same way; it was explained in depth in the May 2006 issue of Harper’s Magazine, before it happened.  All of the issues were well known to anyone who wanted to know them.  The only secret was that the kingpins of finance had no understanding of entropy, and couldn’t believe that their system could change.

Excerpts:

The Voyeur’s Motel: http://www.newyorker.com/magazine/2016/04/11/gay-talese-the-voyeurs-motel

He bought the property for a hundred and forty-five thousand dollars.

Foos said he began watching guests during the winter of 1966. He was often excited and gratified by what he saw, but there were many times when what went on below was so boring that he nodded off, sleeping for hours on the shag carpeting, until Donna woke him up before she left for the hospital. Sometimes she brought him a snack (“I’m the only one getting room service at this motel,” he told me, with a smile); at other times, if a particularly engaging erotic interlude was occurring in the room below, Donna would lie down next to him and watch. Sometimes they would have sex up on the viewing platform.

At times, I could almost picture Foos rubbing his hands together, like a mad scientist in a B movie: “I will have the finest laboratory in the world for observing people in their natural state, and then begin determining for myself exactly what goes on behind closed bedroom doors,” he wrote.

I asked him why, since he had spent half his life invading other people’s privacy, he was so critical of the government’s intelligence-gathering in the interest of national security. He reiterated that his spying was “harmless,” because guests were unaware of it and its purpose was never to entrap or expose anyone. He told me that he identified with Edward Snowden, the former National Security Agency contractor who illegally released government documents alleging that, for example, U.S. intelligence agencies were tapping the cell phone of Chancellor Angela Merkel of Germany.

“Snowden, in my opinion, is a whistle-blower,” Foos said, adding that instead of being prosecuted Snowden should be praised “for exposing things that are wrong in our society.”

He considers himself a whistle-blower, too, even though, so far, he hasn’t revealed anything to anyone except his wives and me. Asked which “things that are wrong” he wished to expose, he said, “That basically you can’t trust people. Most of them lie and cheat and are deceptive. What they reveal about themselves in private they try to hide in public. What they try to show you in public is not what they really are.”

 

Michael Flynn’s Fall

By David Ignatius, opinion writer, April 27

“James J. Angleton, the CIA’s legendary counterintelligence chief, was secretive to the point of paranoia when he was at the agency. But when he left in the 1970s, he couldn’t stop talking to journalists and others about his conspiracy theories. Some other former CIA officers are similar: They work the press or lobbying clients the way they used to work their agency assets.

Gen. Stanley McChrystal, one of Flynn’s mentors, got fired as commander in Afghanistan after he and his staff made inappropriate comments to a Rolling Stone journalist. Gen. John Allen, a much-admired commander in Afghanistan, got involved in an email correspondence with a would-be Florida socialite that led to a Pentagon investigation, which derailed his appointment as NATO commander. Gen. David Petraeus, perhaps the most celebrated commander of his generation, pleaded guilty to improperly sharing classified information with his biographer, with whom he was romantically involved.

Each of these people served the country in remarkable ways. But looking at the difficulties they encountered, one senses a pattern. Senior command is a world unto itself. The tribal culture that envelops all our military and intelligence personnel is especially tight for our most secret warriors.

Michael Flynn’s fall tells a much bigger story.”

 

I was not shocked by the fact that the NSA had been spying on everyone they could spy on. In addition to having learned the lesson of history, I also accept the reality of the principle of Totally in Everyone’s Business. This is the principle that all states endeavor to get totally into everyone’s business to the degree that their capabilities allow. Or, put another way, states endeavor to spy as much as they possibly can. The main limits on that totality are technology, competence, money, and human resources. Ethics and law are generally not limiting factors—as history clearly shows. Since I was aware that the NSA had the capacity to spy on American citizens and world leaders alike, I inferred that they were doing so.

There is also the fact that snooping, like cocaine, is addictive: it takes ever more to satisfy the desire. In general, people like to snoop, and once they get a taste of it, they often want more. As with any addiction, they can quickly become reckless and a bit irrational. This could be called the principle of addictive snooping. So, once the NSA snoops got to snooping, they really wanted to expand that snooping.

 

“Eavesdropping is a foray into uncharted psychological territory: the “unauthorized” ways that ordinary people use their senses to achieve important personal goals—intimate experience, personal power, and social control.  It illustrates our abiding attempt to understand the human story, and thus to understand what life is, and what one’s own life could be.”

Eavesdropping: An Intimate History by John L. Locke

Who among us hasn’t eavesdropped on a stranger’s conversation in a theater or restaurant? Indeed, scientists have found that even animals eavesdrop on the calls and cries of others. In Eavesdropping, John L. Locke provides the first serious look at this virtually universal phenomenon. Locke’s entertaining and disturbing account explores everything from sixteenth-century voyeurism to Hitchcock’s “Rear Window”; from chimpanzee behavior to Parisian café society; from private eyes to Facebook and Twitter. He uncovers the biological drive behind the behavior and highlights its consequences across history and cultures. Eavesdropping can be a good thing–an attempt to understand what goes on in the lives of others so as to know better how to live one’s own. Even birds who listen in on the calls of distant animals tend to survive longer. But Locke also concedes that eavesdropping has a bad name. It can encompass cheating to get unfair advantage, espionage to uncover secrets, and secretly monitoring emails to maintain power over employees. In the age of CCTV, phone tapping, and computer hacking, this is eye-opening reading.

The Guardian revealed how GCHQ bugged the communications of a wide range of targets which, on the face of it, had nothing to do with protecting the nation’s security. They included intelligence that would benefit large British companies, including the oil giants and banks, as well as the internal communications of those companies. GCHQ even bugged the pope.

Parents snooping

https://www.forbes.com/sites/kashmirhill/2012/05/03/are-parents-becoming-addicted-to-spying-on-their-kids/#3474275b6f22

“Kids are desperate to flee from their parents’ spying, reports the Wall Street Journal. In a piece about “Tweens’ Secret Lives Online,” the Journal tracks the online lengths kids are going to in order to get away from their stalkerish parents.

Digital anthropologist danah boyd told me last year that teens then were fleeing from Facebook to Twitter to escape the prying eyes of adults. WSJ journo Katherine Rosman says that Instagram is now one of the tools kids use to exchange messages in a semi-public way (where the public doesn’t include nosy adults).

In cataloging all of the sites out there where the kids go these days, Rosman writes: “It’s harder than ever to keep an eye on the children.” Um, what?

The digital age does offer a plethora of new digital spaces for kids to hang out, but it has also offered up a ridiculous number of tech tools for parents to watch kids as they hang out. I think Rosman is absolutely wrong. It’s easier than ever to keep multiple eyes on children. In fact, I suspect parents are growing addicted to spying on kids given the ease of monitoring what they’re doing, who they’re talking to, and where they are.

You can give your child a phone and then monitor their text messages with Mobile Watchdog and track their whereabouts using Location Labs technology. You can put a monitoring box in your teen driver’s car that sends you information about how they’re driving and where they are. You can put spyware on their computers to monitor their Internet use or simply use your computer’s built-in tools. You can force them to download Facebook apps that will alert you if they’re talking to strangers or using questionable language. Or you can friend them on Facebook so you know exactly who’s in their friend group and how they talk to each other. Or you can join the creepy 61% of parents who have secretly logged into their kids’ accounts without their permission. One day soon, helicopter parents may be able to buy a drone that just follows their children 24 hours a day.

Safely, a company that provides monitoring software that parents can put on their kids’ phones, did an analysis of how often parents using their product around the country check their children’s whereabouts. Most parents using their software geolocate their kids about 100 times a month. But in a small town in Missouri, extreme users check their kids’ location 7500 times per month. Maybe it’s time to change the motto there from The Show Me State to the Show Me Where You Are State.

That is mind-blowing. Those parents are tracking their kids 250 times per day. Given that kids are probably out of their parents’ sight for 10 hours at most per day, that’s 25 times an hour.
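
The arithmetic checks out, and it is worth making explicit. A quick sanity check in code, assuming a 30-day month and the article’s estimate of at most ten out-of-sight hours per day:

```python
# Quick sanity check of the Forbes figures, assuming a 30-day month
# and the article's estimate of at most 10 out-of-sight hours per day.
checks_per_month = 7500
checks_per_day = checks_per_month / 30    # 250 checks per day
checks_per_hour = checks_per_day / 10     # 25 checks per hour
print(checks_per_day, checks_per_hour)    # 250.0 25.0
```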

Everyday entropy: Shannon’s intuition

[Image: Jupiter’s whorls (NASA PIA21386)]
“Much like the kinetic energy of a massive object, the energy of a photon is a reference-frame-dependent quantity. A nice way to put it is to say that the energy of light is expressed by the relationship between the emitter and receiver.”
“E = mc2 famously suggests the idea that you can get a lot of energy out of a small amount of mass. But that’s not what Einstein had in mind, really, and you won’t find that equation in the original paper. The way he wrote it was M = e/c2 and the original paper had a title that was a question, which was, “Does the inertia of a body depend on its energy content?” So right from the beginning Einstein was thinking about the question of could you explain mass in terms of energy. It turned out that the realization of that vision, the understanding of how not only a little bit of mass but most of the mass, 90 percent or 95 percent of the mass of matter as we know it, comes from energy. We build it up out of massless gluons and almost massless quarks, producing mass from pure energy. That’s the deeper vision.”
Frank Wilczek, Theoretical Physicist, MIT
Shannon’s use of “entropy” to describe information has created some confusion, possibly because Boltzmann’s equation is taken as the definition of entropy, rather than an explanation of entropy.  The same thing happened with e=mc2.  Note that Einstein was only interested in the relationship between energy and mass, not their definitions.  One reason why his equation was converted to define energy was the crisis of confidence arising from statistical mechanics and light quantisation.  The notion that energy was made of pure electromagnetic waves was crumbling, so a clear definition of energy was really attractive.
But Einstein didn’t define energy or mass; he just said they were inextricably linked to one another and to the speed of light.  But, because people felt like they had a handle on what mass was, i.e., weight, e=mc2 looked like a definition of energy that was stable and comprehensible in a way that statistical mechanical entropy was not.  Einstein meant to deconstruct the definition of mass and inertia, not create a definition of energy.  The popularity of e=mc2 resulted from the vacuum of understanding around energy, and it led to a general misunderstanding of energy as well as mass.  Shannon and von Neumann, far from confusing the issues of entropy and information, recovered the relationship between energy, information and matter, albeit in a way that is extremely confusing and maybe not entirely intentional.

John von Neumann is usually credited with suggesting the term “entropy” to describe the state of information in a communication system in a flippant or arbitrary way, saying “nobody really knows what entropy is anyway, so you’ll have the upper hand in any debate”.  But von Neumann was an entropy guy – he brought the concept of entropy into quantum entanglement – so he was fluent in Einstein’s use of entropy to describe the quantisation of light energy.  He could see, intuitively, the relationship between the quantisation of information and the quantisation of light, and their mutual relationship with the quantisation of momentum in Boltzmann’s statistical mechanics.  Besides, Shannon never credited von Neumann, and was himself convinced of the deeper relationship between information and thermodynamics.  In his 1968 Encyclopedia Britannica article “Information Theory”, Shannon wrote the following telling statement:

“The formula for the amount of information is identical in form with equations representing entropy in statistical mechanics, and suggests that there may be deep-lying connections between thermodynamics and information theory. Some scientists believe that a proper statement of the second law of thermodynamics requires a term related to information. These connections with physics, however, do not have to be considered in the engineering and other [fields?].”
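
The formal identity Shannon points to is easy to exhibit side by side. The sketch below is an illustration, not Shannon’s own example, and the distribution is arbitrary: it computes his information measure H = −Σ p log2 p next to the Gibbs form of statistical-mechanical entropy S = −k Σ p ln p. The two differ only by Boltzmann’s constant and the base of the logarithm.

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gibbs_entropy(probs, k=1.380649e-23):
    """S = -k * sum(p * ln(p)), measured in joules per kelvin."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

p = [0.5, 0.25, 0.25]              # an arbitrary probability distribution
print(shannon_entropy(p))          # 1.5 bits
print(gibbs_entropy(p))            # the same quantity scaled by k * ln(2)
```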

Like the phases of Jupiter’s hydrogen – gas, liquid and metallic – energy, information and mass are in constant circulation through one another, and never sufficiently static in space or time to have definitions that “make sense.”  Entropy, in all of its forms, tries to make sense of their relationships with one another and with space and time.

Everyday entropy: predictive cognition

[Image: crystal ball, “The Future”]

Curiously, most people don’t actually want to know the future.  Only 1% would want to know absolutely everything.  This may come from the mind intuitively saying “hey, that’s my job.  You can’t do it for me”.  Predictive cognition illustrates the pass-through informational entropy of organic intelligence: there is no separation between information input, processing, storage, and expression.  This pass-through entropy makes it impossible to establish an information hierarchy or processing map and impossible to model the process using artificial intelligence, which makes it counterintuitive for information scientists.  Not only is the input indistinguishable from the processing, but the entropy of information is indistinguishable, microscopically, from the chemical entropy of the ion channels in the nervous system.  Only at the high level of decisions and conscious thought can you say that the brain is a pure information system.  Like an ocean wave, a brain wave is macroscopically identifiable as a discrete bit of information, but microscopically indistinguishable from the entropy of the ions that create its action potential.  At some level, the success or failure of a thought depends on whether enough ions cross the neuronal membrane to keep the wave going, which comes down to the distribution of ions around the membrane, and, ultimately, the uncertain position of an individual calcium ion.  Even a perfect brain schematic couldn’t tell you the final form of a thought because it would be a map of a map, with all the uncertainty that entails.  In other words, the formation of discrete bits of information remains shrouded in the infinitesimal uncertainty of the actual entropy of the physical brain.

Words and the World: Predictive Coding and the Language-Perception-Cognition Interface

Gary Lupyan and Andy Clark

Department of Psychology, University of Wisconsin–Madison and School of Philosophy, Psychology, and Language Sciences, Edinburgh University

Abstract: Can what we know change what we see? Does language affect cognition and perception? The last few years have seen increased attention to these seemingly disparate questions, but with little theoretical advance. We argue that substantial clarity can be gained by considering these questions through the lens of predictive processing, a framework in which mental representations—from the perceptual to the cognitive—reflect an interplay between downward-flowing predictions and upward-flowing sensory signals. This framework provides a parsimonious account of how (and when) what we know ought to change what we see and helps us understand how a putatively high-level trait such as language can impact putatively low-level processes such as perception. Within this framework, language begins to take on a surprisingly central role in cognition by providing a uniquely focused and flexible means of constructing predictions against which sensory signals can be evaluated. Predictive processing thus provides a plausible mechanism for many of the reported effects of language on perception, thought, and action, and new insights on how and when speakers of different languages construct the same “reality” in alternate ways.

Your brain – the advanced prediction machine

on July 22, 2013

Our findings suggest that information flow along this predictive hierarchy is crucially modulated by the engagement of attention, and is finely tuned by our prior expectations, or predictions, about future events. Successively complex neural processes are involved in predicting the future, and thereby try to explain away expected prediction errors generated at lower levels. Only information about failures in prediction flows up the hierarchy, and results in the revision of expectations about the future. Ultimately, this research offers a broad, elegant, and neuroscientifically grounded explanation of how the brain deals with an ever-changing external world. It reasserts that cognition, and ultimately our consciousness, is fundamentally shaped not just by what is out there, but also by our biases and expectations, which themselves are governed by our past experiences.

A Guide to why your world is a hallucination

By Anil Ananthaswamy

What’s your brain doing when you process information? Could it be producing a “controlled online hallucination”?

Welcome to one of the more provocative-sounding explanations of how the brain works, outlined in a set of 26 original papers, the second part of a unique online compendium updating us on current thinking in neuroscience and the philosophy of mind.

In 2015, the MIND group founded by philosopher Thomas Metzinger of the Johannes Gutenberg University of Mainz, Germany, set up the Open MIND project to publish papers by leading researchers. Unusually, the papers were published in open access electronic formats, as an experiment in creating a cutting edge online resource – and it was free. The first volume, spanning everything from the nature of consciousness to lucid dreaming, was a qualified success.

The second volume, Philosophy and Predictive Processing, focuses entirely on the influential theory in its title, which argues that our brains are constantly making predictions about what’s out there (a flower, a tiger, a person) and these predictions are what we perceive.

To make more accurate predictions, our brains modify their internal models of the world or force our bodies to move, so that the external environment comes in line with predictions. This idea unifies perception, action and cognition into a single framework.

Some of the titles of the papers are playful, and maybe a tad over-the-top: “How to entrain your evil demon”, “How to knit your own Markov blanket” or “Of Bayes and Bullets”. But despite the titles, the content is serious and heavy-going: it’s written by some well-known proponents of predictive processing, including Andy Clark, based at the University of Edinburgh, UK, and Jakob Hohwy, at Monash University, Australia.

Perception isn’t passive

Lay readers will do well to start slowly, with the introduction to the field by Metzinger and Wanja Wiese, also based at Johannes Gutenberg, before dipping their toes into the deeper waters. Most of us will not get beyond sampling the introductory paragraphs in each paper, but even doing so can provide a flavour of the ideas they contain.

One of the keys to predictive processing is that it sets out to challenge our intuitive feeling that our brains passively receive information (via our senses) and create perceptions of what is actually out there – the so-called bottom up approach.

Instead, predictive processing argues that perception, action and cognition are the outcome of computations in the brain involving both bottom-up and top-down processing – in which prior knowledge about the world and our own cognitive and emotional state influence perception.

As Metzinger and Wiese point out in their introduction, the idea of top-down processing is not new, but “dominant theories of perception have for a long time marginalized” its role. The novel contribution of predictive processing, they write, is that it emphasises the importance of top-down processing and prior knowledge as a feature of perception, one which is present all the time – not only when sensory input is noisy or ambiguous.

Predicting sensations

In a nutshell, the brain builds models of the environment and the body, which it uses to make hypotheses about the source of sensations. The hypothesis that is deemed most likely becomes a perception of external reality. Of course, the prediction could be accurate or awry, and it is the brain’s job to correct for any errors – after making a mistake it can modify its models to account better for similar situations in the future.

But some models cannot be changed willy-nilly, for example, those of our internal organs. Our body needs to remain in a narrow temperature range around 37°C, so predictive processing achieves such control by predicting that, say, the sensations on our skin should be in line with normal body temperature. When the sensations deviate, the brain doesn’t change its internal model, but rather forces us to move towards warmth or cold, so that the predictions fall in line with the required physiological state.

If you do get through enough of the papers to understand the computational principles behind predictive processing then, as Metzinger and Wiese put it, you get closer to understanding why “it is only a small step towards describing processing in the brain as a controlled online hallucination”.

Everything we perceive, including ourselves, is a simulacrum of reality. The takeaway here is this wild thought: we are always hallucinating.
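
Stripped of the neuroscience, the loop these papers describe is simple enough to sketch. The toy code below is not from the Open MIND papers; it is a minimal illustration of the scheme they discuss: an internal model issues a prediction, and only the prediction error is allowed to revise the model.

```python
import random

def predictive_loop(true_signal=20.0, noise=2.0, learning_rate=0.1, steps=100):
    """Minimal predictive-processing sketch: revise an internal estimate
    only in proportion to the bottom-up prediction error."""
    estimate = 0.0                                          # the model's initial guess
    for _ in range(steps):
        sensation = true_signal + random.gauss(0.0, noise)  # noisy bottom-up input
        error = sensation - estimate                        # prediction error
        estimate += learning_rate * error                   # top-down revision
    return estimate

print(predictive_loop())  # settles near 20 despite the noisy input
```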

Everyday entropy: gene


Siddhartha Mukherjee, The Gene: An Intimate History, pages 454–455

“How much of the human genome can we “read” in a useable or predictive sense? Until recently, the capacity to predict fate from the human genome was limited by two fundamental constraints. First, most genes, as Richard Dawkins describes them, are not “blueprints” but “recipes.” They do not specify parts, but processes; they are formulas for forms. If you change a blueprint, the final product is changed in a perfectly predictable manner: eliminate a widget specified in the plan, and you get a machine with a missing widget. But the alteration of a recipe or formula does not change the product in a predictable manner: if you quadruple the amount of butter in a cake, the eventual effect is more complicated than just a quadruply buttered cake (try it; the whole thing collapses in an oily mess). By similar logic, you cannot examine most gene variants in isolation and decipher their influence on form and fate. That a mutation in the gene MECP2, whose normal function is to recognize chemical modifications to DNA, may cause a form of autism is far from self-evident (unless you understand how genes control neurodevelopmental processes that make a brain).

The second constraint – possibly deeper in significance – is the intrinsically unpredictable nature of some genes. Most genes intersect with other triggers – environment, chance, behaviours, or even parental and prenatal exposures – to determine an organism’s form and function, and its consequent effects on its future. Most of these interactions, we have already discovered, are not systematic: they happen as a result of chance, and there is no method to predict or model them with certainty. These interactions place powerful limits on genetic determinism: the eventual effects of these gene-environment intersections can never be reliably presaged by the genetics alone. Indeed, recent attempts to use illnesses in one twin to predict future illnesses in the other have come up with only modest successes.

But there is no reason that the constraints on genetic diagnosis should be limited to diseases caused by mutations in single genes or chromosomes… A powerful enough computer should be able to hack the understanding of a recipe: if you input an alteration, one should be able to compute its effect on the product.

The genome will thus be read not in absolutes, but in likelihoods – like a report card that does not contain grades but probabilities, or a resume that does not list past experiences but future propensities. It will become a manual of previvorship.”
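
Something like this probabilistic report card already exists in crude form as a polygenic risk score. The sketch below is purely illustrative: the variant names and effect sizes are invented, and nothing in it comes from Mukherjee’s text. It shows the shape of the idea: per-variant effects summed into log odds, with a probability rather than a verdict at the end.

```python
import math

def risk_probability(genotype, effects, baseline_log_odds=-3.0):
    """Combine hypothetical per-variant effects (log odds ratios)
    into a probability of disease, not a yes-or-no prediction."""
    log_odds = baseline_log_odds + sum(
        effects[variant] * allele_count for variant, allele_count in genotype.items()
    )
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic link: log odds -> probability

effects = {"rs0001": 0.4, "rs0002": -0.2, "rs0003": 0.7}   # invented effect sizes
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}          # allele counts (0, 1 or 2)
print(f"{risk_probability(genotype, effects):.1%}")         # about 8.3%
```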

 

Mukherjee is not an environmentalist, but he has illuminated the anthropological significance of the environment with more urgency than a generation of tree-huggers.  A stable, healthy environment is a precondition for genetic engineering, and all other types of engineering, to make us fitter. Without that, science and technology will be wasted on a world that is slipping through our fingers.

There are two other interesting things about Mukherjee’s statement. First, how closely it mirrors Stephen Hawking’s resurrection of determinism under the guise of probability. If everyone had a statistically meaningful number of lives to live, then probabilistic determinism would totally work. In a lab with an unlimited supply of cloned mice, it really does work (for the researchers, not the mice). The second thing is the soft constraints entropy places on scientific investigation, and how scientists champ at the bit to break free. It’s true that there is no hard limit to genetic prediction, but there is an asymptotic boundary around what is possible in the translation from information to configuration. Much like the limit to how fast a person can run 100 meters, the absolute limit is unknowable. It may not be 9 seconds. It may not even be 8 seconds. But it is almost certainly not less than 7 seconds and absolutely not below 5 seconds. So also there will always be new diseases cured by genetic engineering or prevented by genetic testing, but human life will be changed far less by genetics than by private abuse and institutional cruelty.

The genome project also carries with it the frailties of all mapping projects: that the information embedded in the map is subject to the entropy of the map material.  It turns out that genes are subject to continuous mutation in individual bodies as well as the population at large.

Scientists Surprised to Find No Two Neurons Are Genetically Alike
The genetic makeup of any given brain cell differs from all others. That realization may provide clues to a range of psychiatric diseases

By Simon Makin on May 3, 2017, Scientific American Magazine

“Studies that preceded the consortium have confirmed mosaicism is commonplace. One report estimated there may be hundreds of changes in single letters of genetic code (single nucleotide variants, or SNVs) in each neuron in mouse brains. Another found over a thousand in human neurons. These findings suggest somatic mosaicism is the rule, not the exception, with every neuron potentially having a different genome than those to which it is connected. A primary cause of somatic mutations has to do with errors during the DNA replication that occurs when cells divide—neural progenitor cells undergo tens of billions of cell divisions during brain development, proliferating rapidly to produce the 80 billion neurons in a mature brain. The image of each cell carrying a carbon copy of the genetic material of all other cells is starting to fade—and for good reason. Genetic sequencing does not normally capture the somatic mutations in each cell. “You get a sort of average of the person’s genome, but that doesn’t take into account any brain-specific mutations that might be in that person,” says study lead author Michael McConnell of the University of Virginia.

A 2012 study found somatic mutations in the brains of children with hemimegalencephaly, a developmental disorder in which one hemisphere is enlarged, causing epilepsy and intellectual disability. The mutations were found in brain tissue, but not always in blood, nor in cells from unaffected brain areas, and only in a fraction (around 8 to 35 percent) of cells from affected areas. Such studies, showing that somatic mutations can cause specific populations of cells to proliferate, leading to cortical malformations, have researchers wondering whether somatic mutations may also play roles in more complex conditions.”

 

Everyday entropy: urban wilderness


There is no reason for wilderness to be separated from urban space by more than a step.  The problem is one of isolation and accessibility.  How many people can afford to live near a park in London, San Francisco, New York or Paris?  The solution is as bizarrely simple as the problem is intractable.  Governments should buy derelict property, all of it.  They should demolish whatever structures are there and, if it is near a place where people work, they should build new apartment buildings and auction off 30-year leases to property management companies.  The key here is to only build them, not finish them.  The buildings themselves will last for hundreds of years, and only a government can hold out for such a long period of depreciation.  The fittings and fixtures will only last a few years, so private industry can manage the investment.  If there is no work nearby, there is no need for a building.  What’s weird about this is that good apartments make good offices, but not the other way around.  No one should ever build an office building on purpose.  They are cheap and shitty.

There are over 4,000 cities in the world with over 100,000 people in them.  The vast majority of these cities could hold 5 million people within their geographical boundaries, with 50% green space, such is the poverty of imagination and consciousness in 20th century urban development.

What is important is that government not try to provide housing for the poor, but sell housing to the rich.  People will move into the houses the rich vacate.  Not only will these houses be better than anything that could be purpose-built for low-income occupants, but this approach also avoids the administrative burden of managing an oversubscribed and underfunded program that can’t possibly put homes in the right place.  It is also crucial that development form a density spiral, like a galaxy or a whirlpool, with green space starting near the centre and expanding as the spiral expands.  The buildings themselves might also be spirals, but this gets into complicated architecture.

http://news.nationalgeographic.com/2017/04/london-national-park-greenspace-urban-conservation/

What is a park? For most of us, a park is a place apart—a reserve of nature in a world increasingly dominated by human activities and arranged to fulfill human needs and desires. But a park is also for people— a place of refuge for the human soul, which tends to wither when long separated from green and growing things.

John Muir, the great naturalist, captured this dual purpose at the dawn of the national parks movement. “Thousands of tired, nerve-shaken, over-civilized people are beginning to find out that going to the mountains is going home; that wildness is a necessity,” Muir wrote in 1901. Our concept of parks, especially in North America, Europe, and Australia, has remained largely unchanged since.

Daniel Raven-Ellison, a self-described “guerrilla geographer” and National Geographic explorer, would like to change it.

Raven-Ellison’s home isn’t the mountains—it’s London, a city founded in 43 AD, a metropolis today of almost nine million people, with 14,000 of them, on average, living in each square mile. Raven-Ellison is lobbying for the entire city to be declared a National Park.

In spite of its teeming streets and liberal use of concrete, he points out, London has many features we associate with parks. If you count not only the designated urban parks but also the backyards and the untended bits of land, the city is already 47 percent green space. What’s more, it’s highly biodiverse—and in many spots, quite wild.

A walk through Epping Forest at the edge of the city might turn up a badger, a bat, or a browsing fallow deer. Red foxes stroll the sidewalks and raise cubs in back gardens. Some 8.4 million trees dot the city: birch, lime, apple, sycamore, oak, hawthorn, and many more. The London Underground has even spawned its own biodiversity—Culex molestus, a mosquito that evolved into a new species in subway tunnels.

Unlike most large parks, London is not separate from people and their houses and cars. But that’s not a bug, Raven-Ellison says; it’s a feature.

“London is the most biologically diverse place in the United Kingdom precisely because people are there,” he says. Why shouldn’t that amazing diversity be valued alongside that of more remote and less altered places?

“Rainforest national parks are very different from desert national parks,” he says. “A city is very different from both of those but it is not necessarily less valuable.” By redefining what a park can be, Raven-Ellison hopes to open our eyes to the nature that’s already around us—and expand our ambition for adding more.

Raven-Ellison is not the only one calling for an end to the conceptual estrangement between humanity and the five million or so species with which we share the planet. With more than half the human population already living in cities, and that fraction increasing every year, there’s a growing movement to recognize the importance of urban nature.

Timothy Beatley, an urban planner at the University of Virginia, heads up a consortium of “biophilic cities”—including Singapore; Wellington, New Zealand; Vitoria-Gasteiz, Spain; Birmingham, UK; San Francisco, Portland, and Milwaukee—that have committed to weaving ever more greenness, diversity, and wildness into the urban fabric. The group’s moniker derives from biologist E. O. Wilson’s 1984 book Biophilia. In it Wilson argued that humans have an innate love of nature, a connection to other species that derives from our long evolutionary history of living among and relying upon them.

Everyday entropy: Boltzmann’s door

Planck and Einstein may not have appreciated the weight of the door that Boltzmann opened for them, but they fully appreciated the passageway he left behind.  Both of their Nobel Prize works departed through it.  It allowed Einstein, in particular, to describe the invisible behaviour of light quanta as point particles in space in a mathematically rigorous framework.
Concerning an Heuristic Point of View Toward the Emission and Transformation of Light, by A. Einstein, Bern, 17 March 1905.  http://www.esfm2005.ipn.mx/ESFM_Images/paper1.pdf
“If we confine ourselves to investigating the dependence of the entropy on the volume occupied by the radiation, and if we denote by S0 the entropy of the radiation at volume v0, we obtain
S − S0 = (E/βν) ln (v/v0).
This equation shows that the entropy of a monochromatic radiation of sufficiently low density varies with the volume in the same manner as the entropy of an ideal gas or a dilute solution. In the following, this equation will be interpreted in accordance with the principle introduced into physics by Herr Boltzmann, namely that the entropy of a system is a function of the probability of its state…
If the entropy of monochromatic radiation depends on volume as though the radiation were a discontinuous medium consisting of energy quanta of magnitude Rβν/N, the next obvious step is to investigate whether the laws of emission and transformation of light are also of such a nature that they can be interpreted or explained by considering light to consist of such energy quanta.”
On the centenary of the death of Ludwig Boltzmann, Carlo Cercignani examines the immense contributions of the man who pioneered our understanding of the atomic nature of matter.

The man who first gave a convincing explanation of the irreversibility of the macroscopic world, despite the time-reversal symmetry of the laws of physics, was the Austrian physicist Ludwig Boltzmann, who tragically committed suicide 100 years ago this month. One of the key figures in the development of the atomic theory of matter, Boltzmann’s fame will be forever linked to two fundamental contributions to science. The first was his interpretation of ‘entropy’ as a mathematically well-defined measure of the disorder of atoms. The second was his derivation of what is now known as the Boltzmann equation, which describes the statistical properties of a gas made up of molecules. The equation, which described for the first time how a probability can evolve with time, allowed Boltzmann to explain why macroscopic phenomena are irreversible. The key point is that while microscopic objects like atoms can behave reversibly, we never see broken coffee cups reforming because it would involve a long series of highly improbable interactions – and not because it is forbidden by the laws of physics.
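
The “mathematically well-defined measure” Cercignani refers to is the formula later carved on Boltzmann’s gravestone, in Planck’s notation:

S = k log W

where W counts the microscopic arrangements (microstates) compatible with a macroscopic state and k is Boltzmann’s constant. The broken coffee cup never reassembles because the shattered state is compatible with astronomically many more microstates than the intact one, not because reassembly would violate any law of motion.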
Happy centenary, photon, by Anton Zeilinger, Gregor Weihs, Thomas Jennewein and Markus Aspelmeyer, Nature 433, 230-238 (20 January 2005)

The way that Einstein arrives at the photon concept in his seminal paper “Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt” (“On a heuristic aspect concerning the production and transformation of light”) is, contrary to widespread belief, not through the photoelectric effect. Instead, Einstein compares the entropy of an ideal gas filling a given volume with the entropy of radiation filling a cavity. The logarithmic dependence on the volume of the entropy of the gas can easily be understood by referring to the connection between entropy and probability suggested by Boltzmann. Because it is less probable that the gas particles will occupy a smaller volume, such a state has a higher order, and hence lower entropy. Interestingly, for the case of radiation filling a cavity, Einstein merely uses the Wien black-body radiation density, which is known to be correct only for high radiation frequencies.

Einstein’s crucial insight comes when he observes that the entropy of light in a cavity varies in exactly the same way with the volume of the cavity as the entropy of a gas. On the basis of this observation, he suggests that light also consists of particles which he calls light quanta. He clearly states that this is only a heuristic point of view and not a logically binding conclusion. Only in the last chapter (of eight) of the paper does Einstein finally get to the photoelectric effect by asking where quanta of light might have implications. He notes that it would naturally explain why the wavelength of light emitted in photo-luminescence is always larger than that of the absorbed light. This is because a single particle of light is absorbed and unless additional energy is supplied, the energy of the emitted particles of light in general is lower.

Einstein’s Revolutionary Light-Quantum Hypothesis, by Roger H. Stuewer

Einstein gave two arguments for light quanta, a negative and a positive one. His negative argument was the failure of the classical equipartition theorem, what Paul Ehrenfest later called the “ultraviolet catastrophe.” His positive argument proceeded in two stages. First, Einstein calculated the change in entropy when a volume V0 filled with blackbody radiation of total energy U in the Wien’s law (high-frequency) region of the spectrum was reduced to a subvolume V. Second, Einstein used Boltzmann’s statistical version of the entropy to calculate the probability of finding n independent, distinguishable gas molecules moving in a volume V0 at a given instant of time in a subvolume V. He found that these two results were formally identical, providing that U = n(Rβ/N)ν, where R is the ideal gas constant, β is the constant in the exponent in Wien’s law, N is Avogadro’s number, and ν is the frequency of the radiation. Einstein concluded: “Monochromatic radiation of low density (within the range of validity of Wien’s radiation formula) behaves thermodynamically as if it consisted of mutually independent energy quanta of magnitude Rβν/N.”
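Spelled out in the notation of the quoted equations above, the two stages amount to a pair of formally identical expressions (a minimal reconstruction, using only quantities already defined):

S − S0 = (U/βν) ln (V/V0)   [Wien-regime radiation compressed from V0 to V]

S − S0 = n(R/N) ln (V/V0)   [n gas molecules, since the probability of finding all n in the subvolume is W = (V/V0)^n]

Equating the two coefficients gives U = n(Rβ/N)ν: the radiation behaves as if it consisted of n independent quanta, each carrying energy Rβν/N.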

Einstein cited three experimental supports for his light-quantum hypothesis, the most famous one being the photoelectric effect, which was discovered by Heinrich Hertz at the end of 1886 and explored in detail experimentally by Philipp Lenard in 1902. Einstein, then working in the Patent Office in Bern, Switzerland, wrote down his famous equation of the photoelectric effect: Πe = (R/N)βν – P, where Π is the potential required to stop electrons (charge e) from being emitted from a photosensitive surface after their energy had been reduced by its work function P. It would take a decade to confirm this equation experimentally. Einstein also noted, however, that if the incident light quantum did not transfer all of its energy to the electron, then the above equation would become an inequality: Πe < (R/N)βν – P.
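
In modern notation the correspondence is direct (a side note, not part of Stuewer’s text): R/N is Boltzmann’s constant k, and Wien’s β is h/k, so (R/N)β = h and the equation reads

eΠ = hν – P,

the stopping energy equals the photon energy minus the work function. Millikan’s experiments, completed in 1916, finally confirmed the linear relation and yielded a precise value of h.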

We see, in sum, that Einstein’s arguments for light quanta were based upon Boltzmann’s statistical interpretation of the entropy. He did not propose his light-quantum hypothesis “to explain the photoelectric effect,” as physicists today are fond of saying. As noted above, the photoelectric effect was only one of three experimental supports that Einstein cited for his light-quantum hypothesis, so to call his paper his “photoelectric-effect paper” is completely false historically and utterly trivializes his achievement. In January 1909 Einstein went further by analyzing the energy and momentum fluctuations in black-body radiation.  He now assumed the validity of Planck’s law and showed that the expressions for the mean-square energy and momentum fluctuations split naturally into a sum of two terms, a wave term that dominated in the Rayleigh-Jeans (low frequency) region of the spectrum and a particle term that dominated in the Wien’s law (high frequency) region.

This constituted Einstein’s introduction of the wave-particle duality into physics.  Einstein presented these ideas again that September in a talk he gave at a meeting of the Gesellschaft Deutscher Naturforscher und Ärzte in Salzburg, Austria.  During the discussion, Max Planck took the acceptance of Einstein’s light quanta to imply the rejection of Maxwell’s electromagnetic waves which, he said, “seems to me to be a step which in my opinion is not yet necessary.”  Johannes Stark was the only physicist at the meeting who supported Einstein’s light-quantum hypothesis.  In general, by around 1913 most physicists rejected Einstein’s light-quantum hypothesis, and they had good reasons for doing so.

First, they believed that Maxwell’s electromagnetic theory had to be universally valid to account for interference and diffraction phenomena. Second, Einstein’s statistical arguments for light quanta were unfamiliar to most physicists and were difficult to grasp. Third, between 1910 and 1913 three prominent physicists, J.J. Thomson, Arnold Sommerfeld, and O.W. Richardson, showed that Einstein’s equation of the photoelectric effect could be derived on classical, non-Einsteinian grounds, thereby obviating the need to accept Einstein’s light-quantum hypothesis as an interpretation of it.  Fourth, in 1912 Max Laue, Walter Friedrich, and Paul Knipping showed that X rays can be diffracted by a crystal, which all physicists took to be clear proof that they were electromagnetic waves of short wavelength. Finally, in 1913 Niels Bohr insisted that when an electron underwent a transition in a hydrogen atom, an electromagnetic wave, not a light quantum, was emitted.

 

Everyday entropy: famine


The number of famished people in this Washington Post article is shocking even if it isn’t news or even entirely true.  The information it contains is hard to find and harder to verify, but the baseline – that the number of people facing severe hunger is growing even though the hunger index is falling – parallels the metrics on deforestation, where we are not winning so much as losing less rapidly. If each hungry person matters as an individual human being, and not as an abstract quantum of famine attributable to the mean field of hunger, then the total number matters much more than the percentage in the index. Moreover, we do not count the refugees flooding Europe in percentages, but in whole numbers of individuals.
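
The divergence between a falling index and a rising head count is worth making explicit. The toy calculation below uses invented numbers, not the Post’s data; it only shows how a hunger rate can decline every year while the number of hungry people grows, as long as population grows faster than the rate falls.

```python
# Invented illustration: a hunger rate falling 1% per year (relative)
# against a population growing 2% per year.
population = 5.0e9   # people
rate = 0.20          # fraction facing severe hunger
for year in range(5):
    hungry = population * rate
    print(f"year {year}: rate {rate:.1%}, hungry {hungry / 1e9:.2f} billion")
    population *= 1.02
    rate *= 0.99
```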

“Our world produces enough food to feed all its inhabitants. When one region is suffering severe hunger, global humanitarian institutions, though often cash-strapped, are theoretically capable of transporting food and averting catastrophe.

But this year, South Sudan slipped into famine, and Nigeria, Somalia and Yemen are each on the verge of their own. Famine now threatens 20 million people — more than at any time since World War II. As defined by the United Nations, famine occurs when a region’s daily hunger-related death rate exceeds 2 per 10,000 people.

The persistence of such severe hunger, even in inhospitable climates, would be almost unthinkable without war.

Each of these four countries is in a protracted conflict. While humanitarian assistance can save lives in the immediate term, none of the food crises can be solved in the long term without a semblance of peace. The threat of violence can limit or prohibit aid workers’ access to affected regions, and in some cases, starvation may be a deliberate war tactic.

Entire generations are at risk of lasting damage stemming from the vicious cycle of greed, hate, hunger and violence that produces these famines. Children are always the most affected, as even those who survive may be mentally and physically stunted for life. And while this article focuses on the four countries most immediately at risk, ongoing conflicts in Congo, the Central African Republic, Libya, Iraq, Syria and Afghanistan have left millions hungry in those places, too.”

But even for those not in danger of starvation, access to food may be complicated by economic factors in countries far away.

“Oxfam believes that food aid can be essential to humanitarian response. However, food aid cannot be a substitute for sustainable development, which is the best way to reduce hunger for the more than 850 million people who are still suffering from chronic malnutrition. For many development and humanitarian needs, food aid is not an appropriate or efficient tool. In particular, in-kind food aid often fails to improve access to food due to delays in delivery and monetization, and mismatches between recipient needs and the commodities donated.”

The danger of losing less quickly is that you can convince yourself that eventually you will start winning, and that with that momentum victory will be inevitable.  This was the Battle of the Bulge for Germany and Iwo Jima for Japan.  It is the tennis player who thinks that, having lost the first set 6-0 and the second set 6-2, the tide is turning and victory will come in the tenth set.  But the match will be over after the next set, lost 6-3.  We have the same delusion in carbon emissions, where a levelling off of emissions is taken as success, when the reality is that only a staggering reduction would suffice.  When, with maximum effort, the best you can manage is a reduction in losses, you know that maximum entropy is in the offing.

The bleaching of Australia’s Great Barrier Reef is following an eerily familiar script.  From the Guardian:

Some reef scientists are now becoming despondent. Water-quality expert Jon Brodie told the Guardian the reef was now in a “terminal stage”. Brodie has devoted much of his life to improving water quality on the reef, one of a suite of measures used to stop bleaching.  The ARC Centre of Excellence for Coral Reef Studies conducted an aerial and underwater survey of the reef which concluded that two-thirds of it has been hit by mass coral bleaching for the second time in 12 months.  He said measures to improve water quality, which were a central tenet of the Australian government’s rescue effort, were failing.

“We’ve given up. It’s been my life managing water quality, we’ve failed,” Brodie said. “Even though we’ve spent a lot of money, we’ve had no success.”  Brodie used strong language to describe the threats to the reef in 2017. He said the compounding effect of back-to-back bleaching, Cyclone Debbie, and run-off from nearby catchments should not be underestimated.  “Last year was bad enough, this year is a disaster year,” Brodie said. “The federal government is doing nothing really, and the current programs, the water quality management is having very limited success. It’s unsuccessful.”