Friday, December 2, 2016

Friday Fotos: Fr8s

GASP.jpg

BM 33 03.jpg

IMGP7862rd.jpg

IMGP9971rd

IMGP0024rd

An Executive Guide to the Computer Age


William Benzon, "An Executive Guide to the Computer Age." Raymond T. Yeh and Paul B. Schneck, Co-Directors, Computer Science: Key to a Space Program Renaissance: Final Report of the 1981 NASA/ASEE Summer Study on the Use of Computer Science and Technology in NASA. University of Maryland, Computer Science Technical Report Series, No. 1168, Vol. II: F-1 - F-14, January 1982.

Abstract: Computing technologies have the potential to radically transform the way we live. Such a transformation is not inevitable, nor is it necessarily good. The purpose of this paper is to place this possibility into its proper historical perspective and to consider, in a general way, how one plans for it. The first section considers the place of the "Information Age" in history, suggesting that it is the fourth major transformation in human cultural evolution. The next section develops a five-dimensional metaphor outlining certain basic factors that may be considered in a strategic plan for the use of computing technology. The final section discusses a specific aspect of that planning – the relationship between computing and productivity – and suggests that the transforming power of computing technology lies in the possibility of dramatically increasing productivity as intelligent computing becomes routine and reliable.

Wednesday, November 30, 2016

For Trump's supporters, corruption is part of the deal

But Jan-Werner Müller, a Princeton political scientist who recently published an excellent little book about authoritarian populist movements, finds that Trump supporters’ indifference to Trump’s corrupt leanings is actually rather typical. Even when clear evidence of corruption emerges once an authoritarian populist regime is in place, the regime’s key supporters are generally unimpressed.

“The perception among supporters of populists is that corruption and cronyism are not genuine problems as long as they look like measures pursued for the sake of a moral, hardworking ‘us’ and not for the immoral or even foreign ‘them,’” he writes, “hence it is a pious hope for liberals to think that all they have to do is expose corruption to discredit populists.”

George Mason University’s Justin Gest is the author of a recent study of white working-class politics in the United States and United Kingdom, and one of his major themes is that there is a pervasive cynicism about politics and government among the people he interviews.

“Today’s working class, Rust Belt voters are disenchanted by what they perceive to be a political and economic culture of exploitative greed and gridlock,” he writes, “and are waiting for someone to adopt their cause.”

Per Müller, their enthusiasm for Trump doesn’t necessarily reflect a misperception that he is honest or that he will eschew greed and corruption. Rather, their view is that he is on their side and that the protestations of his opponents merely reflect the self-interested defensiveness of the establishment. Highlighting themes of racial and ethnic conflict as central to American politics further feeds this dynamic. Trump may be a sonofabitch, the thinking goes, but at least he’s our sonofabitch.
Ignore the clown, focus on policy:
A November 22 Quinnipiac poll revealed both the risks and the opportunities currently facing Democrats. It showed that attacks on Trump’s character have set in, and most people agree that Trump is not honest and not levelheaded. But it also showed that a majority believe he will create jobs, that he cares about average Americans, and that he will bring change in the right direction. Yet at the same time, Quinnipiac also finds that most voters favor legal abortion, oppose tax cuts for the wealthy, oppose deregulation of business, and oppose weakening gun control regulation.

Which is to say that the most normal, blandly partisan parts of Trump’s agenda are also among the least popular. And yet Trump’s support for them is what immunizes him from Republican criticism and oversight over the abnormal stuff. Defending the basic norms of American constitutional government is important, but doing it as a partisan agenda won’t work — it turns off Trump’s core supporters and signals to wavering ones that his opponents are focused on abstractions rather than daily life. As long as Trump is enjoying the lockstep support of congressional Republicans, his opponents need to find ways to turn attention away from the Trump Show and focus it on his basic policy agenda and the ways in which it touches millions of people.

Tuesday, November 29, 2016

Documental "Lucumi, el Rumbero de Cuba" (Rumbero of Cuba)

This is an excellent little film (26 minutes). There's some delightful dancing and drumming in the last third. Note that the white dress is ceremonial.



From the description at YouTube:
Lucumi is ten and lives in Havana's black district. Brought up to the beat of drums, he dreams of becoming a great rumbero. With other kids on his block he improvises rumbas on old cans and pots and pans. One Saturday the best of Havana's musicians decide to get together at the "Solar California" to honor the memory of Chano Pozo, otherwise known as "the drum of Cuba". With the rumba beat, Lucumi sings, dances, plays and talks about his life, as if better to express the hardships he's already endured and to have his message heard. On this Saturday he joins up with the great rumberos and wakes up the old spirits of the tumbadora.

* * * * *

Tony Gatlif brings us epic scenes as young rumbero Michael Herrera Duarte (Lucumi) stars in this film with Cuban legends Tata Guines & Pancho Quinto. Beautiful cinematography coupled with great drumming and dancing, you'll love watching this one!

Self-portrait in green & shadow

20161015-P1120477

Quick takes: detect animate vs. inanimate in 250 msec

Back in the 1970s & 1980s David Hays and I hypothesized the existence of perceptual mechanisms that would support a quick determination of whether or not something was alive. We figured such perception would have survival value as knowing whether you're facing an animate being or not could mean life or death in the wild. Well, now we know:

In PsyPost:
UC Berkeley scientists have discovered a visual mechanism they call “ensemble lifelikeness perception,” which determines how we perceive groups of objects and people in real and virtual or artificial worlds.

“This unique visual mechanism allows us to perceive what’s really alive and what’s simulated in just 250 milliseconds,” said study lead author Allison Yamanashi Leib, a postdoctoral scholar in psychology at UC Berkeley. “It also guides us to determine the overall level of activity in a scene.”

Vision scientists have long assumed that humans need to carefully consider multiple details before they can judge if a person or object is lifelike.

“But our study shows that participants made animacy decisions without conscious deliberation, and that they agreed on what was lifelike and what was not,” said study senior author David Whitney, a UC Berkeley psychology professor. “It is surprising that, even without talking about it or deliberating about it together, we immediately share in our impressions of lifelikeness.” [...]

Moreover, if we did not possess the ability to speedily determine lifelikeness, our world would be very confusing, with every person, animal or object we see appearing to be equally alive, Whitney said.
* * * * *

Fast ensemble representations for abstract visual impressions

Allison Yamanashi Leib, Anna Kosovicheva & David Whitney
Nature Communications 7, Article number: 13186 (2016) doi:10.1038/ncomms13186
Published online: 16 Nov. 2016

Abstract

Much of the richness of perception is conveyed by implicit, rather than image or feature-level, information. The perception of animacy or lifelikeness of objects, for example, cannot be predicted from image level properties alone. Instead, perceiving lifelikeness seems to be an inferential process and one might expect it to be cognitively demanding and serial rather than fast and automatic. If perceptual mechanisms exist to represent lifelikeness, then observers should be able to perceive this information quickly and reliably, and should be able to perceive the lifelikeness of crowds of objects. Here, we report that observers are highly sensitive to the lifelikeness of random objects and even groups of objects. Observers’ percepts of crowd lifelikeness are well predicted by independent observers’ lifelikeness judgements of the individual objects comprising that crowd. We demonstrate that visual impressions of abstract dimensions can be achieved with summary statistical representations, which underlie our rich perceptual experience.
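The abstract's core claim – that an observer's percept of a crowd's lifelikeness is well predicted by other observers' ratings of the individual objects – amounts to a summary-statistic model. Here is a minimal sketch of that idea; the objects, the 1–7 rating scale, and all numbers are my own invented stand-ins, not the study's stimuli or data:

```python
# Hypothetical sketch of the summary-statistic account: model the ensemble
# percept of a crowd's lifelikeness as the mean of independent observers'
# ratings of the individual objects. All values below are invented.

from statistics import mean

# Individual-object lifelikeness ratings (1 = clearly inanimate, 7 = clearly alive)
object_ratings = {"rock": 1.2, "robot_toy": 3.8, "dog": 6.9, "statue": 2.1}

def predicted_crowd_lifelikeness(crowd):
    """Ensemble percept modeled as the average of the members' ratings."""
    return mean(object_ratings[obj] for obj in crowd)

crowd = ["rock", "dog", "robot_toy"]
prediction = predicted_crowd_lifelikeness(crowd)  # a graded value between the extremes
```

The point of the sketch is the graded, continuous character of the prediction: the crowd percept falls between the most and least lifelike members rather than snapping to an animate/inanimate dichotomy.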

* * * * *

From the conclusion:

Our findings reveal that ensemble perception of lifelikeness is achieved extremely rapidly. While previous work has shown that observers categorize stimuli in a brief time period (for example, animal or non-animal34,35), our study shows that observers can perceive relative lifelikeness (that is, whether one stimulus is more life-like than another) on a similarly rapid timescale for groups as well. These results parallel the rapid time scale reported in previous ensemble coding experiments using stimuli with explicit physical dimensions24,26, highlighting the remarkable efficiency of ensemble representations that support abstract visual impressions.

Our findings suggest that lifelikeness is an explicitly coded perceptual dimension that is continuous as opposed to dichotomous. One prior study has investigated whether animacy is a strictly dichotomous representation, or whether animacy is represented as a continuum36. While this prior study focused on single repeated stimuli shown for longer exposure durations, our findings extend this question to groups of heterogeneous objects that were briefly presented. Our participants extracted a graded ensemble percept of group lifelikeness. Because of the rapid timescale, the judgements of lifelikeness in our experiment would not allow for cognitive reasoning or social processes. Consistent with this, explicit memory of the objects in the sets was not sufficient to account for the number of objects integrated into the ensemble percept. Our results suggest that graded representations of object and crowd lifelikeness emerge as a basic, shared visual percept, available during rudimentary and rapid visual analysis of scenes.

Animacy, as a general construct and topic of cognition research, is extremely complex. Numerous contextual, cognitive and social mechanisms come into play when determining whether an object exhibits animate qualities. Specifically, when making judgements about animacy, theory of mind37–39, contextual cues40,41 and cognitive strategies42 contribute significantly to animacy evaluations. These complexities help explain why there are relatively few agreed-upon operational definitions of animacy or lifelikeness.

In contrast to the ambiguity of the terms animacy or lifelikeness, our results show that the ensemble perception of lifelikeness in groups of static objects was surprisingly consistent across observers. When stimuli were presented for brief durations, observers reached a remarkable consensus on the average lifelikeness—even regarding objects that exhibit seemingly ambiguous qualities. This consistency suggests that a similar percept of lifelikeness is commonly available to observers who glance at a scene. Numerous cognitive and social mechanisms may come online later, and observers may refine their percepts of lifelikeness when given longer periods to evaluate items and context. However, in a first-glance impression of the environment, observers share a relatively unified, consistent percept of lifelikeness.

Change in national mood shows up in patterns of word usage observed in historical databases

In the wake of the election, it’s clear American society is fractured. Negative emotions are running amok, and countless words of anger and frustration have been spilled. If you were to analyze this news outlet for the ratio of positive emotional words to negative ones, would you find a dip linked to the events of the past few weeks?

It’s possible, suggests a study published last week in Proceedings of the National Academy of Sciences. Analyzing Google Books and The New York Times’s archives from the last 200 years, the researchers examined a curious phenomenon known as “positive linguistic bias,” which refers to people’s tendency to use more positive words than negative words. Though the bias is robust — and found consistently across cultures and languages — social scientists are at odds about what causes it.

In this study, the authors shed light on some possible new patterns behind the effect. Across two centuries’ of texts, they found that people’s preference for positive words varied with national mood, and declined during times of war and economic hardship.
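The measurement behind these findings is simple in outline: count positive and negative word tokens in time-stamped text and track the ratio. Below is a toy sketch of that computation; the word lists and the two "texts" are invented stand-ins, not the large sentiment lexicons or the Google Books / New York Times corpora the researchers actually used:

```python
# Hypothetical sketch: estimating linguistic positivity bias (LPB) for
# year-stamped texts. The lexicons and corpus here are toy stand-ins.

POSITIVE = {"good", "happy", "peace", "hope", "growth"}
NEGATIVE = {"bad", "sad", "war", "fear", "loss"}

def positivity_ratio(text):
    """Ratio of positive to negative word tokens (a ratio > 1 indicates LPB)."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return pos / neg if neg else float("inf")

corpus = {
    1990: "hope and growth bring good times despite some fear",
    2008: "war and loss fear sad bad times with little hope",
}

# Tracking the ratio across years is what lets the study link LPB to
# wars, economic hardship, and national mood.
lpb_by_year = {year: positivity_ratio(text) for year, text in corpus.items()}
```

In the toy corpus the 1990 "text" comes out positivity-biased and the 2008 one does not, which is the shape of the fluctuation the study reports at scale.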
* * * * *

Linguistic positivity in historical texts reflects dynamic environmental and psychological factors

      Significance

      For nearly 50 y social scientists have observed that across cultures and languages people use more positive words than negative words, a phenomenon referred to as “linguistic positivity bias” (LPB). Although scientists have proposed multiple explanations for this phenomenon—explanations that hinge on mechanisms ranging from cognitive biases to environmental factors—no consensus on the origins of LPB has been reached. In this research, we derive and test, via natural language processing and data aggregation, divergent predictions from dominant explanations of LPB by examining it across time. We find that LPB varies across time and therefore cannot be explained simply as the product of cognitive biases and, further, that these variations correspond to fluctuations in objective circumstances and subjective mood.

      Abstract

      People use more positive words than negative words. Referred to as “linguistic positivity bias” (LPB), this effect has been found across cultures and languages, prompting the conclusion that it is a panhuman tendency. However, although multiple competing explanations of LPB have been proposed, there is still no consensus on what mechanism(s) generate LPB or even on whether it is driven primarily by universal cognitive features or by environmental factors. In this work we propose that LPB has remained unresolved because previous research has neglected an essential dimension of language: time. In four studies conducted with two independent, time-stamped text corpora (Google books Ngrams and the New York Times), we found that LPB in American English has decreased during the last two centuries. We also observed dynamic fluctuations in LPB that were predicted by changes in objective environment, i.e., war and economic hardships, and by changes in national subjective happiness. In addition to providing evidence that LPB is a dynamic phenomenon, these results suggest that cognitive mechanisms alone cannot account for the observed dynamic fluctuations in LPB. At the least, LPB likely arises from multiple interacting mechanisms involving subjective, objective, and societal factors. In addition to having theoretical significance, our results demonstrate the value of newly available data sources in addressing long-standing scientific questions.

      PNAS November 21, 2016

      Monday, November 28, 2016

      Wires over Hoboken

      20161126-P1120662

      AI Panics (When will they learn?) – A post at Language Log

      The last month or so has seen renewed discussion of the benefits and dangers of artificial intelligence, sparked by Stephen Hawking's speech at the opening of the Leverhulme Centre for the Future of Intelligence at Cambridge University. In that context, it may be worthwhile to point again to the earliest explicit and credible AI warning that I know of, namely Norbert Wiener's 1950 book The Human Use of Human Beings [...]:
      [T]he machine plays no favorites between manual labor and white-collar labor. Thus the possible fields into which the new industrial revolution is likely to penetrate are very extensive, and include all labor performing judgments of a low level, in much the same way as the displaced labor of the earlier industrial revolution included every aspect of human power. […]

      The introduction of the new devices and the dates at which they are to be expected are, of course, largely economic matters, on which I am not an expert. Short of any violent political changes or another great war, I should give a rough estimate that it will take the new tools ten to twenty years to come into their own. […]
      Liberman goes on to offer an old sorta' prognostication of his own (more of a cautionary note) and quotes more of Wiener's book. His point in quoting Wiener, which he makes explicit in a reply to a comment by Victor Mair, is that Wiener's time scale was way off:
      Wiener seriously underestimated the difficulty of pattern recognition, of robotic control for complex mechanisms, and of integrating the two. Considerable progress has been made in those areas but there are still unsolved problems. He also underestimated the difficulty of speech recognition and text analysis.

      In my opinion, current prognosticators tend to similarly underestimate the difficulty of human-like communicative interaction. It's relatively easy to give the impression of solving the problem (Eliza, Siri) without really even trying to solve it.
      Thus Siri has no understanding of questions put to it or of the answers it provides, even if the answers are good ones. But there is powerful technology behind Siri, powerful in a way that could scarcely have been imagined in Wiener's time. 

      I've appended a comment I made to Liberman's post.

      * * * * *

      Back in the mid-1970s I was studying computational semantics with David Hays. Every now and then I would ask him, When do you think we'll be able to do X? where X ranged over various interesting things one might want of linguistic computing. He always refused to answer, asserting that these things are deeply unpredictable. Remember, he was in the first generation of researchers into machine translation and he'd been on the committee that wrote the ALPAC report. He had practical experience in such things.

      In 1975 he got invited to review the computational linguistics literature for the journal Computers and the Humanities. He asked me to draft the text (as I'd been reviewing the literature for the American Journal of Computational Linguistics). I did so and included a bit about an article about computational semantics I was publishing in MLN (Modern Language Notes), as it spoke directly to humanist concerns and included an analysis of a Shakespeare sonnet. We then floated, as a thought experiment, the idea of a computational system capable of reading a Shakespeare play, in some interesting, but unspecified, sense of the word 'reading.' We called it Prospero and set no date on when Prospero would be operational, but in my mind I figured we'd have it in 20 years or so.

      Well, the article appeared in 1976 ("Computational Linguistics and the Humanist"). Add 20 to that and we have 1996. Was anything like Prospero available then? No. Not only that, but the symbolic computing that was at the center of our review, and of Prospero, was being pushed into the background by statistical methods. It's now 2016, 40 years after that paper. We don't have anything like Prospero – though I believe Patrick Henry Winston is using the Macbeth story (but not Shakespeare's play) in an investigation of story comprehension – and I see no prospects for Prospero in the near future. And yet, by the practical standards of 1976, Siri is a marvel, as are Google's translation tech and self-driving vehicles. Etc.

      It's a brave new world that has such machines in it, and most of it is still unexplored.

      * * * * *

      I've been entertaining the idea that, in some ways, we're on the edge of the Marvelous Future. No, we're not flying around in jet packs; getting humans to low-earth orbit is not as routine as Kubrick depicted in 2001; the computational marvels of the Star Trek computer are still in the unforeseeable future, not to mention Cmdr Data; and environmental catastrophe seems to be closing in on us. But we're living in a very different world from that of 1950 and confront very different possibilities. Technology is at the center of it. Now we have to accommodate our thinking about society to fit the very different world before us. We need to think about universal basic income. Among other things.

      I just watched a conversation in which economist Glenn Loury (of Brown) cited Dani Rodrik to the effect that, given globalization, national sovereignty, and democracy, you can have any two of the three, but not all three.

      Saturday, November 26, 2016

      Protect the vulnerable: Identity politics is here to stay

      Michelle Goldberg in Slate; some definitions:
      Identity politics and political correctness aren’t the same thing, but they are interrelated. One situates political claims in a person’s racial and sexual status. The other tries to force a surface consensus on racial and sexual equality through taboos and speech codes.
      Guilt-mongering is counterproductive:
      The spasms of unchained bigotry we’ve seen post-election suggest that some Trump supporters were simply longing to howl NIGGER! KIKE! CUNT! FAGGOT! Among those I spoke to, however, some felt bullied for violating more arcane speech rules they neither assented to nor understood. Social media had forced them to submit to an alien set of norms; Trump liberated them. The late cultural critic Ellen Willis might have seen this coming. “Coercion and guilt-mongering—the symbiotic weapons of authoritarian culture—inevitably provoke resistance; when the left uses these tactics it merely encourages people to confuse their most oppressive impulses with their need to be themselves, offensively honest instead of hypocritically nice,” she wrote in a 1992 essay aptly titled “Identity Crisis.” “Perversely, racism and sexism become badges of freedom rather than stigmata of repression, while the roots of domination in people’s rage and misery remain untouched.”
      The political advantages of fascist culture:
      Trump offers his followers the fascist bargain that Walter Benjamin described in the epilogue to The Work of Art in the Age of Mechanical Reproduction. “Fascism attempts to organize the newly created proletarian masses without affecting the property structure which the masses strive to eliminate,” he wrote. “Fascism sees its salvation in giving these masses not their right, but instead a chance to express themselves.” Benjamin, a Marxist, treated this as an example of false consciousness. Perhaps, however, we should pay Trump voters the courtesy of assuming that at least some of them knew what they were doing when they opted for the politics of cultural revenge delivered by a billionaire in a gold-plated airplane. The question, then, is what those of us who are the objects of this revenge should do now.
      Going forward:
      Certainly, Democrats should champion the interests of working people. They should struggle to expand the social safety net and defend the labor movement against conservative attempts to destroy it. They should work to preserve the gains of the Affordable Care Act, even for those Trump supporters who just voted to gut their own health care. But there can be no going back on defending the tenuous gains of women and people of color, or foregrounding their demands for full equality. They are the base of the party, the people who gave Hillary Clinton a popular vote majority but will now be ruled by a hostile minority.

      Edward Hopper Visits Jersey City

      I checked my Flickr stats this morning and found that some people have been looking at some old photos I took of Jersey City near the Holland Tunnel. One of them looks a bit like a Hopper painting:

      BK-holland

      The Crown, Episode 7: Elizabeth Sticks Up for Herself

      I urge you, if you’ve got Netflix, to watch The Crown, which I’ve already written about in an earlier post. I’ve just watched episode 7, “Scientia Potentia Est”, and it’s stunning. Basically, the young queen goes into battle against the sexist jerks who hem her in with tradition.

      1.) She procures a tutor. She realizes that her education has been pitiful, all protocol and the constitution. Necessary, but hardly sufficient. She cannot imagine fulfilling her duty, which she construes as seeing that governance is properly executed by those with direct authority, without knowing more about the world.

      2.) The prime minister, Winston Churchill, is taken ill, and it is relatively serious. He misses two of his weekly meetings with her, and she becomes suspicious. She discovers the truth and gives him a proper dressing down. She obviously does not like being treated as an ignorant twit.

      3.) She must choose a private secretary. One man is "in line" for the job, as he's senior. But she prefers a younger man, more in tune with her view of the world. She chooses the younger man, but is overridden by an official concerned with propriety. He's also a condescending sexist jerk. She finds out about this and insists on having her way.

      This all may seem obvious and right, abstractly considered from a certain (American? – though obviously not necessarily so) point of view. But abstract thought is one thing. The point of the drama is to see and feel it acted out with some fullness of being. That’s different. And important. It’s why we have stories.

      Friday, November 25, 2016

      Not quite round, but organic

      20161031-_IGP7790

      Flexible Hubs and Behavioral Mode

      This sort of thing is on my mind at the moment, so I thought I'd bump it to the top of the queue, just to cement it in my mind (temporarily). Plus I've added abstracts from and links to the original research.

      * * * * *

      From Medical Xpress:
      Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex – parts of the brain most changed evolutionarily since our common ancestor with chimpanzees – contains "flexible hubs" that coordinate the brain's responses to novel cognitive challenges.

      Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds.
      "Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks – like a large Internet traffic router," suggests Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.
      This is consistent with the concept of behavioral mode that David Hays and I adopted and adapted from Warren McCulloch.

      This is in contrast to concepts of rigid modularity, where the brain is said to consist of quasi-autonomous behavioral modules, each dedicated to a specific perceptual, cognitive, or behavioral activity. These modules are conceived as being wired-in and universal across humans in a manner similar to, say, the skeletal system or the muscles. Barring pathology and injury, everyone's got the same set in the same arrangement. The notion of modes, and of behavioral hubs, allows for an open-ended arrangement of task specific configurations. The patterns of configuration are not wired-in, though many of the configured components would be.

      Note: McCulloch was specifically interested in the reticular activating system, which is in the core of the brain and brain stem and is phylogenetically old. The structures pinpointed by Dr. Cole are in the cerebral cortex, which is a much newer structure. Beyond citing McCulloch's model Hays and I had no specific suggestions about other neural mechanisms that might be involved in modal organizing, though we talked informally about the need for such mechanisms.

      * * * * *

      Zero-Shot Translation – "implicit bridging between language pairs never seen explicitly during training"


      Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation

      Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat ({melvinp,schuster,qvl,krikun,yonghui,zhifengc,nsthorat}@google.com)
      Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean

      Abstract
      We propose a simple, elegant solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no change in the model architecture from our base system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. The rest of the model, which includes encoder, decoder and attention, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model without any increase in parameters, which is significantly simpler than previous proposals for Multilingual NMT. Our method often improves the translation quality of all involved language pairs, even while keeping the total number of model parameters constant. On the WMT'14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT'14 and WMT'15 benchmarks respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and show some interesting examples when mixing languages.

      arXiv:1611.04558 [cs.CL]
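The mechanism in the abstract is easy to picture in code. Below is a minimal sketch of the token-prepending step the authors describe; the `<2xx>` token format follows the paper, but the helper function and the sentence pairs are my own invented illustrations, not Google's actual pipeline:

```python
# Hypothetical sketch of the paper's preprocessing step: an artificial
# target-language token is prepended to each source sentence, so a single
# shared model learns which language to emit. Sentence pairs are invented.

def add_target_token(source_sentence, target_lang):
    """Prefix the source with a token naming the desired target language."""
    return f"<2{target_lang}> {source_sentence}"

# Training pairs cover, say, English->Spanish and Portuguese->English...
train = [
    (add_target_token("How are you?", "es"), "¿Cómo estás?"),
    (add_target_token("Como você está?", "en"), "How are you?"),
]

# ...and zero-shot translation is just requesting a direction never seen
# in training (here Portuguese->Spanish), relying on implicit bridging:
zero_shot_input = add_target_token("Como você está?", "es")
```

Nothing else in the model changes: encoder, decoder, and attention are shared across all languages, which is what makes the implicit bridging possible.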

      * * * * *


      Thinking off the top of my head, if I were going to summon this article for use in discussing literary criticism, I could see it supporting both structuralist/deconstructive thought and Darwinian lit crit. The former emphasizes differential relations between words as the source of meaning. And that's all these programs have to work from, differential relations as inferred from distributional patterns. That "secret internal language" is a pattern 'distilled' from patterns of differential relationships that are congruent across languages. And that congruence is what a Darwinian would expect, because it reflects the core semantic proclivities of the adapted mind. Now, making this argument in detail, that's a different matter. I'll pass on that, at least for now.