Highlights & Marginalia
There’s another military phrase: “in harm’s way.” That’s what everybody assumes going to war means – putting yourself in danger. But the truth is that for most soldiers war is no more inherently dangerous than any other line of work. Modern warfare has grown so complicated and requires such immense movements of men and materiel over so vast an expanse of territory that an ever-increasing proportion of every army is given over to supply, tactical support, and logistics. Only about one in five of the soldiers who took part in World War II was in a combat unit (by the time of Vietnam the ratio in the American armed forces was down to around one in seven). The rest were construction workers, accountants, drivers, technicians, cooks, file clerks, repairmen, warehouse managers – the war was essentially a self-contained economic system that swelled up out of nothing and covered the globe.
For most soldiers the dominant memory they had of the war was of that vast structure arching up unimaginably high overhead. It’s no coincidence that two of the most widely read and memorable American novels of the war, Joseph Heller’s Catch-22 and Thomas Pynchon’s Gravity’s Rainbow, are almost wholly about the cosmic scale of the American military’s corporate bureaucracy and mention Hitler and the Nazis only in passing. Actual combat could seem like almost an incidental side product of the immense project of military industrialization. A battle for most soldiers was something that happened up the road, or on the fogbound islands edging the horizon, or in the silhouettes of remote hilltops lit up at night by silent flickering, which they mistook at first for summer lightning. And when reporters traveled through the vast territories under military occupation looking for some evidence of real fighting, what they were more likely to find instead was a scene like what Martha Gellhorn, covering the war for Collier’s, discovered in the depths of the Italian countryside: “The road signs were fantastic….The routes themselves, renamed for this operation, were marked with the symbols of their names, a painted animal or a painted object. There were the code numbers of every outfit, road warnings – bridge blown, crater mines, bad bends – indications of first-aid posts, gasoline dumps, repair stations, prisoner-of-war cages, and finally a marvelous Polish sign urging the troops to notice that this was a malarial area: this sign was a large green death’s-head with a mosquito sitting on it.”
That was the war: omnipresent, weedlike tendrils of contingency and code spreading over a landscape where the battle had long since passed. [Sandlin]
It’s as if we don’t want to admit that we can’t do this alone, or that success may require dealing with the soft parts of ourselves, the uncomfortable, sticky parts we’d rather pretend weren’t there. We have trouble seeing the ramifications of our personal lives on our professional lives, and trouble seeing that the best way to navigate the public world is to master and find contentment in the private one.
The myth is of the lone creative entrepreneur battling the world without an ally in sight. A defiant combination of Atlas and Sisyphus and David, wrestling a Goliath-sized mass of doubters and demons. In reality, I’ve found that nearly every person I admire—every person I’ve met who strikes me as being someone who I would like to one day be like—lives a quiet life at home with a person who they’ve teamed up with…for life. The reason this one person strikes us as special, I find, is because they’re really two people. [Holiday]
Conceptualization, in this sense, is the trademark of the ontological turn just as, say, ‘explanation’ epitomizes positivist approaches and ‘interpretation’ typifies hermeneutic ones. Indeed, much of the theoretical traction of the ontological turn comes down to the alternative that it presents to this rather hackneyed choice in the social sciences, between explanation and interpretation. For anthropologists to imagine their task as that of explaining why people do what they do, they must first suppose that they understand what these people are doing. The ontological turn often involves showing that such ‘why’ questions (explanation) are founded on a misconception of ‘what’ (conceptualization). E.g. the question of why certain people might ‘believe’ in nations, say, or ghosts, may be raised precisely because questions as to what a nation or a ghost (and indeed what ‘belief’ and ‘doubt’) might be have not been properly explored. And similarly for hermeneutics: conceived as cultural translation, to imagine that one’s job as an anthropologist is to ‘interpret’ people’s discourse or actions one must assume that one is in principle equipped with concepts that may facilitate such a process. To this the ontological turn counterposes the possibility that the reason why the things people say or do might require interpretation at all may be that they go beyond what the anthropologist is able to understand from within his conceptual repertoire. [Holbraad 16]
When a shaman shows you a magic arrow extracted from a sick man, a medium gets possessed by a god, a sorcerer laboriously constructs a voodoo doll, we only see one thing: Society (belief, power, fetishism). In other words, we only see ourselves. As Davi Kopenawa, the Yanomami shaman, scathingly observed: ‘You Whites sleep a lot, but you dream only of yourselves’ (Kopenawa and Albert 2010: 412). (Analysing that remark, a tour de force of reverse anthropology, would take us too far.) I propose to illustrate that difficulty of our ethnoanthropology with an example from contemporary literature. There is no need to go back to the days when Evans-Pritchard found it necessary to warn his readers that ‘Witches, as the Azande conceive them, cannot exist’ (and then taking as his responsibility that of explaining to us why the Azande found it necessary to conceive things that cannot exist as we conceive them as existing). [Castro 12]
Are there more everyday tactics for cultivating an ability to discern the vitality of matter? One might be to allow oneself, as did Charles Darwin, to anthropomorphize, to relax into resemblances discerned across ontological divides: you (mis)take the wind outside at night for your father’s wheezy breathing in the next room; you get up too fast and see stars; a plastic topographical map reminds you of the veins on the back of your hand; the rhythm of the cicadas reminds you of the wailing of an infant; the falling stone seems to express a conative desire to persevere. If a green materialism requires of us a more refined sensitivity to the outside-that-is-inside-too, then maybe a bit of anthropomorphizing will prove valuable. Maybe it is worth running the risks associated with anthropomorphizing (superstition, the divinization of nature, romanticism) because it, oddly enough, works against anthropocentrism: a chord is struck between person and thing, and I am no longer above or outside a nonhuman “environment.” Too often the philosophical rejection of anthropomorphism is bound up with a hubristic demand that only humans and God can bear any traces of creative agency. To qualify and attenuate this desire is to make it possible to discern a kind of life irreducible to the activities of humans or gods. This material vitality is me, it predates me, it exceeds me, it postdates me. [Bennett 120]
To what do historicist critics refer when they refer to literary texts? Surely to more than the author’s manuscript or to the limited run of a single edition, frozen in time as it leaves the bindery. But if we expand our sights from the moment of production to take in the history of the distribution and reception of literary texts, we run into a host of problems that threaten the coherence of our critical narratives: temporal lags between production and reception, the mediation of other texts, individuals, and institutions, the disjunction between the ideal text and the material book, between the generality of genres, rhetorics, and forms of address, and specific communities of readers. Can a thick description of a historical “moment” be defined by a text that is untimely, one that is reissued or reread, recited, reprinted, illocal (imported from elsewhere), or revived from the dead by association with other texts — that is, one that does not originate with the culture in question, but is repeated? Or does the tendency of literary culture to circle back on itself threaten the unfolding of literary history? [McGill 20]
I think the world is ready to consider Dril as literature. He, perhaps more than anyone else, has made us ready. It could even be argued that giving an internet-based writer the prize would, in the year 2019, make an important political point. Once radically open, the internet is now becoming increasingly closed — both as a result of how it is being regulated, and the disproportionate influence of companies such as Amazon and Facebook. Given the importance of the internet to human consciousness and life, the political decisions we make around the internet will likely determine the future of the human species. A powerful reminder of this was given to us early last week, when MySpace announced they had lost 12 years’ worth of music uploads, approximately 50 million tracks by 14 million artists, in a server migration; a huge loss of material which may quite simply not exist elsewhere: who knows what future generations could have discovered in that archive? The writer Kate Wagner has compared such losses to the destruction of the Great Library at Alexandria, in which much of the knowledge of the Ancient world was lost.
There are no laws (as far as I know) protecting the heritage of the internet, but these losses are every bit as barbaric as the destruction of the Buddhas of Bamyan by the Taliban, or of Assyrian heritage sites by ISIS. The internet, more than anywhere else right now, is where culture takes place. If I were the Swedish Academy I would show I understand this, by simply awarding one of the two 2019 Nobel Prizes in Literature to Dril. [Whyman]
These Darwinian approaches, which borrow their models from population genetics, grant only a limited role to psychology. Yet the micro-mechanisms that bring about the propagation of ideas are mostly psychological and, more specifically, cognitive mechanisms. [Sperber 3]
Institutions are neither public nor mental representations. How, then, could an epidemiology of representations help provide a materialist account of institutions? [Sperber 29]
An epidemiology of representations would establish a relationship of mutual relevance between the cognitive and the social sciences, similar to that between pathology and epidemiology. This relationship would in no way be one of reduction of the social to the psychological. Social-cultural phenomena are, on this approach, ecological patterns of psychological phenomena. Sociological facts are defined in terms of psychological facts, but do not reduce to them. [Sperber 31]
The new conceptual scheme bears, as will be shown, a systematic relation to the standard one, and this allows us to draw extensively on past achievements in the social sciences. The goal of the naturalistic programme turns out to be not a Grand theory — a physics of the social world, as Auguste Comte imagined — but a complex of interlocking, middle-range models. [Sperber 6]
If materialism is right, then everything is material: law, religion and art no less than forces or relationships of production. From a truly materialist point of view, effects cannot be less material than their causes. [Sperber 11]
Let us generalize: in order to represent the content of a representation, we use another representation with a similar content. We don’t describe the content of a representation; we paraphrase it, translate it, summarize it, expand on it - in a nutshell, we interpret it. An interpretation is a representation of a representation by virtue of a similarity of content. In this sense, a public representation, the content of which resembles that of the mental representation it serves to communicate, is an interpretation of that mental representation. Conversely, the mental representation resulting from the comprehension of a public representation is an interpretation of it. The process of communication can be factored into two processes of interpretation: one from the mental to the public, the other from the public to the mental. [Sperber 34]
A meaning is not a cause; and the attribution of a meaning is not a causal explanation. [Sperber 43]
We call ‘cultural’, I suggested, those representations that are widely and durably distributed in a social group. If so, then there is no boundary, no threshold, between cultural representations and individual ones. Representations are more or less widely and durably distributed, and hence more or less cultural. [Sperber 49]
What kinds of things are social-cultural things? [Sperber 9]
Why are these representations more successful than others in a given human population? [Sperber 49]
But terms can have meaning without having denotation – without, that is, referring to anything. Thus the resemblance between ‘goblin’, ‘leprechaun’, ‘imp’, and ‘gnome’ is a resemblance among meanings, among ideas, and not among things. If an anthropologist were to use the word ‘goblin’ to render a notion of the people she studied, she would not commit herself to the existence of goblin-like creatures, but only to the existence of ‘goblin’-like representations. Interpretively used terms do not commit their user to the existence of the things purportedly denoted by these terms. [Sperber 18]
Are anthropologists committed to the existence of irreducibly cultural things? [Sperber 12]
“Is it butchery, or sacrifice?” (24) — so we also need to account for the involved representations. “Whatever one’s theoretical or methodological framework, representations play an essential role in defining cultural phenomena. But what are representations made of? [Sperber 24]
Most cultural institutions do not affect the chances of survival of the groups involved to an extent that would explain their persistence. In other words, for most institutions a description of their functional powers is not explanatory. [Sperber 48]
I am not aware of any naturalistic programme effectively outlining a causal-mechanistic approach to social phenomena in general. [Sperber 4]
Yet, any bearer of meaning, be it a text, a gesture, or a ritual, does not bear meaning in itself, but only for someone. For whom, then, does the institution have its alleged meaning? Surely, it must be for the participating people […] There is every reason to suppose, however, that the participants take a view of their institution that is richer, more varied, and more linked to local considerations than a transcultural interpretation could ever hope to express. At best, therefore, these general interpretations are a kind of decontextualized condensation of very diverse local ideas: a gain in generality means a loss in faithfulness. [Sperber 43]
Self-contradictory materialism is a by-product of ill-digested Marxism. [Sperber 11]
One may therefore predict that ineffective practices aimed at preventing unavoidable misfortunes, such as the death of very old people, are subject to a very rapid cognitive erosion. Such practices should be much rarer in human cultures than ineffective practices aimed at preventing misfortunes with an intermediate incidence, such as perinatal mortality in non-medicalized societies. One may also predict that when adherence to an inefficacious practice falls below a specific threshold (which is itself a function of the incidence of the type of misfortune involved), its inefficacy becomes manifest, and the practice either disappears or is radically transformed. [Sperber 52]
Scholars in literature departments often regard statistics and economics as hubristic endeavours. [So 669]
The advantage of statistical modeling is that it does not present cut-and-dried results that one accepts or rejects. Built into the modeling process is a self-reflexive account of what the model has sought to measure and the limitations of its ability to produce such a measurement. Again, as Box reminds us, “all models are wrong.” What’s important is not to insist on how the model is right or nearly right but rather to understand how it is wrong. This approach invites the reader to think through with the analyst the mediating figure of the model. Rather than see the model as a potential antagonist, an entity that has brought forth intractable empirical truths that one must accept or reject, the reader is encouraged to reason with the model and its maker to better grasp the data. [So 671]
The purpose of my response is not to expose error and demand correction in his work; rather, it is to argue that error is a constitutive part of science and that quantitative literary criticism would benefit from viewing error as less something to be tolerated or avoided and more something to be integrated formally into our research (Wimsatt). Accepting that all models are wrong might prove liberating. [So 672]
At present, however, the two communities of information retrieval and author identification hardly intersect, whereas integration of technologies from both fields is necessary to scale author identification to the web. [Potthast et al. 394]
As can be seen, some approaches are very effective on long texts (PAN12) but fail on short (C10) or very short texts (PAN11) [4,7]. Moreover, some approaches are considerably affected by imbalanced datasets (PAN11). It is interesting that in two out of the three corpora used (PAN12 and PAN11) at least one of the approaches competes with the best reported results to date. In general, the compression-based models seem to be more stable across corpora, probably because they have few or no parameters to be fine-tuned [5,23,29,41]. The best macro-average accuracies on these corpora are obtained by Teahan and Harper and Stamatatos. Both follow the profile-based paradigm, which seems to be more robust in case of limited text-length or limited number of texts per author. Moreover, they use character features, which seem to be the most effective ones for this task. [Potthast et al. 404]
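The stability of compression-based attribution is easy to see in a sketch. This is not the PPM compressor of Teahan and Harper; it is a minimal stdlib stand-in using zlib’s deflate, and the corpora names and texts below are invented for illustration. The idea is the same: the overhead of compressing an unknown text after an author’s concatenated profile approximates how well that author’s statistics predict the text, with no parameters to tune.

```python
import zlib

def csize(text):
    """Deflate-compressed size of a string, in bytes."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def attribute(unknown, corpora):
    """Assign `unknown` to the author whose concatenated corpus
    ("profile") compresses it most cheaply: the overhead
    C(profile + x) - C(profile) stands in for the cross-entropy
    of x under that author's statistics."""
    def overhead(profile):
        return csize(profile + unknown) - csize(profile)
    return min(corpora, key=lambda author: overhead(corpora[author]))
```

Because the only “model” is the compressor itself, there is nothing to fine-tune, which is one plausible reading of why such methods travel well across corpora.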
With respect to the classification methods, there are two main paradigms: the profile-based approaches are author-centric and attempt to capture the cumulative style of the author by concatenating all available samples by that author and then extracting a single representation vector. Usually, generative models (e.g., naive Bayes) are used in profile-based approaches. On the other hand, instance-based methods are document-centric and attempt to capture the style of each text sample separately. In case only a single long document exists for one candidate author (e.g., a book), it is split into samples and each sample is represented separately. Usually, discriminative models (e.g., SVM) are exploited in instance-based approaches. [Potthast et al. 397]
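The two paradigms can be contrasted in a few lines. This is a minimal sketch, assuming character trigram counts and cosine similarity as the representation; the sample texts are invented, and the nearest-neighbour rule below is a simple stand-in for the discriminative classifiers (e.g., SVMs) actually used in instance-based work.

```python
from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    """Character n-gram counts, a common authorship feature."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def profile_based(unknown, samples):
    # Author-centric: concatenate each author's samples into one
    # cumulative profile, then compare the unknown to each profile.
    profiles = {a: ngrams(" ".join(ts)) for a, ts in samples.items()}
    u = ngrams(unknown)
    return max(profiles, key=lambda a: cosine(u, profiles[a]))

def instance_based(unknown, samples):
    # Document-centric: represent every sample separately and let the
    # single best-matching document decide (1-NN as a classifier stand-in).
    u = ngrams(unknown)
    pairs = [(a, t) for a, ts in samples.items() for t in ts]
    return max(pairs, key=lambda at: cosine(u, ngrams(at[1])))[0]
```

With ample text per author the two tend to agree; the passage’s point is that the profile, by pooling scarce samples, degrades more gracefully when texts are short or few.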
It is interesting that poets who are known for being more stylistically experimental often seem to have the least varying of careers. Their difference from the world closes them off from their own self-differentiation. [Piper 163]
In Semiotics and the Philosophy of Language, Umberto Eco offers a thoughtful critique of the history of the Porphyrian language tree—the tradition in which we think of the meaning of words hierarchically as a series of descending branches. This is still a strong component of the way linguists continue to think about language (and one, for example, underlying the hypernymic structure of WordNet used in the first chapter). For Eco, such hierarchies nevertheless can never truly account for language’s diversity. As he writes, “There is no bidimensional tree able to represent the global semantic competence of a given culture.” Instead, Eco reminds us, we are left only with manifestations, contingent constructions that could always be otherwise. As he writes, paraphrasing Thomas Aquinas, “essential differences cannot be known . . . We can only know them through the effects (accidents) that they produce” [Piper 69]
The critical systems given birth to by Kant, the emphasis on observational truth emphasized by Locke, the premium on originality within a new commercial writing system, and finally the value of synthetic forms of knowledge over the exemplary all gradually replaced the topic or loci as the center of modern thought. In Goethe’s The Man of Fifty [Piper 1816]
For openness to have been overcome by closure, material would have to have exhausted openness, but a world of material alone would be a world without content, without substance, a world of form akin to a world of mathematics. Closure provides the discreteness, the things of the world, but without texture, without openness, these would have no content. [Lawson 13]
In the familiar everyday picture of the world, the world is divided into things: the sun and moon, the sea and sky, houses and people, tables and chairs. We are able to describe these things and the way they interact through language. We refine our account of the world by testing our views against reality. We throw out those descriptions that are not accurate, or modify them, so that our account of how things are is continuously improved upon. Something is understood to be true because it accurately reflects the way the world is, and is false because it does not do so. [Lawson 1]
Consider again the example of the page of dots. In the page of dots it was possible to find patterns or images. These patterns can be seen to be in addition to the initial perception of the page of dots. Nor are these patterns and images merely present, awaiting discovery; they are in some sense the product of the process of closure itself. It was argued that the initial page of dots contains an almost limitless number of possible patterns and images and is in this sense open. Closure in this instance consists in the process of realising these images or patterns. [Lawson 8]
What we take to be reality is thus the complex web of closures we have come to use in order to make our way about in the world, and as we become accustomed to them and rely on them so the original possibilities held within openness fade from view. [Lawson 6]
Closure enables us to escape the flux of possibility through the provision of particularity; and while it obscures the potential of openness, without closure and the provision of things we would have no means of understanding, or intervening, to any particular effect. [Lawson 8]
We have the impression that there is no alternative than to see the world as we do, and to divide it into the familiar objects of everyday life; but instead it will be argued that the categories of language and the objects that make up reality are the result of closures and could have been otherwise, for in principle there can be no logical limit to the number of possible closures available. Although, as it will later be shown, seemingly unlikely closures can be realised, we are not in a position to adopt any closure we please, for we are constrained, on the one hand, by the historical legacy of previous closures held within the web of language, and on the other, by our physiology. [Lawson 6]
Many of the worst results can be attributed to the fact that, in one way or another, a given poem or group of poems is uncharacteristic of its author. [Burrows 278]
Apart from the very short poems considered earlier, those cases where the procedure gives poor results on poems of more than 500 words represent bad matches—recognizably bad matches—between specimen and authorial subset. They mostly arise from a difficulty encountered by everyone who works in computational stylistics—the fact that authors work at times in very uncharacteristic literary genres. Whereas procedures such as principal component analysis can often overcome the aberrations that arise in this way by absorbing them in the lesser vectors of their output, the single vector of the Delta procedure has no such cushion. With the Delta procedure, an aberrant set of scores is expressed as a lower ranking for the author in question. This being so, it is surprising that the procedure is as resilient as the results demonstrate. [Burrows 279]
And it shows promise as a means of indicating that the true author of a given text may lie beyond a current set of candidates, a task we have not hitherto accomplished. How is it that such a primitive statistical instrument can satisfy these purposes? The answer must lie, I believe, in areas where we are still extremely ignorant—in the communicative resilience of the language and the astonishing force of human individuality. [Burrows 282]
In this sort of work on language, so our researches teach us, a wealth of variables, many of which may be weak discriminators, almost always offer more tenable results than a smaller number of strong ones. Strong features, perhaps, are easily recognized and modified by an author and just as easily adopted by disciples and imitators. At all events, a distinctive ‘stylistic signature’ is usually made up of many tiny strokes. [Burrows 268]
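The Delta procedure itself is a direct embodiment of those “many tiny strokes”: z-score the relative frequencies of the corpus’s most common words, then average the absolute differences between the test text and each candidate. A minimal sketch, assuming whitespace tokenization and a small feature count; the toy texts below are invented, not Burrows’s poetry corpus.

```python
from collections import Counter
from statistics import mean, stdev

def rel_freqs(text):
    """Relative word frequencies of a text."""
    words = text.lower().split()
    return {w: c / len(words) for w, c in Counter(words).items()}

def delta_attribute(test_text, candidates, top_n=30):
    """Burrows's Delta: over the corpus's most frequent words, z-score
    each candidate's frequencies, then assign the test text to the
    candidate with the smallest mean absolute z-score difference."""
    freqs = {a: rel_freqs(t) for a, t in candidates.items()}
    counts = Counter()
    for t in candidates.values():
        counts.update(t.lower().split())
    features = [w for w, _ in counts.most_common(top_n)]
    mu = {w: mean(f.get(w, 0.0) for f in freqs.values()) for w in features}
    # Guard against zero spread when all candidates use a word equally.
    sigma = {w: stdev(f.get(w, 0.0) for f in freqs.values()) or 1e-9
             for w in features}
    def z(f):
        return [(f.get(w, 0.0) - mu[w]) / sigma[w] for w in features]
    tz = z(rel_freqs(test_text))
    def delta(author):
        return mean(abs(t - c) for t, c in zip(tz, z(freqs[author])))
    return min(candidates, key=delta)
```

Every frequent word contributes one small, equally weighted stroke to the score, which is why no single strong marker can be imitated or suppressed to defeat it.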
This negative evidence also offers strong general support for the ancient but no longer unchallenged belief that the concept of authorial signatures is well-founded. [Burrows 277]
A study of so many specimens justifies some strong conclusions. The overall level of success is impressive because the task of identifying the right candidate from a group of twenty-five offers a much higher level of difficulty than the two- or three-author tasks to which we are accustomed. The unfettered operation of chance would lead, after all, to a roughly equal spread over the several ranks. Two hundred trials would yield only eight cases, not ninety-four, in which the true author ranked first out of twenty-five. In forty cases, not 157, the true author would rank between first and fifth. And in forty cases, not three, the true author would rank between twenty-first and twenty-fifth. Only a genuine authorial factor could yield results like those we have. [Burrows 276]
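The chance baseline in the passage is simple arithmetic over a uniform spread of ranks, and the three figures check out:

```python
# Under pure chance, each of 200 trials places the true author
# uniformly across 25 ranks, so a band of k ranks captures 200 * k/25.
trials, ranks = 200, 25

def per_band(k):
    return trials * k // ranks

assert per_band(1) == 8    # rank 1 exactly: chance 8 vs 94 observed
assert per_band(5) == 40   # ranks 1-5:      chance 40 vs 157 observed
assert per_band(5) == 40   # ranks 21-25:    chance 40 vs 3 observed
```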
It behoves us, as always, to remember that, by relying on statistical analysis, even in this simple form, we are dealing in probabilities and not in absolutes. [Burrows 281]
While many critics have taken issue with important historical inaccuracies that haunt the play, my focus is on Frayn’s portrayal of quantum physics and its philosophical implications, a portrayal, I will argue, that is fraught with difficulties. [Barad 5]
Public fascination with quantum physics is probably due in large part to several different factors, including the counterintuitive challenges it poses to the modernist worldview, the fame of the leading personalities who developed and contested the theory (Einstein not least among them), and the profound and world-changing applications quantum physics has wrought (often symbolized in the public imagination, fairly or unfairly, by the development of the atomic bomb). But can it be this factor alone — this public hunger to know about quantum physics — that accounts for the plethora of incorrect, misleading, and otherwise inadequate accounts? What is it about the subject matter of quantum physics that it inspires all the right questions, brings the key issues to the fore, promotes open-mindedness and inquisitiveness, and yet when we gather round to learn its wisdom, the response that we get almost inevitably seems to miss the mark? One is almost tempted to hypothesize an uncertainty relation of sorts that represents a necessary trade-off between relevance and understanding. But this is precisely the kind of analogical thinking that has so often produced unsatisfactory understandings of the relevant issues. [Barad 6]
For Frayn, no historical fact can trump psychological uncertainty; we are not accountable to history, in principle. [Barad 11]
And in a draft written in 1962, the year of Bohr’s death, Bohr tells Heisenberg it is “quite incomprehensible to me that you should think that you hinted to me that the German physicists would do all they could to prevent such an application of atomic science,” in direct contradiction of the story Heisenberg tells to Jungk, which is later embellished by Powers. [Barad 10]
The moral issues always finally depend on the epistemological one, on the judgment of other people’s motives, because if you can’t have any knowledge of other people’s motives, it’s very difficult to come to any objective moral judgment of their behavior. [Barad 7]
Wooden yet lively, verbal yet vegetal, alive yet inert, Odradek is ontologically multiple. He/it is a vital materiality and exhibits what Gilles Deleuze has described as the persistent “hint of the animate in plants, and of the vegetable in animals.” The late-nineteenth-century Russian scientist Vladimir Ivanovich Vernadsky, who also refused any sharp distinction between life and matter, defined organisms as “special, distributed forms of the common mineral, water. . . . Emphasizing the continuity of watery life and rocks, such as that evident in coal or fossil limestone reefs, Vernadsky noted how these apparently inert strata are ‘traces of bygone biospheres.’” Odradek exposes this continuity of watery life and rocks; he/it brings to the fore the becoming of things. [Bennett 8]
The course and intensity of terminal thought patterns in near-death experiences can tell us much about our frameworks of subjectivity. A subjectively-centred framework capable of sustaining action and purpose, must, I think, view the world ‘from the inside’, structured so as to sustain the concept of an invincible, or at least a continuing, self; we remake the world in that way as actionable, investing it with meaning, reconceiving it as sane, survivable, amenable to hope and resolution. The lack of fit between this subject-centred version, in which one’s own death is unimaginable, and an ‘outside’ version of the world comes into play in extreme moments. In its final, frantic attempts to protect itself from the knowledge of vulnerability and impending death that threatens the normal, subject-centred framework, the mind can instantaneously fabricate terminal doubt in extravagant, Cartesian proportions: this is not really happening, this is a nightmare, from which I will soon awake. This desperate delusion split apart as I hit the water. In that flash, when my consciousness had to know the bitter certainty of its end, I glimpsed the world for the first time ‘from the outside’, as no longer my world, as raw necessity, an unrecognisably bleak order which would go on without me, indifferent to my will and struggle, to my life as to my death. This near-death knowledge, the knowledge of the survivor and of the prey, has some strange fruits, not all of them bitter. [Plumwood 30]
Here, in what may be considered Shakespeare’s last tragedy, our vision of the protagonist is unrelentingly singular, and the “satiric” perspective does not modify that vision in a humorous way, as it does, for example, in Antony and Cleopatra. Coriolanus is not funny except at certain isolated moments; it is capable of engendering disgust, but not a sense of absurdity, as Timon is, because Shakespeare meticulously explores the character of Caius Marcius in a manner he never does that of the Athenian misanthrope, who remains substantially in an unfinished state. Moreover the relationship of Coriolanus to his community is thoroughly deducible in human terms. This is not true in the earlier play because Timon has no fully developed human attachments. In Coriolanus we are prevented for the most part from open laughter by the dreadful and carefully investigated emptiness of the central figure. [Proser 507]
Caesar, with his much-debated aspirations to kingship, his personal courage, his desire for power, was an equivocal figure not unlike the Guise. League pamphleteers seem to have been fond of drawing analogies between them. [Briggs 266]
Yet although Marlowe did not have access to the wide variety of sources available to modern historians, his account comes much closer to historical fact than has been previously acknowledged; it is at least arguable that he represents the events much as they would have struck an impartial observer of the time. In his presentation of the massacre itself he appears to have reproduced with remarkable accuracy forms of ritualized violence peculiar to the French religious wars. [Briggs 259]
Something that meaningful to us cannot be left just to sit there bathed in pure significance, and so we describe, analyse, compare, judge, classify; we erect theories about creativity, form, perception, social function; we characterize art as a language, a structure, a system, an act, a symbol, a pattern of feeling; we reach for scientific metaphors, spiritual ones, technological ones, political ones; and if all else fails we string dark sayings together and hope someone else will elucidate them for us. The surface bootlessness of talking about art seems matched by a depth necessity to talk about it endlessly. And it is this peculiar state of affairs that I want here to probe, in part to explain it, but even more to determine what difference it makes. [Geertz 1474]
But anyone at all responsive to aesthetic forms feels it as well. Even those among us who are neither mystics nor sentimentalists, nor given to outbursts of aesthetic piety, feel uneasy when we have talked very long about a work of art in which we think we have seen something valuable. The excess of what we have seen, or imagine we have, over the stammerings we can manage to get out concerning it is so vast that our words seem hollow, flatulent, or false. After art talk, ‘whereof one cannot speak, thereof one must be silent,’ seems like very attractive doctrine. [Geertz 1473]
If we are to have a semiotics of art (or for that matter, of any sign system not axiomatically self-contained), we are going to have to engage in a kind of natural history of signs and symbols, an ethnography of the vehicles of meaning. Such signs and symbols, such vehicles of meaning, play a role in the life of a society, or some part of a society, and it is that which in fact gives them their life. Here, too, meaning is use, or more carefully, arises from use, and it is by tracing out such uses as exhaustively as we are accustomed to for irrigation techniques or marriage customs that we are going to be able to find out anything general about them. This is not a plea for inductivism — we certainly have no need for a catalog of instances — but for turning the analytic powers of semiotic theory, whether Peirce’s, Saussure’s, Levi-Strauss’s, or Goodman’s, away from an investigation of signs in abstraction toward an investigation of them in their natural habitat — the common world in which men look, name, listen, and make. [Geertz 1498]
Positivism is dead. By now it has gone off and is beginning to smell. If, in the words of the Russian proverb, it is rotting from the head, that means that whilst there are those in science practice who think that it is still a valid metatheoretical position and foundation for a methodological programme there are very few who think about the validity of metatheoretically-founded methodological programmes who still think this. [Byrne 37]
It is perhaps not to be marveled, then, that the New Historicism has such talent for theater; the past is a costume drama in which the interpreter’s subject plays. Historical understanding, or what Collingwood called the ‘re-enactment of past experience,’ is an act. [Liu 733]
The overall result is that the New Historicism is at once more frank than the New Criticism — because it makes no bones about wishing to establish a subversive intersubjectivity and interaction between texts and their contexts — and excruciatingly more embarrassed (literally, “barred, obstructed”). While driven to refer literature to history (most literally in its notes referring to historical documents), it is self-barred from any method able to ground, or even to think, reference more secure than trope. Indeed, the very concept of reference becomes taboo. Ignoring the fact that historical evidence by and large is referred to in its notes (which has the effect of lending documentary material an a priori status denied the literary works and anecdotes it reads and re-reads), the New Historicism proceeds tropologically as if texts and historical con-texts had equal priority. Literary ‘authors’ thus claim an equivalence with political ‘authority,’ and ‘subjected’ intellects with their monarchical ‘Subject,’ through an argument of paradox, ambiguity, irony, or (to recur to dialectic) Lordship/Bondage not far removed at base from the etymological wordplay of deconstruction. As deconstructive ‘catachresis’ is to reference, then, so subversion is to power — but without the considered defense of tropology allowing deconstruction to found a-logical figuration in the very substrate of its version of historical context: the intertext. New Historicist contextuality is an intertextuality of culture without a functional philosophy or anti-philosophy. No Derrida of the field has made of the subversive relation between authors and authority what deconstruction makes of its subversion of reference: deferral. After all, it would be too embarrassing to admit that subversion of historical power (and of all the ontological and referential hierarchies still retained by Althusser in the gestural phrase, “in the last instance”) is just another différance.
Such would be to confess the formalism of the New Historicism. [Liu 744]
Social order is not the result of the architectural order created by T squares and slide rules. Nor is social order brought about by such professionals as policemen, nightwatchmen, and public officials. Instead, says Jacobs, “the public peace – the sidewalk and street peace – of cities … is kept by an intricate, almost unconscious network of voluntary controls and standards among the people themselves, and enforced by the people themselves.” The necessary conditions for a safe street are a clear demarcation between public space and private space, a substantial number of people who are watching the street on and off (“eyes on the street”), and fairly continual, heavy use, which adds to the quantity of eyes on the street. Her example of an area where these conditions were met is Boston’s North End. Its streets were thronged with pedestrians throughout the day owing to the density of convenience and grocery stores, bars, restaurants, bakeries, and other shops. It was a place where people came to shop and stroll and to watch others shop and stroll. The shopkeepers had the most direct interest in watching the sidewalk: they knew many people by name, they were there all day, and their businesses depended on the neighborhood traffic. Those who came and went on errands or to eat or drink also provided eyes on the street, as did the elderly who watched the passing scene from their apartment windows. Few of these people were friends, but a good many were acquaintances who did recognize one another. The process is powerfully cumulative. The more animated and busier the street, the more interesting it is to watch and observe; all these unpaid observers who have some familiarity with the neighborhood provide willing, informed surveillance. [Scott 135]
Let us pause, however, to consider the kind of human subject for whom all these benefits were being provided. This subject was singularly abstract. Figures as diverse as Le Corbusier, Walther Rathenau, the collectivizers of the Soviet Union, and even Julius Nyerere (for all his rhetorical attention to African traditions) were planning for generic subjects who needed so many square feet of housing space, acres of farmland, liters of clean water, and units of transportation and so much food, fresh air, and recreational space. Standardized citizens were uniform in their needs and even interchangeable. What is striking, of course, is that such subjects – like the “unmarked citizens” of liberal theory – have, for the purposes of the planning exercise, no gender, no tastes, no history, no values, no opinions or original ideas, no traditions, and no distinctive personalities to contribute to the enterprise. They have none of the particular, situated, and contextual attributes that one would expect of any population and that we, as a matter of course, always attribute to elites.
The lack of context and particularity is not an oversight; it is the necessary first premise of any large-scale planning exercise. To the degree that the subjects can be treated as standardized units, the power of resolution in the planning exercise is enhanced. Questions posed within these strict confines can have definitive, quantitative answers. The same logic applies to the transformation of the natural world. Questions about the volume of commercial wood or the yield of wheat in bushels permit more precise calculations than questions about, say, the quality of the soil, the versatility and taste of the grain, or the well-being of the community. The discipline of economics achieves its formidable resolving power by transforming what might otherwise be considered qualitative matters into quantitative issues with a single metric and, as it were, a bottom line: profit or loss. Providing one understands the heroic assumptions required to achieve this precision and the questions that it cannot answer, the single metric is an invaluable tool. Problems arise only when it becomes hegemonic. [Scott 346]
Recent debates may also tend to overstate the technical challenges of interdisciplinarity. Distant readers admittedly enjoy discussing new unsupervised algorithms that are hard to interpret. But many useful methods are supervised, comparatively straightforward, and have been in social-science courses for decades. A grad student could do a lot of damage to received ideas with a thousand novels, manually gathered metadata, and logistic regression.
What really matter, I think, are not new tools but three general principles. First, a negative principle: there’s simply a lot we don’t know about literary history above the scale of (say) a hundred volumes. We’ve become so used to ignorance at this scale, and so good at bluffing our way around it, that we tend to overestimate our actual knowledge. Second, the theoretical foundation for macroscopic research isn’t something we have to invent from scratch; we can learn a lot from computational social science. (The notion of a statistical model, for instance, is a good place to start.) The third thing that matters, of course, is getting at the texts themselves, on a scale that can generate new perspectives. This is probably where our collaborative energies could most fruitfully be focused. The tools we’re going to need are not usually specific to the humanities. But the corpora often are. [Underwood 533]
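Underwood’s point about how little machinery this takes is easy to make concrete. A toy sketch — roughly a thousand “novels”, two invented metadata features, an invented binary label, and plain logistic regression trained by gradient descent; none of this is his actual data or study design:

```python
import math
import random

# Hypothetical corpus: each "novel" is reduced to manually gathered metadata
# (here, a normalized publication decade and a dialogue-share feature), with
# an invented binary label such as "reviewed in elite periodicals".
random.seed(0)

def make_novel(label):
    decade = random.uniform(0, 1)
    dialogue = random.uniform(0, 1) + (0.8 if label else 0.0)
    return [decade, dialogue], label

data = [make_novel(i % 2) for i in range(1000)]

# Plain logistic regression, batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))        # predicted probability
        gw[0] += (p - y) * x[0]
        gw[1] += (p - y) * x[1]
        gb += p - y
    w[0] -= lr * gw[0] / len(data)
    w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)
```

The model recovers the signal planted in the dialogue feature and ignores the irrelevant one; the interesting work, as Underwood suggests, is gathering the metadata, not the algorithm.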
The data may not contain the answer. The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. [Tukey 74-75]
In this article we have sought to construct a computational model of SOC style to study its diffusion as a world literary form. We find support for our initial hypothesis that SOC followed a wavelike pattern of dispersion from the world literary system’s core to its semiperiphery and periphery. Yet, at each stage of our analysis, our model charts the broad contours of this diffusion while exposing how this diffusion is marked by constant, heterogeneous variance. We do not see a single, monolithic pattern of diffusion but patterns of dissemination. In other words, we find patterns of difference (or variation) in sameness. This is an idea that is not extrinsic to computational or statistical methods but is deeply embedded in them. Indeed, among humanists, a common misunderstanding of modern statistical modeling is that quantitative models seek to explain everything about a social phenomenon and leave no room for interpretive ambiguity or indeterminacy. The opposite is true. A key feature of every statistical model is an error term that captures precisely what the model cannot explain. Moreover, a common reflexive technique in modeling is to estimate a model’s own inability to fully measure the underlying processes that generate a data set. Modeling is thus deeply invested in indeterminacy, whether of itself or of the data to which it is applied. [Long + So 364–5]
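The “error term” Long and So invoke is visible even in the simplest statistical model. A sketch with synthetic data (not their model of SOC style): fit a line by ordinary least squares and the residuals are the model’s explicit record of what it cannot explain.

```python
import random

# Synthetic data from y = 2x + 1 + noise; the noise is the part the
# model will, by design, fail to capture.
random.seed(1)
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form ordinary least squares for slope a and intercept b.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

residuals = [y - (a * x + b) for x, y in zip(xs, ys)]   # the error term
unexplained = sum(r * r for r in residuals)
total = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - unexplained / total   # share of variance the model explains
print(a, b, r_squared)
```

The fitted slope is close to the true value, but r_squared is deliberately less than 1: the residual variance is the indeterminacy the model declares about itself.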
Our habit of doubting ourselves, echoing our earnest but never conclusive efforts to address the misgivings of others, shows more than anything else how at home are digital humanists in the humanities. Indeed, we are perfectly capable of sustaining all sides of this debate without encouragement. Of course I speak as someone to whom this particular problem is merely “academic” (understand this word in a weirdly reversed sense), as I am not personally dependent, at least for the present, on academic funding, either “soft” or “hard”. As soon as the discussion became consequential, I suppose I might be reluctant to take issue with any administrator or committee charged with managing a budget or prioritizing line items. Their jobs are difficult enough, I imagine. Yet it is with considerable astonishment that I read accounts of Lord Browne’s Report (again, this is easy enough for readers to learn about online if they don’t already know too much) and its promise to make British universities more “competitive” (sic) by decimating funding for education in arts, language and literacy. Is this the way a great nation treats its children, I wonder? What would Dr Arnold think? [Piez]
Is it appropriate to deploy positivistic techniques against those self-same positivistic techniques? In a former time, such criticism would not have been valid or even necessary. Marx was writing against a system that laid no specific claims to the apparatus of knowledge production itself – even if it was fueled by a persistent and pernicious form of ideological misrecognition. Yet, today the state of affairs is entirely reversed. The new spirit of capitalism is found in brainwork, self-measurement and self-fashioning, perpetual critique and innovation, data creation and extraction. In short, doing capitalist work and doing intellectual work – of any variety, bourgeois or progressive – are more aligned today than they have ever been. [Galloway 110]
In anatomical distant reading, the reason that the scholar is refraining from critical close reading is its uneconomical time-consuming quality and its deliberately narrow focus, but in the other case, distant reading is the only way for the scholar to extend the scope of his study without the anxiety of impossibility of an ideal knowledge. As a consequence, in the former, the distant reader’s inductive reasoning guides the direction of the study, but in the latter, it is the constellation of previous studies that directs the inductive reasoning. When the comparatist has attentively distanced his focus of study from a limited number of works, his mastership over those works is still preserved, thus he has the power to direct the path of his inferences. But in the other case, the scholar’s access to the object of study is already funneled, and his only way through it is to pass the filter of others’ researches. Moretti’s idea of collaborative research is problematic not just for implication of the political hegemony of English or the imperialistic attitude of the comparatist, as Arac noted, but also regarding whether the role of induction is placed primary or secondary. There is a huge difference between using other scholars’ works as source of inspiration, influence, or acknowledgment, and treating them as “data”. [Khadem 415]
For literary research to be possible, there must be some ambiguous space between patterns that are transparently legible in our memories and patterns that are too diffuse to matter. In fact this ambiguous space is large and important. We often dimly intuit literary-historical patterns without being able to describe them well or place them precisely on a timeline. For instance, students may say that they like contemporary fiction because it has more action than older books. I suspect that changes in pacing are part of what they mean. There is actually plenty of violence in Robinson Crusoe (1719), but it tends to be described from a distance, in summaries that cover an hour or two. We don’t see Crusoe’s fingers slipping, one by one, off the edge of a cliff. Twentieth-century fiction is closer to the pace of dramatic presentation. Protagonists hold their breath; their heartbeat accelerates; time seems to stand still. This pace may feel more like action, or even (paradoxically) faster, although diegetic time is actually passing more slowly from one page to the next. Even high-school students can feel this difference, although they may not describe it well. Narratologists have described it well, but typically credit it, mistakenly, to modernism. This change is a real literary phenomenon—in fact a huge one. But to trace its history accurately we need numbers. [Underwood 349]
The essence of the traditional humanist’s work-style is illuminated by comparing the pace and character of research publication across the disciplines, as Thomas Kuhn suggested years ago from his own experience (1977: 8–10). It varies widely, from the rapid exchange of results in the sciences to the slower pace of argument in the humanities. To varying degrees within the humanities themselves, this argument is the locus of action: the research itself (e.g. in philosophy) or its synthesis into a disciplinary contribution (e.g. in history) takes place during the writing, in the essay or monograph, rather than in a non-verbal medium, such as a particle accelerator. Contrast, as Kuhn did, the traditional research publication in experimental physics, which reports on results obtained elsewhere. In the natural sciences, as that ‘elsewhere’ has shifted from the solitary researcher’s laboratory bench to shared, sometimes massive equipment or through a division of labour to the benches of many researchers, collaboration has become a necessity. In the humanities, scholars have tended to be physically alone when at work because their primary epistemic activity is the writing, which by nature tends to be a solitary activity. Humanists have thus been intellectually sociable in a different mode from their laboratory-bound colleagues in the sciences.
If we look closely at this solitary work, we have no trouble seeing that the normal environment has always been and is virtually communal, formerly in the traditional sense of ‘virtually’ – ‘in essence or effect’ – and now, increasingly, in the digital sense as well. However far back in time one looks, scholarly correspondence attests to the communal sense of work. So do the conventions of acknowledgement, reference and bibliography; the crucial importance of audience; the centrality of the library; the physical design of the book; the meaning of publication, literally to ‘make public’; the dominant ideal of the so-called ‘plain style’, non sibi sed omnibus, ‘not for oneself but for all’; and of course language itself, which, as Wittgenstein argued in Philosophical Investigations, cannot be private. Writing only looks like a lonely act. [McCarty 12–13]
Ten years ago, I had a brief flirtation with another kind of unbifurcated garment, the Utilikilt, and discovered the joy of an undivided Y-axis; now having learned the pleasures of an unbroken X-axis, I’m contemplating trying out a muu-muu, just to see what it’s like when all bifurcations are dispensed with. If you find me walking around town wearing a trashbag with a neckhole and two armholes cut out, you have my permission to send me home and make me put on some proper clothing. [Doctorow]
The sort of person who jumps in and gives advice to the masses without doing a lot of research first generally believes that you should jump in and do things without doing a lot of research first. [West]
The Red Tribe is most classically typified by conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak, drinking Coca-Cola, driving SUVs, watching lots of TV, enjoying American football, getting conspicuously upset about terrorists and commies, marrying early, divorcing early, shouting “USA IS NUMBER ONE!!!”, and listening to country music.
The Blue Tribe is most classically typified by liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, reading lots of books, being highly educated, mocking American football, feeling vaguely like they should like soccer but never really being able to get into it, getting conspicuously upset about sexists and bigots, marrying later, constantly pointing out how much more civilized European countries are than America, and listening to “everything except country”.
(There is a partly-formed attempt to spin off a Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk – but for our current purposes this is a distraction and they can safely be considered part of the Blue Tribe most of the time) [Alexander]
Reinforcement learners take a reward function and optimize it; unfortunately, it’s not clear where to get a reward function that faithfully tracks what we care about. That’s a key source of safety concerns.
By contrast, AlphaGo Zero takes a policy-improvement-operator (like MCTS) and converges towards a fixed point of that operator. If we can find a way to improve a policy while preserving its alignment, then we can apply the same algorithm in order to get very powerful but aligned strategies.
Using MCTS to achieve a simple goal in the real world wouldn’t preserve alignment, so it doesn’t fit the bill. But “think longer” might. As long as we start with a policy that is close enough to being aligned — a policy that “wants” to be aligned, in some sense — allowing it to think longer may make it both smarter and more aligned. [Christiano]
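Christiano’s schema — improve the policy, distill the improvement back in, repeat until the operator changes nothing — can be caricatured on a toy problem. In this sketch a 6-state chain stands in for the environment and one-step greedy lookahead on the current policy’s value function stands in for MCTS; every detail is invented for illustration, not taken from AlphaGo Zero itself.

```python
# Toy chain: states 0..5, actions -1/+1; reaching state 5 pays reward 1.
N, GAMMA = 6, 0.9

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0)

def evaluate(policy, iters=200):
    """Value of following `policy`; state N-1 is treated as terminal."""
    v = [0.0] * N
    for _ in range(iters):
        new_v = [0.0] * N
        for s in range(N - 1):
            s2, r = step(s, policy[s])
            new_v[s] = r + GAMMA * v[s2]
        v = new_v
    return v

def improve(policy):
    """The improvement operator: act greedily against the policy's values."""
    v = evaluate(policy)
    return [max((-1, +1), key=lambda a: step(s, a)[1] + GAMMA * v[step(s, a)[0]])
            for s in range(N)]

policy = [-1] * N               # start badly: always walk away from the goal
while True:
    improved = improve(policy)
    if improved == policy:      # fixed point: the operator changes nothing
        break
    policy = improved           # "distill" the improved behaviour back in

print(policy)
```

The loop converges to the always-move-right policy. The safety question in the quote is whether an improvement operator exists that, unlike raw reward maximization, preserves a property like alignment at every such step.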
It’s also possible to take this reasoning to an extreme—to become radically pessimistic about the consequences of pessimism. In “Suicide of the West,” the conservative intellectual Jonah Goldberg argues that progressive activists—deluded by wokeness into the false belief that Western civilization has made the world worse—are systematically dismantling the institutions fundamental to an enlightened society, such as individualism, capitalism, and free speech. (“Sometimes ingratitude is enough to destroy a civilization,” Goldberg writes.) On the left, a parallel attitude holds sway. Progressives fear the stereotypical paranoid conservative—a nativist, arsenal-assembling prepper whose world view has been formed by Fox News, the N.R.A., and “The Walking Dead.” Militant progressives and pre-apocalyptic conservatives have an outsized presence in our imaginations; they are the bogeymen in narratives about our mounting nihilism. We’ve come to fear each other’s fear. [Rothman]
The lag time from major successes in Deep Learning to generally-socially-concerned research funding like that of the Media Lab / Klein Center has been several years, depending on how you count. We need that reaction time to get shorter and shorter until our civilization becomes proactive, so that our civilizational capability to align and control superintelligent machines exists before the machines themselves do. I suspect this might require spending more than 0.00001% of world GDP on alignment research for human-level and super-human AI.
Granted, the transition to spending a reasonable level of species-scale effort on this problem is not trivial, and I’m not saying the solution is not to undirectedly throw money at it. But I am saying that we are nowhere near done with the on-ramp to acting even remotely sanely, at a societal scale, in anticipation of HLAI. And as long as we’re not there, the people who know this need to keep saying it. [Critch]
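Worth pausing on how small Critch’s fraction is in absolute terms. A back-of-envelope check, assuming world GDP of roughly $100 trillion (my round number, not his):

```python
# 0.00001% of an assumed ~$100 trillion world GDP.
world_gdp = 100e12              # assumption, for scale only
fraction = 0.00001 / 100        # 0.00001 percent, as a fraction
spending = world_gdp * fraction # dollars per year at that spending level
print(spending)
```

That works out to on the order of $10 million per year — the budget of a single mid-sized lab.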
The title of a paper by the Belgian social scientist Guillaume Wunsch, ‘God has chosen to give the easy problems to the physicists, or why demographers need theory’, accords with a remark attributed to Gregory Bateson, that ‘there are the hard sciences, and then there are the difficult sciences’. Both remarks point in two directions: towards the disciplines paradigmatic of what is ‘hard’, in the dual sense of solid reality and difficult methods; and towards those other disciplines, still in the shadow of the first, which in their relative softness present difficulties (so it is claimed) of a much more demanding kind. The imagery is dubious for other reasons, but within its own frame, what is it saying? When softness is regarded as bad, what is the problem? [McCarty 145-6]
Best paper title ever.
Digital methods can provide new evidence and even new kinds of evidence in support of literary claims, and can make new kinds of claims possible. They can also make some claims untenable. In addition to allowing for “distant” kinds of readings of enormous collections of texts that are simply too large to be studied otherwise, the extraordinary powers of the computer to count, compare, collect, and analyze can be used to make our close readings even closer and more persuasive. Perhaps the availability of new and more persuasive kinds of evidence can also inspire a greater insistence on evidence for literary claims and push traditional literary scholars in some productive new directions. I would not argue that digital methods should supplant traditional approaches (well, maybe some of them). Instead, they should be integrated into the set of accepted approaches to literary texts. [Hoover]
The ‘well, maybe some of them’ is the epitome of scholarly politeness, here, but I’d be a little less polite. A lot of our traditional approaches are shockingly vague, partial, and wrong-headed, and we should supplant them as soon as it’s technically possible to do so.
Vaillant’s other main interest is the power of relationships. “It is social aptitude,” he writes, “not intellectual brilliance or parental social class, that leads to successful aging.” Warm connections are necessary—and if not found in a mother or father, they can come from siblings, uncles, friends, mentors. The men’s relationships at age 47, he found, predicted late-life adjustment better than any other variable, except defenses. Good sibling relationships seem especially powerful: 93 percent of the men who were thriving at age 65 had been close to a brother or sister when younger. In an interview in the March 2008 newsletter to the Grant Study subjects, Vaillant was asked, “What have you learned from the Grant Study men?” Vaillant’s response: “That the only thing that really matters in life are your relationships to other people.” [Wolf Shenk]
With familiar competitive habits, this growth rate change implies falling wages for intelligent labor, canceling nature’s recent high-wage reprieve. So if we continue to use all the nature our abilities allow, abilities growing much faster than nature’s abilities to resist us, within ten thousand years at most (and more likely a few centuries) we’ll use pretty much all of nature, with only farms, pets and (economically) small parks remaining. If we keep growing competitively, nature is doomed.
Of course we’ll still need some functioning ecosystems to support farming a while longer, until we learn how to make food without farms, or bodies using simpler fuels. Hopefully we’ll assimilate most innovations worth digging out of nature, and deep underground single cell life will probably last the longest. But these may be cold comfort to most nature lovers. [Hanson]
It takes a deliberate effort to visualize your brain from the outside—and then you still don’t see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don’t have any direct access to neural network structures from introspection. That’s why the ancient Greeks didn’t invent computational neuroscience. [Yudkowsky]
Access to stored knowledge has changed over time. It was available to only a few when most people were illiterate. Universal education then made such knowledge accessible to almost all. But now we are moving toward a state in which a great deal of knowledge contained in databases is inaccessible to those without the necessary computational skills. This is not the familiar digital divide of economic and educational opportunities but an epistemological division based on a number of different factors. An important barrier is a lack of technical expertise. Degrees of epistemic inaccessibility based on abilities have always existed; understanding contemporary molecular biology is possible only to a limited degree for most knowers. But cognitive limitations are not the only obstacles to access. Proprietary algorithms and intellectual property laws may prevent open access to a database. Although such barriers are complemented by the great openness of information produced (accidentally) by the internet, the evidence suggests that we are now undergoing something like the successive periods of agricultural enclosure in England during the eighteenth and early nineteenth centuries, when common land was enclosed by private landowners. This inaccessibility of information to human agents can also arise because of the sheer size and complexity of the data and the calculations needed to process them, because we do not have the representational means to display that knowledge, or because we cannot construct suitable algorithms to process the data. [Alvarado + Humphreys 740]
With the emergence of “big data” collections, there are too many accessible texts to read each one closely; even if one could read them closely, it is unlikely that one could read them consistently; and if one could read them consistently, it is inconceivable that one would be able to remember even a small percentage of them. Developing a model of “meaning” by applying unsupervised machine learning techniques across the entire corpus might be a solution to this problem. Yet, while this is a worthwhile idea and one not addressed in this paper, such an approach would have limited applicability beyond providing a first level approximation of the general contours of topics in a particular literature at a particular time. Except for encyclopedic projects, most contemporary literary scholarship does not focus on making broad generalizations about a national literature, but rather it emphasizes narrower developments in the literary landscape coupled to a thorough contextual knowledge of the impact and spread of those developments. Analysis of this type is largely dependent on a scholar’s “domain expertise.”
Literary domain expertise is formed from the study of an imperfect and largely arbitrary canon. We say “largely arbitrary” as matters of reception, sales, publication, circulation, critical reviews and so on contribute significantly to the recognition of a literary work as exceptional. Exceptional works that have “staying power”—that are able to engage critics for a considerable period of time—are those that enter the canon. At the same time, despite the impression of immutability, the canon often changes radically over time so that unknown works can suddenly become known (and canonical), while well-known (and canonical) works can suddenly fall out of favor and disappear from the canon altogether. [Tangherlini + Leonard 726–7]
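The unsupervised “first level approximation” Tangherlini and Leonard set aside can be shown in miniature. Here k-means over bag-of-words vectors stands in for full topic modeling; the four toy documents, the vocabulary, and the choice of k = 2 are all invented for illustration, not their corpus or method.

```python
from collections import Counter

# Four toy "texts" with two obvious themes.
docs = [
    "whale sea ship captain harpoon sea",
    "ship sea whale voyage captain",
    "love marriage estate sister ball",
    "sister love ball marriage letter",
]
vocab = sorted({w for d in docs for w in d.split()})

def vectorize(doc):
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]          # bag-of-words vector

def dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v))

def mean(vecs):
    return [sum(col) / len(vecs) for col in zip(*vecs)]

vectors = [vectorize(d) for d in docs]
# Seed one centroid per suspected theme to keep the run deterministic;
# real k-means would use random restarts.
centroids = [vectors[0], vectors[2]]
for _ in range(10):
    clusters = [[], []]
    for v in vectors:
        clusters[min((0, 1), key=lambda c: dist(v, centroids[c]))].append(v)
    centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]

labels = [min((0, 1), key=lambda c: dist(v, centroids[c])) for v in vectors]
print(labels)
```

The clustering recovers the two themes — and illustrates the quote’s caveat: it yields only “general contours,” with no trace of the contextual knowledge that domain expertise supplies.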
One hypothesis for why code is more repetitive than NL is that humans find code harder to read and write than NL. Code has precise denotational and operational semantics, and computers, unlike human listeners, cannot automatically cope with casual errors. As a result, natural software is not merely constrained by simpler grammars; programmers may further deliberately limit their coding choices to manage the added challenge of dealing with the semantics of code.
Cognitive science research suggests that developers process software in similar ways to natural language, but do so with less fluency. Prior work suggests (Siegmund et al, 2014) that some of the parts of the brain used in natural language comprehension are shared when understanding source code. Though there is overlap in brain regions used, eye tracking studies have been used to show that humans read source code differently from natural language (Busjahn et al, 2015; Jbara and Feitelson, 2017). Natural language tends to be read in a linear fashion. For English, this would be left-to-right, top-to-bottom. Source code, however, is read non-linearly. People’s eyes jump around the code while reading, following function invocations to their definitions, checking on variable declarations, etc. Busjahn et al. (Busjahn et al, 2015) found this behavior in both novices and experts; but also found that experts seem to improve in this reading style over time. In order to reduce this reading effort, developers might choose to write code in a simple, repetitive, idiomatic style, much more so than in natural language.
This hypothesis concerns the motivations of programmers, and is difficult to test directly. We therefore seek corpus-based evidence in different kinds of natural language. Specifically, we would like to examine corpora that are more difficult for their writers to produce and readers to understand than general natural language. Alternatively, we also would like corpora where, like code, the cost of miscommunication is higher. Would such corpora evidence a more repetitive style? To this end, we consider a few specialized types of English corpora: 1) corpora produced by non-fluent language learners and 2) corpora written in a technical style or imperative style. [Casalnuovo et al. 6–7]
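A toy sketch of what “repetitive” could mean here. This is not Casalnuovo et al.’s actual measure (they work with language-model statistics over large corpora); the bigram-repetition metric and the two token lists below are my own invention, just to make the intuition concrete:

```python
from collections import Counter

def bigram_repetition(tokens):
    """Fraction of token bigrams that repeat an earlier bigram.

    A crude proxy for 'repetitiveness'; real corpus studies use
    language-model entropy rather than raw bigram counts.
    """
    bigrams = list(zip(tokens, tokens[1:]))
    counts = Counter(bigrams)
    repeated = sum(c - 1 for c in counts.values())
    return repeated / len(bigrams) if bigrams else 0.0

# Toy comparison: idiomatic, formulaic code tokens vs. a prose sentence.
code = ("for i in range ( n ) : total += x [ i ] ; "
        "for j in range ( n ) : total += y [ j ] ;").split()
prose = "the quick brown fox jumps over the lazy dog while the cat sleeps".split()

# The code tokens reuse many bigrams ('in range', 'range (', ...);
# the prose tokens reuse none.
print(bigram_repetition(code) > bigram_repetition(prose))  # → True
```

On this crude measure the code sample scores far higher than the prose sample, which is the pattern the excerpt’s hypothesis predicts for harder-to-produce, higher-stakes corpora generally.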
Models represent an important new form of mediation in reading and interpretation, and, like other forms of mediation, their role or agency needs to be better understood. The emphasis on bigness and distance overlooks the minutiae that stand between us and these larger scales. Focusing on models, thinking small to think big, moves us away from a sense of communion and ultimately toward one of craft. It helps draw attention to the constructed nature of knowledge.
A great deal has been written in the history and philosophy of science about the role modeling plays in knowledge creation. This literature should become increasingly integral to our field. Much of the writing has focused on how a model represents, rather than indexically stands for, some real-world phenomenon (Hacking; Hughes; Giere). “The characteristic—perhaps the only characteristic—that all theoretical models have in common,” writes Richard Hughes, “is that they provide representations of parts of the world” (S325). By focusing on the representational qualities of models, philosophers of science have encouraged us to move past a form of empiricism that asserts an unproblematic relation between data and the world. Instead, they push us to look at one of the core concerns of our discipline, that of representation, the activities of construction and creativity that are involved in the process of understanding, the way models stand for something but are not to be confused with the thing itself. We could say, using terminology closer to home, that models shift the focus toward the signifiers of research and away from the signifieds. [Piper 652]
Every child knows that stories are supposed to have a happy ending. Every adult knows that this is not always true. In fact, in the course of the 19th Century a happy ending became a sign of popular literature, while high literature was marked by a preference for the opposite. This makes happy endings an interesting point of research in the field of digital literary studies, since automatically recognizing a happy ending, as one major plot element, could help to better understand plot structures as a whole.
To achieve this, we need a representation of plot that a computer can work with. In digital literature studies, it has been proposed to use emotional arousal as a proxy. But can we just use existing data mining methods in combination with sentiment features and expect good results for happy ending classification?
In this work, we tackle the problem of identifying novels with a “happy ending” by training a classifier. Our goal is not to present the best method for doing so, but to show that it is generally possible. We introduce our proposed approach, which already yields results considerably above a random baseline, and point out some problems and their possible solutions. Our method uses sentiment lexica in order to derive features with respect to semantic polarity and basic emotions. To account for the structural dynamics of happy endings, these features are built by considering the relation of different sections of the novels. We are able to train a support vector machine (SVM) which yields an average F1 score of 0.73 on a corpus with over 200 labelled German novels. To the best of our knowledge, our work is the first to cover happy ending classification. [Zehe et al. 9]
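A minimal sketch of the feature idea described above — not Zehe et al.’s implementation. The mini-lexicon, the token lists, and the function names are invented for illustration; the paper’s actual German sentiment lexica, emotion features, and SVM training are not reproduced here:

```python
# Hypothetical four-word polarity lexicon (stand-in for real German
# sentiment lexica, which score thousands of words).
POLARITY = {"glücklich": 1.0, "liebe": 1.0, "tod": -1.0, "verzweiflung": -1.0}

def section_polarity(tokens):
    """Mean lexicon polarity of a token list (0.0 if no lexicon hits)."""
    hits = [POLARITY[t] for t in tokens if t in POLARITY]
    return sum(hits) / len(hits) if hits else 0.0

def ending_features(novel_tokens, n_sections=5):
    """Split a novel into equal sections and relate the final section's
    polarity to the rest -- the kind of structural feature the paper
    describes feeding to an SVM."""
    size = max(1, len(novel_tokens) // n_sections)
    sections = [novel_tokens[i:i + size]
                for i in range(0, len(novel_tokens), size)][:n_sections]
    pols = [section_polarity(s) for s in sections]
    body = pols[:-1]
    return {
        "final_polarity": pols[-1],
        "final_minus_mean": pols[-1] - (sum(body) / len(body) if body else 0.0),
    }

# A tragic novel that turns cheerful at the end: large positive gap.
sad_then_happy = ["tod"] * 40 + ["glücklich"] * 10
print(ending_features(sad_then_happy))
```

In a full pipeline, feature dicts like these (over many labelled novels) would be vectorized and passed to a classifier such as scikit-learn’s `svm.SVC`.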
I can’t tell which thing makes me love this paper the most: is it because it reports on a sensible model with impressive performance, or is it because it repeatedly uses the phrase ‘happy ending classification’?
My concern that visualization projects are not often mentioned as being part of the Digital Humanities might seem a little paranoid; clearly there are presentations on visualizations at Digital Humanities conferences. However, I am not alone. Svensson (2013) has pointed out the great number of projects done that can be described as digital humanities even if they are not textual studies.
Meeks (2013) entitled his provocative article ‘Is Digital Humanities Too Text-Heavy?’ and he observed that at Digital Humanities conferences ‘a quick look at the abstracts shows how much the analysis of English Literature dominates a conference attended by archaeologists, area studies professors and librarians, network scientists, historians, etc.’ Perhaps there are so many text-focussed attendees because they do not feel their digital leanings are appreciated at mainstream academic conferences in their field. Perhaps geographers and archaeologists do not attend en masse because their digital leanings are appreciated in their discipline but publications in Digital Humanities-specific proceedings and journals are not.
However, there may be another reason. As Meeks himself recounts, early Digital Humanities books were keen to show a trail of mythical origins in the Humanities Computing field, and the Humanities Computing field is itself heavily indebted to text-based research. Hence text-based research historically dominates Digital Humanities events. As an example, Hockey (2004) wrote the following in her chapter ‘The History of Humanities Computing’, in one of the first books dedicated to Digital Humanities (Schreibman et al., 2004): ‘Applications involving textual sources have taken center stage within the development of humanities computing as defined by its major publications and thus it is inevitable that this essay concentrates on this area’. [Champion 25]
Shit, am I a Part Of The Problem again?
Ambiguity is widely prevalent in natural language, both in word meaning and in sentence structure. Words like “take” have polysemy, or many meanings. Syntactic structure (even without polysemic words) can lead to ambiguity. One popular example of ambiguous sentence structure is that of prepositional attachment. Consider the sentence:
They saw the building with a telescope.
There are two meanings, depending on where the phrase with a telescope attaches: did they see using the telescope, or is the telescope mounted on the building? Both meanings are valid, where one or the other may be preferred based on the context. [Casalnuovo et al. 5]
Natural Language is haaaaarrrrrrd.
It is no accident to see Putin, Saakashvili, and Hussein, bringing in religious imagery of the past, like the baptism of Prince Vladimir the Great in Putin’s, or Georgian King Mirian and Helena’s husband Emperor Konstantin in Saakashvili. The common trait between them is temporizing, that is, expressing ‘the logically prior in terms of the temporally prior’ (Burke, 1945, p. 430). In other words, these speakers slither ‘back and forth between logical and temporal vocabularies’ (Burke, 1945, p. 430) because they aim to find perfect truth by looking at the origin. Putin, Saakashvili, and Hussein retrofit the present into tales of origins (historical). By tying Crimea to the origin of Russia, Putin cannot eschew an ambiguous justification because seeking for perfection in the origin is confounding and humanly impossible. Unless Putin’s asymmetry of speeches humanizes him by offering two forms of thought, paradox and connection, rolled into one: in this case, vagueness follows changes slow enough to go undetected, but more acceptable than abrupt ones (Gilbert, 2007, Section ‘Perceiving differences’, pp. 42–6). But the fact remains: dressing up a glorious history to magnify the present is myth, the purpose of which is ‘to provide a logical model capable of overcoming a contradiction’ (Lévi-Strauss, 1955, p. 443). And it is saying something about their world: the need to say in another way what they could not say, or else wishing the contradiction away. [Hogenraad 308]
With the scientific method, as most of us have used it, there is apparently another defect. In our literary laboratory there is no talk about elements. Organic compounds are taken for granted and treated as ultimate phenomena, without much recognition that there may be ‘inorganic’ or less complicated forms of the same kind, as well as constant ultimate elements whose presence in new proportions and new combinations make up all differences observed. It is as if there had once been, or should be, an effort to teach Chemistry without recognition of the unlike molecular constitution, we will say, of spring water and coal tar. In other words, Chaucer and Shakespeare are considered simply as Chaucer and Shakespeare, with no reference to the fact that there must be in both common constituents and factors which, in different frequency and degrees of potency, make up the very diverse effects of their respective poetry. The same must also be true of prosaists. The differences between the style of Newman and De Quincey can be analyzed out through inventorying all points of sentence structure, as also each element or item in the character of their respective terms, phrases, and figures. [Sherman ix-x]
1893 was a good year for ‘manuals for the objective study of English prose and poetry’ written by guys with the name Lucius Adelno Sherman. Yes, also the only year. Don’t at me.
It’s debatable whether following TOS is a legal requirement. In the United States, failure to follow TOS may violate the Computer Fraud and Abuse Act (CFAA), with surprisingly serious penalties. Case law in the US on whether violating TOS is actually illegal is unclear, with cases such as Facebook v. Power Ventures and HiQ v. LinkedIn suggesting it may be legal in many circumstances. However, both of these cases led to costly and drawn-out litigation. Tragically, the suicide of internet pioneer Aaron Swartz was directly linked to stress from his prosecution under the CFAA, in part for violating TOS of a journal site. Researchers Christian Sandvig and Karrie Karahalios are currently mounting a legal challenge to the CFAA on behalf of the research community. The bottom line is, if you choose to not abide by TOS, you are taking a risk.
Note however that legal and ethical may not be the same thing. And in fact you can make a compelling argument that violating TOS is not only ethically possible, but might even be ethically required in some circumstances. The issue is far reaching: if we abide by overly restrictive TOS, are we giving up the ability to reflect on systems that are increasingly shaping society? If we only work with permission of Large Corporation, can we ever be critical of Large Corporation? If the products and services of Large Corporation are having a profound impact, what is the obligation of the research community to understand that impact? [SIGCHI Research Ethics Committee]
Any entity lining its pockets by selling only the aesthetic of political action to those whose lives depend on the results of political action is malevolent and amoral, and it’s a practice in use to varying degrees by an ever-expanding number of companies.
Take, for example, any brand that’s adopted the anesthetized aesthetic of body positivity in the past decade, decoupling it from its radical politics in order to sell you soap, or straight-size underwear, or anything made by the desperately poor foreign factory workers who manufacture most of the inventory of big-box stores. The marketing arms of these corporations have sensed changing cultural values among their customers and taken that opportunity to adopt the aesthetic of good politics, and they’ve asked shoppers to choose them based on how well they pantomime the real activities that might make a difference. They skip the part where any difference is actually made.
The politics of any company are bad. They’re not all bad in the same way — they exist on a continuum that extends roughly from “not purposely trying to make anyone’s life worse” to “defense contractor” — but the structure of capitalism means that they all need to pay their employees less than their labor is actually worth and find ways to separate consumers from more money than it actually costs to produce and distribute their product. [Mull]
After I arrived, I was ushered into what I thought was the green room. But instead of being wired with a microphone or taken to a stage, I just sat there at a plain round table as my audience was brought to me: five super-wealthy guys — yes, all men — from the upper echelon of the hedge fund world. After a bit of small talk, I realized they had no interest in the information I had prepared about the future of technology. They had come with questions of their own.
They started out innocuously enough. Ethereum or bitcoin? Is quantum computing a real thing? Slowly but surely, however, they edged into their real topics of concern.
Which region will be less impacted by the coming climate crisis: New Zealand or Alaska? Is Google really building Ray Kurzweil a home for his brain, and will his consciousness live through the transition, or will it die and be reborn as a whole new one? Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?” [Rushkoff]
The real blockage lies elsewhere. I return again to my thesis that distant reading assents to the doxa of reading in literary study even as it contradicts the orthodoxy of close reading. It takes textual properties (for Moretti, preeminently stylistic ones) as the major objects of interpretation, and it considers that it has done its work when it says what a textual pattern means. This choice constructs a universe of texts which is only weakly connected to the institutional matrix that actually produces and circulates them. Where book historians speak of the communications circuit, distant reading has a short. It looks for patterns in texts instead of analyzing social systems in which texts (in their various material forms) play a role. Seeking to explain that systemic role is only “reading” in the most extended sense. The kind of analysis I have in mind deprives texts of much of their autonomy, simply by relegating the internal analysis of texts, on any scale, to the status of one piece of a different kind of puzzle. But whether literary scholars can organize themselves to solve that puzzle, at the cost of some of their long-standing claims to exceptional status, is another question. [Goldstone]
Literary criticism in the academy has reached a crisis point, and what we mean by “reading” stands at the center of the storm. For while we academic critics have moved from the hegemony of New Criticism to deconstruction to new historicism to cultural materialism to queer theory to postcolonial criticisms, “close reading” — the attentive inspection of the verbal texture of poems, novels, and plays — continues to be the methodological basis of what we do in our most important venue: the college classroom, especially the Intro to Lit classroom. There are good reasons for this. From the rapid postwar expansion of the 1950s forward, close reading became and remained central to the introductory class, where teachers found that students lacking specialized knowledge of the ins and outs of English history or the finer points of Aristotelian logic could still get excited by talking about the form of a Donne lyric or image-patterns in Portrait of the Artist as a Young Man. The so-called “New Criticism” was thus a perfect tool for the enlarging universities of the post-war era; it allowed public-school trained students at the University of Illinois or Iowa to have as much to say about texts as their preppie counterparts at Yale or Harvard. And speaking as a teacher, I can testify to its utility. Moments when students glimpse why Milton uses enjambment in the first lines of Paradise Lost (“Of man’s first disobedience and the Fruit/Of that forbidden tree…”: Milton’s ambivalence about the cost and the necessity of the Fall is contained in the gap between the last word of the first line and the phrase that begins the second) or how Trollope makes the plot lines of The Way We Live Now rhyme with each other to suggest what it’s like to live in a marketplace world, are among the most rewarding of my career.
But one by one, the props under the regime of close reading have been knocked aside. New critical imperatives — the study of gendered difference, imperialisms, of class, of race construed over time — didn’t just challenge our attitudes towards texts, they changed our very ways of reading them. Close reading became, at best, “reading against the grain” (a formulation adopting Walter Benjamin’s famous injunction to “brush history against the grain”); at worst, close reading was seen to crystallize an ahistorical, undialectical, power-reaffirming, aestheticizing academic practice. [Freedman]
Technology will route around the diminishment or disappearance of Reader. Even if this means something other than feeds are being used.
It’s a tough call. Google’s leaders may be right to weaken or abandon Reader. I feel more people should acknowledge this.
However, saying “no” to projects doesn’t make you Steve Jobs if you say no to inspiring things. It’s the discernment that’s meaningful, not the refusal. Anyone can point their thumb to the ground.
The shareable social object of subscribe-able items makes Reader’s network unique and the answer to why change is painful for many of its users is because no obvious alternative network exists with exactly that object. The social object of Google+ is…nearly anything and its diffuse model is harder to evaluate or appreciate. The value of a social network seems to map proportionally to the perceived value of its main object. (Examples: sharing best-of-web links on Metafilter or sharing hi-res photos on Flickr or sharing video art on Vimeo or sharing statuses on Twitter/Facebook or sharing questions on Quora.) If you want a community with stronger ties, provide more definition to your social object. [Wetherell]
Every time I read about the death of Google Reader I get sad and angry.
Six months into my first digital humanities job, it was clear that the bulk of my time had been spent being a good colleague, but a poor institutional resource. Supporting the individual is part of the broader agenda, but one must distinguish between activity which supports the institution as a whole, and that which satisfies a colleague who’d like to try something digital, but isn’t committed in the long term to whatever that digital thing might be.
Most institutions want digital humanities, but only some know why, and even fewer have really thought about what it looks like in the context of their scholarship and teaching. There are places where ‘capacity’ relates to a concrete set of activities and processes to which the DH person will be contributing. But there are also institutions where ‘building capacity’ means, ‘we don’t really know what we want, we just know that we don’t have it’.
I have a game I like to play with search committees when I interview for a job. I ask them for their understanding of digital humanities. You never get consensus, which is positive in some respects, but you often get such vague and tentative answers that you wonder if they have even heard of it before. [O’Sullivan]
I focus on Moretti and Jockers because they dominate academic and general discussions of data-rich literary history. Not only have they written some of the few book-length contributions to the field (Jockers 2013a; Moretti 2005, 2013a), but their work is reported on in major public forums such as the Paris Review, the Financial Times, the New York Times, and the New Yorker (Lohr 2013; Piepenbring 2015; Rothman 2014; Schultz 2011; Sunyer 2013), and distant reading is a term now routinely applied to data-rich literary research in general (e.g., Clement 2013: par. 1). Moretti’s work, especially, has also received its share of criticism, following several lines. The most resistant scholars maintain that data are inimical to literature, which only close reading can explore in its nuance and complexity (e.g., Trumpener 2009). James English (2010: xii, xiii) attributes this response to the discipline’s foundationally “negative relationship” with “counting” and to Moretti’s role in widening the divide. More receptive critics advocate the use of data but echo Moretti’s (2013a: 48) account of distant reading as “a little pact with the devil,” acknowledging that quantification inevitably abstracts and simplifies complex phenomena (e.g., Love 2010: 374). Regarding Moretti’s tendency to “overestimat[e] the scientific objectivity of his analyses” (Ross 2014), some perceive his claim to authoritative knowledge as an unfortunate side effect of his polemic against close reading (Burke 2011: 41), while others ascribe a more foundational essentialism to his work. John Frow (2008: 142) argues that Moretti conceives “literary history … as an objective account of patterns and trends” by “ignor[ing] the crucial point that these morphological categories he takes as his base units are not pre-given but are constituted in an interpretive encounter by means of an interpretive decision.”
In my view, these criticisms describe the symptoms—not the essence—of a problem, which in fact inheres in Moretti’s and Jockers’s common neglect of the activities and insights of textual scholarship: the bibliographical and editorial approaches that explore and explicate the literary-historical record. In dismissing the critical and interpretive nature of these activities, and the historical insights they embody, Moretti and Jockers model and analyze literary history in reductive and ahistorical ways. Their neglect of textual scholarship is not an effect of importing data into literary history but is inherited from the New Criticism: contrary to the prevailing view, close reading and distant reading are not opposites. [Bode 78–9]
All new technologies develop within a background of tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately what it is to be human. We encounter deep questions of design when we recognize that in designing tools we are designing ways of being. By confronting these questions directly, we can develop a new background for understanding computer technology - one that can lead to important advances in the design and use of computer systems. [Winograd & Flores xi]
You spend your time tweeting, friending, liking, poking, and in the few minutes left, cultivating friends in the flesh. Yet sadly, despite all your efforts, you probably have fewer friends than most of your friends have. But don’t despair — the same is true for almost all of us. Our friends are typically more popular than we are.
Don’t believe it? Consider these results from a colossal recent study of Facebook by Johan Ugander, Brian Karrer, Lars Backstrom and Cameron Marlow. (Disclosure: Ugander is a student at Cornell, and I’m on his doctoral committee.) They examined all of Facebook’s active users, which at the time included 721 million people — about 10 percent of the world’s population — with 69 billion friendships among them. First, the researchers looked at how users stacked up against their circle of friends. They found that a user’s friend count was less than the average friend count of his or her friends, 93 percent of the time. Next, they measured averages across Facebook as a whole, and found that users had an average of 190 friends, while their friends averaged 635 friends of their own.
Studies of offline social networks show the same trend. It has nothing to do with personalities; it follows from basic arithmetic. For any network where some people have more friends than others, it’s a theorem that the average number of friends of friends is always greater than the average number of friends of individuals. [Strogatz]
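The arithmetic behind the friendship paradox can be checked on any small network. The four-person network below is invented for illustration; the point is the theorem, which holds for any network in which friend counts vary:

```python
from statistics import mean

# Toy undirected friendship network as adjacency lists (made-up people).
friends = {
    "ann": ["bob", "cat", "dan"],
    "bob": ["ann"],
    "cat": ["ann", "dan"],
    "dan": ["ann", "cat"],
}

# Average friend count per person: (3 + 1 + 2 + 2) / 4 = 2.0
avg_friends = mean(len(f) for f in friends.values())

# Average friend count of people's friends, taken over every
# (person, friend) pair: popular people like ann are counted once
# per friendship, which pulls this average up.
avg_friends_of_friends = mean(
    len(friends[f]) for person in friends for f in friends[person]
)

print(avg_friends, avg_friends_of_friends)  # → 2.0 2.25
```

The second average weights each person by their own friend count, which is why it can never fall below the first — exactly the “basic arithmetic” Strogatz describes.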
All this to say: The straw is officially part of the culture wars, and you might be thinking, “Gah, these contentious times we live in!” But the straw has always been dragged along by the currents of history, soaking up the era, shaping not its direction, but its texture.
The invention of American industrialism, the creation of urban life, changing gender relations, public-health reform, suburbia and its hamburger-loving teens, better living through plastics, and the financialization of the economy: The straw was there for all these things—rolled out of extrusion machines, dispensed, pushed through lids, bent, dropped into the abyss.
You can learn a lot about this country, and the dilemmas of contemporary capitalism, by taking a straw-eyed view. [Madrigal]
In truth, this is nonsense on stilts. One of the great scandals of modern intellectual life is the way generations of statistics students have been indoctrinated into the farrago of significance testing. Take coins again. In reality you won’t meet a heads-biased coin in a month of Sundays. But if you keep tossing coins five times, and apply the method of significance tests “at the 5 per cent level”, you’ll reject the hypothesis of fairness in favour of heads-biasedness whenever you see five heads, which will be about once every thirty-second coin, simply because fairness implies that five heads is less likely than 5 per cent.
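Papineau’s coin arithmetic is easy to simulate: toss a fair coin five times, declare “significance” on five heads, repeat many times, and the false-positive rate settles near 1/32 ≈ 3.1 per cent. The simulation below is my own sketch of his point, not anything from the essay:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def five_heads_trial(rng):
    """Toss a fair coin five times; report 'significant' iff all heads."""
    return all(rng.random() < 0.5 for _ in range(5))

trials = 100_000
false_positives = sum(five_heads_trial(random) for _ in range(trials))
rate = false_positives / trials

# Fair coins still get flagged as biased roughly 1 time in 32,
# even though every single coin is fair.
print(rate)
```

Every “discovery” here is spurious by construction, which is Papineau’s point about journals filling up with lucky five-heads runs.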
This isn’t just an abstract danger. An inevitable result of statistical orthodoxy has been to fill the science journals with bogus results. In reality genuine predictors of heart disease, or of wage levels, or anything else, are very thin on the ground, just like biased coins. But scientists are indefatigable assessors of unlikely possibilities. So they have been rewarded with a steady drip of “significant” findings, as every so often a lucky researcher gets five heads in a row, and ends up publishing an article reporting some non-existent discovery.
Science is currently said to be suffering a “replicability crisis”. Over the last few years a worrying number of widely accepted findings in psychology, medicine and other disciplines have failed to be confirmed by repetitions of the original experiments. Well-known psychological results that have proved hard to reproduce include the claim that new-born babies imitate their mothers’ facial expressions and that will power is a limited resource that becomes depleted through use. In medicine, the drug companies Bayer and Amgen, frustrated by the slow progress of drug development, discovered that more than three-quarters of the basic science studies they were relying on didn’t stand up when repeated. When the journal Nature polled 1,500 scientists in 2016, 70 per cent said they had failed to reproduce another scientist’s results.
This crisis of reproducibility has occasioned much wringing of hands. The finger has been pointed at badly designed experiments, not to mention occasional mutterings about rigged data. But the only real surprise is that the problem has taken so long to emerge. The statistical establishment has been reluctant to concede the point, but failures of replication are nothing but the pigeons of significance testing coming home to roost. [Papineau]
There were plenty of impious people gambling like mad. Marcus Aurelius was so obsessed with throwing dice to pass the time that he was regularly accompanied by his personal croupier. Less reputable gentlemen are also well documented. Someone with only the most modest knowledge of probability mathematics could have won himself the whole of Gaul in a week. The fact that some people were pious and others superstitious, far from preventing the opportunists of an opulent empire discovering some elementary arithmetic of dice, is a positive incentive. [Hacking]
The proliferation of auditing culture in post Fordism indicates that the demise of the big Other has been exaggerated. Auditing can perhaps best be conceived of as a fusion of PR and bureaucracy, because the bureaucratic data is usually intended to fulfill a promotional role: in the case of education, for example, exam results or research ratings augment (or diminish) the prestige of particular institutions. The frustration for the teacher is that it seems as if their work is increasingly aimed at impressing the big Other which is collating and consuming this ‘data’. ‘Data’ has been put in inverted commas here, because much of the so-called information has little meaning or application outside the parameters of the audit: as Eeva Berglund puts it, ‘the information that audit creates does have consequences even though it is so shorn of local detail, so abstract, as to be misleading or meaningless - except, that is, by the aesthetic criteria of audit itself’.
New bureaucracy takes the form not of a specific, delimited function performed by particular workers but invades all areas of work, with the result that - as Kafka prophesied - workers become their own auditors, forced to assess their own performance. [Fisher]
The thing is, it’s always been obvious that by spying on internet users, you could improve the efficacy of advertising. That’s not so much because spying gives you fantastic insights into new ways to convince people to buy products as it is a tribute to just how ineffective marketing is. When an ad’s expected rate of success is well below one percent, doubling or tripling its efficacy still leaves you with a sub-one-percent conversion rate.
But it was also obvious from the start that amassing huge dossiers on everyone who used the internet could create real problems for all of society that would dwarf the minute gains these dossiers would realize for advertisers.
It’s as though Mark Zuckerberg woke up one morning and realized that the oily rags he’d been accumulating in his garage could be refined for an extremely low-grade, low-value crude oil. No one would pay very much for this oil, but there were a lot of oily rags, and provided no one asked him to pay for the inevitable horrific fires that would result from filling the world’s garages with oily rags, he could turn a tidy profit.
A decade later, everything is on fire and we’re trying to tell Zuck and his friends that they’re going to need to pay for the damage and install the kinds of fire-suppression gear that anyone storing oily rags should have invested in from the beginning, and the commercial surveillance industry is absolutely unwilling to contemplate anything of the sort. [Doctorow]
Algorithm-based challenges typically come from a place where the interviewer, in all their self-aggrandizing smugness, comes up with something they think demonstrates cleverness. A reliable bet is to try solving it with recursion from the start; it’s goddamn catnip for interviewers. If that doesn’t work, try doing it all in one pass rather than in an O(n) operation, because the extra 1ms you save in this use case will surely demonstrate your worth to the organization.
When you come at it from this perspective, you’re immediately telling your prospective coworker that “I have a secret that only I know right now, and I want you to arrive at this correct answer.” It becomes stressful because there is a correct answer.
Every single product I’ve built in my professional career has not had a correct answer. It’s more akin to carving a statue out of marble: you have a vague understanding of what you want to see, but you have to continually chip away at it and refine it until you end up with one possible result. You arrive at the answer, together, with your teammates. You don’t sit on a preconceived answer and direct your coworker to slug through it alone. [Holman]
If hope is an expression of what we value and need as individuals and as a culture, then Milton’s view of hope in Paradise Lost and Paradise Regain’d reveals in profound ways the character of early modern culture. Samson Agonistes, however, moves instead toward the modern. Milton’s conception of hope in Samson Agonistes parallels those views which emphasize hope as concerned with motion, with moving from one psychological place to another, with changing the way things are, with not remaining inert or powerless in the face of intense suffering. However, while modern studies affirm the absolutely crucial role that hope plays in the human psyche and in human actions, in none of his works does Milton put the accent exclusively on hope’s psychological dimensions. Even in Samson, he considers hope both more holistically and from a cultural orientation different from our own. Milton does not perceive hope as entirely detachable from spirituality, and even when he aligns himself with the more radical religious sects such as the Quakers, he does not detach hope from the fundamental human condition of dwelling in real, material places. [Fenton 195]
Before he was screaming at crowds, Adolf Hitler was charming Ernst Hanfstaengl, a hilariously named German aristocrat whose wealth and social cachet were instrumental in Hitler’s rise to power, by being a sympathetic buffoon. The first time Hanfstaengl had Hitler over for dinner, it was clear that he had no idea how to use utensils like a fancy aristocrat. He had bad manners, said awkward things, and at one point even poured sugar into fine wine because he didn’t like the taste. His goofiness made Hanfstaengl love him: “He could have peppered it, for each naive act increased my belief in his homespun sincerity.”
That’s right, witnesses tended to describe Hitler like he was Michael Scott from The Office (who adorably put Splenda in his Scotch). And look at the arc of viewers’ collective attitude toward that character. It doesn’t matter that he’s an objectively terrible boss and clearly makes everyone’s lives worse; at some point in the series, you start to like the guy, and then root for him. Which is why the writers of the show had his character wind up happy and married despite being cluelessly destructive at every step. By the time he flew off into the sunset, very few of us paused to say, “Isn’t this the guy who made Pam sob in the first episode?” [Evans]
One thing that people who wield great power often fail to viscerally understand is what it feels like to have power wielded against you. This imbalance is the source of many of the most monstrous decisions that get made by powerful people and institutions. The people who start the wars do not have bombs dropped on their houses. The people who pass the laws that incarcerate others never have to face the full force of the prison system themselves. The people who design the economic system that inflicts poverty on millions are themselves rich. This sort of insulation from the real world consequences of political and economic decisions makes it very easy for powerful people to approve of things happening to the rest of us that they would never, ever tolerate themselves. No health insurance CEO would watch his child die due to their inability to afford quality health care. No chickenhawk Congressman will be commanding a tank battle in Iran. No opportunistic race-baiting politician will be shunned because of their skin color. Zealots condemn gay people—except for their own gay children. The weed-smoking of young immigrants should get them deported—but our own weed-smoking was a youthful indiscretion. Environmentalist celebrities fly on carbon-spouting private jets. Banks make ostentatious charity donations while raking in billions from investments in defense contractors and gun manufacturers and oil companies. This is human nature. It is very, very easy to do things that hurt others as long as those same things benefit, rather than hurt, you. Self-justification is a specialty of mankind. [Nolan]
Technology is innate to the human condition, though. People have made and adopted and used technology since the time of early hominids, under every economic system. Wheels and pulleys, cars and rockets, pacemakers and vaccines, have all been manufactured under a variety of economic and social arrangements, and there is no credibility to the position that tech is neoliberalism’s handmaiden, no matter whether it’s advanced by the left or the right.
A critique of technology that focuses on its market conditions, rather than its code, yields up some interesting alternate narratives. It has become fashionable, for example, to say that advertising was the original sin of online publication. Once the norm emerged that creative work would be free and paid for through attention – that is, by showing ads – the wheels were set in motion, leading to clickbait, political polarization, and invasive, surveillant networks: “If you’re not paying for the product, you’re the product.”
But if we understand the contours of the advertising marketplace as being driven by market conditions, not “attention economics,” a different story emerges. Market conditions have driven incredible consolidation in every sector of the economy, meaning that fewer and fewer advertisers call the shots, and meaning that more and more of the money flows through fewer and fewer payment processors. Compound that with lax anti-trust enforcement, and you have companies that are poised to put pressure on publishers and control who sees which information. [Doctorow]
That lasted for a couple of years. But the self-centered, work-obsessed barrier I built around myself began to crack sometime last year. It didn’t happen suddenly, and I lied to myself by ignoring it for months, but something was changing. I completely poured myself into my job to the point where I was enjoying neither the work nor the reward anymore. I began to feel burned out and often not good enough for the website I had so passionately built over the course of eight years. A constant feeling of unease and dissatisfaction percolated through other aspects of my life as well. I pretended to be relaxed and have fun in social situations and important life events; in reality, there was a persistent sense of anxiety always there, sitting in the back of my mind where the fear of cancer also was, telling me that I wasn’t good enough or hadn’t done enough. That it was only a matter of days until someone figured out that I sucked and everything I had built was easily replaceable – a trivial, forgettable commodity. [Viticci]
We must all of us, especially our elected officials, stop thinking of a trillion seconds as merely a long time ago and a trillion dollars as just a lot of money. The next time our senators and representatives consider the Federal deficit and the cost of the arms race, they should allow themselves briefly to think of seconds instead of dollars. They might then picture, if they would, prehistoric man hunched in a smoke-filled cave, gnawing at the bones of a woolly mammoth. [Morrell]
I wish that Dorothy Morrell, of Seattle, had written more than just letters to the NYT editor.
One of the many things that Wilhelm was convinced he was brilliant at, despite all evidence to the contrary, was “personal diplomacy,” fixing foreign policy through one-on-one meetings with other European monarchs and statesmen. In fact, Wilhelm could do neither the personal nor the diplomacy, and these meetings rarely went well. The Kaiser viewed other people in instrumental terms, was a compulsive liar, and seemed to have a limited understanding of cause and effect. In 1890, he let lapse a long-standing defensive agreement with Russia—the German Empire’s vast and sometimes threatening eastern neighbor. He judged, wrongly, that Russia was so desperate for German good will that he could keep it dangling. Instead, Russia immediately made an alliance with Germany’s western neighbor and enemy, France. Wilhelm decided he would charm and manipulate Tsar Nicholas II (a “ninny” and a “whimperer,” according to Wilhelm, fit only “to grow turnips”) into abandoning the alliance. In 1897, Nicholas told Wilhelm to get lost; the German-Russian alliance withered. [Carter]
More broadly, the history of psychology shows very limited success in finding any useful categorization scheme for students. By far, the most successful type of categorization is one that is already painfully obvious to educators: Differences in prior knowledge and ability ought to be respected (Cronbach & Snow, 1977).
Psychology has had much greater success describing commonalities among students than it has had in describing categorization schemes for differences. Researchers have compiled a fairly impressive list of properties of the mind that students share. And although going from lab to classroom is not straightforward, there is evidence that students benefit when educators deploy classroom methods that capitalize on those commonalities. For example, we know that spacing learning over time and quizzing (among other methods) improve memory (Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). We know that teachers can modify the classroom environment to decrease problem behaviors (Osher, Bear, Sprague, & Doyle, 2010). In mathematics, there is a particular developmental progression by which teachers can best teach numbers and operations (National Mathematics Advisory Panel, 2008). In reading, phonics instruction benefits most children (Reynolds, Wheldall, & Madelaine, 2011).
Thus, psychologists have made some impressive contributions to education. When it comes to learning styles, however, the most we deserve is credit for effort and for persistence. Learning styles theories have not panned out, and it is our responsibility to ensure that students know that. [Willingham & Hughes & Dobolyi 269]
As a helpful aside, in discussions relating to the preparation of this article, Hadley Wickham noted: “Academics tend to like warm feet, but they don’t appreciate who makes their socks.” I will be forever grateful for this metaphor, since my main point in the preceding paragraphs is to create an environment where the hard work such as tool and/or dataset development for reproducible research (the “sock development”) is understood to be an important research contribution in addition to the publication of the primary manuscript (the “warm feet”). [Waller 11]
After a year of this training, I went back to San Diego to defend my title at the Nationals. Predictably enough, in the finals I faced off with the same guy as the year before. The opening phase of the match was similar to our previous meeting. I began by controlling him, neutralizing his aggression, building up a lead. Then he got emotional and started throwing head-butts. My reaction was very different this time. Instead of getting mad, I just rolled with his attacks and threw him out of the ring. His tactics didn’t touch me emotionally, and when unclouded, I was simply at a much higher level than him. It was amazing how easy it all felt when I didn’t take the bait.
There were two components to this work. One related to my approach to learning, the other to performance. On the learning side, I had to get comfortable dealing with guys playing outside the rules and targeting my neck, eyes, groin, etc. This involved some technical growth, and in order to make those steps I had to recognize the relationship between anger, ego, and fear. I had to develop the habit of taking on my technical weaknesses whenever someone pushed my limits instead of falling back into a self-protective indignant pose. Once that adjustment was made, I was free to learn. If someone got into my head, they were doing me a favor, exposing a weakness. They were giving me a valuable opportunity to expand my threshold for turbulence. Dirty players were my best teachers. [Waitzkin 205–6]
The correlations among the components in an intelligence test, and between tests themselves, are all positive, because that’s how we design tests. But that means that the correlation matrix only has positive entries. This has implications for factor analysis, because the usual way of finding the factors is to take the eigenvectors of the correlation matrix (after an adjustment which reduces the diagonal entries but leaves them positive). The larger the corresponding eigenvalue, the more of the variance is described by that eigenvector. The Frobenius-Perron Theorem, however, tells us that any matrix with all-positive entries has a unique largest eigenvalue, and that the corresponding eigenvector only has positive entries. (If some of the correlations are allowed to be zero, there can be multiple eigenvectors which all have the largest eigenvalue, and all their components are non-negative.) Translated into factor analysis: there has to be a factor which describes more of the variance than any other, and every variable is positively loaded on it. So making up tests so that they’re positively correlated and discovering they have a dominant factor is just like putting together a list of big square numbers and discovering that none of them is prime — it’s a necessary side-effect of the construction, nothing more. [Shalizi]
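Shalizi’s point is easy to check numerically. A quick sketch with a made-up all-positive correlation matrix (the matrix and the four “subtests” are invented; numpy’s `eigh` stands in for the factor-extraction step):

```python
import numpy as np

# Made-up all-positive correlation matrix for four hypothetical subtests.
R = np.array([
    [1.0, 0.5, 0.4, 0.3],
    [0.5, 1.0, 0.6, 0.2],
    [0.4, 0.6, 1.0, 0.5],
    [0.3, 0.2, 0.5, 1.0],
])

# Symmetric matrix, so use eigh; eigenvalues come back in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
top = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue
top = top if top.sum() > 0 else -top  # eigenvectors are only defined up to sign

# Perron-Frobenius: the largest eigenvalue is unique, and its eigenvector
# has entries all of one sign -- every "test" loads positively on the
# dominant "factor", by construction.
assert eigvals[-1] > eigvals[-2]
assert (top > 0).all()
```

Let some of the correlations go negative and the guarantee disappears; with all-positive entries, any such matrix produces a “dominant factor.”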
Many things, the flat-Earthers understand, are being hidden. God, of course. Also, beyond the Antarctic ice wall lie thousands of miles of land—“an America 2.0,” one speaker said—that powerful people are keeping to themselves. Onstage, Mark Sargent had suggested that the world was run by “a small, scary group of smoking men sitting around a table.” NASA, meanwhile, is hoarding billions of dollars in taxpayer money for its operations, which include guarding the ice wall with armed employees and paying frizzy-haired actors to pretend to float in zero gravity. The astronauts are Freemasons, sworn to secrecy. The other workers, the engineers and functionaries, have either been duped or don’t want to speak out, for fear of losing their jobs. [Burdick]
The children of Camberwell Green are among the fattest in England. Half of ten- and 11-year-olds there are overweight or obese (meaning that a boy of average height would weigh over 40kg, as opposed to a healthy 35kg or so). By contrast, in Dulwich Village, a few miles south, where household incomes are twice as high, only a fifth of children are in that category, one of the lowest levels in the country.
Poor children have been fatter than rich ones since around the 1980s. But over the past decade the rich have started to slim down, as the poor have got bigger. This is true in poor Camberwell and posh Dulwich, where rates of childhood obesity have respectively risen by ten percentage points and fallen by two in the past six years. And it is true of the country at large (see chart).
Since the 1990s the public has become more aware of the risks of obesity. The rich and well educated are best placed to act on this knowledge, says Ronny Cheung, a paediatrician in London. They have more time to cook healthy meals at home and are more likely than poor folk to live near green spaces or join sports clubs. [The Economist]
I missed the anonymity of the city, longed to visit the post office without having to exchange my whole life story for a stamp, to walk down a country lane without drivers slowing down to get a good look at you.
The warmth and charm of my native people seduces tourists in their millions every year, but for an introvert like me, it’s sometimes exhausting. I feel there is a time and a place for socialising, but in rural Ireland that time is always — and, honestly, it wore me out. [Cullen]
Under the ice, the divers find themselves in a separate reality, where space and time acquire a strange new dimension. Those few who have experienced the world under the frozen sky often speak of it as going down into the cathedral. [Herzog]
After breakfasting on the collected food, we were ushered into an audience with Achaan Chaa. A severe-looking man with a kindly twinkle in his eyes, he sat patiently waiting for us to articulate the question that had brought us to him from such a distance. Finally, we made an attempt: “What are you really talking about? What do you mean by ‘eradicating craving’?” Achaan Chaa looked down and smiled faintly. He picked up the glass of drinking water to his left. Holding it up to us, he spoke in the chirpy Lao dialect that was his native tongue: “You see this goblet? For me, this glass is already broken. I enjoy it; I drink out of it. It holds my water admirably, sometimes even reflecting the sun in beautiful patterns. If I should tap it, it has a lovely ring to it. But when I put this glass on a shelf and the wind knocks it over or my elbow brushes it off the table and it falls to the ground and shatters, I say, ‘Of course.’ But when I understand that this glass is already broken, every moment with it is precious.” Achaan Chaa was not just talking about the glass, of course, nor was he speaking merely of the phenomenal world, the forest monastery, the body, or the inevitability of death. He was also speaking to each of us about the self. This self that you take to be so real, he was saying, is already broken. [Epstein]
We have documented a corpus of data on human ratings of words, syntax, and larger constructs in programming languages, with the goal of helping instructors and language designers understand the following broad point: novices find many of the choices made by language designers to be unintuitive. While we imagine many computer science instructors already have a “gut feeling” that this is the case, our results help to formalize this notion, in addition to cataloging a wide swath of possible choices. To sum up the results more specifically, our two studies provide evidence for the following claims: (1) while humans vary substantially in their opinions, not all programming languages, nor their corresponding constructs, are rated as equally intuitive, (2) some very common programming language syntax (e.g., for loops), is considered by non-programmers (and sometimes by programmers) to be unintuitive, and (3) as programmers gain experience, they rate familiar syntax higher—by an average of about half a point per year on an 11-point scale. [Stefik & Siebert 22]
Ultimately, this is what the formatter does. It applies some fairly sophisticated ranking rules to find the best set of line breaks from an exponential solution space. Note that “best” is a property of the entire statement being formatted. A line break changes the indentation of the remainder of the statement, which in turn affects which other line breaks are needed. Sorry, Knuth. No dynamic programming this time.
I think the formatter does a good job, but how it does it is a mystery to users. People get spooked when robots surprise them, so I thought I would trace the inner workings of its metal mind. And maybe try to justify to myself why it took me a year to write a program whose behavior in many ways is indistinguishable from
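Not the formatter’s actual algorithm, but a toy sketch of why “best” is a property of the whole statement: brute-force every subset of candidate break points and score the complete layout, so each break interacts with every other. The page width, penalty weights, and chunking are all my invention:

```python
from itertools import chain, combinations

MAX_WIDTH = 12   # invented page width
INDENT = "    "  # continuation indent after a break

def render(chunks, breaks):
    """Join chunks into lines, breaking before each chunk index in `breaks`."""
    lines, cur = [], ""
    for i, chunk in enumerate(chunks):
        if i in breaks:
            lines.append(cur)
            cur = INDENT
        cur += chunk
    lines.append(cur)
    return lines

def cost(lines):
    # Score the *entire* layout: overflow is heavily penalized,
    # each extra line costs a little.
    overflow = sum(max(0, len(line) - MAX_WIDTH) for line in lines)
    return 1000 * overflow + (len(lines) - 1)

def best_layout(chunks):
    # Exponential search: every subset of break points is a candidate.
    points = range(1, len(chunks))
    subsets = chain.from_iterable(
        combinations(points, k) for k in range(len(chunks)))
    return min((render(chunks, set(s)) for s in subsets), key=cost)

print(best_layout(["foo(", "bar, ", "baz, ", "quux)"]))
```

Breaking after `foo(` alone still overflows, so the cheapest layout turns out to break in two later places at once — exactly the “one break changes which other breaks are needed” interaction the passage describes. A real formatter prunes this exponential space rather than enumerating it.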
I am interested in using craft as a metaphor for programming. A colorful and concrete metaphor is helpful for talking about programming (to managers, customers, or bridging to another tradition). Talking about programming helps to coordinate with our teammates and even our past and future selves. What was I doing on Friday (or 5 seconds ago)? Oh, I was partway through underdrawing such-and-so. We may be able to port useful techniques from craft to programming. [Hines]
Because poker players are in a constant struggle to keep in-the-moment fluctuations in perspective, their jargon has a variety of terms for the concept that “bad outcomes can have an impact on your emotions that compromise your decision-making going forward so that you make emotionally charged, irrational decisions that are likely to result in more bad outcomes that will then negatively impact your decision-making going forward and so on.” The most common is tilt. Tilt is the poker player’s worst enemy, and the word instantly communicates to other poker players that you were emotionally unhinged in your decision-making because of the way things turned out.* If you blow some recent event out of proportion and react in a drastic way, you’re on tilt.
The concept of tilt comes from traditional pinball machines. To keep players from damaging the machines by lifting them to alter the course of the ball, the manufacturers placed sensors inside that disabled the machine if it was violently jostled. The flippers stopped working, the lights went off, and the word “tilt” flashed at numerous places on the layout. The origin of tilt in pinball is apt because what’s going on in our brain in moments of tilt is like a shaken pinball machine. When the emotional center of the brain starts pinging, the limbic system (specifically the amygdala) shuts down the prefrontal cortex. We light up . . . then we shut down our cognitive control center. [Duke]
Up until the age of 30 I was the least athletic person imaginable. But for the last five years, I’ve been regularly loading up on cannabis chocolates and sprinting through the city, feeling weightless as I leap up steep hills and tackle distances I never thought possible. The combination of music, stress, and weed blend into a euphoric stew that has somehow turned me into a runner. And a troubled addict. [Hesse]
This is bizarre.
Many consequentialists claim to adopt firm rules like “my word is inviolable” and then justify those rules on consequentialist grounds. But I think on the one hand that approach is too demanding—the people I know who take promises most seriously basically never make them—and on the other it does not go far enough—someone bound by the literal content of their word is only a marginally more useful ally than someone with no scruples at all. [Christiano]
All new ventures have their detractors, and James had his full share with the Cavendish project. One diminishing but still powerful school of critics held that, while experiments were necessary in research, they brought no benefit to teaching. A typical member was Isaac Todhunter, the celebrated mathematical tutor, who argued that the only evidence a student needed of a scientific truth was the word of his teacher, who was ‘probably a clergyman of mature knowledge, recognised ability, and blameless character’. One afternoon James bumped into Todhunter on King’s Parade and invited him to pop into the Cavendish to see a demonstration of conical refraction. Horrified, Todhunter replied: ‘No, I have been teaching it all my life and don’t want my ideas upset by seeing it now!’. [Mahon 151]
First, consider a normal case of belief embedded in memory. Inga hears from a friend that there is an exhibition at the Museum of Modern Art, and decides to go see it. She thinks for a moment and recalls that the museum is on 53rd Street, so she walks to 53rd Street and goes into the museum. It seems clear that Inga believes that the museum is on 53rd Street, and that she believed this even before she consulted her memory. It was not previously an occurrent belief, but then neither are most of our beliefs. The belief was sitting somewhere in memory, waiting to be accessed.
Now consider Otto. Otto suffers from Alzheimer’s disease, and like many Alzheimer’s patients, he relies on information in the environment to help structure his life. Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory. Today, Otto hears about the exhibition at the Museum of Modern Art, and decides to go see it. He consults the notebook, which says that the museum is on 53rd Street, so he walks to 53rd Street and goes into the museum.
Clearly, Otto walked to 53rd Street because he wanted to go to the museum and he believed the museum was on 53rd Street. And just as Inga had her belief even before she consulted her memory, it seems reasonable to say that Otto believed the museum was on 53rd Street even before consulting his notebook. For in relevant respects the cases are entirely analogous: the notebook plays for Otto the same role that memory plays for Inga. The information in the notebook functions just like the information constituting an ordinary non-occurrent belief; it just happens that this information lies beyond the skin. [Clark & Chalmers]
Parfit attributed his obsession with a handful of places — he once said that there were only 10 things in the world he wanted to photograph — to a condition called aphantasia, the inability to form mental images. He was unable to visualise things familiar to him, even his wife’s face when they weren’t together.
While it is tempting to conclude that photography was for Parfit a kind of compensation for that affliction, this doesn’t really explain why he should have made multiple images of the same places. Nor does it begin to account for the way he treated the images once he had shot them. It wasn’t simply that he wanted a record of his favourite places to make up for some neurophysiological deficit — he was also trying in his photographs to capture their timeless essence. [Derbyshire]
Turns out Derek Parfit was really into taking photos of St Petersburg and Venice. And nothing else.
If the Church-Turing thesis is true and if our cognitive processes are algorithmically calculable then in principle a large enough computer can give an exact simulation of the mind. But what of the second ‘if’? In his 1936 paper Turing mooted the possibility that our cognitive processes are algorithmically calculable. Subsequently this has become an article of deepest faith among AI researchers and has even acquired the status of something ‘too obvious to mention’. Take Newell’s expression of the blanket defence of symbolic AI that we’ve been considering: ‘A universal [symbol] system always contains the potential for being any other system, if so instructed. Thus, a universal system can become a generally intelligent system.’ The hypothesis that a system exhibiting general intelligence will be algorithmically calculable is thought so obvious that it is not even mentioned. Justin Leiber is no less dogmatic: ‘In the in principle sense, you are [a] Turing Machine, but you do it all through the marvel of cunningly coordinated parallel processes, shortcuts piled within shortcuts.’ In point of fact there is no reason — apart from a kind of wishful thinking — to believe that all our cognitive processes are algorithmically calculable. This is terra incognita, and for all we know many aspects of brain function may be noncomputable. [Copeland 223]
Given the frequency with which we discover that processes we previously thought were noncomputable are, in fact, computable, I think it may be a stretch to call this position ‘a kind of wishful thinking’. I would be surprised if there were a tonne of currently-unknown-and-fundamentally-noncomputable brain functions hiding in the bushes.
Say a king wishes to support a standing army of fifty thousand men. Under ancient or medieval conditions, feeding such a force was an enormous problem – unless they were on the march, one would need to employ almost as many men and animals just to locate, acquire, and transport the necessary provisions. On the other hand, if one simply hands out coins to the soldiers and then demands that every family in the kingdom was obliged to pay one of those coins back to you, one would, in one blow, turn one’s entire national economy into a vast machine for the provisioning of soldiers, since now every family, in order to get their hands on the coins, must find some way to contribute to the general effort to provide soldiers with things they want. Markets are brought into existence as a side effect. [Graeber 49–50]
But before we can plunge into the experience of being wrong, we must pause to make an important if somewhat perverse point: there is no experience of being wrong.
There is an experience of realizing that we are wrong, of course. In fact, there is a stunning diversity of such experiences. As we’ll see in the pages to come, recognizing our mistakes can be shocking, confusing, funny, embarrassing, traumatic, pleasurable, illuminating, and life-altering, sometimes for ill and sometimes for good. But by definition, there can’t be any particular feeling associated with simply being wrong. Indeed, the whole reason it’s possible to be wrong is that, while it is happening, you are oblivious to it. When you are simply going about your business in a state you will later decide was delusional, you have no idea of it whatsoever. You are like the coyote in the Road Runner cartoons, after he has gone off the cliff but before he has looked down. Literally in his case and figuratively in yours, you are already in trouble when you feel like you’re still on solid ground. So I should revise myself: it does feel like something to be wrong. It feels like being right. [Schulz 18]
When Ayn Rand’s long-running affair with Nathaniel Branden was revealed to the Objectivist membership, a substantial fraction of the Objectivist membership broke off and followed Branden into espousing an “open system” of Objectivism not bound so tightly to Ayn Rand. Who stayed with Ayn Rand even after the scandal broke? The ones who really, really believed in her—and perhaps some of the undecideds, who, after the voices of moderation left, heard arguments from only one side. This may account for how the Ayn Rand Institute is (reportedly) more fanatic after the breakup, than the original core group of Objectivists under Branden and Rand. [Yudkowsky]
The future of data analysis is bright. The next Kinsey, I strongly suspect, will be a data scientist. The next Foucault will be a data scientist. The next Freud will be a data scientist. The next Marx will be a data scientist. The next Salk might very well be a data scientist. [Stephens-Davidowitz]
Further evidence that ‘data science’ might just be science.
Occasionally, I have been what I’ve been accused of being before, cantankerous. As far as I can see one isn’t meant to show anger. But as I write, the National Union of Mineworkers has been heroically pursuing its just struggle for six months; and while I worked on the book it seemed that the age of the first Elizabeth in all its brutalities was all too like the age of the second; and I think I have been insufficiently cantankerous. There’s no undoing history, but I hate to see it repeated. May the miners, therefore, win — however long it takes. [Shepherd ix–x]
- I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
- Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.
- I will never sacrifice reality for elegance without explaining why I have done so.
- Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.
- I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension. [Derman & Wilmott]
As if to make secure its newly-won respectability, professorial Marxism has, in the manner of all exclusive bodies, carried out its discourse through the medium of an arcane language not readily accessible to the uninstructed. Certainly no-one could possibly accuse the Marxist professoriate of spreading the kind of ideas likely to cause a stampede to the barricades or the picket lines. Indeed, the uncomplicated theory that has traditionally inspired that sort of extra-mural activity is now rather loftily dismissed as ‘vulgar’ Marxism – literally, the Marxism of the ‘common people’. This is not necessarily to suggest that the new breed of Marxists are less dedicated than the old to the revolutionary transformation of society; their presence at the gates of the Winter Palace is perfectly conceivable, provided that satisfactory arrangements could be made for sabbatical leave. [Parkin x]
For example, suppose that Jim and Anne will both live until they are 75, but Jim requires 11 hours of sleep each night, while Anne requires only 8 hours of sleep each night. We can work out that Anne has a waking life expectancy of 50 years, while Jim has a waking life expectancy of just over 40 years: almost a ten-year difference. Even once we correct for the value of dreams, that’s comparable with the waking life expectancy costs associated with lifelong smoking and morbid obesity.
This raises a question regarding health policy: shouldn’t we spend the same resources trying to cure Jim of his greater sleep requirements that we would spend trying to cure someone of an illness that will cause them to die eight years prematurely, all else being equal? I believe we should. [Askell]
Finally, somebody else gets it.
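The arithmetic behind Askell's figures, as a quick sanity check (the 75-year lifespan and nightly sleep needs come from the passage; the function name is mine):

```python
# Waking life expectancy = lifespan x waking hours per day / 24.
LIFESPAN_YEARS = 75

def waking_life_expectancy(sleep_hours_per_night: float) -> float:
    return LIFESPAN_YEARS * (24 - sleep_hours_per_night) / 24

anne = waking_life_expectancy(8)    # 50.0 waking years
jim = waking_life_expectancy(11)    # 40.625 waking years
print(anne - jim)                   # 9.375: "almost a ten-year difference"
```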
But, Shepherd further inquires, what precisely is that “real”? Transgressive pleasure? Audience complicity? Or is it more like the combination of outrage and impotence elicited by a sharp weapon smuggled into the ring by a present-day professional wrestler—a despicable villain with a stage history of remorseless violence and brutality ever ready to exceed his previous record of crime? Like rock and roll or current popular reportage, Marlowe’s dramaturgy has to be excessively, terribly, delectably dangerous—like that well-known but seldom rerun 1968 NBC news footage of the South Vietnamese national police chief summarily blowing out the brains of a handcuffed Vietcong guerrilla in broad daylight on a street in Saigon. No histrionics. That really happened, and because it was real it skips an aesthetic beat to elicit a nonplussed profundity of visceral apprehension. Violent theatre, by contrast, must be more comprehensible: as fake as professional wrestling but as real as the feelings of torment, distress, revulsion (and even ridiculousness) that professional wrestling can elicit and news footage can graphically deliver. [Bowers 22]
A crisp salute from a young marine in dress uniform at the main gate’s guard shack marked the transition from the “street” to the “base” - from the civilian realm to the military. The base is a place of close-cropped haircuts and close-cropped lawns. Here nature and the human form are controlled, arranged, disciplined, ready to make a good impression. In boot camp the inductee’s credo is: “If it moves, salute it. If it doesn’t move, pick it up. If you can’t pick it up, paint it white.” The same mindset imposes an orderliness and a predictability on both the physical space and the social world of the military base. [Hutchins 6–7]
Notwithstanding the celebrated tale of his “Midnight Ride,” Paul Revere’s role in the complex of events leading up to the American Revolution remains rather obscure. The few who have delved into this gap in the historical narrative suggest that Revere’s real importance is not to be found in that one spectacular exploit (Countryman 1985; Fischer 1994; Forbes 1942; Triber 1998). What then was his importance, if any? In other words, if Revere was more than a messenger who just happened upon the assignment to ride to Lexington on that fateful night of April 18-19, 1775, and if he indeed had “an uncanny genius for being at the center of events” (Fischer 1994: xv), what exactly was the role he played? Joseph Warren, known mostly as the man who sent Revere on that ride, presents a similar quandary (Cary 1961; Truax 1968). What was his role? Also, what was his relationship with Revere in the context of the incipient movement?
Using the membership rosters of key Whig groups and supplementary secondary data, I address these questions by examining the underlying relational structure that created opportunities for Revere and Warren in the mobilization process. The analysis shows that Paul Revere’s genius was in his being a bridge par excellence. The role Joseph Warren played was of the same kind, welding the movement as a whole. Both men were bridges that spanned the various social chasms and connected disparate organizational elements, helping to forge an emerging movement that gave rise to the American Revolution. The effectiveness of the brokerage they provided in linking the microlevel interactions to the macrolevel mobilization was due mainly to the fact that the network they were embedded in was highly multiplex, and the positions they occupied in it were singularly instrumental. Moreover, they complemented each other as structural doubles. This is the other ride of Paul Revere and Joseph Warren—far less known, yet, I argue, much more crucial. [Han 143]
This manages to be a very good article about social network analysis, brokerage roles, the challenges of rigour in the social sciences, and the myth of Paul Revere.
Linguistics differs from the other scientific disciplines mainly in the unsystematic way of the construction of its edifice. This building looks like a construction knocked together out of incoherent parts, of cabins and annexes through which it is impossible to pass. Different linguistic schools present their conceptions based mainly on definitions and classifications of language constructs. This is analogical to the situation in other sciences. However, linguistics mostly refuses to build up scientific theories with reference to the alleged specific position of natural languages among all other imaginable phenomena. [Hrebíček 82]
In symbolist learning, there is a one-to-one correspondence between symbols and the concepts they represent. In contrast, connectionist representations are distributed: each concept is represented by many neurons, and each neuron participates in representing many different concepts. Neurons that excite one another form what Hebb called a cell assembly. Concepts and memories are represented in the brain by cell assemblies. Each of these can include neurons from different brain regions and overlap with other assemblies. The cell assembly for “leg” includes the one for “foot,” which includes assemblies for the image of a foot and the sound of the word foot. If you ask a symbolist system where the concept “New York” is represented, it can point to the precise location in memory where it’s stored. In a connectionist system, the answer is “it’s stored a little bit everywhere.”
Another difference between symbolist and connectionist learning is that the former is sequential, while the latter is parallel. In inverse deduction, we figure out one step at a time what new rules are needed to arrive at the desired conclusion from the premises. In connectionist models, all neurons learn simultaneously according to Hebb’s rule. This mirrors the different properties of computers and brains. Computers do everything one small step at a time, like adding two numbers or flipping a switch, and as a result they need a lot of steps to accomplish anything useful; but those steps can be very fast, because transistors can switch on and off billions of times per second. In contrast, brains can perform a large number of computations in parallel, with billions of neurons working at the same time; but each of those computations is slow, because neurons can fire at best a thousand times per second. [Domingos 94]
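A toy version of Hebb's rule, just to fix the idea for myself. The neuron count, the two patterns, and the learning rate are all invented; the point is only that the shared neurons of two overlapping "cell assemblies" end up most strongly wired together:

```python
# Hebb's rule on a toy network: co-active neurons strengthen their link.
n, lr = 8, 0.1
weights = [[0.0] * n for _ in range(n)]

# Two overlapping cell assemblies: "foot" and "leg" share neurons 2 and 6,
# so each concept is spread over many neurons and each neuron serves many
# concepts (a distributed representation).
foot = [1, 1, 1, 0, 0, 0, 1, 0]
leg  = [0, 0, 1, 1, 1, 0, 1, 0]

for pattern in [foot, leg] * 50:
    # In a real connectionist system every weight updates simultaneously;
    # the nested loop below just enumerates that one parallel step.
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += lr * pattern[i] * pattern[j]

# The neurons shared by both assemblies are the most strongly connected,
# and no single location "stores" either concept.
print(weights[2][6])
```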
Robert Morse has persistently maintained, “The main purpose of the rankings is to provide prospective law school students with much-needed—and clearly desired—comparative information to help them make decisions on where to apply and enroll.” The production of rankings is useful, and that in itself justifies them.
Given that rankings are so useful in providing solutions to some practical problems, it is important to remember that useful is not the same as good. One of the primary goals of this book is to question the often unreflective acceptance of and commitment to quantification as the dominant form of valuation. As we have shown, there is often a shadowy underbelly to the numbers used to measure people and social objects. Measures produce unintended consequences, misguided incentives, and misplaced attention. More generally, they transform what they purport to only reflect, altering the social world in unexpected ways: they are constitutive as well as reactive. This means that public measures are not neutral enterprises, nor can they be understood strictly in terms of their utility or technical achievement, as their advocates often claim. They carry with them assumptions about value, merit, and goals, as well as about what is good, normal, and right. [Espeland & Sauder 198]
Traditionally, in the analysis of natural hazards, scientists construct an empirical distribution function from evidence of past events, such as geological evidence of past extraterrestrial impacts, or supernova/γ-burst explosions, or supervolcanic eruptions. In the Bayesian approach, we can dub this distribution function the a posteriori distribution function.
In forecasting future events, we are interested in the “real” distribution of chances of events (or their causes), which is “given by Nature” and is not necessarily revealed in their a posteriori distribution observed in or inferred from the historical record. The underlying objective characteristic of a system is its a priori distribution function. In predicting future events, the a priori distribution is crucial, since it is not skewed by selection effects. [Ćirković, Sandberg & Bostrom 1499–1500]
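A rough simulation of the selection effect the authors describe, with every rate invented: histories containing catastrophes are less likely to leave observers around to record them, so the a posteriori rate estimated from surviving records sits below the a priori rate "given by Nature":

```python
import random

random.seed(1)
TRUE_RATE = 0.02     # a priori chance of a catastrophe per century (invented)
CENTURIES = 50       # length of each simulated historical record
P_SURVIVE = 0.3      # chance observers survive any one catastrophe (invented)

observed_rates = []
for _ in range(100_000):
    events = sum(random.random() < TRUE_RATE for _ in range(CENTURIES))
    # Selection effect: records containing catastrophes are less likely to
    # have surviving observers, so they drop out of the observed sample.
    if random.random() < P_SURVIVE ** events:
        observed_rates.append(events / CENTURIES)

a_posteriori = sum(observed_rates) / len(observed_rates)
print(a_posteriori, "<", TRUE_RATE)  # the record understates the true hazard
```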
Usually, these guests went back to their own lives, leaving Neff to hers. But February became March, and Delvey kept showing up. She’d bring food down, or a glass of extra-dry white wine, and settle near Neff’s desk to chat. Some of the other hotel employees found Anna deeply annoying. She could be oddly ill-mannered for a rich person: Please and thank you were not in her vocabulary, and she would sometimes say things that were “Not racist,” Neff said, “but classist.” (“What are you bitches, broke?” Anna asked her and another hotel employee.) But to Neff, it didn’t come across as mean-spirited. More like she was some kind of old-fashioned princess who’d been plucked from an ancient European castle and deposited in the modern world, although according to Anna she came from modern-day Germany and her father ran a business producing solar panels. And despite her unassuming figure — “a sort of Sound of Music Fräulein,” one acquaintance later put it — Anna quickly established herself as one of 11 Howard’s most generous guests. “People would fight to take her packages upstairs,” said Neff. “Fight, because you knew you were getting $100.” Over time, Delvey got more and more comfortable in the hotel, swanning around in sheer Alexander Wang leggings or, occasionally, a hotel robe. “She ran that place,” said Neff. “You know how Rihanna walks out with wineglasses? That was Anna. And they let her. Bye, Ms. Delvey …” [Pressler]
Anna Delvey is the 21st Century’s Stanley Clifford Weyman and I will read every shred of reporting about them both.
After all, GitHub has accelerated trends that are now cliches. It helped make the idea that software is eating the world a reality, and meant that every company is becoming a tech company. It’s already clear that, even in the near future, development teams are going to be the focal points of more and more organizations, and that GitHub will be, too. [Upright]
If materialism is true, the reason you have a stream of conscious experience – the reason there’s something it’s like to be you while there’s (presumably!) nothing it’s like to be a toy robot or a bowl of chicken soup, the reason you possess what Anglophone philosophers call phenomenology – is that the material stuff out of which you are made is organized the right way. You might find materialism attractive if you reject the thought that people are animated by immaterial spirits or possess immaterial properties.
Here’s another thought you might reject: The United States is literally, like you, phenomenally conscious. That is, the United States, conceived of as a spatially distributed, concrete entity with people as some or all of its parts, literally possesses a stream of conscious experience over and above the experiences of its members considered individually. In this essay, I will argue that accepting the materialist idea that you probably like (if you’re a typical early 21st-century philosopher) should lead you to accept some group consciousness ideas you probably don’t like (if you’re a typical early 21st-century philosopher) – unless you choose, instead, to accept some other ideas you probably ought to like even less. [Schwitzgebel 1697]
If all philosophy papers were this strange and well written, I would definitely want to be a philosopher.
“This nameless mode of naming the unnameable is rather good,” Shelley remarked about the creature’s theatrical billing. She herself had no name of her own. Like the creature pieced together from cadavers collected by Victor Frankenstein, her name was an assemblage of parts: the name of her mother, the feminist Mary Wollstonecraft, stitched to that of her father, the philosopher William Godwin, grafted onto that of her husband, the poet Percy Bysshe Shelley, as if Mary Wollstonecraft Godwin Shelley were the sum of her relations, bone of their bone and flesh of their flesh, if not the milk of her mother’s milk, since her mother had died eleven days after giving birth to her, too sick to give suck—Awoke and found no mother. [Lepore]
By propaganda, then, I mean the deliberate attempt to persuade people to think and behave in a desired way. Although I recognize that much propaganda is accidental or unconscious, here I am discussing the conscious, methodical and planned decisions to employ techniques of persuasion designed to achieve specific goals that are intended to benefit those organizing the process. In this definition, advertising thus becomes economic propaganda since the marketing of a product is designed to advance the manufacturer’s profits. It may well be that those at the receiving end of the process also benefit, but in that case the word ‘publicity’ would be a more appropriate label. Public relations is a related communicative process designed to enhance the relationship between an organization and the public and, as such, is a branch of propaganda, albeit a nicer way of labelling it. [Taylor 6]
Virtuosic close readings of individual texts that illuminate their unique qualities dominate these studies, while less attention is given to classifying poems according to generalizable stylistic models or patterns. This is surely a matter of certain critical dispositions holding sway in the field, but we can also partly attribute it to the constraints of close reading itself as an approach. The prospect of consistently sorting poems according to a shared stylistic pattern seems feasible at the level of dozens of poems, but what about at the level of hundreds of poems? If one privileges the notion that every act of reading is inherently subjective, and that the style of a text depends on a host of factors that hold only for that particular instance, then close reading as a form of pattern recognition becomes a highly unwieldy method. There is more to be gained by explicating the singular aspects of a text, or by describing how it deviates from an assumed normative model, rather than by trying to define the model itself. [Long & So 241]
The history of literary study is primarily remembered as a narrative of conflicting ideas. Critical movements clash, then establish a succession: New Criticism, structuralism, deconstruction, New Historicism. Although scholars have complicated this simple history-of-ideas story in recent decades with an emphasis on social and institutional struggle, generational conflict remains a central framework: instead of struggles among ideas there are struggles among genteel amateurs, professionalized scholars, and so on. In emphasizing conflict, these approaches still leave aside important dimensions of the history of scholarship: assumptions that change quietly, without explicit debate; entrenched patterns that survive the visible conflicts; long-term transformations of the terrain caused by social change. To see more of these kinds of history, another approach is needed—a way of discovering and interpreting patterns on a different historical scale. [Goldstone & Underwood 359]
While the natural sciences have been tangled up in laboratories filled with tools and instruments for several centuries (Hacking 1983; Shapin and Schaffer 1985), even the most quantitative methods in linguistics or sociology have only begun to be mechanised with the advent of information-processing technology and more consistently so with the computer. If the humanities are indeed becoming ‘laboratory sciences’ (Knorr-Cetina 1992), we are still in the early stages. The use of computers as instruments, that is as heuristic tools that are part of a methodological chain, may not be the most visible part of the ‘digital revolution’, but its institutional and epistemological repercussions for scholarship are already shaping up to be quite significant. [Rieder & Röhle 69]
Perhaps humanists have been slow to appreciate supervised models because the process sounds circular: since you have to start from labeled examples, it may at first appear that a supervised model could only reproduce categories you already understood. But in practice we can often locate examples of social phenomena without clearly understanding the categories they instantiate. I know how to find a sample of bestsellers, for instance, or a sample of experimental novels reviewed in highbrow magazines, although I can’t necessarily define “the avant-garde” or “mass culture.” Indeed, the difficulty of defining those abstractions may be no accident: Andrew Abbott has argued that the social entities we talk about are in general less substantial than the boundaries that separate them (1995). Whether or not we affirm Abbott’s point as a general principle, it’s certainly true that literary historians often start with observed differences between particular groups of texts. Supervised predictive models allow us to build on that foundation—mapping the literary field by sampling works from different social locations, and modeling the boundaries between them. [Underwood 2–3]
How did this problematic method take hold among the sports science research community? In a perfect world, science would proceed as a dispassionate enterprise, marching toward truth and more concerned with what is right than with who is offering the theories. But scientists are human, and their passions, egos, loyalties and biases inevitably shape the way they do their work. The history of MBI demonstrates how forceful personalities with alluring ideas can muscle their way onto the stage. [Aschwanden & Nguyen]
What is to be done about the problem of the ego in programming? A typical text on management would say that the manager should exhort all his programmers to redouble their efforts to find their errors. Perhaps he would go around asking them to show him their errors each day. This method, however, would fail by going precisely in the opposite direction to what our knowledge of psychology would dictate, for the average person is going to view such an investigation as a personal trial. Besides, not all programmers have managers—or managers who would know an error even if they saw one outlined in red.
No, the solution to this problem lies not in a direct attack—for attack can only lead to defense, and defense is what we are trying to eliminate. Instead, the problem of the ego must be overcome by a restructuring of the social environment and, through this means, a restructuring of the value system of the programmers in that environment. [Weinberg 56]
My waitress is a young woman named Gabby, who has straight black bangs and a long, low ponytail. Gabby tells me she has been off for a few days, and has only just heard the gospel of the TGI Friday’s Endless Apps deal herself. I ask her if I can take advantage of the deal right then at that moment, and she reads aloud from a list of eligible items penciled on her order pad in round, swooping letters. I tell her I would like to order the unlimited mozzarella sticks. She tells me that, according to the restrictions of the promotion, I will only be permitted to receive unlimited quantities of one item, for example: barbecue boneless buffalo wings. I tell her I would like to order the unlimited mozzarella sticks. [Weaver]
This is still the best article ever written about Mozzarella Sticks.
Consider trying to verify that racial considerations did not affect an automated decision to grant a housing loan. We could use a standard auditing procedure that declares that the race attribute does not have an undue influence over the results returned by the algorithm. Yet this may be insufficient. In the classic case of redlining, the decision-making process explicitly excluded race, but it used zipcode, which in a segregated environment is strongly linked to race. Here, race had an indirect influence on the outcome via the zipcode, which acts as a proxy.
Note that in this setting, race would not be seen as having a direct influence (because removing it doesn’t remove the signal that it provides). Removing both race and zipcode jointly (as some methods propose to do) reveals their combined influence, but also eliminates other task-specific value that the zipcode might signal independent of race, as well as leaving unanswered the problem of other features that race might exert indirect (and partial) influence through. [Adler et al. 96]
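A toy model of the redlining point, with the population, rates, and zipcodes all invented: the decision rule never reads the race attribute, so a flip-the-attribute audit passes, yet outcomes still split along racial lines through the zipcode proxy.

```python
import random

random.seed(0)

# A segregated toy world: race strongly predicts zipcode.
population = []
for _ in range(10_000):
    race = random.choice(["A", "B"])
    if race == "A":
        zipcode = "90210" if random.random() < 0.9 else "10001"
    else:
        zipcode = "10001" if random.random() < 0.9 else "90210"
    population.append((race, zipcode))

def approve(race, zipcode):
    return zipcode == "90210"   # race is never consulted: no direct influence

# A flip-the-race-attribute audit finds nothing...
assert all(approve("A", z) == approve("B", z) for _, z in population)

def approval_rate(r):
    group = [p for p in population if p[0] == r]
    return sum(approve(*p) for p in group) / len(group)

# ...but approval rates still diverge sharply by race, via the proxy.
print(approval_rate("A"), approval_rate("B"))
```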
Removing identifying information is not sufficient for anonymity. An adversary may have auxiliary information about a subscriber’s movie preferences: the titles of a few of the movies that this subscriber watched, whether she liked them or not, maybe even approximate dates when she watched them. We emphasize that even if it is hard to collect such information for a large number of subscribers, targeted de-anonymization—for example, a boss using the Netflix Prize dataset to find an employee’s entire movie viewing history after a casual conversation—still presents a serious threat to privacy. [Narayanan & Shmatikov 118]
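A minimal sketch of the linkage idea, with entirely invented data: a couple of (movie, approximate date) observations suffice to pick one record out of an "anonymized" table.

```python
# All subscriber IDs, titles, and dates below are invented.
anonymized = {
    "user_017": {("Brazil", "2005-03"), ("Heat", "2005-06"), ("Big", "2005-07")},
    "user_042": {("Brazil", "2005-03"), ("Alien", "2005-04"), ("Big", "2005-07")},
    "user_101": {("Heat", "2005-01"), ("Jaws", "2005-09")},
}

# What a boss might recall from a casual conversation:
auxiliary = {("Brazil", "2005-03"), ("Alien", "2005-04")}

def best_match(aux, table):
    # Score each record by how many auxiliary observations it contains.
    return max(table, key=lambda uid: len(aux & table[uid]))

print(best_match(auxiliary, anonymized))  # → user_042
```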
That brings us to the crucial question: Just how much healthier could fast-food joints and processed-food companies make their best-selling products without turning off customers? I put that question to a team of McDonald’s executives, scientists, and chefs who are involved in shaping the company’s future menus, during a February visit to McDonald’s surprisingly bucolic campus west of Chicago. By way of a partial answer, the team served me up a preview tasting of two major new menu items that had been under development in their test kitchens and high-tech sensory-testing labs for the past year, and which were rolled out to the public in April. The first was the Egg White Delight McMuffin ($2.65), a lower-calorie, less fatty version of the Egg McMuffin, with some of the refined flour in the original recipe replaced by whole-grain flour. The other was one of three new Premium McWraps ($3.99), crammed with grilled chicken and spring mix, and given a light coating of ranch dressing amped up with rice vinegar. Both items tasted pretty good (as do the versions in stores, I’ve since confirmed, though some outlets go too heavy on the dressing). And they were both lower in fat, sugar, and calories than not only many McDonald’s staples, but also much of the food served in wholesome restaurants or touted in wholesome cookbooks. [Freedman]
While university mission statements are stacked full of commitments to integrity, valuing staff, and caring, the USA’s adjunct workforce is one of low pay, stress, limited health insurance, and contract precarity. The situation is mirrored in much of the UK, as growing numbers of PhD graduates compete for limited work and a shrinking pool of permanent posts. Amidst this situation, staff, as well as students, are pointed towards the ever-growing portfolio of well-being and resilience programmes, when one might suggest that the source of the stresses driving the need for such interventions also lies within universities’ power to fix. Adjuncts don’t need well-being courses; they need proper contracts and pay. [Rivers & Webster]
Jurek’s victories in punishing 100-mile races since the late 1990s—plus a starring role in the writer Christopher McDougall’s best seller, Born to Run—have made him a distance-running celebrity. But tackling the Appalachian Trail forced him to dig deeper than he ever had before. Five weeks in, he was down more than a dozen pounds, and his ribs were visible. His eyes bulged, feral and unfocused. His body reeked of apple-cider vinegar as his sweat excreted excess ammonia. And his mind was beginning to crack. Late one night, he was mystified by the lights of a house he spotted on top of a mountain. A running partner had to explain that what he saw was the moon. [Bisceglio]
Michael Jackson might have been dying to be white, but he was not dying alone. There were the rest us out there, born, as he was, in the muck of this country, born in The Bottom. We knew that we were tied to him, that his physical destruction was our physical destruction, because if the black God, who made the zombies dance, who brokered great wars, who transformed stone to light, if he could not be beautiful in his own eyes, then what hope did we have—mortals, children—of ever escaping what they had taught us, of ever escaping what they said about our mouths, about our hair and our skin, what hope did we ever have of escaping the muck? And he was destroyed. It happened right before us. God was destroyed, and we could not stop him, though we did love him, we could not stop him, because who can really stop a black god dying to be white? [Coates]
This canonical account of Shakespearean drama as a fictional space for the eruption of disorder severing social bonds and overthrowing political hierarchies certainly holds true at the level of plot, and Act 5, Scene 2 of Hamlet is one of the most striking examples of this. However, the critical vocabulary of entropy and chaos - incoherence, conflict, waste, violence, destruction, scattering and disproportion - used to describe tragic plot as the unraveling of society and the destruction of human bonds, fails to capture the dramatic technique required in a performance to represent this “scattering” of the social on stage. The network in Figure 3 demonstrates that scenes of a tragic “scattering” disorder and the most disruptive and violent severing of social bonds are precisely the moments where the closest connections between characters are made, and the densest concatenation of network links exists. [Lee & Lee 26]
You can play a male or female character without gaining any special benefits or hindrances. Think about how your character does or does not conform to the broader culture’s expectations of sex, gender, and sexual behavior. For example, a male drow cleric defies the traditional gender divisions of drow society, which could be a reason for your character to leave that society and come to the surface. You don’t need to be confined to binary notions of sex and gender. [D&D 5e PHB 121]
D&D is always comfortingly progressive.
I want to emphasize that distant reading is not a new trend, defined by digital technology or by contemporary obsession with the word data. The questions posed by distant readers were originally framed by scholars (like Raymond Williams and Janice Radway) who worked on the boundary between literary history and social science. Of course, computer science has also been a crucial influence. But the central practice that distinguishes distant reading from other forms of literary criticism is not at bottom a technology. It is, I will argue, the practice of framing historical inquiry as an experiment, using hypotheses and samples (of texts or other social evidence) that are defined before the writer settles on a conclusion.
Integrating experimental inquiry in the humanities poses rhetorical and social challenges that are quite distinct from the challenges of integrating digital media. It seems desirable — even likely — that distant readers and digital humanists will coexist productively. But that compatibility cannot be taken for granted, as if the two projects were self-evidently versions of the same thing. They are not, and the institutional forms of their coexistence still need to be negotiated. [Underwood 5–6]
It’s all dependent on human context. This is what we’re starting to see with del.icio.us, with Flickr, with systems that are allowing for and aggregating tags. The signal benefit of these systems is that they don’t recreate the structured, hierarchical categorization so often forced onto us by our physical systems. Instead, we’re dealing with a significant break – by letting users tag URLs and then aggregating those tags, we’re going to be able to build alternate organizational systems, systems that, like the Web itself, do a better job of letting individuals create value for one another, often without realizing it. [Shirky]
This was written 13 years ago and it’s still true: ontology is overrated.
In its endless capacity to resemanticize reality, advertising early on transformed the West into an imaginary realm to be vicariously experienced by anyone who bought a particular product. As early as the 1870s and 1880s, posters for tobacco brands such as Westward Ho used the same repertoire of the West as empty space where anyone could be a pioneer for as long as it took to smoke a pipe or cigarette. Such images are early versions of what we now know as ‘Marlboro Country’ and the ‘Marlboro Man.’ This commercial appropriation of the repertoire of the West as the site of human self-reinvention has now become so well known that effective Marlboro advertising need do no more than show gorgeous western scenery with the slogan of ‘Come to Marlboro Country.’ Although there is no cigarette in sight, we get the message. [Kroes 152]
Even though our experiments have shown some interesting and consistent findings about the typical differences between dreams and stories, we need to be careful with our conclusions. The fact that the text classifiers obtained such high scores and the topics were significantly differently distributed over the samples, could indicate that the contrasting data sample was not as representative as we had hoped for. The emerging topic about “anxiety” for example shows that this subreddit had a substantial influence on the results. The importance of conversational expressions to distinguish personal stories in the classification experiment also points to a bias of the source platforms. We suspect that a more careful selection over a much larger set of personal stories, and perhaps an additional check to filter out characteristic internet language is needed to create automatic models that focus on the more subtle differences between reported dreams and personal experiences from real life. [Hendrickx et al. 60]
Rigorous text analysis of self-reports of dreams: roughly as difficult as you might expect it to be.
To live in a restless world requires a certain restlessness in oneself. So long as things continue to change, you must never fully cease exploring.
Still, the algorithmic techniques honed for the standard version of the multi-armed bandit problem are useful even in a restless world. Strategies like the Gittins index and Upper Confidence Bound provide reasonably good approximate solutions and rules of thumb, particularly if payoffs don’t change very much over time. And many of the world’s payoffs are arguably more static today than they’ve ever been. A berry patch might be ripe one week and rotten the next, but as Andy Warhol put it, “A Coke is a Coke.” Having instincts tuned by evolution for a world in constant flux isn’t necessarily helpful in an era of industrial standardization. [Christian & Griffiths 54]
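The Upper Confidence Bound strategy Christian and Griffiths name can be sketched in a few lines. This is the standard UCB1 selection rule, not code from the book; the function name and data layout are my own for illustration:

```python
import math

def ucb1_choice(counts, rewards, t):
    """Pick the arm with the highest Upper Confidence Bound.

    counts[i]  - number of times arm i has been played
    rewards[i] - total reward received from arm i
    t          - total plays so far
    """
    # Play every arm once before the bound is meaningful.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    # Mean payoff plus an exploration bonus that shrinks as an
    # arm is sampled more often -- optimism in the face of
    # uncertainty.
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i]
            + math.sqrt(2 * math.log(t) / counts[i]),
    )
```

The bonus term is what keeps you exploring: an arm you haven't tried much gets the benefit of the doubt, which is exactly the behavior a restless world punishes least.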
The more I reflected on these diverse conceptions, and worked through my fieldnotes, the more ambiguous and elusive the idea of home became. Home is a double-barreled word. It conveys a notion of all that is already given—the sedimented lives of those who have gone before— but it also conveys a notion of what is chosen—the open horizons of a person’s own life. For Warlpiri, home is where one hails from (wardingki), but it also suggests the places one has camped, sojourned, and lived during the course of one’s own lifetime. Francine and I had seen this poignantly on our trip to Jila and Ngurripatu. When we set out from Lajamanu, Jedda could hardly contain her excitement at the prospect of seeing the country of her warringiyi (father’s father) for the first time in perhaps twenty years. But Jedda was just as elated when we returned to Lajamanu. We were back in the place (ngurra) where she actually lived. Lajamanu was where she had married and raised her children. Lajamanu was the place she had made her own. [Jackson 122]
The original sin of the industry touched everyone: the way the banana men viewed the people and the land of the isthmus as no more than a resource, not very different from the rhizomes, soil, sun, or rain. A source of cheap labor, local color. One definition of evil is to fail to recognize the humanity in the other: to see a person as an object or tool, something to be put to use. The spirit of colonialism infected the trade from the start. [Cohen 66]
Our experience is that intelligence analysts are particularly susceptible to the following five traps: (1) failing to consider multiple hypotheses or explanations, (2) ignoring inconsistent evidence, (3) rejecting evidence that does not support the lead hypothesis, (4) lacking sufficient bins or alternative hypotheses for capturing key evidence, and (5) improperly projecting past experience. [Heuer & Pherson 29]
So a lot like humans, then?
The pickpocket uses psychology in his work, but it is rather the psychology of bodily contact, of tactilism, of strategy in maneuvering his victim into position, of misdirection which distracts the victim’s attention so as to take his mind off his money. The pickpocket likes to perform successive acts of thievery in the same locality without attracting attention; he then likes to move on to a new locality before his victims discover their losses and call upon the law. He moves softly and melts into any crowd with great skill. If he is really good, he dresses so unobtrusively that he is difficult to observe and remember; but at the same time his clothes are of good quality, and, once singled out, it is difficult for the average person—and even the law—to imagine that a man of such substantial appearance could be guilty of trying to steal a pocketbook.
It will also be noted that the argot hardly reflects this gentle art, but rather evokes images of violence and aggression far beyond those elements actually involved. [Maurer 59]
It’s fun to play dead. But when some living, loving thing really dies, or leaves you, right in the middle of the game, right where it used to be so much fun to play together; you discover that you don’t feel like playing any more. Not right then. You don’t have to get off the path entirely. But you do have to stop playing, to take it in. You have to let the grief in. You might even have to let the anger in, the depression, the tears, the screams. Because, player that you are, you understand that you have to give your self over, completely, as totally and freely as a child might: to the grief like to the game, to the pain like to the fun. Player that you are, you embrace it and let it embrace you, naked, without protection. Because when you are completely grieving, like when you are completely playing, you are still complete. [De Koven 27]
Marvel’s first hit, Iron Man, was released in 2008, just as the surge in Iraq was coming to a close. This marked the end of the hot wars of the Bush years and the transition to a cooler state of continuous half-war, characterized less by boots on the ground than by eyes in the sky. The Obama era revealed that the war on terror would be unlike past wars, with beginnings and ends. It was a way of life rather than a discrete event, a chronic condition rather than an acute one. The war was routinized and, with the dramatic surge in the use of drones, mechanized.
To those made uneasy by this, the Marvel movies offer consoling fictions. The threats in the films are not diffuse and vague, as in real life. They’re personal, well-defined, and unambiguously serious. It’s a reality fitted to the rhetoric, a world that actually looks the way politicians describe it. There are terrorists, vengeance-seekers, hostile foreign powers, demented scientists, and aliens, all eager to kill innocents in large numbers. “We live in a world of great threats” (as Iron Man puts it), of “infinite dangers” (Doctor Strange). “Safe is in short supply,” Thor affirms. The message here is much the same as in the TV show Homeland, another fruit of the Obama years. [Immerwahr]
And yet when I say “strange loop”, I have something else in mind — a less concrete, more elusive notion. What I mean by “strange loop” is — here goes a first stab, anyway — not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive “upward” shifts turn out to give rise to a closed cycle. That is, despite one’s sense of departing ever further from one’s origin, one winds up, to one’s shock, exactly where one had started out. In short, a strange loop is a paradoxical level-crossing feedback loop. [Hofstadter 101–2]
Another less immediately practical aspect of cards, which has taken the attention of those not initially attracted by the card games, is their existence as a rather strange conceptual system of hierarchical values and artificial order, which is disrupted, subjected to the laws of chance, and returned to order by the individual players in competition with others. The shuffling of the pack, and the slow re-establishment of some desired ordering by each of the players, provide a system of symbols operating according to chance and human choice. Even the most simple of games involve some element of matching up the ordering which has been dissolved by shuffling for play. This is the aspect which not only intrigued and delighted the surrealists, but inspired the imagination of the occultists and those interested in psychology. [MacPherson]
One day another of Saroyan’s friends, the poet Ted Berrigan, got a look at his latest one-word poem, eyeye, on a sheet of typewriter paper. “He said, ‘What the fuck is this?’” Saroyan recalls, “which I thought was a promising response.” [Daly]
It is also notable that the Feynman lectures (3 volumes) cover all of physics in 1800 pages, using only 2 levels of hierarchical headings: chapters and A-level heads in the text. They also use the methodology of sentences which then cumulate sequentially into paragraphs, rather than the grunts of bullet points. Undergraduate Caltech physics is very complicated material, but it didn’t require an elaborate hierarchy to organize. A useful decision rule in thinking and showing is “What would Feynman do?” [Tufte]
If literary texts like buildings and objects are dated, then we can ask whether and how a text deals with its own datedness, the pathos of its potential future estrangement. I have already suggested one solution in speaking of ‘creative anachronism,’ which involves a deliberate dramatization of historical passage, bringing a concrete present into relation with a specific past and playing with the distance between them. To represent change through creative imitation, to dramatize in art a survival of the past into an altered present, would seem to provide a text with a certain resilience in confronting its own survival. Against this solution which insists on historicity can be set a pseudo-classical solution that implicitly denies history, that wants to transcend it by attaining a sublime timelessness. That solution of a false neo-classicism seems in fact the most vulnerable to ultimate estrangement. Saul Morson writes that ‘a text will be vulnerable to parody … to the extent that it ignores or claims to transcend its own originating context; parody is most readily invited by an utterance that claims transhistorical authority.’ A text that somehow acknowledges its historicity self-consciously would seem better fitted to survive its potential estrangement than a text that represses history. [Greene 223–4]
When the individual scientist can take a paradigm for granted, he need no longer, in his major works, attempt to build his field anew, starting from first principles and justifying the use of each concept introduced. That can be left to the writer of textbooks. Given a textbook, however, the creative scientist can begin his research where it leaves off and thus concentrate exclusively upon the subtlest and most esoteric aspects of the natural phenomena that concern his group. And as he does this, his research communiqués will begin to change in ways whose evolution has been too little studied but whose modern end products are obvious to all and oppressive to many. No longer will his researches usually be embodied in books addressed, like Franklin’s Experiments . . . on Electricity or Darwin’s Origin of Species, to anyone who might be interested in the subject matter of the field. Instead they will usually appear as brief articles addressed only to professional colleagues, the men whose knowledge of a shared paradigm can be assumed and who prove to be the only ones able to read the papers addressed to them.
Today in the sciences, books are usually either texts or retrospective reflections upon one aspect or another of the scientific life. The scientist who writes one is more likely to find his professional reputation impaired than enhanced. Only in the earlier, pre-paradigm, stages of the development of the various sciences did the book ordinarily possess the same relation to professional achievement that it still retains in other creative fields. And only in those fields that still retain the book, with or without the article, as a vehicle for research communication are the lines of professionalization still so loosely drawn that the layman may hope to follow progress by reading the practitioners’ original reports. Both in mathematics and astronomy, research reports had ceased already in antiquity to be intelligible to a generally educated audience. In dynamics, research became similarly esoteric in the later Middle Ages, and it recaptured general intelligibility only briefly during the early seventeenth century when a new paradigm replaced the one that had guided medieval research. Electrical research began to require translation for the layman before the end of the eighteenth century, and most other fields of physical science ceased to be generally accessible in the nineteenth. During the same two centuries similar transitions can be isolated in the various parts of the biological sciences. In parts of the social sciences they may well be occurring today. Although it has become customary, and is surely proper, to deplore the widening gulf that separates the professional scientist from his colleagues in other fields, too little attention is paid to the essential relationship between that gulf and the mechanisms intrinsic to scientific advance. [Kuhn 19–21]
In terms of aesthetics and performance, the toaster has been devolving for a generation. According to Eric A. Murrell of the Toaster Collectors Association, the Toastmaster 1B14, a handsome hunk of chrome and steel discontinued in 1960, remains “absolutely the end-all-and-be-all toaster there ever was.” Among its charms was a patented timing system that didn’t tick off seconds but used its internal heating mechanism to gauge the bread and produce a consistent shade of brown. Collectors also dote on the Sunbeam T-20 Radiant Control model, which was introduced in 1949 with the slogan “Automatic beyond belief” — a reference to its ability to automatically lower and cook the bread. “It’s still one of the most elegant inventions in the household,” laments the technology-and-design historian Edward Tenner, of the machine that was discontinued in the mid-’90s. Asked to choose between the T-20 and 1B14, Michael Sheafe, a New York dealer of vintage appliances, said, “It’s like asking which child you love more.” What doomed these classic designs was cost. The original Sunbeam T-20 cost more than $22.50 when it was introduced in 1949, about a third of a week’s wages for the average family. The dark age of the toaster began when consumers started choosing price over functionality, particularly during the 1980s. The market is now glutted with machines that toast unevenly and retail for less than $10. “Mind you,” Sheafe added, “that’s what they’re worth.” [Lasky]
The distinctive characteristic of the Analytical Engine, and that which has rendered it possible to endow mechanism with such extensive faculties as bid fair to make this engine the executive right-hand of abstract algebra, is the introduction into it of the principle which Jacquard devised for regulating, by means of punched cards, the most complicated patterns in the fabrication of brocaded stuffs. It is in this that the distinction between the two engines lies. Nothing of the sort exists in the Difference Engine. We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves. Here, it seems to us, resides much more of originality than the Difference Engine can be fairly entitled to claim. We do not wish to deny to this latter all such claims. We believe that it is the only proposal or attempt ever made to construct a calculating machine founded on the principle of successive orders of differences, and capable of printing off its own results; and that this engine surpasses its predecessors, both in the extent of the calculations which it can perform, in the facility, certainty and accuracy with which it can effect them, and in the absence of all necessity for the intervention of human intelligence during the performance of its calculations. Its nature is, however, limited to the strictly arithmetical, and it is far from being the first or only scheme for constructing arithmetical calculating machines with more or less of success. [Lovelace]
Digital technology used by humanities scholars has focused almost exclusively on methods of sorting, accessing, and disseminating large bodies of materials, and on certain specialized problems in computational stylistics and linguistics. In this respect the work rarely engages those questions about interpretation and self-aware reflection that are the central concerns for most humanities scholars and educators. Digital technology has remained instrumental in serving the technical and precritical occupations of librarians and archivists and editors. But the general field of humanities education and scholarship will not take the use of digital technology seriously until one demonstrates how its tools improve the ways we explore and explain aesthetic works — until, that is, they expand our interpretational procedures. [McGann xi–xii]
A few hundred thousand years ago, in early human (or hominid) prehistory, growth was so slow that it took on the order of one million years for human productive capacity to increase sufficiently to sustain an additional one million individuals living at subsistence level. By 5000 BC, following the Agricultural Revolution, the rate of growth had increased to the point where the same amount of growth took just two centuries. Today, following the Industrial Revolution, the world economy grows on average by that amount every ninety minutes.
Even the present rate of growth will produce impressive results if maintained for a moderately long time. If the world economy continues to grow at the same pace as it has over the past fifty years, then the world will be some 4.8 times richer by 2050 and about 34 times richer by 2100 than it is today.
Yet the prospect of continuing on a steady exponential growth path pales in comparison to what would happen if the world were to experience another step change in the rate of growth comparable in magnitude to those associated with the Agricultural Revolution and the Industrial Revolution. The economist Robin Hanson estimates, based on historical economic and population data, a characteristic world economy doubling time for Pleistocene hunter–gatherer society of 224,000 years; for farming society, 909 years; and for industrial society, 6.3 years. (In Hanson’s model, the present epoch is a mixture of the farming and the industrial growth modes—the world economy as a whole is not yet growing at the 6.3-year doubling rate.) If another such transition to a different growth mode were to occur, and it were of similar magnitude to the previous two, it would result in a new growth regime in which the world economy would double in size about every two weeks. [Bostrom 2]
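Bostrom's doubling-time figures are easy to sanity-check with the standard compound-growth identities. A sketch (the doubling times come from the quoted passage; the helper names are mine):

```python
import math

def annual_factor(doubling_time_years):
    # A quantity that doubles every T years grows by 2**(1/T) per year.
    return 2 ** (1 / doubling_time_years)

def growth_over(years, doubling_time_years):
    # Total growth factor accumulated over a span of years.
    return 2 ** (years / doubling_time_years)

# Hanson's industrial doubling time of 6.3 years implies roughly
# 11.6% annual growth:
industrial = annual_factor(6.3)

# A two-week doubling time implies about 26 doublings a year,
# i.e. an annual growth factor on the order of 2**26:
two_week = growth_over(1, 14 / 365)
```

Running the numbers makes the scale of the hypothetical transition vivid: a two-week doubling regime multiplies the economy by tens of millions every year, against the roughly 11.6% per year that 6.3-year doublings imply.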
However else we may choose to describe the urban community of the early Romans, it remains somewhere on the spectrum between tiny and small. Population size in what is effectively prehistory is notoriously difficult to estimate, but the best guess is that the ‘original’ population of Rome – at whatever moment it was when the aggregate of little settlements started thinking of itself as ‘Rome’ – amounted to at most a few thousand. By the time the last king was thrown out, towards the end of the sixth century BCE, according to standard modern calculations, we are probably dealing with something in the region of 20,000 to 30,000 inhabitants. This is only a best guess based on the size of the place, the amount of territory that Rome probably controlled at that point and what population we could reasonably expect it to support. But it is much more likely than the exaggerated totals that ancient authors give. Livy, for example, quotes the very first Roman historian, Quintus Fabius Pictor, who wrote around 200 BCE and claimed that towards the end of the regal period the number of adult male citizens was 80,000, making a total population of well over 200,000. This is a ludicrous figure for a new community in archaic Italy (it is not far short of the total population of the territories of Athens or Sparta at their height, in the mid fifth century BCE), and there is no archaeological evidence for a city of any such size at this time, although the number does at least have the virtue of matching the aggrandising views of early Rome found in all ancient writers.
It is, needless to say, impossible to know anything much about the institutions of this small, proto-urban settlement. But unless Rome was different from every other archaic township in the ancient Mediterranean (or early townships anywhere), it would have been much less formally structured than the stories suggest. Complex procedures involving an interrex, popular voting and senatorial ratification are entirely implausible in this context; at best, they are a radical rewriting of early history in a much later idiom. [Beard]
The point I have been trying to make is a simple one, long familiar in philosophy of science. Debates over theory-choice cannot be cast in a form that fully resembles logical or mathematical proof. In the latter, premises and rules of inference are stipulated from the start. If there is disagreement about conclusions, the parties to the ensuing debate can retrace their steps one by one, checking each against prior stipulation. At the end of that process one or the other must concede that he has made a mistake, violated a previously accepted rule. After that concession he has no recourse, and his opponent’s proof is then compelling. Only if the two discover instead that they differ about the meaning or application of stipulated rules, that their prior agreement provides no sufficient basis for proof, does the debate continue in the form it inevitably takes during scientific revolutions. That debate is about premises, and its recourse is to persuasion as a prelude to the possibility of proof.
Nothing about that relatively familiar thesis implies either that there are no good reasons for being persuaded or that those reasons are not ultimately decisive for the group. Nor does it even imply that the reasons for choice are different from those usually listed by philosophers of science: accuracy, simplicity, fruitfulness, and the like. What it should suggest, however, is that such reasons function as values and that they can thus be differently applied, individually and collectively, by men who concur in honoring them. If two men disagree, for example, about the relative fruitfulness of their theories, or if they agree about that but disagree about the relative importance of fruitfulness and, say, scope in reaching a choice, neither can be convicted of a mistake. Nor is either being unscientific. There is no neutral algorithm for theory-choice, no systematic decision procedure which, properly applied, must lead each individual in the group to the same decision. In this sense it is the community of specialists rather than its individual members that makes the effective decision. To understand why science develops as it does, one need not unravel the details of biography and personality that lead each individual to a particular choice, though that topic has vast fascination. What one must understand, however, is the manner in which a particular set of shared values interacts with the particular experiences shared by a community of specialists to ensure that most members of the group will ultimately find one set of arguments rather than another decisive. [Kuhn 199–200]
The very small, and the very large; these are the forces that shape literary history. Devices and genres; not texts. Texts are certainly the real objects of literature (in the Strand Magazine you don’t find ‘clues’ or ‘detective fiction’, you find Sherlock Holmes, or Hilda Wade, or The Adventures of a Man of Science); but they are not the right objects of knowledge for literary history. Take the concept of genre: usually, literary criticism approaches it in terms of what Ernst Mayr calls ‘typological thinking’: we choose a ‘representative individual’, and through it define the genre as a whole. Sherlock Holmes, say, and detective fiction; Wilhelm Meister and the Bildungsroman; you analyse Goethe’s novel, and it counts as an analysis of the entire genre, because for typological thinking there is really no gap between the real object and the object of knowledge. But once a genre is visualized as a tree, the continuity between the two inevitably disappears: the genre becomes an abstract ‘diversity spectrum’ (Mayr again), whose internal multiplicity no individual text will ever be able to represent. And so, even ‘A Scandal in Bohemia’ becomes just one leaf among many: delightful, of course – but no longer entitled to stand for the genre as a whole. [Moretti 76]
If a chess expert were to have his most inspired day he would come up with ideas that would blow his mind and the minds of others at his level. But for the master, these inspired creations would be humdrum. They are the everyday because his knowledge of chess allows him to play this way all the time. While the weaker player might say, “I just had a feeling,” the stronger player would shrug and explain the principles behind the inspired move. This is why Grandmasters can play speed chess games that weaker masters wouldn’t understand in hundreds of hours of study: they have internalized such esoteric patterns and principles that breathtakingly precise decisions are made intuitively. The technical afterthoughts of a truly great one can appear to be divine inspiration to the lesser artist. [Waitzkin 231]
The deeper epistemological question of whether Data Science “is a thing” may still be open. But at this point, nobody in their sane mind challenges the fact that the praxis of scientific research is under major upheaval because of the flood of data and computation. This is creating the tensions in reward structure, career paths, funding, publication and the construction of knowledge that have been amply discussed in multiple contexts recently. Our second “big question” will then be, can we actually change our institutions in a way that responds to these new conditions?
This is a really, really hard problem: there are few organizations more proud of their traditions and more resistant to change than universities (churches and armies might be worse, but that’s about it). That pride and protective instinct have in some cases good reasons to exist. But it’s clear that unless we do something about this, and soon, we’ll be in real trouble. I have seen many talented colleagues leave academia in frustration over the last decade because of all these problems, and I can’t think of a single one who wasn’t happier years later. They are better paid, better treated, less stressed, and actually working on interesting problems (not every industry job is a boring, mindless rut, despite what we in the ivory tower would like to delude ourselves with). [Perez]
One of the fascinating things about Edward Snowden’s sudden appearance a little more than two years ago was the inability of many people to comprehend why a young man might give up on the recipe for happiness — high wages, secure job, Hawaiian home — to become the world’s most sought-after fugitive. Their premise seemed to be that since all people are selfish, Snowden’s motive must be a self-serving pursuit of attention or money.
During the first rush of commentary, Jeffrey Toobin, The New Yorker’s legal expert, wrote that Snowden was “a grandiose narcissist who deserves to be in prison.” Another pundit announced, “I think what we have in Edward Snowden is just a narcissistic young man who has decided he is smarter than the rest of us.” Others assumed that he was revealing U.S. government secrets because he had been paid by an enemy country.
Snowden seemed like a man from another century. In his initial communications with journalist Glenn Greenwald he called himself Cincinnatus — after the Roman statesman who acted for the good of his society without seeking self-advancement. This was a clue that Snowden formed his ideals and models far away from the standard-issue formulas for happiness. Other eras and cultures often asked other questions than the ones we ask now: What is the most meaningful thing you can do with your life? What is your contribution to the world or your community? Do you live according to your principles? What will your legacy be? What does your life mean? Maybe our obsession with happiness is a way not to ask those other questions, a way to ignore how spacious our lives can be, how effective our work can be, and how far-reaching our love can be. [Solnit 6]
What people take for relentless minimalism is a side effect of too much exposure to the reactor-cores of fashion. This has resulted in a remorseless paring-down of what she can and will wear. She is, literally, allergic to fashion. She can only tolerate things that could have been worn, to a general lack of comment, during any year between 1945 and 2000. She’s a design-free zone, a one-woman school of anti whose very austerity periodically threatens to spawn its own cult. [Gibson 8]
In 1645, a group of people living in London decided that they would refuse to believe things that weren’t demonstrably true. This is tough; we humans have never been terribly good at subjecting our own beliefs to the kind of withering scrutiny that might disprove them. We do have a related skill, however; we are quite good at subjecting other people’s beliefs to such scrutiny, and this asymmetry gave them an opening. They committed themselves to acquiring knowledge through experimental means and to subjecting one another’s findings to the kind of scrutiny necessary to root out error. This group, which included the “natural philosophers” (scientists) Robert Boyle and Robert Hooke and the architect Christopher Wren, was referred to, in some of Boyle’s letters, as “our philosophical college” or “our Invisible College.”
The Invisible College was invisible relative to Oxford and Cambridge, because the members had no permanent location; they held themselves together as a group via letters and meetings in London and later in Oxford. It was a college because their relations were collegial—they operated with a sense of mutual interest in, and respect for, one another’s work. In their conversations, they would outline their research according to agreed-upon norms of clarity and transparency. Robert Boyle, one of the group’s members and sometimes called the father of modern chemistry, helped establish many of the norms underpinning the scientific method, especially how experiments were to be conducted. (The motto of the group was Nullius in verba—“Believe nothing from mere words.”) When one of their number announced the result of an experiment, the others wanted to know not just what that result was but how the experiment had been conducted, so that the claims could be tested elsewhere. Philosophers of science call this condition falsifiability. Claims that lacked falsifiability were to be regarded with great skepticism. [Shirky]
Creators resort to hacking bestseller lists, counting social media shares, or raising huge amounts of investor capital far before they have a business model. People claim to want to do something that matters, yet they measure themselves against things that don’t, and track their progress not in years but in microseconds. They want to make something timeless, but they focus instead on immediate payoffs and instant gratification. [Holiday]
Once one recognizes the value of having difficult obstacles to overcome, it is a simple matter to see the true benefit that can be gained from competitive sports. In tennis who is it that provides a person with the obstacles he needs in order to experience his highest limits? His opponent, of course! Then is your opponent a friend or an enemy? He is a friend to the extent that he does his best to make things difficult for you. Only by playing the role of your enemy does he become your true friend. Only by competing with you does he in fact cooperate! No one wants to stand around on the court waiting for the big wave. In this use of competition it is the duty of your opponent to create the greatest possible difficulties for you, just as it is yours to try to create obstacles for him. Only by doing this do you give each other the opportunity to find out to what heights each can rise. [Gallwey 122]
A model of cognitive ecology posits that a complex human activity such as theatre must be understood across the entire system, which includes such elements as neural and psychological mechanisms underpinning the task dynamics; the physical environment(s), including the relationships between playing and audience space; cognitive artifacts such as parts, plots, and playbooks; technologies, such as sound or lighting; the social systems underpinning the company, including the mechanisms for enskillment; the economic models by which the company runs; the wider social and political contexts, including censorship, patronage, and commercial considerations; and the relative emphasis placed upon various elements of the enterprise, including writerly or directorial control, clowning, visuality, and improvisation. [Tribble 151]
The corporeality of the body should be seen as a particular case of a general principle of the theatre medium, which is characterized by imprinting its images on materials similar to the models of these images. This principle includes live actors and extends to the materiality of other objects on stage, such as costume, furniture and lighting. In this sense, the materiality of all objects on stage is an integral component of the text. [Rozik 198]
But while you may feel you’re just pretending that you’re an artist, there’s no way to pretend you’re making art. Go ahead, try writing a story while pretending you’re writing a story. Not possible. Your work may not be what curators want to exhibit or publishers want to publish, but those are different issues entirely. You make good work by (among other things) making lots of work that isn’t very good, and gradually weeding out the parts that aren’t good, the parts that aren’t yours. It’s called feedback, and it’s the most direct route to learning about your own vision. It’s also called doing your work. After all, someone has to do your work, and you’re the closest person around. [Bayles & Orland 26]
It’s not that developers want to rewrite everything; it’s that very few developers are smart enough to understand code without rewriting it. And as much as I believe in the virtue of reading code, I’m convinced that the only way to get better at writing code is to write code. Lots of it. Good, bad, and everything in between. Nobody wants developers to reinvent the wheel (again), but reading about how a wheel works is a poor substitute for the experience of driving around on a few wheels of your own creation. [Atwood]
My ideal reader is a second version of me. […] In my mind, there’s another version of me that is still thinking, now, in 2009, “I oughta do that site where I tell everybody how they’re wrong about everything, and do my little gray background with the white text because I think it looks better, and not have any crap on the page, and all these ideas”. But there’s a version of me that still hasn’t done it, and he’s out there. And he thinks about the same things I think about. And he wishes that people would write about these things in great detail. And that’s who I write for. I just imagine him out there. And he just loves it — and maybe that’s the worst thing possible, because that’s the thing that’s keeping him from actually doing his own site, because my site is so spot on. […] I always think, too, “Is he out there thinking, ‘well, why hasn’t Gruber written about -blank- yet?’” [Gruber + Mann]
It is precisely Shakespeare who gives us, in contrast to his portrayal of vacillating characters inwardly divided against themselves, the finest examples of firm and consistent characters who come to ruin simply because of this decisive adherence to themselves and their aims. Without ethical justification, but upheld solely by the formal inevitability of their personality, they allow themselves to be lured to their deed by external circumstances, or they plunge blindly on and persevere by the strength of their will, even if now what they do they accomplish only from the necessity of maintaining themselves against others or because they have reached once and for all the point that they have reached. [Hegel 1229–30]
On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires. This freedom from the constraints to which the liar must submit does not necessarily mean, of course, that his task is easier than the task of the liar. But the mode of creativity upon which it relies is less analytical and less deliberative than that which is mobilized in lying. It is more expansive and independent, with more spacious opportunities for improvisation, color, and imaginative play. This is less a matter of craft than of art. Hence the familiar notion of the ‘bullshit artist.’ [Frankfurt]
The ideological blackmail that has been in place since the original Live Aid concerts in 1985 has insisted that ‘caring individuals’ could end famine directly, without the need for any kind of political solution or systemic reorganization. It is necessary to act straight away, we were told; politics has to be suspended in the name of ethical immediacy. Bono’s Product Red brand wanted to dispense even with the philanthropic intermediary. ‘Philanthropy is like hippy music, holding hands’, Bono proclaimed. ‘Red is more like punk rock, hip hop, this should feel like hard commerce’. The point was not to offer an alternative to capitalism – on the contrary, Product Red’s ‘punk rock’ or ‘hip hop’ character consisted in its ‘realistic’ acceptance that capitalism is the only game in town. No, the aim was only to ensure that some of the proceeds of particular transactions went to good causes. The fantasy being that western consumerism, far from being intrinsically implicated in systemic global inequalities, could itself solve them. All we have to do is buy the right products. [Fisher 15]
Symbolically, what killing the car industry did was take down the whole idea that there was some level of nobility to work. Your granddad might not have got rich spending a quarter-century with the company, but he got by. It was an honest way of living which guaranteed a person their slice of the shared economic prosperity that has so far defined Australian life. Closing Holden brought an end to all that and marked the moment when Australia ditched the ethic of hard work for the hustle, when it became a nation of freelancers loyal to none but themselves. [Kurmelovs]
And even if there are known undesirable consequences, competition may drive people to brain modification. About one in four elite academic researchers admit to using brain-enhancing drugs. In addition to possibly causing health effects for them individually, this behavior creates a dangerous social dynamic. If one person uses amphetamines to produce work faster, now everyone else is competing with the amphetamine guy for good jobs.
This problem hasn’t become profound yet, partially because it’s not yet clear that the available brain-modification drugs produce an enormous advantage for users. The lady down the hall who only has to sleep two hours a night is probably more productive than you, but she’s not ten times more productive. If new technology starts to create more profound intellectual differences, many people would be more or less obligated to become users. [Weinersmith & Weinersmith]
The question then arises as to whether the true memes are public productions - pots, texts, songs and so on - that are both effects and causes of mental representations or, as Dawkins (1982) argues, mental representations that are both causes and effects of public productions. With both options, however, there are similar problems. To begin with, most cultural items, be they mental or public, have a large and variable number of mental or public immediate ascendants.
Leaving aside mechanical and electronic reproduction, cases of new items produced by actually copying one given old item are rare. When you sing ‘Yankee Doodle’, you are not trying to reproduce any one past performance of the song, and the chances are that your mental version of the song was the child of the mental versions of several people. Most potters producing quantities of near-identical pots are not actually copying any one pot in particular, and their skill is typically derived from more than a single teacher (although there may be one teacher more important than the others, which complicates matters further still).
In general, if you are serious in describing bits of culture - individual texts, pots, songs or individual abilities to produce them - as replications of earlier bits, then you should be willing to ask about any given token cultural item: of which previous token is it a direct replica? In most cases, however, you will be forced to conclude that each token is a replica not of one parent token, nor (as in sexual reproduction) of two parent tokens, nor of any fixed number of parent tokens, but of an indefinite number of tokens some of which have played a much greater ‘parental’ role than others. [Sperber 104]
Learn how to talk about your work. All of the people who say “your work should stand for itself” are wrong and dangerous. No work stands for itself. All work stands within a context. Some work is explained to us by culture or curators or historians or teachers, so it feels self-sustaining, but the reality is that it is not. By not talking about your work you are ceding control of the accessibility and meaning of your work to others. This doesn’t mean you should be stubborn or become upset when people don’t take what you intended from your work — the best art acts as a lens and prompts questions instead of promoting answers — but it does mean you should recognize language as the most portable and potent way to set up a context for your ideas. Language isn’t always the best way to express things, but it is often the most powerful way to prepare the world for your expressions. [Gage]
Words, looks, and laws here lose their meaning; as the surface is scraped off the graveyard, the play exposes the shallowness of its culture’s fabric of denial (however richly brocaded) beneath which it hides its dark obvious secrets. Culture is a shroud. Hamlet expresses surprise that this worker “sings in grave-making,” and Horatio replies that “Custom hath made it in him a property of easiness” (5.1.66–68). This exchange builds on the preceding suggestions of universality, reminding us that, from one perspective, all our works and days constitute singing at grave-making, enabled by habit and by cultural customs that insulate us from the overwhelming facts of death that are always around and ahead of us. [Watson 214–5]
Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others. [Parfit 281]
Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. [O’Neil]