Hi, I’m Emma, welcome to my website.

I’m an instructor and PhD student at Virginia Tech, where my research exists at the intersection of media theory, STS and continental philosophy. In my dissertation, I theorize computation as a political force, examining current research on psychedelic drugs as a challenge to the epistemic power of data technologies. A web-based version of my curriculum vitae is at this link, and this one gives you a downloadable PDF.

I am also a writer and musician. Sometimes I post non-academic writing and original music here. The earliest content is from 2014.

Other stuff:

Behind this link I’ve indexed a lot of recent writing, some published.
My academia.edu homepage has more academic content, including teaching materials.
Music: http://stamm.bandcamp.com
Twitter: @turing_tests
I dig it when people reveal themselves here
New for 2018, this site has a blogroll! Yes, like it’s 2003. Check it
And you can contact me via email: stamm@vt.edu.

Thanks for visiting!

— Emma Stamm

writing warnings

[0] Fact: writing is made of words, not ideas.

[1] “Nothing is like an idea so much as an idea” — Bishop Berkeley

[2] Fact: writing, ideas, and content all refer to different entities.

[3] “I myself prefer an Argentine fantasy. God did not create a Book of Nature of the old sorts Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. For each book, there is some humanly accessible bit of Nature [‘the natural’] such that that book, and no other, makes possible the comprehension, prediction and influencing of what’s going on” — Ian Hacking on Borges and Berkeley

[4] The writing I like cuts through the hell of sameness that is the digital space (and capitalism! Capital writ large)

[5] Sometimes it says nothing … that’s from John Cage’s book Silence, which inspired my first website, the silent internet

[6] “All great writers are great deceivers” — Vladimir Nabokov

[7] Magic is stronger when it remains in the occult, and writers have to be careful as they pick from their spellbook. Like the joke about jazz, it’s what you don’t hear that counts.

////note that this post is old, stuck to the top of the site for dark purposes.


machines that morph logic

Very good article by Matteo Pasquinelli — I’ve excerpted a lot of it below, but read the whole thing! — Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference


The term Artificial Intelligence is often cited in the popular press as well as in art and philosophy circles as an alchemic talisman whose functioning is rarely explained. The hegemonic paradigm to date (also crucial to the automation of labor) is not based on GOFAI (Good Old-Fashioned Artificial Intelligence that never succeeded at automating symbolic deduction), but on the neural networks designed by Frank Rosenblatt back in 1958 to automate statistical induction. The text highlights the role of logic gates in the distributed architecture of neural networks, in which a generalized control loop affects each node of computation to perform pattern recognition. In this distributed and adaptive architecture of logic gates, rather than applying logic to information top-down, information turns into logic, that is, a representation of the world becomes a new function in the same world description. This basic formulation is suggested as a more accurate definition of learning to challenge the idealistic definition of (artificial) intelligence. If pattern recognition via statistical induction is the most accurate descriptor of what is popularly termed Artificial Intelligence, the distorting effects of statistical induction on collective perception, intelligence and governance (over-fitting, apophenia, algorithmic bias, “deep dreaming,” etc.) are yet to be fully understood.


Any machine is always a machine of cognition, a product of the human intellect and unruly component of the gears of extended cognition. Thanks to machines, the human intellect crosses new landscapes of logic in a materialistic way—that is, under the influence of historical artifacts rather than Idealism. As, for instance, the thermal engine prompted the science of thermodynamics (rather than the other way around), computing machines can be expected to cast a new light on the philosophy of the mind and logic itself. When Alan Turing came up with the idea of a universal computing machine, he aimed at the simplest machination to calculate all possible functions. The efficiency of the universal computer catalyzed in Turing the alchemic project for the automation of human intelligence. However, it would be a sweet paradox to see the Turing machine that was born as Gedankenexperiment to demonstrate the incompleteness of mathematics aspiring to describe an exhaustive paradigm of intelligence (as the Turing test is often understood).


Within neural networks (as according also to the classical cybernetic framework), information becomes control; that is, a numerical input retrieved from the world turns into a control function of the same world. More philosophically, it means that a representation of the world (information) becomes a new rule in the same world (function), yet under a good degree of statistical approximation. Information becoming logic is a very crude formulation of intelligence, which however aims to stress openness to the world as a continuous process of learning.


Rosenblatt stressed that artificial neural networks are both a simplification and exaggeration of nervous systems and this approximation (that is the recognition of limits in model-based thinking) should be a guideline for any philosophy of the (artefactual) mind. Ultimately Rosenblatt proposed neurodynamics as a discipline against the hype of artificial intelligence.

The perceptron program is not primarily concerned with the invention of devices for “artificial intelligence”, but rather with investigating the physical structures and neurodynamic principles which underlie “natural intelligence.” A perceptron is first and foremost a brain model, not an invention for pattern recognition. As a brain model, its utility is in enabling us to determine the physical conditions for the emergence of various psychological properties. It is by no means a “complete” model, and we are fully aware of the simplifications which have been made from biological systems; but it is, at least, an analyzable model.

In 1969, Marvin Minsky and Seymour Papert’s book Perceptrons attacked Rosenblatt’s neural network model by wrongly claiming that a Perceptron (although a simple single-layer one) could not learn the XOR function and solve classifications in higher dimensions. This recalcitrant book had a devastating impact, also because of Rosenblatt’s premature death in 1971, and blocked funds to neural network research for decades. What is termed the first ‘winter of Artificial Intelligence’ would be better described as the ‘winter of neural networks,’ which lasted until 1986, when the two volumes of Parallel Distributed Processing clarified that (multilayer) Perceptrons can actually learn complex logic functions. Half a century and many more neurons later, pace Minsky, Papert and the fundamentalists of symbolic AI, multilayer Perceptrons are capable of better-than-human image recognition, and they constitute the core of Deep Learning systems such as automatic translation and self-driving cars.
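The XOR point is easy to see concretely. Here is a small sketch of my own (not from Pasquinelli’s article, and the weights are picked by hand rather than learned): XOR is not linearly separable, so no single threshold unit can compute it, but one hidden layer of two units suffices, since XOR(a, b) = AND(OR(a, b), NAND(a, b)).

```python
# A hand-wired two-layer perceptron computing XOR. Illustrative sketch only:
# the weights are chosen by hand, not learned. A single layer of threshold
# units cannot represent XOR because the function is not linearly separable;
# one hidden layer is enough: XOR(a, b) = AND(OR(a, b), NAND(a, b)).

def step(x):
    """Threshold activation: fires iff the weighted sum is positive."""
    return 1 if x > 0 else 0

def xor_mlp(a, b):
    # Hidden layer: one unit computes OR, the other NAND.
    h_or = step(1.0 * a + 1.0 * b - 0.5)
    h_nand = step(-1.0 * a - 1.0 * b + 1.5)
    # Output layer: AND of the two hidden units.
    return step(1.0 * h_or + 1.0 * h_nand - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

A trained multilayer Perceptron arrives at some equivalent decomposition on its own; hand-wiring it just makes the representational point explicit.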


Rosenblatt gave probably one of the first descriptions of machine intelligence as emergent property: “It is significant that the individual elements, or cells, of a nerve network have never been demonstrated to possess any specifically psychological functions, such as ‘memory,’ ‘awareness,’ or ‘intelligence.’ Such properties, therefore, presumably reside in the organization and functioning of the network as a whole, rather than in its elementary parts.” [Gestalt overtones here]


Current techniques of Artificial Intelligence are clearly a sophisticated form of pattern recognition rather than intelligence, if intelligence is understood as the discovery and invention of new rules. To be precise in terms of logic, what neural networks calculate is a form of statistical induction. Of course, such an extraordinary form of automated inference can be a precious ally for human creativity and science (and it is the closest approximation to what is known as Peirce’s weak abduction), but it does not represent per se the automation of intelligence qua invention, precisely as it remains within ‘too human’ categories.

Peirce said that “man is an external sign.” If this intuition encouraged philosophers to stress that the human mind is an artifactual project that extends into technology, however, the human mind’s actual imbrication with the external machines of cognition happened to be rarely empirically illustrated. This has produced simplistic poses in which ideas such as Artificial General Intelligence and Superintelligence are evoked as alchemic talismans of posthumanism with little explanation of the inner workings and postulates of computation. A fascinating aspect of neural computation is actually the way it amplifies the categories of human knowledge rather than supersedes them in autonomous forms. Contrary to the naïve conception of the autonomy of artificial intelligence, in the architecture of neural networks many elements are still deeply affected by human intervention. If one wants to understand how much neural computation extends into the ‘inhuman,’ one should discern how much it is still ‘too human.’


The issue of over-fitting points to a more fundamental issue in the constitution of the training dataset: the boundary of the categories within which the neural network operates. The way a training dataset represents a sample of the world marks, at the same time, a closed universe. What is the relation of such a closed data universe with the outside? A neural network is considered ‘trained’ when it is able to generalize its results to unknown data with a very low margin of error, yet such a generalization is possible due to the homogeneity between training and test dataset. A neural network is never asked to perform across categories that do not belong to its ‘education.’ The question is then: How much is a neural network (and AI in general) capable of escaping the categorical ontology in which it operates?
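The over-fitting issue can be made concrete with a toy example of my own (not Pasquinelli’s): a model flexible enough to memorize its training sample fits the noise rather than the world, so its near-perfect training error says nothing about unknown data.

```python
# Toy illustration of over-fitting (my sketch, not from the article).
# A degree-9 polynomial fit to 10 noisy samples of sin(x) interpolates them
# almost exactly (near-zero training error), but it has memorized the noise,
# so it typically generalizes worse to held-out points than a modest
# degree-3 fit does.
import numpy as np

rng = np.random.default_rng(0)
true_fn = np.sin  # the "world" the dataset samples

x_train = np.linspace(0, np.pi, 10)
y_train = true_fn(x_train) + rng.normal(0, 0.15, size=x_train.shape)

x_test = np.linspace(0.1, np.pi - 0.1, 50)  # held-out points, same interval
y_test = true_fn(x_test)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial (highest degree first) on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

overfit = np.polyfit(x_train, y_train, deg=9)  # interpolates the noise
modest = np.polyfit(x_train, y_train, deg=3)

print("deg 9: train", mse(overfit, x_train, y_train),
      "test", mse(overfit, x_test, y_test))
print("deg 3: train", mse(modest, x_train, y_train),
      "test", mse(modest, x_test, y_test))
```

The training and test sets here are deliberately homogeneous, drawn from the same interval of the same curve; that homogeneity is exactly what Pasquinelli flags as the condition of possibility for “generalization” in the first place.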


Charles S. Peirce’s distinction between deduction, induction and abduction (hypothesis) is the best way to frame the limits and potentialities of machine intelligence. Peirce remarkably noticed that the classic logical forms of inference—deduction and induction—never invent new ideas but just repeat quantitative facts. Only abduction (hypothesis) is capable of breaking into new worldviews and inventing new rules.

The only thing that induction accomplishes is to determine the value of a quantity. It sets out with a theory and it measures the degree of concordance of that theory with fact. It never can originate any idea whatever. No more can deduction. All the ideas of science come to it by the way of Abduction. Abduction consists in studying facts and devising a theory to explain them.

Specifically, Peirce’s distinction between abduction and induction can illuminate the logical form of neural networks, since from their invention by Rosenblatt they were designed to automate complex forms of induction.

By induction, we conclude that facts, similar to observed facts, are true in cases not examined. By hypothesis, we conclude the existence of a fact quite different from anything observed, from which, according to known laws, something observed would necessarily result. The former is reasoning from particulars to the general law; the latter, from effect to cause. The former classifies, the latter explains.

The distinction between induction as classifier and abduction as explainer frames very well also the nature of the results of neural networks (and the core problem of Artificial Intelligence). The complex statistical induction that is performed by neural networks gets close to a form of weak abduction, where new categories and ideas loom on the horizon, but it appears invention and creativity are far from being fully automated. The invention of new rules (an acceptable definition of intelligence) is not just a matter of generalization of a specific rule (as in the case of induction and weak abduction) but of breaking through semiotic planes that were not connected or conceivable beforehand, as in scientific discoveries or the creation of metaphors (strong abduction).

In his critique of artificial intelligence, Umberto Eco remarked: “No algorithm exists for the metaphor, nor can a metaphor be produced by means of a computer’s precise instructions, no matter what the volume of organized information to be fed in.” Eco stressed that algorithms are not able to escape the straitjacket of the categories that are implicitly or explicitly embodied by the “organized information” of the dataset. Inventing a new metaphor is about making a leap and connecting categories that never happened to be logically related. Breaking a linguistic rule is the invention of a new rule, only when it encompasses the creation of a more complex order in which the old rule appears as a simplified and primitive case. Neural networks can a posteriori compute metaphors but cannot a priori automate the invention of new metaphors (without falling into comic results such as random text generation). The automation of (strong) abduction remains the philosopher’s stone of Artificial Intelligence.


Statistical inference via neural networks has enabled computational capitalism to imitate and automate both low- and high-skill labor. Nobody expected that even a bus driver could become a source of cognitive labor to be automated by neural networks in self-driving vehicles. Automation of intelligence via statistical inference is the new eye that capital casts on the data ocean of global labor, logistics, and markets, with novel effects of abnormalization — that is, distortion of collective perception and social representations, as happens in the algorithmic magnification of class, race and gender bias. Statistical inference is the distorted, new eye of the capital’s Master.

encountering aliens

In September I will be giving a talk at the Technische Universität Dresden (in Dresden, Germany) about Bertolt Brecht’s theater politics, social media, and anti-fascist aesthetics:

Encountering Aliens: Digital Verfremdungseffekt and The Theater of the Self

In this talk, I will argue that self-reflexivity in social writing exists beyond the normative confines of social media. I depart from the observation that self-reflexivity in social media has the effect of distancing or alienating both producers and audiences from the naturalized context of social media applications. This in turn renders the normativity of social media visible, strengthening the capacity of the alienated parties to inquire into and critique social media as generative of ineluctably contrived and deterministic experiences of selfhood.

I maintain that when individuals use social media to signal their awareness of its constructed nature, they are operating beyond the conventions of its etiquette. This affords users a degree of sovereignty from the highly deterministic mechanisms by which self-made content comes to be interpreted as an unmediated expression of subjectivity. My assessment implicates structural features of social media for their role in dissimulating “authenticity” to uphold the pretense that it bears no phenomenological difference from non-digital communicative contexts.

To substantiate my argument, I scrutinize examples of content from two major social media platforms, Facebook and Twitter, for features that engender an “alienation effect” (Verfremdungseffekt), the outcome of an aesthetic technique developed by Bertolt Brecht. When a-effects are achieved successfully, the audience does not identify with the performed character or narrative; their reactions are the result not of the script’s content but of a technical maneuver. The Verfremdungseffekt breaks the illusion of theater through the actors’ signals that the theater world belongs to a different experiential (phenomenological) regime of reality than that of the audience.

Joining media theory and the writings of Brecht with an interrogation of my case studies informed by philosophy of technology, I demonstrate that the digital Verfremdungseffekt allows for the production of subjectivities that defy the ontological and phenomenological delimitations of social media.


For any who are interested, the conference proceedings will be published at http://www.azimuthjournal.com/ in the coming months.

SPECTRA Journal 7.1 Call for Papers

The editors of SPECTRA: The ASPECT Journal invite scholarly work in all areas of social, political, ethical, and cultural thought for the Fall 2018 issue.

We invite the submission of academic articles, book reviews, and original artwork for publication in volume 7.1. Submissions may speak to individual social science or humanities fields, or apply an interdisciplinary lens to contemporary theoretical, critical, empirical, or policy-oriented subjects.

We publish bold and eclectic contributions. Past articles have examined sovereignty in the city, the Afghan and Iraqi refugee crises, cultural colonization in Mongolia, and financial governance and debt; others have applied Marcuse and Foucault to The Purge and explored the relationship between Hip-Hop, globalization, and identity construction, among many other topics.

SPECTRA is an online, peer-reviewed scholarly journal established as part of the ASPECT (Alliance for Social, Political, Ethical, and Cultural Thought) program at Virginia Tech. The journal features work of an interdisciplinary nature and is designed to provide an academic forum to showcase research, explore controversial topics, and take intellectual risks. SPECTRA welcomes submissions for publication by way of scholarly refereed articles, book reviews, essays, and other works that operate within a problem-centered, theory-driven framework. Full submissions are due by Monday, September 17, 2018.

Here it is in full — SPECTRA CfP. The website, where all past issues are indexed, is www.spectrajournal.org.

Please consider submitting! I want to be your editor.

epistemic black markets; algorithmic governance; psychedelics; the future

Lately I’ve been reading the work of Sun-Ha Hong, a scholar whose work “examines how new media and its data become invested with ideals of precision, objectivity and truth – especially through aesthetic, speculative, and otherwise apparently non-rational means.”  That bio statement is taken from his website: sunhahong.wordpress.com.

He writes about the future as a cultural motif:

On Futures, and Epistemic Black Markets

The future does not exist: and this simple fact gives it a special epistemic function.

The future is where truths too uncertain, fears too politically incorrect, ideas too unprovable, receive unofficial license to roam… The future is a liminal zone, a margin of tolerated unorthodoxy that provides essential compensation for the rigidity of modern epistemic systems. This ‘flexibility’ is central to the perceived ruptures of traditional authorities in the contemporary moment. What we call post-fact politics (David Roberts), the age of skepticism (Siebers, Leiter), the rise of pre-emption (Massumi, Amoore), describe situations where apparently well-established infrastructures of belief and proof are disrupted by transgressive associations of words and things. The future is here conceptualised as a mode for such interventions.

This view helps us understand the present-day intersection of two contradictory fantasies: first, the quest to know and predict exhaustively, especially through new technologies and algorithms; second, heightened anxiety over uncertainties that both necessitate and elude those efforts.

So the future, as Hong conceptualizes it, is almost an episteme — an “‘apparatus’ which makes possible the separation, not of the true from the false, but of what may from what may not be characterized as scientific” (Foucault, Power/Knowledge). The possibilities of prediction now structure the research and development of all sorts of important tools. If the future, the idea of it, doesn’t strictly determine scientific knowledge, it at least assists in its production.

In a talk titled The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law, philosopher Antoinette Rouvroy discusses that which defies capture by the digital:

Another remnant that escapes digitisation is the future. Spinoza said we do not know what a body can do. This conditional dimension about what a body could do, it is the conditional dimension in itself. Previously I wrote that the target of algorithmic governmentality is precisely this unrealised part of the future, the actualisation of the virtual. But of course, there is a recalcitrance of human life to any excessive organisation (Manchev 2009). I think that this unrealised in the future is effectively a source of recalcitrance, and even if we believe that we can predict everything (and this comes under the Big Data ideology: ‘crunching numbers is the new way to be smart’).

There’s a clear connection between the future and capital. We need the future as a valve for production. The insights of Rouvroy comport with Sun-ha Hong’s.

The future is the eminent epistemic black market, the general category of the subject of algorithmic governmentality. Unpredictability ought to be exterminated, or at least meticulously controlled, under this program. Psychedelic experiences — which are by nature speculative and unpredictable, and whose efficacy as therapeutic tools may come from their tendency to break predictable psychological patterns — are an important point of intersection here. Psychedelic experiences are wild and unruly; they tend to dig new tunnels into the infinitesimally small, elusive spaces of their own ontological and phenomenological continuities. Thus psychedelic science is a useful case for affirming (if not articulating) the unique character of the unrealized dimension of the future — an element of life itself which resists digital control.

(painting by Guy Billout)

come back trish keenan

I’ll show you for example / a situation that’s like winter / and I’m not complaining about night time / it’s harder in the morning / my room’s too small for parties / too spacious when you’re lonely / so books can make us friends / that’s as long as we are reading / turn the lights off when you’re leaving / I want to watch the car park empty / it’s easy when they’re strangers / to wave goodbye / my brother’s back off holiday / he’s been chasing girls in Spain / he said he’d bring me a guitar / which I said would bring me fame / I remember your excitement / choosing pictures for your wall / and now you’ve seen them oh so often / you hardly see them anymore / turn the lights off when you’re leaving / I want to watch the car park empty / it’s easy when they’re strangers / to wave goodbye / I remember your excitement / choosing pictures for your wall / and now you’ve seen them oh so often / you hardly see them anymore / turn the lights off when you’re leaving / I want to watch the car park empty / it’s easy when they’re strangers / to wave goodbye


Tomorrow I return to New York to visit family and for this: “Uncomputable,” by Alexander Galloway and Taeyoon Choi, held at the School for Poetic Computation. I like Galloway’s thoughts about data:


Data comes from the Latin dare, meaning to give. But it’s the form that’s most interesting. First of all, it’s in the neuter plural, so it refers to “things.” Second, data is a participle in the perfect passive form. Thus the word means literally “the things having been given.” Or, for short, I like to think of data as “the givens.”

…as data are defined in terms of their givenness, their non-immanence with the one, they also display a relation with themselves. Through their own self-similarity or relation with themselves, they tend back toward the one (as the most generic instance of the same). The logic of data is therefore a logic of existence and identity: on the one hand, the facticity of data means that they exist, that they ex-sistere, meaning to stand out of or from; on the other hand, the givenness of data as something means that they assume a relationship of identity, as the self-similar “whatever entity” that was given.

The true definition of data, therefore, is not simply “the things having been given.” The definition must conjoin givenness and relation. For this reason, data often go by another name, a name that more suitably describes the implicit imbrication of givenness and relation. The name is information.

Information combines both aspects of data: the root form refers to a relationship (here a relationship of identity as same), while the prefix in refers to the entering into existence of form, the actual givenness of abstract form into real concrete formation.

Heidegger sums it up well with the following observation about the idea: “All metaphysics including its opponent positivism speaks the language of Plato. The basic word of its thinking, that is, of his presentation of the Being of beings, is eidos, idea: the outward appearance in which beings as such show themselves. Outward appearance, however, is a manner of presence.” In other words, outward appearance or idea is not a deviation from presence, or some precondition that produces presence. Idea is precisely coterminous with presence. To understand data as information means to understand data as idea, but not just idea, also a host of related terms: form, class, concept, thought, image, outward appearance, shape, presence, or form-of-appearance.

As Lisa Gitelman has reminded us, there is no such thing as “raw” data, because to enter into presence means to enter into form. An entity “in-form” is not a substantive entity, nor is it an objective one. The in-form is the negentropic transcendental of the situation, be it “material” like the givens or “ideal” like the encoded event. Hence an idea is just as much subject to in-formation as are material objects. An oak tree is in-formation, just as much as a computer file is in-formation.

All of this is simply another way to understand Parmenides’s claim about the primary identity of philosophy: “Thought and being are the same.”

Alexander Galloway, “From Data To Information”



Other stuff: My colleague Robert Flahive and I are stepping up as editors-elect for SPECTRA, the official peer-reviewed journal of our doctoral program. SPECTRA features interdisciplinary scholarship from the social sciences and humanities, with a strong bent toward the critical & theoretical. We’ll be posting a new call for papers some time this summer.

Work on SPECTRA will occupy a lot of my time over the next two years, although I have a devilish desire to convene a small conference on psychedelics and technology in late 2019 or 2020. This is kinda selfish; I mean, it’s at least half-motivated by my interest in being in the same room as a lot of scholars working on the same topics (including smart friends, but also those whose research I’ve been following and admiring from a distance). It’s a long-standing dream in hibernation for the time being, but I like the idea so much I may have to make it reality — not that throwing conferences is easy, but I’m up to the task.

From 7/9-7/19 I’ll be taking my doctoral preliminary exams: writing ~75 pages in ten days. Here I am with the little creature I babysat for the first half of June; he offered me psychic support in the early weeks of my studying.

Happy solstice, y’all!


Sights and sounds a little mystical: sensoria, tactile apparatuses for the brain. If the verses for the second aren’t proof of a psychedelic protocol in literature — see the second paragraph — I don’t know what is.

“Only a palace with interior doors / well painted well gargoyled with multiple floors / two windows let free this projector machine and the magical world here appears on the screen / my servants attend me with tricks of the senses / the past and the future and similar tenses and on platters of air they convey me my measure both gladness and sorrow, I lack not for treasure / the lord and his lady are seated within / in the court of the mind where the song does begin / the song is as fine is as fine is as follows / the song does continue through measureless hollows that sink from the level of personal being through caverns of darkness where dragons are dwelling / the mountains above them are raised at my calling / there the apples are ripe or the rain is a-falling / in ships of white vision I sail the horizon / where three spinners stand beyond the horizon / under the tree of the apples of beauty / I watch them arranging my days and tomorrows / The song is as fine is as fine as it follows / I stood on the beach where the moon was a-curling / Laughed on the wings of the sea birds calling / I loved when sweet Venus a lover did bring me / I cried when sweet Saturn and Jupiter moved us and all of my servants were fighting their brothers / And the lord and the lady they hated each other / Till the spinners arose with their work on their fingers / Commanding the presence of Heavenly singers / That spoke of the silence so soon to be coming / When all would be still in the wonderful palace / The peace is not stillness but peacefully changing / This hope is the hope of the man on the gallows / The song is as fine is as fine is as follows / The infant I was in the womb of my mother / White sperm I was in the loins of my father / Before that I swam in the oceans of nowhere / Where the fish are as fine as the colour of colours / Where waves are the message of centuries rolling / Where wind is the breath of the Holy Creator / Where no ship sails but only the ocean / Where all the rivers grow mighty with showing / And crowned with the gifts of the myriad valleys / Return with a sigh to the sea of the coming / Forever and ever and ever and ever / be glad O be Glad for the song has no ending.” — The Incredible String Band, “The Head”