about

Hi, I’m Emma. Welcome to my website.

I’m an instructor and PhD candidate at Virginia Tech, currently based at TU Darmstadt and Goethe University in Frankfurt am Main, Germany. My research is interdisciplinary and spans critical theory, media theory, data studies and the digital humanities. In my dissertation, I posit computation as a force that shifts our conditions for knowledge, examining the use of non-digital methods in research on psychedelic drugs as an intervention in the epistemic power of data technologies. A web-based version of my curriculum vitae is at this link, and this one gives you a downloadable PDF.

I am also a writer and musician; I post a lot of non-academic creative writing and original music here. The earliest content is from 2014.

Otherwise:

Behind this link I’ve indexed a lot of recent writing, some published.
My academia.edu homepage has more academic content, including teaching materials.
Music: http://stamm.bandcamp.com
Twitter: @turing_tests
I dig it when people reveal themselves here
New for 2018, this site has a blogroll! Yes, like it’s 2003. Check it
And you can contact me via email: stamm@vt.edu.

Thanks for visiting!

— Emma Stamm

writing warnings

[0] Fact: writing is made of words, not ideas.

[1] “Nothing is like an idea so much as an idea” — Bishop Berkeley

[2] Fact: writing, ideas, and content all refer to different entities.

[3] “I myself prefer an Argentine fantasy. God did not create a Book of Nature of the old sorts Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. For each book, there is some humanly accessible bit of Nature [‘the natural’] such that that book, and no other, makes possible the comprehension, prediction and influencing of what’s going on” — Ian Hacking on Borges and Berkeley

[4] The writing I like cuts through the hell of sameness that is the digital space (and capitalism! Capital writ large)

[5] Sometimes it says nothing … that’s from John Cage’s book Silence, which inspired my first website, the silent internet

[6] “All great writers are great deceivers” — Vladimir Nabokov

[7] Magic is stronger when it remains in the occult, and writers have to be careful as they pick from their spellbook. Like the joke about jazz, it’s what you don’t hear that counts.

////note that this post is old, stuck to the top of the site for ★ dark purposes ★

For what I really think about writing, see Mark Fisher

 

Hi from Germany. Frankfurt is unseasonably warm, hotter than New York right now, I think. It feels like southern Virginia did when I left. I was in Berlin and Poland (Szczecin) last weekend. It was not much colder in either city. Trying not to feel despair at climate change feels like trying not to be human.

Even if I could measure time by shifts in weather, it would still be hard to believe a month has passed since I got here. My German hasn’t improved.

o-culus needs some work on the administrative end, so it may be down for a day or two in October. For that reason, this is probably going to be the last post for a while — that is, until I can devote a whole day to tinkering with it (/praying I don’t lose half a year’s worth of content in a shift over to a new hosting system).

When the site is back I’ll start posting more regularly. Have been taking a lot of photos and videos. I’d like to show them off (& of course I still deny the convenient solution: Instagram is psychic death).

I’ll be speaking at two conferences in October, some information here:

***

Oct 1-2 —> Intelligent Futures: Automation, AI and Cognitive Ecologies, at the University of Sussex in Brighton, England. Here is the conference site. Talk title and abstract:

Psychedelic Science and The Question of Artificial Intelligence

In this paper, I argue that qualitative research on the medical application of psychedelic drugs problematizes the positivist, generalizing and inductive principles of machine learning as a basis for artificial intelligence. I draw from interdisciplinary scholarship that uses qualitative methods, and in particular interpretative phenomenological analysis, as a hermeneutic device for research on the use of psychedelics in psychiatry. I combine precepts of machine learning with developments in psychedelic research to explore the inherent problems of generalizing psychedelic verbal report data within the classification systems of A.I. In doing so, I demonstrate that the use of qualitative methods in psychedelic drug research may contain an immanent critique of the notion that machine learning-based predictive systems can be intelligent. I begin with an overview of the “psychedelic renaissance,” the recent resurgence of interest in the medicinal use of psychedelics. This includes an emerging paradigm which recognizes the need for qualitative and abductive theorization, including methods from phenomenology, poetics and critical theory as tools to interpret the deeply subjective narrative data that is evaluated in psychedelic studies. From there, I explore axioms of machine learning and artificial intelligence that emphasize the ways in which generalization and inductive reasoning are essential to algorithms that effectively “predict” the future. Assessing dynamics from psychedelic research that stand against pure inductive reasoning alongside the empirics of machine learning as a basis for A.I., I offer that the former can work toward a theorization of the possible epistemic limitations of artificial intelligence. I conclude with an overview of my primary points and a restatement of my argument.

Oct 15-16 —> Deep Learning and Exploration in Cognitive Science, at the Institute of Philosophy in Prague, Czech Republic. No conference website or program available yet. Talk title and abstract:

Screens of Perception: Psychedelic Science, Machine Learning and Artificial Intelligence

In this talk, I will argue that qualitative research on the medicinal use of psychedelic drugs problematizes the development of data models, which in turn presents challenges for the predictive functions of machine learning and artificial intelligence. I draw from interdisciplinary scholarship that uses qualitative methods to interpret research on psychedelic substances, such as lysergic acid diethylamide (LSD) and psilocybin mushrooms, as assistive devices for psychotherapy. I combine precepts of machine learning with developments in psychedelic research to explore the complexities of generalizing research in contemporary psychedelic science. This includes subject-reported accounts from those undergoing ineffable and difficult-to-predict experiences. In doing so, I demonstrate that the use of qualitative methods in psychedelic drug research may offer a critique of machine learning-based predictive systems grounded in classification.

I begin with an overview of the “psychedelic renaissance,” the recent resurgence of interest in the medicinal use of psychedelics. Here, I offer a brief history of medicinal experiments with psychedelic drugs that begins in the twentieth century. I note that, owing to legal restrictions, 2014 marked the first LSD study approved by the US Food and Drug Administration in forty years, and that several related developments have occurred within the past five years. This includes an emerging paradigm which recognizes the need for qualitative, hermeneutic and deductive modes of theorization. These include interpretive methods inspired by phenomenology, poetics and aesthetic philosophy. I directly cite published research that speaks to their efficacy as interpretive devices for data from psychedelic psychotherapy.

From there, I explore axioms of machine learning and artificial intelligence that emphasize the ways in which generalization and inductive reasoning are essential to algorithms that effectively “predict” the future. Evaluating dynamics from psychedelic research that stand against pure inductive reasoning alongside the empirics of machine learning as a basis for artificial intelligence, I offer that the former can work toward a theorization of the possible philosophical limitations of the latter. One outcome would be a general critique of the notion that mentality may be replicated in data and algorithmic systems that stipulate predictive functions. However, I provide more nuanced, actionable means of conceiving the implications of my work. I conclude with an overview of my major points and an enumeration of these avenues for future research based on my preliminary investigations.

These talks will be similar, although the latter is more narrowly focused on machine learning.

 

***

I’ll also be traveling a bit to other places — Scotland, after the conference in Brighton, to see family, and Amsterdam at the beginning of November for research. If anyone is reading this and wants to give me a good excuse to buy another train ticket to Berlin, my email is open.

acid communism

Yet another mainstream news article about Silicon Valley’s fetish for microdosing LSD. I’m kind of sick of hearing about this trend. (Article posted here only for the sake of reference; see thumbnail below).

Meanwhile I’ve been getting more into Mark Fisher, reading Ghosts Of My Life for the first time. I just learned that at the time of his death he was working on a book called Acid Communism. Some words about this from Plan C, a crew of anti-authoritarian communists based in the UK:

Alongside feminist consciousness raising [Mark Fisher] also identifies the various ways in which class consciousness was raised. To these he then adds the consciousness changing effects of psychedelia, which worked through pop culture to embed a notion that reality is plastic and changeable. Wow, what a move. Before his death Mark was writing a book on post-capitalist desire called Acid Communism so we can see that this was no mere digression but an opening up of whole new areas of enquiry. Where can we find post-capitalist desire expressing itself today? How can we help that desire to be realised? ( Full article here, https://www.weareplanc.org/blog/towards-acid-communism/ )

It seems intuitive enough to me that, although it may manifest itself in the culture industry, psychedelia is a vector for post-capitalist desire. You don’t even need the chemical experience. Psychedelic music, art and sentiment encapsulate so much that shouldn’t technically have any life under digitized capitalism. That capital wants to claim LSD as its own via the tech industry is no surprise, and something tells me we shouldn’t bother preparing for what happens if acid-dropping entrepreneurs turn on to communism.

These days I’m writing about psychedelics not only as palliatives, but as producers of knowledge applicable beyond the scope of curative medicine. As in Stanislav Grof’s remark that LSD and other psychedelics may be for psychology what the microscope was for biology or the telescope for astronomy — not just a bandage, but a magnifying lens. If psychedelics can illuminate hidden aspects of cognition, maybe they can tell us something about cognitive-computational capitalism, about which Matteo Pasquinelli is writing a monograph. I can infer from his essay in Glass Bead that he connects the theories of mind writ large in artificial intelligence to a distinctly economic logic. At the very least, it seems the notion of a fully computable consciousness, naturalized as Real Objective Science in the field of A.I., expedites technologized capitalism.

Silicon Valley is going to do its thing whether it has acid or not. Just like how cryptocurrency folks are getting chummy with psychonauts, although right now it looks like acid needs Bitcoin more than Bitcoin needs acid. (Fwiw, I assume Bitcoiners are funding psychedelic research in the interest of their productivity.) Positioned at the margins of all these discourses, I’ve worried about alienating myself from one camp or another. I don’t want to fly the flag of technology, psychedelic science, or leftism too high, since those groups normally keep their distance from one another, and my work relies on all of them. This mutual exclusivity is nonsense, of course; their interests coalesce in streams of politics and culture fed anew every day. Mark Fisher taught me that.

I like that Fisher identifies desire as the grail of post-capitalist inquiry. It takes power of an erotic magnitude to slice through the banality of politics / culture in their current forms. An almost sexual desire to live in a pro-social world. To explore consciousness in peace and fascination, to develop perspectives that don’t prioritize competition and metrical contrast with other people. To not only give according to our abilities and take according to our needs, but to provide everything for everyone. Everything for everyone.

One week after arriving. The worst piece of advice I got about Germany is that language isn’t a problem because everyone speaks English. You hear it all the time in the States. It’s not true.

The second worst is that people are reserved. That’s also false. I don’t understand them, of course, so I wish I could let myself believe that the strangers who approach me — usually asking for directions; I look like a local since I bleached my hair — are always being friendly. I’m too cynical to assume they’re not sometimes just judging me.

So, yeah, the language barrier is a problem. Matteo Pasquinelli says creativity is a rupture of semiotic planes and of course I want to create here, but it’s not easy to break into higher language dimensions when you have so little mastery of the simple ones.

Maybe as kids we become literate through semiotic breaks. Applying stress tests to consensus word-worlds to see how little conformity we can get away with. I was attracted to rule-agnostic writing before I accepted that writing for general audiences is mostly a law-abiding thing. There’s also a big heaving gap between art and expression. When my writing was at its most unintentionally surreal, I wouldn’t have thought to call it so.

But there’s also poetry in the act of accepting one’s limits. I think I like broken German more than Germans like broken English

I took these next 2 photos in my home district. Appealingly ungentrified and unpopular with tourists, very pretty, but it smells like New Jersey. I have no problem with the rats in the river but wish I could better handle bad urban aromas and tobacco smoke —  can confirm that German cliché

Täglicher Schmerz (daily pain), and I’m not into recalibrating my senses. (All the USA stereotypes are true; the smoke really does get to me.)

I think this first travelogue will also be the last one. Don’t really want to write about my real life, but posts about work forthcoming 🙂

Really want to re-record this. Lyrics, forward and back, from As You Like It:

I was seven of the nine days out of the wonder before you came,
for look here what I found on a palm tree;
I was never so berhymed since Pythagoras’ time,
that I was an Irish rat,
which I can hardly remember—

machines that morph logic

Very good article by Matteo Pasquinelli — I’ve excerpted a lot of it below, but read the whole thing! — Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference

***

The term Artificial Intelligence is often cited in popular press as well as in art and philosophy circles as an alchemic talisman whose functioning is rarely explained. The hegemonic paradigm to date (also crucial to the automation of labor) is not based on GOFAI (Good Old-Fashioned Artificial Intelligence that never succeeded at automating symbolic deduction), but on the neural networks designed by Frank Rosenblatt back in 1958 to automate statistical induction. The text highlights the role of logic gates in the distributed architecture of neural networks, in which a generalized control loop affects each node of computation to perform pattern recognition. In this distributed and adaptive architecture of logic gates, rather than applying logic to information top-down, information turns into logic, that is, a representation of the world becomes a new function in the same world description. This basic formulation is suggested as a more accurate definition of learning to challenge the idealistic definition of (artificial) intelligence. If pattern recognition via statistical induction is the most accurate descriptor of what is popularly termed Artificial Intelligence, the distorting effects of statistical induction on collective perception, intelligence and governance (over-fitting, apophenia, algorithmic bias, “deep dreaming,” etc.) are yet to be fully understood

***

Any machine is always a machine of cognition, a product of the human intellect and unruly component of the gears of extended cognition. Thanks to machines, the human intellect crosses new landscapes of logic in a materialistic way—that is, under the influence of historical artifacts rather than Idealism. As, for instance, the thermal engine prompted the science of thermodynamics (rather than the other way around), computing machines can be expected to cast a new light on the philosophy of the mind and logic itself. When Alan Turing came up with the idea of a universal computing machine, he aimed at the simplest machination to calculate all possible functions. The efficiency of the universal computer catalyzed in Turing the alchemic project for the automation of human intelligence. However, it would be a sweet paradox to see the Turing machine that was born as Gedankenexperiment to demonstrate the incompleteness of mathematics aspiring to describe an exhaustive paradigm of intelligence (as the Turing test is often understood).

***

Within neural networks (as according also to the classical cybernetic framework), information becomes control; that is, a numerical input retrieved from the world turns into a control function of the same world. More philosophically, it means that a representation of the world (information) becomes a new rule in the same world (function), yet under a good degree of statistical approximation. Information becoming logic is a very crude formulation of intelligence, which however aims to stress openness to the world as a continuous process of learning.
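
(An aside of my own, not part of the excerpt: Pasquinelli’s formula that information becomes function can be made concrete with the most ordinary piece of statistical machinery. The sketch below is mine, not the article’s, and assumes Python with numpy and scikit-learn installed: a batch of noisy observations is folded into the parameters of a linear model, and that learned rule is then projected back onto unseen inputs “under a good degree of statistical approximation.”)

```python
# My own minimal sketch of "information becoming function": numbers sampled
# from the world are turned into a rule that is applied back to that world.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "Information": noisy observations of some worldly quantity.
x = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * x[:, 0] + rng.normal(0.0, 1.0, size=200)

# Training folds the observations into parameters: the data now acts as a rule.
model = LinearRegression().fit(x, y)

# "Function": the learned rule is applied to inputs it has never seen,
# with statistical approximation rather than exactness.
print("learned rule: y ~ %.2f * x + %.2f" % (model.coef_[0], model.intercept_))
print("prediction for x = 7:", model.predict([[7.0]])[0])
```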

***

Rosenblatt stressed that artificial neural networks are both a simplification and exaggeration of nervous systems and this approximation (that is the recognition of limits in model-based thinking) should be a guideline for any philosophy of the (artefactual) mind. Ultimately Rosenblatt proposed neurodynamics as a discipline against the hype of artificial intelligence.

The perceptron program is not primarily concerned with the invention of devices for “artificial intelligence”, but rather with investigating the physical structures and neurodynamic principles which underlie “natural intelligence.” A perceptron is first and foremost a brain model, not an invention for pattern recognition. As a brain model, its utility is in enabling us to determine the physical conditions for the emergence of various psychological properties. It is by no means a “complete” model, and we are fully aware of the simplifications which have been made from biological systems; but it is, at least, an analyzable model.

In 1969 Marvin Minsky and Seymour Papert’s book, titled Perceptrons, attacked Rosenblatt’s neural network model by wrongly claiming that a Perceptron (although a simple single-layer one) could not learn the XOR function and solve classifications in higher dimensions. This recalcitrant book had a devastating impact, also because of Rosenblatt’s premature death in 1971, and blocked funds to neural network research for decades. What is termed as the first ‘winter of Artificial Intelligence’ would be better described as the ‘winter of neural networks,’ which lasted until 1986 when the two volumes Parallel Distributed Processing clarified that (multilayer) Perceptrons can actually learn complex logic functions. Half a century and many more neurons later, pace Minsky, Papert and the fundamentalists of symbolic AI, multilayer Perceptrons are capable of better-than-human image recognition, and they constitute the core of Deep Learning systems such as automatic translation and self-driving cars.
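
(An aside of my own, not part of Pasquinelli’s text: the XOR point is easy to check empirically. A minimal sketch, assuming Python with numpy and scikit-learn installed, fits a Rosenblatt-style single-layer perceptron and a small multilayer perceptron to the four XOR points; the linear model tops out at three correct answers out of four, while the multilayer one usually classifies all four correctly.)

```python
# My own toy illustration of the claim quoted above (not from the article):
# a single-layer perceptron cannot learn XOR; a multilayer one can.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR of the two inputs

# Rosenblatt-style single-layer perceptron: a linear separator, so it can
# never get all four XOR points right.
single = Perceptron(max_iter=1000).fit(X, y)
print("single-layer accuracy:", single.score(X, y))  # at most 0.75

# One small hidden layer gives a nonlinear decision boundary; this usually
# learns XOR exactly.
multi = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                      solver="lbfgs", max_iter=10_000,
                      random_state=0).fit(X, y)
print("multilayer accuracy:", multi.score(X, y))  # typically 1.0
```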

***

Rosenblatt gave probably one of the first descriptions of machine intelligence as emergent property: “It is significant that the individual elements, or cells, of a nerve network have never been demonstrated to possess any specifically psychological functions, such as ‘memory,’ ‘awareness,’ or ‘intelligence.’ Such properties, therefore, presumably reside in the organization and functioning of the network as a whole, rather than in its elementary parts.” [Gestalt overtones here]

***

Current techniques of Artificial Intelligence are clearly a sophisticated form of pattern recognition rather than intelligence, if intelligence is understood as the discovery and invention of new rules. To be precise in terms of logic, what neural networks calculate is a form of statistical induction. Of course, such an extraordinary form of automated inference can be a precious ally for human creativity and science (and it is the closest approximation to what is known as Peirce’s weak abduction), but it does not represent per se the automation of intelligence qua invention, precisely as it remains within ‘too human’ categories.

Peirce said that “man is an external sign.” If this intuition encouraged philosophers to stress that the human mind is an artifactual project that extends into technology, however, the human mind’s actual imbrication with the external machines of cognition happened to be rarely empirically illustrated. This has produced simplistic poses in which ideas such as Artificial General Intelligence and Superintelligence are evoked as alchemic talismans of posthumanism with little explanation of the inner workings and postulates of computation. A fascinating aspect of neural computation is actually the way it amplifies the categories of human knowledge rather than supersedes them in autonomous forms. Contrary to the naïve conception of the autonomy of artificial intelligence, in the architecture of neural networks many elements are still deeply affected by human intervention. If one wants to understand how much neural computation extends into the ‘inhuman,’ one should discern how much it is still ‘too human.’

***

The issue of over-fitting points to a more fundamental issue in the constitution of the training dataset: the boundary of the categories within which the neural network operates. The way a training dataset represents a sample of the world marks, at the same time, a closed universe. What is the relation of such a closed data universe with the outside? A neural network is considered ‘trained’ when it is able to generalize its results to unknown data with a very low margin of error, yet such a generalization is possible due to the homogeneity between training and test dataset. A neural network is never asked to perform across categories that do not belong to its ‘education.’ The question is then: How much is a neural network (and AI in general) capable of escaping the categorical ontology in which it operates?
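
(Again an aside of mine, not from the excerpt: the “closed data universe” is easy to demonstrate. The sketch below, assuming Python with scikit-learn, trains a small network only on images of the handwritten digits 0 through 4. It generalizes well to held-out images of those digits, because training and test data are homogeneous, but when shown digits 5 through 9 it can only answer from inside the categories of its “education.”)

```python
# My own toy illustration: a classifier trained on a closed set of categories
# can only ever answer from inside that set.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X, y = digits.data, digits.target

# The model's "education": only the digits 0-4 exist in its universe.
seen = y < 5
X_train, X_test, y_train, y_test = train_test_split(
    X[seen], y[seen], test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

# Generalization to unknown data works because train and test sets are
# drawn from the same closed universe.
print("held-out accuracy on digits 0-4:", clf.score(X_test, y_test))

# Outside that universe: images of 5-9 are still forced into the categories
# 0-4; the network has no way to say "none of the above."
print("labels assigned to digits 5-9:", set(clf.predict(X[y >= 5])))
```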

***

Charles S. Peirce’s distinction between deduction, induction and abduction (hypothesis) is the best way to frame the limits and potentialities of machine intelligence. Peirce remarkably noticed that the classic logical forms of inference—deduction and induction—never invent new ideas but just repeat quantitative facts. Only abduction (hypothesis) is capable of breaking into new worldviews and inventing new rules.

The only thing that induction accomplishes is to determine the value of a quantity. It sets out with a theory and it measures the degree of concordance of that theory with fact. It never can originate any idea whatever. No more can deduction. All the ideas of science come to it by the way of Abduction. Abduction consists in studying facts and devising a theory to explain them.

Specifically, Peirce’s distinction between abduction and induction can illuminate the logical form of neural networks, since from their invention by Rosenblatt they were designed to automate complex forms of induction.

By induction, we conclude that facts, similar to observed facts, are true in cases not examined. By hypothesis, we conclude the existence of a fact quite different from anything observed, from which, according to known laws, something observed would necessarily result. The former is reasoning from particulars to the general law; the latter, from effect to cause. The former classifies, the latter explains.

The distinction between induction as classifier and abduction as explainer frames very well also the nature of the results of neural networks (and the core problem of Artificial Intelligence). The complex statistical induction that is performed by neural networks gets close to a form of weak abduction, where new categories and ideas loom on the horizon, but it appears invention and creativity are far from being fully automated. The invention of new rules (an acceptable definition of intelligence) is not just a matter of generalization of a specific rule (as in the case of induction and weak abduction) but of breaking through semiotic planes that were not connected or conceivable beforehand, as in scientific discoveries or the creation of metaphors (strong abduction).

In his critique of artificial intelligence, Umberto Eco remarked: “No algorithm exists for the metaphor, nor can a metaphor be produced by means of a computer’s precise instructions, no matter what the volume of organized information to be fed in.” Eco stressed that algorithms are not able to escape the straitjacket of the categories that are implicitly or explicitly embodied by the “organized information” of the dataset. Inventing a new metaphor is about making a leap and connecting categories that never happened to be logically related. Breaking a linguistic rule is the invention of a new rule, only when it encompasses the creation of a more complex order in which the old rule appears as a simplified and primitive case. Neural networks can a posteriori compute metaphors but cannot a priori automate the invention of new metaphors (without falling into comic results such as random text generation). The automation of (strong) abduction remains the philosopher’s stone of Artificial Intelligence.

***

Statistical inference via neural networks has enabled computational capitalism to imitate and automate both low and hi-skill labor. Nobody expected that even a bus driver could become a source of cognitive labor to be automated by neural networks in self-driving vehicles. Automation of intelligence via statistical inference is the new eye that capital casts on the data ocean of global labor, logistics, and markets with novel effects of abnormalization—that is, distortion of collective perception and social representations, as it happens in the algorithmic magnification of class, race and gender bias. Statistical inference is the distorted, new eye of the capital’s Master.

encountering aliens

In September I will be giving a talk at the Technische Universität Dresden (in Dresden, Germany) about Bertolt Brecht’s theater politics, social media, and anti-fascist aesthetics:

Encountering Aliens: Digital Verfremdungseffekt and The Theater of the Self

In this talk, I will argue that self-reflexivity in social writing exists beyond the normative confines of social media. I depart from the observation that self-reflexivity in social media has the effect of distancing or alienating both producers and audiences from the naturalized context of social media applications. This in turn renders the normativity of social media visible, strengthening the capacity of the alienated parties to inquire into and critique social media as generative of ineluctably contrived and deterministic experiences of selfhood.

I maintain that when individuals use social media to signal their awareness of its constructed nature, they are operating beyond the conventions of its etiquette. This affords users a degree of sovereignty from the highly deterministic mechanisms by which self-made content comes to be interpreted as unmediated profferings of subjectivity. My assessment implicates structural features of social media for their role in dissimulating “authenticity” to uphold the pretense that it bears no phenomenological difference from non-digital communicative contexts.

To substantiate my argument, I scrutinize examples of content from two major social media platforms, Facebook and Twitter, for features that engender an “alienation effect” (Verfremdungseffekt), explained as the outcome of an aesthetic technique developed by Bertolt Brecht. When a-effects are achieved successfully, the audience does not identify with the performed character or narrative; their reactions are the result not of the content of the script, but of a technical maneuver. The Verfremdungseffekt effectively breaks the illusion of theater through the actors’ signals that the theater world belongs to a different experiential (phenomenological) regime of reality than that of the audience.

Joining media theory and the writings of Brecht with an interrogation of my case studies informed by philosophy of technology, I demonstrate that the digital Verfremdungseffekt allows for the production of subjectivities that defy the ontological and phenomenological delimitations of social media.

//

For any who are interested, the conference proceedings will be published at http://www.azimuthjournal.com/ in the coming months.

SPECTRA Journal 7.1 Call for Papers

The editors of SPECTRA: The ASPECT Journal invite scholarly work in all areas of social, political, ethical, and cultural thought for the Fall 2018 issue.

We invite the submission of academic articles, book reviews, and original artwork for publication in volume 7.1. Submissions may speak to individual social science or humanities fields, or apply an interdisciplinary lens to contemporary theoretical, critical, empirical, or policy-oriented subjects.

We publish bold and eclectic contributions. Past articles have examined sovereignty in the city, the Afghan and Iraqi refugee crises, cultural colonization in Mongolia, and financial governance and debt; they have applied Marcuse and Foucault to The Purge and explored the relationship between Hip-Hop, globalization, and identity construction, among many other topics.

SPECTRA is an online, peer-reviewed scholarly journal established as part of the ASPECT (Alliance for Social, Political, Ethical, and Cultural Thought) program at Virginia Tech. The journal features work of an interdisciplinary nature and is designed to provide an academic forum to showcase research, explore controversial topics, and take intellectual risks. SPECTRA welcomes submissions for publication by way of scholarly refereed articles, book reviews, essays, and other works that operate within a problem-centered, theory-driven framework. Full submissions are due by Monday, September 17, 2018.

Here it is in full — SPECTRA CfP. The website, where all past issues are indexed, is www.spectrajournal.org.

Please consider submitting! I want to be your editor.

epistemic black markets; algorithmic governance; psychedelics; the future

Lately I’ve been reading the work of Sun-Ha Hong, a scholar whose work “examines how new media and its data become invested with ideals of precision, objectivity and truth – especially through aesthetic, speculative, and otherwise apparently non-rational means.”  That bio statement is taken from his website: sunhahong.wordpress.com.

He writes about the future as a cultural motif:

On Futures, and Epistemic Black Markets

The future does not exist: and this simple fact gives it a special epistemic function.

The future is where truths too uncertain, fears too politically incorrect, ideas too unprovable, receive unofficial license to roam… The future is a liminal zone, a margin of tolerated unorthodoxy that provides essential compensation for the rigidity of modern epistemic systems. This ‘flexibility’ is central to the perceived ruptures of traditional authorities in the contemporary moment. What we call post-fact politics (David Roberts), the age of skepticism (Siebers, Leiter), the rise of pre-emption (Massumi, Amoore), describe situations where apparently well-established infrastructures of belief and proof are disrupted by transgressive associations of words and things. The future is here conceptualised as a mode for such interventions.

This view helps us understand the present-day intersection of two contradictory fantasies: first, the quest to know and predict exhaustively, especially through new technologies and algorithms; second, heightened anxiety over uncertainties that both necessitate and elude those efforts.

So the future, as Hong conceptualizes it, is almost an episteme — an “‘apparatus’ which makes possible the separation, not of the true from the false, but of what may from what may not be characterized as scientific” (Foucault, Power/Knowledge). The possibilities of prediction now structure the research and development of all sorts of important tools. If the future, the idea of it, doesn’t strictly determine scientific knowledge, it at least assists in its production.

In a talk titled The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law, philosopher Antoinette Rouvroy discusses that which defies capture by the digital:

Another remnant that escapes digitisation is the future. Spinoza said we do not know what a body can do. This conditional dimension about what a body could do, it is the conditional dimension in itself. Previously I wrote that the target of algorithmic governmentality is precisely this unrealised part of the future, the actualisation of the virtual. But of course, there is a recalcitrance of human life to any excessive organisation (Manchev 2009). I think that this unrealised in the future is effectively a source of recalcitrance, and even if we believe that we can predict everything (and this comes under the Big Data ideology: ‘crunching numbers is the new way to be smart’).

There’s a clear connection between the future and capital. We need the future as a valve for production. Rouvroy’s insights comport with Sun-Ha Hong’s.

The future is the eminent epistemic black market, the general category of the subject of algorithmic governmentality. Unpredictability ought to be exterminated, or at least meticulously controlled, under this program. Psychedelic experiences — which are by nature speculative and unpredictable, and whose efficacy as therapeutic tools may come from their tendency to break predictable psychological patterns — are an important point of intersection here. Psychedelic experiences are wild and unruly; they tend to dig new tunnels into the infinitesimally small, elusive spaces of their own ontological and phenomenological continuities. Thus psychedelic science is a useful case for affirming (if not articulating) the unique character of the unrealized dimension of the future — an element of life itself which resists digital control.

(painting by Guy Billout)