intention and its discontents

In light of the DAO hack (here’s a tl;dr for those not in the know), a lot of people have been talking about intention — the role that conscious will, something we generally ascribe to humans, should play in code-based organizations that are designed to be run without centralized human oversight. The logic goes that if all members of a DAO agree on the terms of the smart contract it runs, there should be little (if any) need for the intervention of human judgment during its period of operation.
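To make that premise a little more concrete, here is a deliberately toy sketch (in Python rather than Solidity, and nothing like the actual DAO contract): the members agree on rules up front, those rules are fixed in code, and what happens afterwards depends only on the code, with no provision for human override. Every name here (ToyDAO, quorum, and so on) is invented purely for illustration.

```python
# Toy illustration of "code is law": rules fixed at creation, no admin override.
class ToyDAO:
    def __init__(self, members, quorum):
        # The agreed-upon "terms": immutable after creation.
        self.members = frozenset(members)
        self.quorum = quorum            # votes needed for a proposal to pass
        self.balance = 0
        self.proposals = {}             # id -> {"recipient", "amount", "votes"}

    def deposit(self, amount):
        self.balance += amount

    def propose(self, pid, recipient, amount):
        self.proposals[pid] = {"recipient": recipient, "amount": amount, "votes": set()}

    def vote(self, member, pid):
        if member not in self.members:
            raise PermissionError("only members may vote")
        self.proposals[pid]["votes"].add(member)

    def execute(self, pid):
        # No human judgment here: if the coded condition holds, funds move.
        p = self.proposals[pid]
        if len(p["votes"]) >= self.quorum and p["amount"] <= self.balance:
            self.balance -= p["amount"]
            return f"paid {p['amount']} to {p['recipient']}"
        return "conditions not met; nothing happens"


dao = ToyDAO(members={"alice", "bob", "carol"}, quorum=2)
dao.deposit(100)
dao.propose("p1", recipient="dave", amount=60)
dao.vote("alice", "p1")
dao.vote("bob", "p1")
print(dao.execute("p1"))   # prints "paid 60 to dave"; nobody is around to second-guess it
```

The whole point of the sketch is the execute method: once the coded condition is satisfied, the transfer happens, and no human is standing by to intervene.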

In a rather dreamy moment I started thinking about a totally different institution in which the concept of intention is diminished: Eastern religion. Non-intention, surrendering to processes much larger than oneself, and participation in collaborative efforts that minimize the role of the individual are cornerstones of Buddhist and Taoist thought.

I was riffing on this cool code-philosophy parallel for a little while, trying to see things from the perspective of a hypothetical autonomous-code purist — a technologist who would never, under any circumstances, support the role of intention in a blockchain program or organization.

Unlike code, intention isn’t scientific. In fact, it’s deeply subjective, open not only to varying interpretation by others but also to later revision and even misunderstanding by its original bearer. There’s something pretty Zen about “trust in the code,” or even “trustless technology” (two terms that essentially mean the same thing, though I prefer the former). And maybe, through some mental gymnastics, the analogy between non-intention in spirituality and non-intention in code could help me understand the appeal of entirely removing humans from processes that may have very deep effects on them (such as the loss of ~$50M).

The only thing is that… drumroll, please… machines aren’t humans!

Non-intention is a beautiful idea when we’re talking about minimizing the role of the ego, of desire and selfishness, in humans — in order to alleviate their suffering (see: Buddhism 101). It’s an interesting way to think about humans creating art, too (see: John Cage).

To get really spiritual here for a second, I think that [insert your higher power of choice] gifted humans with something we can’t give to machines. Or at least we haven’t gotten there yet. You could call it consciousness, though that word doesn’t feel quite right — “soul” and “spirit” would have to be included in the definition. As part of this ineffable God-given whatever, we have the ability to set forth a will and intention that touches various factors and dimensions of our existence. These may include factors of which we’re not consciously aware.

Since machines are pretty far from being human (including the most advanced AI, though I know that’s a contentious opinion in some circles), I don’t think that what’s good for humans at the level of metaphysical principles necessarily works in the realm of computers. In fact, I know it doesn’t. Not yet, anyway.

As it is, this is a very abstract rationale for the same argument that Primavera de Filippi advances in much more concrete and practical terms here.
