%META:TOPICPARENT{name="FirstEssay"}%
Your profile defines your future. If you are a white male studying law at Columbia, you are also likely to vote for Democrats, travel to Europe, eat salad for lunch, take an interest in wine, play tennis, and listen to rock music. If you are an unemployed black male born in Harlem, you are also likely to vote for Democrats, but you are more likely not to travel, to eat junk food for lunch, to drink beer or soda rather than wine, to play soccer or video games rather than tennis, and to listen to hip-hop instead of rock music. And this is what will be suggested to each of them, as well as to their similar friends. The algorithm locks people into their profiles. It makes people become what they were already likely to become and, in this sense, prevents individuals from freely realizing themselves and reproduces existing social patterns.
Begin by telling people what they will get from reading. Starting out with two paragraphs of exposition before we even begin to find out what your contribution is will lose readers you could have kept.
Because you have seen deeply so far, advice on the improvement of the essay involves specifics.
You attribute to "algorithms" the various effects in "guiding" behavior you describe. This error is becoming so cheap and easy that reality will never disturb it. Not only does it distract people from actually thinking about technology, it introduces into policy the idea of "algorithmic transparency," which is a favorite recommendation of tech-adjacent rather than technically expert people.
An algorithm is, strictly speaking, a procedure for making a computation: a computer program stricto sensu. An efficient method for sorting a list or computing the transcendental arccosine of floating-point data is an algorithm.
Generally speaking, the algorithms involved in ad-tech targeting and public-order "nudging" are pattern-recognition algorithms. They are simple and general: if you look at them and you're a proficient reader of program code you can see everything about them very easily and in a short time. Those programs are exposed to "training data." This data trains the program to recognize patterns. Any given set of training data will cause the simple pattern-recognition program being trained on it to behave in slightly different ways when recognizing its relevant patterns (in the concrete technical sense, in differently biased ways).
The pattern-recognition program as it has been trained is then given a flow of "real" input, on which it is trained in turn, so that it "improves" its ability to find the pre-established patterns as they are modified by the training process. The whole state of the model at any moment is the product of all its previous states. Knowing "the algorithm" is more or less useless in knowing what is actually occurring in the model.
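The point can be made concrete with a few lines of code. Here is a hypothetical sketch (the function and data are invented for illustration, not drawn from any real system) in which the same learning procedure, run on two different streams of training data, ends up in two different states and classifies the same input in opposite ways. Reading the code tells you nothing about what either trained model will do; only the data history does.

```python
# The "algorithm" here is the classic perceptron update rule. It is
# identical in both runs; only the training data differs.

def train_perceptron(examples, epochs=20):
    """Train a two-feature perceptron; returns the learned weights and bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:
                # Adjust state toward the missed example; the final model
                # is the accumulated product of every past correction.
                w[0] += label * x1
                w[1] += label * x2
                b += label
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Identical algorithm, opposite training data (invented toy examples):
data_a = [((1, 0), 1), ((0, 1), -1)]   # here feature 1 marks the positive class
data_b = [((1, 0), -1), ((0, 1), 1)]   # here feature 2 marks the positive class

w_a, b_a = train_perceptron(data_a)
w_b, b_b = train_perceptron(data_b)
# The two trained models now classify the same input (1, 0) oppositely.
```

"Transparency" about the update rule above reveals nothing about the bias of either trained model, which lives entirely in `w` and `b`.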
Think of a spam filter, trying to do an intelligent job filtering your email for you. It begins by training on lots of spam, presumably including the things you too consider spam, so that when you begin receiving email for it to filter, it catches a bunch of spam, not much of which you considered not-spam (called "ham" in hacker jargon). Over time, as you mark spam you didn't want to receive and ham you regret was sidelined in transit, the filter gets better at doing the job for you. (Because "spam" is actually just mail you didn't want and "ham" is actually just stuff you did, this simple Bayesian probability calculator that could be whipped up in an afternoon does a pretty good imitation of an impossible job, namely mirroring your semi-conscious subjective decisions about what email you want to see.)
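The "afternoon project" is no exaggeration. What follows is a hypothetical minimal sketch of such a naive-Bayes filter in Python (the class name and word-list interface are invented for illustration; real filters add tokenization and persistence). Note that the classifier's behavior is determined entirely by the counts accumulated from your corrections, not by the handful of lines of fixed code.

```python
import math
from collections import defaultdict

class BayesFilter:
    """Toy Bayesian spam filter of the kind described above."""

    def __init__(self):
        self.counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, words, label):
        # Each correction from the user changes the model's state; its
        # behavior is the product of all the training it has ever seen.
        for w in words:
            self.counts[label][w] += 1
        self.totals[label] += 1

    def classify(self, words):
        # Compare the log-probability of the message under each class.
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # Smoothed prior from how many messages of each class were seen.
            prior = (self.totals[label] + 1) / (sum(self.totals.values()) + 2)
            score = math.log(prior)
            n = sum(self.counts[label].values())
            for w in words:
                # Laplace smoothing keeps unseen words from zeroing a class.
                score += math.log((self.counts[label].get(w, 0) + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

Train it on messages you mark as spam and ham, and `classify` begins to mirror those semi-conscious subjective decisions, exactly as described.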
So, to be brief about it, it's not the algorithms, it's the data over time and the evolving state of the model. To be even briefer, if less comprehensible, it's the Gestalt of neural networks. Algorithmic transparency is mostly useless. People who are essentially applying our free software philosophy to this problem are unaware of the differences that matter.
Behavior guiding, then, is based not on some "algorithm for guiding people" that we should be studying, but on another simple principle, the basic life-rule of the Parasite With the Mind of God: reinforce patterns that benefit you, and discourage patterns that might benefit the human, but don't benefit you. This is the basic life rule of all parasites. This also leads to reinforcing patterns that benefit the human but also benefit the parasite. That's why parasitism can be evolutionarily advantageous. We have eukaryotes and photosynthesis for this reason, after all.
In this instance, the parasite guides first of all by reinforcing patterns of engagement. Where those patterns of engagement reduce anxiety responses in the human, the patterns reinforced are experienced as "convenient" by the human.
The parasite guides secondarily by reducing patterns that reduce engagement. The negative reinforcement structure is accomplished by transferring anxiety back to the human: this is experienced by the human as FOMO, or fear of social isolation, encouraging negative internal feedback experienced as depression that can be alleviated by re-engaging.
It's only at the tertiary level that the platforms to which humans allow themselves to be connected then guide behavior by presenting particular stimuli known to elicit specific consumptive responses, a process variously described as "advertising," or "campaigning," or "activating." Whether this is democratic depends on the definition of democracy that you do not give. But one might inquire whether the question itself results from a category error or a tautology: both predation and parasitism are processes to which the concept of democracy is not applicable.
Blaming the existence of pattern-matching software for the patterns is also a category error, known colloquially among humans as "shooting the messenger." If human behavior is calculable and can be nudged in these ways, then our Enlightenment account of human-ness is incomplete, or perhaps our account of the Enlightenment and its relation to our existence after the phenomena we call Freud, Lenin, Bernays, Hitler, Skinner, Mao Zedong, Pablo Picasso and the King of the Undead Now Dead is not quite perfect. I knew that there were problems in our conception of free will before the Apple ][ existed, let alone Facebook.
But there is a prison being built. You just don't say anything about how we can walk out of it while it is still unfinished. Now that would be one hell of an essay.