Paul Revere for our time?
On my reading, Yuval Noah Harari has made telling a deadly serious cautionary tale his mission. In his books Sapiens and Homo Deus (full disclosure: I’m still reading through Homo Deus; I’m processing my provisional understandings of Harari’s thought here on the blog in real time), Harari paints a troubling picture of a future threatened by both human and technological overreach. Some of the implications Harari points to are existential and demand immediate attention.
After I finished reading Sapiens, I began exploring Harari’s presence on YouTube. He has done at least a couple of TED Talks (e.g., fascism vs. nationalism), and he engages in many interviews and media conversations (YouTube, podcasts, etc.). I’ve gathered recent video of Harari raising some of the prime concerns developed in his books, so we’ll try some video-rich blogging this week.
Chatting with Zuck
Shortly into a ninety-minute conversation with Facebook founder and C.E.O. Mark Zuckerberg (in April 2019), Harari raises his first concern, the unprecedented global inequality that artificial intelligence (AI) and machine-learning technologies will almost surely generate:
Harari raises another alarming concern about national polities: AI may tip the balance of power between competing political philosophies in favor of authoritarian, totalitarian schemes.
From experience, we sense that distributed decision-making is clearly superior to a centralized approach. However, Harari’s observation shows how AI technology may well turn the Starfish and Spider organizational paradigms on their heads.
Finally, Harari summarizes the most pressing concerns he’d raised in their discussion:
Not “Redcoats,” but “Algorithms are coming!”
Oh, let’s not fool ourselves. The complex technology dimension of this does not free us from Pogo’s wisdom:
We have met the enemy and he [sic] is us.
I feel Harari’s concern is with the novel complications the new technology introduces into our already often problematic human-to-human dynamics, not to mention our frequently troubled human-to-other-life dynamics.
In Homo Deus Harari writes:
If we want to understand our life and our future, we should make every effort to understand what an algorithm is, and how algorithms are connected with emotions. [pg. 83]
The “emotions” piece of this is intriguing; we’ll try to get back to that next week.

Algorithms?
Well, let’s begin with the opening definition that Harari gives for ‘algorithm’ in Homo Deus:
An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation. [pg. 83]
A more complex example [of an algorithm] is a cooking recipe. [pg. 84]
After describing how a beverage machine makes tea through the direction/oversight of an algorithm, Harari goes on to say:
Over the last few decades biologists have reached the firm conclusion that the man pressing the buttons and drinking the tea is also an algorithm. [pgs. 84-5]
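In programmer’s terms, Harari’s distinction is the difference between a function and any one call to it. Here’s a minimal sketch in Python (the steps and names are my own illustration, not Harari’s):

```python
# The function below is the algorithm (the method);
# any single call to it is a particular calculation (one cup of tea).

def make_tea(water_ml, tea_type, steep_minutes):
    """The same methodical steps, whatever the inputs."""
    return [
        f"Boil {water_ml} ml of water",
        f"Add one bag of {tea_type} tea",
        f"Steep for {steep_minutes} minutes",
        "Remove the bag and serve",
    ]

# Two different "button presses": same algorithm, two different calculations.
for step in make_tea(250, "green", 2):
    print(step)
for step in make_tea(300, "black", 4):
    print(step)
```

The beverage machine, the cooking recipe, and (on Harari’s reading) the tea drinker differ only in what the steps are made of, not in the logic of following steps.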
Everything is an algorithm
I note that nearly a century earlier, Alfred North Whitehead pioneered essentially the same notion, only in philosophical form (e.g., Process and Reality).
Growing vulnerability
Harari wonders what will result when AI and machine-learning technologies (through saturation of data acquisition) develop enough to figure out how to hack the human process. Harari argues AI is very near a tipping point on this technological development right now.
This is one terribly significant place where the problem of free will enters in. Research reveals evidence that aspects of human decisions precede conscious awareness of the decision. Some would argue this alone renders Enlightenment understandings of free will impotent. Leaving our agency aside (plus our questions and narrative/philosophical understandings of it), doesn’t an environment of AI algorithms far more sophisticated than the human algorithm make us increasingly vulnerable to being manipulated and controlled by AI? As Harari argues, AI needn’t be perfect, just better than us.
When AI understands the outcome of our processing before we’re aware of it ourselves, won’t behind-the-scenes manipulation of our algorithm become child’s play for the AI system? Additionally, won’t the manipulation be completely invisible to us, because the intervention/interference will occur within the pre-cognitive portion of our processing?
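To see why “better than us, not perfect” is leverage enough, consider a toy predictor (entirely my own illustration, nothing Harari describes): a tiny model that watches a person’s past binary choices and guesses the next one. Even a slight habit gives it an edge over chance, and edges compound across millions of interactions:

```python
import random
from collections import Counter, defaultdict

# Toy sketch: a bigram model that watches someone's past L/R choices
# and predicts the next one. The person and their habit are simulated
# (hypothetical); the point is only that beating chance is cheap.

counts = defaultdict(Counter)  # (last two choices) -> next-choice tallies
history = []

def predict():
    context = tuple(history[-2:])
    if counts[context]:
        return counts[context].most_common(1)[0][0]
    return random.choice("LR")  # nothing observed yet, so guess

def person_chooses():
    # A slight habit: after two L's, pick L again 70% of the time.
    if history[-2:] == ["L", "L"] and random.random() < 0.7:
        return "L"
    return random.choice("LR")

hits = 0
for _ in range(10_000):
    guess = predict()
    actual = person_chooses()
    counts[tuple(history[-2:])][actual] += 1  # learn from this context
    history.append(actual)
    hits += guess == actual

print(f"Predictor accuracy: {hits / 10_000:.1%}")  # settles above 50%
```

A real system would have vastly richer data than ten thousand button presses, which is exactly Harari’s worry.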
For convenience’s sake?
This is the complete opposite of a coercive power play. As Harari states in the video above, people won’t resist or have any apparent reason to resent any of this because, ultimately, the manipulation will occur pre-cognitively.
You might say, “I’d never willingly hand my agency over to AI!”
Perhaps.
Wait.
Haven’t you already willingly handed your privacy over to AI?
More on algorithms and their implications next week.
Your thoughts?
I never know what I’ve said till I hear the response. What did you hear me say?
Note: This blog has employed a serial approach to outlining Spiral Dynamics, a helpful developmental anthropology based on the research of Clare Graves. Introduction (June 30, 2018); first in series (July 1, 2018).