
Diminishing return…
Historian Yuval Noah Harari has sounded an alarm regarding the power of some of the new technologies to harmfully disrupt human life as we know it. Key disruptions include serious economic impacts on life as humans steadily lose their utility and instrumental value as a factor in the means of production.
Perhaps even more destructively, some of the new technologies will increasingly disrupt human identity formation, interrupt social harmonization, and prevent political reconciliation. Social media algorithms designed to increase user engagement (and user addiction) also produce partisan polarization. While I grant that social division is an unintended by-product of ‘surveillance capitalism,’ it is devastating nonetheless. Harari offers his audience examples of both. First, the former: the economic impact.

UBI?
As the new technologies displace human labor, a new way of understanding economics will be needed, along with new means to distribute resources and the fruits of production. You’ve likely heard Universal Basic Income [UBI] mentioned. In a Q&A session of a September 2016 talk on Homo Deus, Harari discusses:
Obviously, Harari’s concern regarding how to define “universal” gets to the heart of the present political turmoil over “open borders” and “nationalism” vs. “globalization.” Any significant UBI would be an existential threat to many localized systems of privilege.

Real-time AI [Artificial Intelligence] example
This week I had occasion to speak with the energy company that provides my residence with electricity. I phoned and was greeted by a pleasant AI voice saying:
“Thanks for calling _________”—followed by the same greeting in Spanish, with an option for choosing that language—and then prompting me: “Please tell me in detail how I can help.”
A bit flustered, realizing I’m entering into “conversation” with a machine, I tried a generic category that related to my issue: “Billing question,” I said. Funny, like a caricature of what one might do with a foreigner, I think I even spoke a good bit louder than usual to make sure I was clearly understood.
The pleasant AI voice responded with a list/menu of possible related options.
Now, mind you, it was quite unlikely my somewhat odd issue could be anticipated in a menu of options. So, failing in that, the AI voice again prompted me:
“Please tell me in detail how I can help.”
I’m thinking, “tell me?” A machine is saying “me” about itself?
Now, the postmodern part of me is pretty much OK with that. I do confess, though, hearing a machine use the pronoun “me” about itself did trigger dissonance in me with respect to language and narrative commitments I’ve made.
And, of course, I fell right into the trap. Short of any other idea, and believing it to be the fastest way to get a human agent on the line, I played along. I tried to tell the machine in detail how it could help me. It only took a couple of failed repetitions to discover the ‘magic’ words: “Human agent.” It will be interesting to see how long that ‘trick’ lasts as the AI evolves.
The company has decided that incorporating the AI answering system has future benefits worth the uncomfortable feelings it generates when customers are greeted by an awkward, marginally efficient machine. Making the AI the first ‘entity’ to handle all incoming calls is a policy that reflects the company’s willingness to invest in the AI by giving the system a crucial in-context opportunity to do its machine learning.
Harari talks about how we are nearing a tipping point when AI ascends to dominance, transcending the need for human labor in many instances. So, how will we know when we’re there? That will vary greatly depending on the application. With regard to my energy provider, it will be a very simple matter that the present customer service agents will feel first. The tipping point is reached when the AI learns how to handle any and every call as well as a human agent and simply stops referring calls. It won’t be long until AI can far outperform human customer service agents. AI will be much more efficient, and, remember, the company has already proven its willingness to use AI even though it fails the Turing test. With zero labor cost, AI doesn’t need to be perfect; it only needs to be a little better than cost-heavy humans.
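For the curious, the routing logic I encountered, and the tipping-point rule it implies, can be sketched in a few lines. This is purely a hypothetical illustration: the intent names, the escalation rule, and the tipping-point test are my assumptions, not the energy company’s actual system.

```python
# Hypothetical sketch of an IVR router like the one described above.
# Intent names and the escalation rule are assumptions, not the
# provider's actual implementation.

KNOWN_INTENTS = {"billing question", "outage report", "start service"}

def route_call(utterance: str) -> str:
    """Return which 'entity' handles the call: the AI or a human agent."""
    text = utterance.lower().strip()
    if "human agent" in text:
        return "human agent"   # the 'magic words' escape hatch
    if text in KNOWN_INTENTS:
        return "AI"            # AI handles a recognized intent itself
    return "human agent"       # odd, unanticipated issue: refer onward

def tipping_point_reached(calls_handled: int, calls_referred: int) -> bool:
    """The tipping point described above: the AI simply stops referring calls."""
    return calls_handled > 0 and calls_referred == 0
```

On this sketch, saying “Human agent” always escalates, an unrecognized issue also escalates, and the tipping point arrives the day the referral count stays at zero.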
Resist
My new policy—while the opportunity lasts—when dealing with all AI customer service systems is to only use two words, “human agent.” For now, the only thing I want the machine to learn from my interactions with it is that the first and only action the AI needs to take is to provide a human agent.

Who are we?
Harari also raises the ‘human identity’ concern, and that is the far greater problem, in my view:
As noted at the top, Harari is a historian. History is very helpful for knowing how humans have approached different conditions and opportunities in the past. Our proven willingness to exploit ‘others’ of all sorts, and in endless ways, is a perennial problem. It is important to recall here, too, that Harari has suggested money is the most universally accepted story humans have ever invented:
Finally, Harari provides the relevant history and grounds the concern behind the “danger” he mentions above, the danger inherent in any loss of human instrumentality and utility:
Next week
A thousand pardons: I’m two hundred words over (ordinarily 800), so we fell well into the 4-minute-read designation this week. I hope you still feel the piece, and the time spent engaging it, to be worth the trouble.
Continuing with Harari next week.
Your thoughts?
I never know what I’ve said till I hear the response. What did you hear me say?
Note: This blog page has employed a serial approach to outline, Spiral Dynamics, a helpful developmental anthropology based on the research of Clare Graves. Introduction (June 30, 2018), first in series (July 1, 2018).