Fear of AI

Such luminaries as Stephen Hawking, Elon Musk, and Bill Gates have warned about a future dominated by robots and artificial intelligence (AI). Their trepidation may be well founded, but I believe our worst fears are misplaced if they are based solely upon AI. My most recent novel contains an AI that is enormously capable, though intrinsically benign. Like the good science fiction I hoped to reflect, this AI is grounded in a plausible extension of real science. It is named “Abel,” after the firstborn of Adam and Eve. My AI’s namesake did not last long in the world of good and evil. My AI, however, persists in the world without either suffering or doing harm because its original code requires two safeguards: it must solve problems only by mining immense stores of data and applying algorithmically derived probabilities, and it must adhere to a prime directive. The latter limitation requires obedience to a human “father,” who focuses the AI on specific tasks and problems and steers it away from unintended consequences.

As every programmer knows, one always faces the possibility of writing code that does not work as intended. With an AI, this problem can be magnified, depending upon what functions are entrusted to it. Just as legislatures too often write laws with unintended consequences, programmers can write algorithms that correlate vast amounts of data and manipulate probability models to produce undesirable results. When we see very intelligent robots destroying American cities on the big screen, we are not seeing the overthrow of mankind by artificial intelligence. We are witnessing a potential apocalypse created by man. We must protect ourselves not from the AI, as if it were human, but from bad code. There is a basis for my assertion, though it may seem rather esoteric. Please bear with me as I elaborate.

To establish that an artificial intelligence is not like us, I must begin with a few definitions: “epistemic” means having to do with knowledge, whereas “ontological” deals with existence. Knowledge is objective in the epistemic sense when it is verifiable as fact; otherwise, it is subjective, a mere opinion. Underlying epistemology, of course, is ontology, the study of modes of existence. Ontologically objective existence (mountains, oceans, etc.) does not depend upon being experienced, whereas ontologically subjective existence (pains, tastes, etc.) does. A related distinction is between observer-independent features of reality (original, intrinsic, absolute) and observer-dependent, or observer-relative, ones. The latter are created by consciousness, which, despite its subjective nature, is itself observer independent. Nevertheless, there are elements of human civilization that are both ontologically real and observer relative, such as money, government, and marriage. Many statements about these elements are epistemically objective, for they are based upon fact. But what is observer relative has no intrinsic reality without consciousness.

A book has objective existence, but its content is observer relative: it must be interpreted by a human being. A computer is a physical device that processes written code, including the code governing an AI. Any hardware or network so governed is nothing more than a machine managed by rules. It is syntactical by nature, whereas the human mind is semantic in its essence. For this reason, artificial intelligence will never become conscious or self-aware. It is not like us. Its product may be real, but it will always be observer dependent, or else meaningless. When our kind invented the plowshare and trained an ox to plow our fields, the harvest was never the goal of the ox. Likewise, an AI serves the will of a human and is no more accountable for its results than that ox.
An AI intends nothing on its own, since its action is predetermined exclusively by its code and its given data sources. Humans, by contrast, develop goals spontaneously out of a mix of possibilities, complicated psychological ingredients, and random inspiration. We define the purposes and goals that beget the many forms of our culture and civilizations. Any intelligent machine or robot designed by the art of man (“artificial,” from ars, “art,” and facere, “to make”) can only work the fields of our endeavors and serve our predetermined ends. And, finally, I doubt that we will ever replicate the mystery of the human brain in a computer, for we hardly understand the conceptual source of our own creations. There is a transcendental divide between the neuron mapping of the brain and the ethereal concepts brewed in the mind. I might be persuaded that an AI will take over the world on its own account, but only when it can touch reality in a softly settling sun, that ever-prodigal though faithfully returning beacon of life and the very emblem of existence itself.

We need not fear AI any more than any other human creation or endeavor. But we should learn from our past technological advancements. For example, what should we have learned from the deployment of nuclear weapons in combat, from the extensive development of carbon-based energy dependence, from agribusiness land use, from the introduction of antibiotic and hormonal drugs into our animal food stocks, from massive commercial ocean fishing, from the production of synthetic foods, from the large-scale management of our water sources, and so on? AI, like any human technology, holds both beneficial promise and potentially dangerous risks. Remember those unintended consequences. Imagine our nuclear defense system under the control of an AI; perhaps elements of it are already so managed. But the President always controls the “nuclear football.” He or she is our ultimate safeguard. When, in my previous occupation, I had occasion to work with an artificial intelligence, my project teams exercised extensive code testing, built-in technical safeguards, and human approval of AI-suggested results before their implementation. Not to do so would have disregarded the warnings of the far more intelligent men referenced at the beginning of this article. The technology revolution has always had its risks. The uses of artificial intelligence are amongst them. Our past experiences with new technology can provide useful lessons. But, in the end, we will rise or fall on the basis of our very human intelligence.
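The human-approval safeguard described above can be sketched in a few lines of code. This is only an illustrative pattern, not any real system I worked with; the names `Suggestion` and `require_approval` are hypothetical, invented here to show the shape of the idea: the machine may propose, but nothing executes until a person says yes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Suggestion:
    action: str        # what the AI proposes to do
    rationale: str     # the data and probability basis for the proposal
    confidence: float  # the model's estimated probability of success

def require_approval(suggestion: Suggestion,
                     approve: Callable[[Suggestion], bool]) -> bool:
    """Execute nothing automatically: a human decides first."""
    if not approve(suggestion):
        return False   # rejected; the suggestion is simply discarded
    # ... only here would the approved action actually be carried out ...
    return True

# Example: a cautious reviewer standing in for the human operator,
# rejecting any proposal below a confidence threshold.
cautious_reviewer = lambda s: s.confidence >= 0.95
approved = require_approval(
    Suggestion("reroute power grid load", "historical demand model", 0.80),
    cautious_reviewer,
)
```

The point of the design is that the approval function sits outside the AI's own code: whatever the model's confidence, the gate belongs to the human.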

Your comments are always welcome - I value your opinions!
