
Human intuition and instinct in the era of big data and artificial intelligence



Since the dawn of the Age of Reason and the modern era of science, people have tried to use the scientific method to make leadership decisions. This has produced great successes, like our modern levels of agricultural production ... and atrocities like the Holocaust. Over the past 200 years, data acquisition and analysis have become increasingly sophisticated, and our mathematical models of how the world works have become ever more accurate and reliable.

With the advent of advanced machine learning, our ability to gather and analyze data to make decisions has exceeded our individual understanding of how these systems work. In many cases, even when we cannot possibly understand how decisions are being made inside the "black box" of a machine learning algorithm, the gains in productivity or performance are so astounding that it is easy to dismiss criticism of these systems. Machine-learning-augmented processes outperform the obsolete humans they replace, reducing operating costs and increasing revenue. The benefit to the bottom line outweighs other considerations, and the voices of critics are drowned out by record profits.

I am far from being a technophobe or neo-Luddite. Some critics of artificial intelligence systems decry them as a catastrophically disruptive technology, prophesying mass unemployment or pointing to the doomsday scenarios of science fiction. That is not my argument in the slightest; I am a strong proponent of implementing AI systems across the business and public spheres. What I fear is that as these systems become more and more ubiquitous, we will become more and more dependent on them to make our executive choices for us, absent any critical analysis of their net effects.


It's not hard to find examples of how this is already taking place, and of how easily humans abdicate responsibility for decision making. Take street navigation apps: many people use them to get from Point A to Point B without thinking and would be hopelessly lost without them. I recently heard a podcast guest recount that his wife trusts Google to navigate more than she trusts him, and that she overruled him when he wanted to take a route he knew would be faster than Google's suggestion. In that example, the cost is perhaps arriving at your destination a few minutes later than if you'd followed your instincts ... but what about when you're talking about deploying an AI system across an entire industrial sector? Our natural tendency to become mentally lazy and let the computer make our decisions for us could have catastrophic macro-effects.


The first factor I can imagine leading to catastrophe is that no matter how much data we gather and how complex the models we build to describe the world, the computer model is not a 1:1 representation of the real world. This is a fundamental principle of any scientific model: models describe and help us understand reality, but they are not reality, just as a map of Kansas is not the actual state of Kansas. However, I frequently see a mentality among software engineers and programmers bordering on religious zeal: blind faith in the accuracy of their datasets and the realism of their models.

I encountered this in numerous industries when it came time to invest in new software to facilitate business processes. Inevitably, the executives in the organizations where I worked were dazzled by slick interfaces and buzzword-laden sales pitches ... and inevitably there were insurmountable flaws in the software when it failed to account for unexpected real-world events. The most common unexpected event was that the employees using the software didn't think like the engineers who built it, didn't populate data fields properly, got frustrated, and started working around the software completely. Ultimately, this meant that in every case the business in question spent massive amounts of money on the software, recurring licensing fees, and the required IT infrastructure, only to find a year or two later that it was essentially not being used at all. In several cases, this capital outlay crippled the business, which subsequently failed.

Although in my experience, and in the experience of hundreds of workers I have asked about it, this phenomenon is nearly universal, software engineers and programmers scoff and dismiss my critique whenever I broach the subject. Programmers believe that if the software isn't working right, it is the fault of the user for using it incorrectly ... whereas I would say that if this is such a widespread issue, the fault lies in how programmers are taught to view their place in how the world works. Engineers are supposed to create solutions to fix real-world problems; the world is not supposed to conform itself to the solution the engineer built. A perfect example is social media, which has created widespread social problems and societal division, to which the engineers who created it respond, "Well, the users are using it wrong; we intended for them to post pictures of their happy families, not argue about politics."


The next potentially catastrophic factor I envision is the critical lack of technological literacy at the executive leadership level across society. Executives, often given their roles because of social status and pedigree more than any actual hard skills, are often grossly naive about technology in general. Their choice of which IT systems and software to buy, and why, often boils down to wanting "the best," which usually translates to "the most expensive" or "the one my competitor uses." Combine this with my earlier observation about how programmers and their salesmen present their products: slick brochures and catchy pitches win out over analysis of net outputs or any consideration of whether the workforce is competent enough to adapt to the new systems. In the era when the accountant could work around the shortcomings of her software suite with a clever spreadsheet, this was a costly annoyance ... when AI systems replace the accountant herself, there will be no person in the loop to apply such common sense, to stop and say, "Wait a minute ... that can't be right."

Machine learning systems have the potential to transform the human experience, and no doubt will, perhaps more than any other technology in human history. Like all technologies, they are tools. Whether their net effects will be a boon to humanity or a blight depends entirely on how they are deployed. If they are deployed haphazardly, with blind faith, without intuition and judgment, without understanding of their potential for disruption, and by leadership that fails to treat them with the respectful caution they deserve, the results will inevitably be catastrophic.


A cross-disciplinary educational process needs to take place. Programmers, software engineers, IT specialists, and others in the tech industry need to be educated to understand the value of human intuition and judgment. They need to learn how to listen to the people who aren't technologically savvy but will be expected to use the tools they create, and to take their input seriously. Conversely, the leaders in public and private organizations who decide to buy and implement these systems must attain a certain level of technological literacy in order to make rational choices about what systems should be acquired and why. The average age of a Fortune 500 CEO is almost 60, and of the hundreds of CEOs and executives with whom I have worked over the years, precious few could troubleshoot a problem with their email. It has been immensely profitable for IT firms to keep executives ignorant.

Although IT has been mission-critical for 30 years, many executives still lack a basic understanding of how systems work. IT firms don't want any employees of their clients to really understand IT, certainly not the leadership. IT is supposed to fade into the background so they don't have to worry about it ... just call if something doesn't work, and it'll be billed to your lucrative support contract. Short-term gain for specific firms has outweighed consideration of long-term net effects for society. Applied to the implementation of machine learning systems in business and the public sector, this could spell large-scale economic devastation. Ultimately, such economic disruption will spell disaster for the tech sector itself, so it is in the mutual interest of all parties that these issues be taken seriously.

The ethical implications of any given tech firm now having the capability to deploy an AI system that could disrupt the global economy cannot be overstated; look no further than how algorithmically managed social media disrupted our past election cycle. The tech sector's argument, "If we can do it and turn a profit, we're entitled to do so, and ethics be damned," when applied to a technology that could potentially destabilize entire nation-states, is patently ludicrous ... as ludicrous as the idea that any garage inventor should have the right to make and sell nuclear weapons if he can build them. Unfortunately, ethics and philosophy are not subjects much stressed in IT tech schools or university computer science programs.


If ever there was an argument for the value of a well-rounded education in the humanities, regardless of your primary field of study, I would think this qualifies. It is why my own path turned from wanting to start a marketing consulting firm and a casual, hobby-level interest in military history and technology, to an interest in artificial intelligence and robotics (initially as applied in a military context), to an investigation of the applications of machine learning in business and marketing, to a study of philosophy, ethics, morality, and consciousness itself. As I mentally grappled with these concepts over the past few years, I increasingly found myself turning to the field of philosophy as a source of answers for many of the extrapolated outcomes that occurred to me. Perhaps this helps connect some dots for anyone who has followed my work, which before I began this blog mostly took the form of disjointed ramblings on social media.

My intention from the outset was to position myself in a rapidly changing job market by preparing for a career path that did not yet exist. Well, one thing led to another, and here I am co-hosting a podcast about spirituality and consciousness. I have been trying desperately to communicate to audiences that typically dismiss philosophy and such subjects as useless (the scientific community, the tech sector, business leadership, our elected officials and public servants) that this is not a navel-gazing exercise, while also trying to raise awareness of the importance of verified scientific thought within the metaphysical and creative communities. The solution lies at the intersection of these valid but disparate worldviews.
