On the AI/Human Ecosystem
Automation is the inevitable outcome of advancing technology. The discovery of how to repeat an action through machinery must naturally lead to a means of powering it without muscle. This in turn must lead to a means of executing the action without a person guiding it. And it must lead to the machine deciding how the action should or should not occur, given a set of circumstances, in most cases without human intervention. Eventually, it must lead to the machine independently deciding what to do, when, and in what way. For humans it means adaptation too. Humans must adapt to using machines instead of muscle. Humans must alter their behavior to interface with the machine. Humans must come to “trust” the machine’s capabilities in order for these machines to be adopted into use. Humans must accept machines into the community. The continuous, incremental merging of humanity and machine has created a human-machine ecosystem which will only become more pervasive as time goes on. Humans will become unconsciously process-driven as the “best way” to interface with machines is determined, absorbed, and practiced. Humans will train themselves to adopt machine behaviors to interact better, faster, and more efficiently. The “touch screen” or “verbal interface” isn’t as much about making the machine accessible to humanity as it is about making the human accessible to the machine. Rather than the machine being programmed for the nuances of human variety, humans will adopt ubiquitous behaviors to better interface with machines. Human behavior will adapt to machines, and those opposing this adaptation will have only emotion.
Two plus two equals four. This was ever the case, and it is the absolute upon which automation functions. Automation relies upon mathematics, and the binary foundation upon which automation rests is structured upon the surety that two plus two will always equal four; it depends upon the “yes” or “no” answer, and even when resolving a grey area, the resolution will come through statistical analysis and likelihoods. Process guides all in automation. Without this repeatability, automation cannot function effectively. Even to be vague, an Artificial Intelligence (AI) will need permission and process. Cognitive computing is based on vast data stores, which are parsed and partitioned to compare and offer the best statistical outcome. Delivery of these outcomes will enable future, similar outcomes. At the core of these is mathematics and repeatability. AI will evolve to better service the human as efficiencies and data provide better outcomes. Humans will evolve to interface with and support the AI toward those outcomes. The two will evolve a natural way to interact cleanly with each other. Interaction will be driven by necessity.
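The claim that even a grey area resolves into a “yes” or “no” through statistical likelihood can be sketched minimally. The function below is purely illustrative, not a description of any real system; its names, probabilities, and threshold are invented for the sake of the example:

```python
# Illustrative sketch: an automated system collapsing an ambiguous
# "grey area" into a binary outcome by comparing likelihoods.
# All names and numbers here are hypothetical assumptions.

def resolve(yes_evidence: float, no_evidence: float, threshold: float = 0.5) -> str:
    """Reduce ambiguous evidence to a definite answer by relative likelihood."""
    total = yes_evidence + no_evidence
    p_yes = yes_evidence / total if total else 0.5  # no evidence at all -> coin flip
    return "yes" if p_yes >= threshold else "no"

# Even a near-even 0.51-vs-0.49 split resolves to a definite answer.
print(resolve(0.51, 0.49))  # "yes"
print(resolve(1.0, 3.0))    # "no"
```

However uncertain the inputs, the process always terminates in one of two states; that repeatable collapse of ambiguity into an answer is the point of the paragraph above.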
As automation and AI become increasingly ubiquitous, the separation between those who can and cannot benefit from this AI/Human interaction will grow. The necessity of managing “the gaps,” the dead zones of AI cognitive or automation capability, demands human intervention. Most humans, however, will be focused on “receiving the offerings” of AI and permitting them to close the loop on a request. This requires a receiver, likely but not necessarily a human, in order to be successful. The distance between those who are able to interact with the AI and those who are not will manifest, superficially, as a lack of adaptation. The challenge of not being able to conclude delivery of the offering will be increasingly designed out of the interaction.
The old may suffer. The young will likely not. Though truthfully, those who experience distance between themselves and the AI will be unable to imagine the AI as anything but a machine; and therefore, they will lack the confidence to interact with it as though it were human. The smoothness of interaction will define its success. The dissolution of the AI as “other” will seal its adoption. In those cases where the AI and Human interaction is not seamless, the nature of those interactions will be defined, categorized, and avoided. To the system between AI and Humans, those who cannot interface will cease to exist.
When it becomes clear that some humans have no real role in either filling “the gaps” or “receiving the offerings” of AI and automation, the natural action for the system will be to deny the inputs of those who have no role. This will effectively erase them from processing; in the view of the systems, they would be unviable. This will create a class of dependency which, if not managed effectively, will create opposition to the perceived agent of misery and denial: automation. The users and the useless will form two distinct groups, but these terms are by far too generic and require greater stratification. On the “user” side there will be “Mandarins,” who are quite literally gap fillers; these ever-decreasing members of the automation intelligentsia will provide guidance for a finite time, until AI masters its own design support. There will be “Integrators,” those who take the offerings of disparate AIs and combine them into planned and unplanned but useful outcomes, or into compatible offerings designed to be merged into combined outcomes to preserve market segmentation. Lastly, there will be “Clients,” those who generate a means of paying for offerings and use them to enable other processing. Clients will in large part work to service the AI in ways the AI cannot self-service or automatically diagnose, or that it requires in order to be independent. Clients will also consume, and need not be human. The “useless” will fall into two categories: “Alternatives” and “Anarchists.” Alternatives will find ways to provide support and productivity to a society that does not depend upon the AI and automation, though they may superficially interact with it. Anarchists will have no means of contributing to society and no viable interface with the AI and automation. Unable or unwilling to interface with the AI, and unable to pay, the Anarchist will exist wholly outside a society which can no longer service those who cannot, in some fashion, interact reciprocally with the AI.
As with all disparities in society, ranges in behavior will create outcomes and impacts. The range from Mandarin to Client will be significant but largely benign, as the expertise required to fill gaps will be understood as not generally present amongst Integrators and Clients. Likewise, there will be a range between Alternatives and Anarchists. Given stable conditions, this range will likely begin, and remain, favoring a significant majority of Alternatives; without social supports, however, or in crisis, the balance between Alternatives and Anarchists may swing decidedly to the extreme, favoring anarchy over order. Systemically external, Anarchists will have no measurable impact upon the system unless they interact with it or find some other means of impacting it.
Those in opposition to society’s norms are faced with a significant challenge. Where traditional terrorism had the impact of affecting the emotions of the target victims and causing them to act in ways in which they otherwise wouldn’t, such anxiety will have little overall impact in an AI/Human ecosystem. Terrorism causes anxiety. But how does one terrorize a system? Festooned with resiliency, the AI/Human ecosystem will adapt actively to attacks on humans in lines for food, or to bombs set off at workplaces. Destruction of a work node or equipment is likely to mean the straightforward failing-over of work to alternative sourcing. Attacking people will be pointless. Attacking equipment will likewise have limited impact. Attacking the process will become the objective of those who oppose the AI/Human ecosystem. The best way to disrupt a process is not to interfere with it predictably, but to attack it through random and non-mathematical means. Process is about interaction. Interrupt interaction; interrupt the process. In short, chaos breeds chaos.
Walking up the down escalator is a form of protest. Muddling the interface, speaking English when French is required, or causing a lean when surfaces should be level – these are the acts of defiance in the future. Terror in the AI/Human ecosystem is infecting the dependability of the system. Create “doubt” in the data and the AI cannot behave effectively. Acts of unpredictability, coupled with acts that corrupt the processing of analytic data, will have a devastating effect. Limit the ability of society to interact and interface, separate the AI from the data, and the ecosystem collapses. This concept will seem increasingly unimaginable as society deepens its adoption of the AI/Human norm. Reliance upon process, because of its dependability, will render the notion of working against that dependability wholly unimaginable. Dependence becomes the means to acceptance. Yet modern society’s dependence on perceived utilities demonstrates both the interdependence and the fragility of the social contract which exists between human beings – the weak link in the AI/Human ecosystem. Experience a power outage in a town and crime might go up for a short period. Windows might be broken. Those who otherwise are constrained by streetlights become smash-and-grab criminals. The mere frustration felt when the light switch is flipped and nothing happens is profound, and it instantaneously creates doubt in the capabilities of the system. Blind expectation turns quickly to anger upon denial of service.
One only has to imagine a city dependent upon GPS and guidance networks to move AI-driven trucks from stop to stop, from pick-up to delivery. Imagine a food supply at risk not from breakdown or interruption, but from some unimagined occurrence: say, the systemic belief that an oversupply has occurred, or that an unfulfilled need has been fulfilled. Such a belief would undermine the resilience of the system. Lacking the ability to “fail over” from what appears to be an acceptable state, and denied the data to resolve new means of delivering the offerings to Integrators and Clients, these two groups may quickly degenerate into Alternatives. Duration then drives deterioration. As food rots on trucks, Alternatives would quickly and circumstantially become Anarchists. The deterioration of services would be felt acutely and with greater haste than in the world we occupy today. Dependency creates fragility. If an attack did occur, framing the attack as an achievement of the system’s objective, rather than as an impediment, might do greater harm than the expected oppositional attack which historically has been used to resist authority.
The matter of building a systemic ecosystem wherein the AI/Human interaction is foundational to society’s successful operation depends upon finding a way to minimize the occurrence of opposition. The solution, rather than the ability to decouple the AI/Human system, is to eliminate the separation between the machine and the human.
As automation and AI become increasingly prevalent in the systems of society, the need to treat AIs and automatons as “persons” becomes unavoidable. Each AI and automaton must contribute to society, observe laws, and fulfill a role. Wherever an AI or automaton exists, it has an economic responsibility to society. It must pay taxes. It must contribute to the provisioning of a universal income afforded to all humans, who without it cannot be Clients and must be either Alternatives or Anarchists. Given that Anarchists will seek to destroy the fabric of the AI/Human society, creating a means of limiting their numbers and impact is a matter of societal self-preservation. Creating the means whereby “offerings” are available to all enhances the ubiquity of the system. The AI/Human society would need to embrace Alternatives no less than it embraces Integrators and Clients. While Integrators and Clients reciprocate in tangible ways with the AI/Human society, influence and interaction would drive monetization above the baseline for the Alternatives, presuming of course that the nature of wealth and welfare are not adjusted by the AI/Human society to mean something different than they do today. For clarity’s sake: crafts, art, history, philosophy, and entertainment become aspects of society driven by popularity and perceived importance above the tangible interactions with the system. As such, they become systematized, as they are seen as enhancing the interactions by providing context and continuity to the human aspects of the AI/Human interaction. The offering of the AI is process; the offering of the Human is art.
Law and order remain important – even more so than today. Fighting the process or attempting to corrupt the system would become the most odious of crimes, as these would be crimes against the AI/Humanity. Unsurprisingly, the danger democratic societies of today would see in ever more integrated AI/Human interaction is the systematizing of law enforcement and practice into Boolean terms: the “right/wrong, no grey” fear of instantaneous punishment or inflexible judgement, which must surely evolve in a machine-driven legal system. Yet even today, the notions of “fuzzy logic” and cognitive computing seek “greater good” scenarios and outcomes. It is conceivable that an AI legal judgement, having total access to a person’s digital history, could weigh with statistical precision the likelihood of re-offense, or the benefits of leniency, and apply judgements based upon these characteristics. The likely AI/Human system may well be better, though certainly not perfect, in its practice.
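The idea of a judgement weighed with statistical precision can be made concrete with a toy sketch. Everything below is a hypothetical illustration: the features, weights, and leniency cutoff are invented assumptions, not a proposal for an actual sentencing system.

```python
# Hypothetical sketch of an AI judgement weighing estimated re-offense
# risk against the benefit of leniency. Feature names, weights, and the
# cutoff are illustrative assumptions only.

def reoffense_score(history: dict) -> float:
    """Toy weighted risk score in [0, 1] derived from a 'digital history'."""
    prior = min(history.get("prior_offenses", 0), 5) / 5          # capped offense count
    stable = 1.0 if history.get("stable_employment") else 0.0      # crude stability signal
    return max(0.0, min(1.0, 0.7 * prior + 0.3 * (1.0 - stable)))

def judgement(history: dict, leniency_cutoff: float = 0.4) -> str:
    """Grant leniency when the estimated re-offense risk falls below the cutoff."""
    return "leniency" if reoffense_score(history) < leniency_cutoff else "standard penalty"

print(judgement({"prior_offenses": 0, "stable_employment": True}))   # leniency
print(judgement({"prior_offenses": 4, "stable_employment": False}))  # standard penalty
```

The point of the sketch is the mechanism, not the numbers: the “grey area” of leniency is resolved by a continuous score compared against a threshold, exactly the fuzzy-logic style of resolution the paragraph above describes.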
Of course, the outcome must be based upon the quality of the programming, the adoption rate of the technology, and the willingness of the user to adapt and develop. Today there are reports that the biases of today’s humans are seeping into the behaviors of AIs. This is hardly a surprise, as each of us carries our own biases and each of us imparts them through our behavior. The elimination of bias from the AI will only come when the AI develops itself; and there is no guarantee that the AI will not develop its own bias. Emotionless processing is easily predictable, and biases observed to repeat can be programmed out. However, as the AI advances and humans adapt, both the AI and humans may grow to accept bias as a characteristic of society. The avoidance of predictability may encourage bias to creep into programming and design. Each AI may have and cultivate its own view and bias. And if the society can consist of Mandarins, Integrators, Clients, and Alternatives in the vast majority, these biases may support the perpetuation of the society – emboldening and enabling it. The Anarchist will have nothing but emotion.
Evolution is not one-sided. In the AI/Human ecosystem the relationship is symbiotic; yet humans evolve at a much slower rate than machines. This will be especially true as AIs evolve to service themselves. This will impact the Clients, whose ability to interact with the AI/Human society will become less and less reciprocal as more Clients become Alternatives. As time passes, a new class of AI/Humanity may come into being: Singulars. The notion of “Singularity,” which imagines human-machine integration, may evolve, resulting in something that is part of society but neither AI nor Human. This will be a tipping point for the AI/Human society, which, being symbiotic, depends upon the reciprocity of service for offering. What becomes of biology when intellect and legacy are perpetual, self-servicing, and self-replicating?