A Framework for Defining Human Designated Roles (HDRs) in the Age of AI.
Introduction
There has been quite a bit of hype and hysteria over the past several months in relation to the latest wave of AI products and capabilities entering the marketplace. While these new capabilities have indeed made an incredible impact, it’s worthwhile to take a more balanced approach toward assessing those impacts and the next steps that might be taken in response to them. In recent weeks, we’ve witnessed calls to place a moratorium on all AI research, calls to create a new UN agency to deal with the existential threat of AI, and quite a few product announcements and stock market reactions. What does it all mean? What should we actually be doing at this point? Is AI an actual threat to human civilization, and if it is, why are all of the folks who’ve been working on it for decades suddenly worried about it now?
There are a lot of questions and concerns, yet few concrete answers and even fewer rational suggestions of how we ought to proceed with AI. A moratorium without specific definition seems both unlikely and unproductive, and a UN agency without a specific mandate is much the same. This is a complex topic, and the best way to tackle any complex topic is to break it down into manageable chunks and deal with those chunks one at a time. There does, in fact, seem to be an obvious focal point for the first round of pragmatic discussions about AI impacts, one with both near- and long-term implications. This first round of examination should not center on some future existential threat, at least not directly; the first dialog on how to deal with AI’s impact on society should focus on us: the ordinary people who are already being affected, and all those who will eventually have to face a new world of work as the result of AI adoption.
Proposed Premise – We are now ready to discuss and define which types of work-related roles ought to be exclusively reserved for humans as opposed to some AI capability. In this context, we’re not referring to augmentative use of AI applied to existing human roles, but to scenarios wherein AI capability might fully replace human roles. Such a scenario is clearly possible; the question is whether it should be allowed to happen for any or all roles, or whether some jobs should be protected as Human Designated Roles (HDRs). Determining and defining certain jobs as HDRs will require action of some sort to both actualize the distinction and to maintain it. This process of designating jobs could be approached from either a national or an international perspective; however, it is less likely to work well if handled nation by nation.
Disclaimers
1. It’s understood that not all of the pertinent AI-related definitions associated with the HDR framework have been agreed upon as yet. This will become particularly important when focusing on legal impacts and formal regulations (at both the local and international level).
2. The exact role definitions need to occur as part of a larger collaborative process; this article is merely meant to help kickstart that process.
3. This approach is not meant to address all potential impacts or risks associated with the adoption of ever more powerful AI capabilities; it focuses instead on one area that is relatively simple to quantify and mitigate.
Assumptions:
The following assumptions apply to both the dialog and the later management of any HDR-related paradigm.
1. There needs to be a per-role threshold defining 1) how much augmentative AI may be applied to an individual role before that role becomes primarily AI-centered and 2) what impact augmentative AI has on overall employment within the role category.
2. Some AI capabilities will become ubiquitous in that they will become embedded in most available information technology. The expectation is that this will increase individual productivity, but not necessarily replace individuals. However, these ordinary productivity-focused products can and likely will replace people in some contexts, and as those contexts become better defined, the groups charged with determining HDRs can address them.
3. Robotics, when coupled with AI, must also be considered when making HDR-related determinations. This applies equally to transportation scenarios, such as self-driving vehicles and planes.
4. It is understood that any such paradigm as is being proposed here is subject to change over time as jobs, technology and perceptions evolve.
HDR Framework:
The HDR Framework (see figure below) is the practical starting point for discussions and potential consensus on determining what roles ought to be reserved for humans in the decades to come. The components within this proposed framework include the following:
Non-Critical versus Critical Roles – All jobs and roles can be classified as either critical or non-critical based on their nature. Typically, a critical role is one with direct potential impact on life and/or safety. This could include various military roles (though perhaps not all), as well as doctors, first responders, and similar positions.
Non-critical roles essentially include everything else and can be further classified by cultural or business focus:
· Cultural – Roles with a cultural focus represent humanity’s collective ability to maintain a unique cultural identity.
· Business – These types of roles represent the majority of all human employment. While they may not be as culturally significant for civilization, they are certainly paramount for the maintenance of economic activity.
Spectrum of AI Capability – This is a simplified way to quantify what types of technical capability are being applied to the various role categories (and the individual roles within them). While this current list is subject to interpretation and will likely change, certain considerations will likely remain intact in any HDR paradigm. Most important among these is the application of AGI or other AI capability to autonomous roles. While military scenarios are the most commonly considered in this context, others are important as well.
Community Defined versus Regulatory Defined Roles – This refers to the process of how roles are defined and designated as “Human-Only.” In other words, critical roles are much more likely to require regulatory definition and eventual compliance, whereas most non-critical roles will be subject to community definition and voluntary adherence. Regulation can occur at both the national and international level, but local efforts at regulation and compliance will be much more difficult to manage. The types of groups that may be involved in helping to create community definitions and guidelines are fairly wide open right now, but as with any non-regulatory paradigm, compliance cannot be enforced.
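To make the framework’s dimensions concrete, the classifications above can be sketched as a simple data model. This is purely illustrative: the role names, threshold values, and field names here are hypothetical assumptions for demonstration, not part of the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    CRITICAL = "critical"                      # direct potential impact on life / safety
    NON_CRITICAL_CULTURAL = "non-critical (cultural)"
    NON_CRITICAL_BUSINESS = "non-critical (business)"

class DefinitionMode(Enum):
    REGULATORY = "regulatory"                  # formal rules, enforceable compliance
    COMMUNITY = "community"                    # guidelines, voluntary adherence

@dataclass
class Role:
    name: str
    criticality: Criticality
    hdr: bool                                  # designated as Human-Only?
    ai_augmentation: float                     # share of tasks handled by AI (0.0 - 1.0)
    augmentation_threshold: float              # per-role limit before "primarily AI-centered"

    @property
    def definition_mode(self) -> DefinitionMode:
        # Per the framework: critical roles lean toward regulatory definition,
        # non-critical roles toward community definition.
        if self.criticality is Criticality.CRITICAL:
            return DefinitionMode.REGULATORY
        return DefinitionMode.COMMUNITY

    @property
    def primarily_ai_centered(self) -> bool:
        return self.ai_augmentation >= self.augmentation_threshold

# Hypothetical example roles; the threshold numbers are placeholders only.
surgeon = Role("surgeon", Criticality.CRITICAL, hdr=True,
               ai_augmentation=0.2, augmentation_threshold=0.5)
copywriter = Role("copywriter", Criticality.NON_CRITICAL_BUSINESS, hdr=False,
                  ai_augmentation=0.7, augmentation_threshold=0.6)
```

In this sketch, the surgeon role would fall under regulatory definition while the copywriter role would be community-defined, and only the latter would cross its (assumed) augmentation threshold. Actual thresholds would have to come out of the collaborative process described above.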
Conclusion & Next Steps:
The adoption of AI technology will have a profound impact on human society; the thing to keep in mind is that we have every ability to control and define those impacts if we begin a more constructive dialog now. Focusing on how AI shapes the future workforce represents only part of that dialog, but it is a great place to start. By initiating this part of the discussion, we can help prepare for the more complex issues and begin creating the organizations and planning necessary both to mitigate potential risks and to pave the way for AI-related opportunities.
Copyright 2024, Stephen Lahanas