I’ve addressed Reasoning in this substack before, but only in passing. There is a lot of talk lately about Reasoning models and Reasoning-related capabilities (in the context of Agents, Model Orchestration, and so on). Why the focus? Because almost universally, Reasoning seems to be the major characteristic that most experts believe has been missing from the current generation of LLM-based AI models; thus the logic goes that if we could achieve something akin to Reasoning, we’d have the missing foundation for AGI, right? Well, maybe. Let’s step back for a moment and cover some of the basic definitions again:
Reasoning - the action of thinking about something in a logical, sensible way. (Wow, that’s too vague; let’s try…) Thinking in which logical processes of an inductive or deductive character are used to draw conclusions from facts or premises. (There yet? Maybe not; how about…) Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one’s existing knowledge to generate new knowledge, and involves the use of one’s intellect. The field of logic studies the ways in which humans can use formal reasoning to produce logically valid arguments and true conclusions. Reasoning may be subdivided into forms of logical reasoning, such as deductive reasoning, inductive reasoning, and abductive reasoning.
That last one is from Wikipedia. So, we go from the incredibly abstract and completely ambiguous to the Kitchen Sink approach. But I think even at this top level we’re probably missing the mark, with a lot of a priori assumptions and recursive definition (using the same thing to define itself). One thing in particular is probably way off here - and that’s the idea that Formal Logic is at the heart of things. Let’s take a look at that for a moment.
Most people have never studied formal logic and don’t know what most of those terms mean. Well, they don’t need to, you might say, because those terms simply describe natural processes. But the thing is, they don’t. Here’s an example from the Deductive Reasoning definition (also on Wikipedia):
Deductive reasoning is the process of drawing valid inferences. An inference is valid if its conclusion follows logically from its premises, meaning that it is impossible for the premises to be true and the conclusion to be false. For example, the inference from the premises “all men are mortal” and “Socrates is a man” to the conclusion “Socrates is mortal” is deductively valid.
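As an aside, the deductive pattern in that Socrates example is exactly the kind of thing a machine can mechanize directly. Here’s a minimal, purely illustrative Python sketch (the fact and rule encodings are my own hypothetical choices, not any real library) of forward-chaining deduction over the two premises:

```python
# Toy forward-chaining deduction, purely illustrative.
# Facts are (predicate, subject) pairs; a rule (p, c) reads:
# "for all X, if X is a p, then X is a c".

facts = {("man", "Socrates")}   # premise: Socrates is a man
rules = [("man", "mortal")]     # premise: all men are mortal

# Apply rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, subj in list(facts):
            if pred == premise and (conclusion, subj) not in facts:
                facts.add((conclusion, subj))
                changed = True

print(("mortal", "Socrates") in facts)  # True: Socrates is mortal
```

The point isn’t that this is impressive - it isn’t - but that formal deduction of this kind is the easy, already-solved part; it’s everything else in everyday Reasoning that resists this sort of mechanization.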
There are lots of folks who wouldn’t necessarily apply this type of logic (knowingly, and perhaps even unknowingly), and of course this is the most basic example of the various types of formal logic being referred to. This isn’t knocking those people or formal logic per se; it’s merely a recognition that the everyday process of Reasoning likely doesn’t involve what we’ve defined as formal logic in most cases. And that makes sense: humans have been Reasoning for hundreds of thousands and perhaps millions of years, but this type of formal logic wasn’t actually defined until about 2,000 years ago. The thing we’re trying to get at is the basic, underlying capability, not its formalization and synthesis into higher and perhaps more abstract representations. Now let’s look at AGI (I wrote an article on this last year); some have tried to encapsulate it as one definition and others have presented it as a set of levels (something I agree with)…
AGI (Artificial General Intelligence) refers to a theoretical form of AI that possesses human-like cognitive abilities, capable of understanding, learning, and applying knowledge across a wide range of tasks, unlike narrow AI, which is specialized for specific functions. An AGI system would exhibit abstract thinking, common sense, creativity, and the ability to sense and act in the world, matching the general intelligence of a human being. Wikipedia
Interestingly, this generic definition from Wikipedia fails to include Reasoning, which perhaps just goes to show that industry views on what AGI and Super-intelligence are remain all over the map right now.
A New Top-Level Definition for Reasoning
To get to the point where we can quantify what Reasoning means in a measurable way, we’ll probably need a more pragmatic top-level definition. For the definition to work (in terms of supporting a more accurate and measurable definition of AGI), it will need to have the following characteristics:
It cannot be defined recursively (e.g. using synonyms of itself in the definition).
It must declare the key component elements of the concept. It’s also important to recognize here that the concept is in itself - a process (in other words, an action or set of actions more than it is a thing - verb vs noun).
It must also include within each of those components some measurable or quantifiable elements. And those components should necessarily be dynamic (i.e. capable of change).
It must be generic enough to apply to both human and computational (e.g. Artificial) Reasoning.
It must also likely acknowledge in some fashion that Reasoning can occur in various ways and at various levels (and that there is necessarily a Threshold or Thresholds associated with the various levels).
In other words, this definition can be neither abstract nor all over the map - it needs to be constructed carefully so that any system designed to achieve it can conform to a set of concrete expectations. Well, that’s a lot. Let’s give it a try…
Reasoning (New Definition) - The related set of processes wherein a human or non-human intelligence is able to identify and resolve specific problems or challenges using a variety of dynamic techniques or capabilities. Reasoning typically involves the following components in helping to solve any given problem: a) pattern identification, b) analyses of alternatives, c) application of learned experience (intuition), d) application of learned knowledge, and e) application of learned behaviors. And Reasoning can occur at multiple intensities (depths and levels) and across various durations (instant versus extended, etc.).
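To show that this definition really is measurable (per the requirements above), here is one hypothetical way its five components might be captured as a data structure. Everything here - the class name, the field names, and the toy scoring formula - is my own illustrative sketch, not a proposed standard:

```python
from dataclasses import dataclass

# Hypothetical sketch: one "episode" of Reasoning, recording
# quantifiable values for each component of the new definition.

@dataclass
class ReasoningEpisode:
    patterns_identified: int     # (a) pattern identification
    alternatives_analyzed: int   # (b) analyses of alternatives
    intuition_applied: bool      # (c) learned experience (intuition)
    knowledge_items_used: int    # (d) learned knowledge
    behaviors_applied: int       # (e) learned behaviors
    depth: int = 1               # intensity (depth/level)
    duration_s: float = 0.0      # duration: instant vs. extended

    def intensity_score(self) -> float:
        """Toy aggregate measure - evenly weighted, purely illustrative."""
        return (self.patterns_identified
                + self.alternatives_analyzed
                + self.knowledge_items_used
                + self.behaviors_applied
                + int(self.intuition_applied)) * self.depth
```

The particular fields and weights don’t matter; what matters is that each component of the definition admits a concrete, recordable value, which is exactly what the earlier definitions could not offer.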
Now that we’ve got this new definition, in my next article I’m going to address how we can: 1) apply it to AI or AGI design, and 2) measure it.
Copyright 2025, Digital Perspectives