I don't want to add more AI blah blah to your feed, but I think I have a trick for cutting through the hype and understanding what's real.
The trick? Understanding and thinking in probabilities.
When we talk about Generative AI, what we are really talking about under the hood is algorithms.
For example, you may already know that the "T" in ChatGPT stands for "Transformer."
In this context, a Transformer is not a fancy space robot that can shape itself into a semi-truck, but an algorithm.
This type of algorithm determines which word the system will "say" next when generating text, producing output that resembles natural human language.
The "thinking" and the resulting output are based on probability - i.e. each candidate word in a set gets a weight, and the word with a high enough degree of confidence is the one that comes next in the output.
Don't hate on the oversimplification - I know there's way more happening under the hood. Where I'm going with this is that we are dealing with a technology that is not "deterministic" in nature; it is based purely on "probabilistic" outcomes.
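To make that oversimplification concrete, here's a toy sketch of probability-weighted next-word selection. The candidate words and their weights are entirely made up for illustration - a real Transformer scores tens of thousands of tokens with a neural network - but the core idea is the same: the next word is sampled according to weights, not looked up as a single "right" answer.

```python
import random

# Made-up candidate next words with made-up weights.
# A real model computes these scores from the preceding context.
candidates = {"dog": 0.55, "cat": 0.30, "banana": 0.10, "quasar": 0.05}

def pick_next_word(weights):
    """Sample one word, with probability proportional to its weight."""
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words], k=1)[0]

# Different runs can produce different words - that's the
# "probabilistic, not deterministic" part.
print(pick_next_word(candidates))
```

Run it a few times: "dog" comes up most often because it has the highest weight, but the lower-weight words still appear sometimes. That variability is a feature of the sampling, not a bug.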
Our human brains are wired for causality and deterministic thinking - i.e. if this happens, then this will happen, resulting in this outcome.
Because of this hard-wired determinism, some of us humans look at the output of a Generative AI process as being "right" or "wrong" rather than the output of a complicated set of calculations to find the most probable of a number of different possible outcomes.
So if you're waiting on GenAI until it can get things "right" every time, you will most likely be waiting a long time.
Conversely, if you believe that the technology in its current state is going to take human jobs that require high degrees of tacit knowledge and creative thinking, then you are essentially believing what amounts to science fiction.
Why? Again, we're talking about algorithms. These algorithms can only "know" what they've been trained to know. They calculate the most *likely* outcomes based on probabilities, and not an absolute golden source truth (yet).
Wherever you are on the GenAI adoption spectrum from "Healthy Skeptic" to "I Believe So Hard I Replaced My Web3 Tattoo With An OpenAI Logo," understanding the concept of thinking in probabilities will help you see the technology for what it is in its current iteration.
How does one learn to think probabilistically? Well, that's another story, but I would start with searching for or asking your favorite chatbot about the concept of Bayesian probability as a primer.
From there move on to Bayesian networks to understand how probability applies across chains of interactions and events.
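As a taste of what that primer covers, here is a minimal Bayes' rule update with made-up numbers: a hypothesis H with a low prior, and a piece of evidence E that is much more likely if H is true than if it isn't.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# All numbers below are illustrative, not from any real dataset.
prior_h = 0.01          # P(H): prior belief that hypothesis H is true
p_e_given_h = 0.90      # P(E | H): evidence E is likely when H is true
p_e_given_not_h = 0.05  # P(E | not H): E still shows up sometimes anyway

# Total probability of seeing the evidence at all.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Updated (posterior) belief in H after observing E.
posterior = p_e_given_h * prior_h / p_e
print(round(posterior, 3))  # → 0.154
```

Notice the punchline: even strong evidence only lifts the belief to about 15%, because the prior was so low. Bayesian networks extend exactly this kind of update across chains of connected events.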
When you're ready to do a cannonball into the deep end of the pool, dust off Daniel Kahneman's book Thinking, Fast and Slow, focusing on Part IV when you eventually get to it.
Anything to add?