What happens when an employee makes a mistake because of bad information provided by a company's AI?
We seem to be going through a period where rank-and-file employees bear the brunt of bad decisions made by others (not limited to tech CEOs)...so what happens when an employee commits a significant error by following the output of an LLM or RAG process?
We're all well aware of the concept of "hallucinations" at this point, and we tend to accept the risk that information returned by a consumer-level service like ChatGPT may not be accurate, chalking it up to the relative immaturity of the technology. But are we ready to accept that same risk in an enterprise setting when, God forbid, profit margins and/or stock valuations are at stake?
If it boils down to a "He said / It said" argument, does the employee get the benefit of the doubt when there's no readily explainable account of the algorithmic decisions the AI made to produce the output it did?
For the companies rushing to roll out Generative AI right now: how are you going to convince your own employees that they should trust whatever this newfangled system is telling them, when it's trained on the very same data they already struggle to trust as it is?
Are made-up marketing terms like "Trust Layer" enough to help you sleep at night? Or do we maybe need to think about these kinds of implications before rushing headlong into new complexity and new problems?
I mean...somehow people can sleep at night after laying off thousands of employees over email, so I don't think the employee comes out ahead in any scenario where it boils down to them vs. an expensive new toy that someone has banked their next promotion or the company's stock price on.
I think I just answered my own question.