In 1979, an IBM internal training manual included the line: "A computer can never be held accountable. Therefore a computer must never make a management decision."

Forty-seven years later, that sentence is back, precisely as the business world debates how much decision-making to delegate to AI.

The Problem With AI That Hallucinates - and Still Works

Large language models are stochastic by design: they sample each output token from a probability distribution, so the same prompt can yield different outputs from one run to the next. (The variation comes from sampling settings such as temperature, not from changes to the model's parameters.) This makes them powerful but unpredictable, and that unpredictability is the core of the accountability challenge.
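The effect of sampling can be sketched with a toy example. Assume a hypothetical next-token distribution for a fixed prompt (the tokens and probabilities below are invented for illustration): greedy decoding (temperature 0) always picks the same token, while temperature sampling can return different answers on different runs.

```python
import random

# Hypothetical next-token distribution for one fixed prompt.
NEXT_TOKEN_PROBS = {"approve": 0.55, "deny": 0.35, "escalate": 0.10}

def sample_token(probs, temperature, rng):
    """Sample one token; higher temperature flattens the distribution."""
    if temperature == 0:
        # Greedy decoding: always the single most likely token (deterministic).
        return max(probs, key=probs.get)
    # Temperature scaling: raise probabilities to 1/T, then renormalize.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

rng = random.Random(42)
greedy = {sample_token(NEXT_TOKEN_PROBS, 0, rng) for _ in range(100)}
sampled = {sample_token(NEXT_TOKEN_PROBS, 1.0, rng) for _ in range(100)}
print(greedy)   # a single token every time
print(sampled)  # typically several distinct tokens
```

The same prompt, the same model, the same probabilities - yet the sampled run is not reproducible decision-making, which is exactly why its output cannot carry accountability on its own.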

Accountability Doesn't Disappear - It Shifts

When we delegate decisions to AI, we don't eliminate accountability - we transfer it. Whenever AI causes harm - a bad credit decision, an unjust termination, flawed medical advice - a human ends up bearing the consequences; the only question is whether that accountability was assigned deliberately or by default.

What This Means for Organizations Today

The framework is simple: the more irreversible the decision - layoffs, resource allocation, client relationships - the greater the required human presence. AI can assist, analyze, and suggest, but final accountability must remain with a person. IBM's 1979 sentence described not the limits of technology but the limits of accountability. Those limits haven't changed.
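As a minimal sketch of how such a framework might be wired into a workflow: the irreversibility score, the threshold, and the decision names below are all hypothetical, but the pattern - AI output is advisory above a reversibility threshold, and a human must finalize - is the point.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    irreversibility: float  # 0.0 = easily undone, 1.0 = permanent (invented scale)

# Hypothetical policy threshold: above it, AI may only recommend, never decide.
HUMAN_REQUIRED_ABOVE = 0.5

def route(decision: Decision, ai_recommendation: str) -> str:
    """Return how the decision is finalized under the oversight policy."""
    if decision.irreversibility > HUMAN_REQUIRED_ABOVE:
        # High-stakes path: AI output is attached as advice, a person decides.
        return f"ESCALATE to human (AI suggests: {ai_recommendation})"
    # Low-stakes path: reversible, so automation is acceptable.
    return f"AUTO: {ai_recommendation}"

print(route(Decision("retry a failed report export", 0.1), "retry"))
print(route(Decision("terminate an employee", 0.95), "terminate"))
```

The design choice is deliberate: the gate keys on irreversibility rather than on the AI's confidence, because a confidently wrong model is precisely the failure mode the 1979 sentence warns about.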