What’s an AI black box?
If you’re looking into an AI solution for your business, you may start seeing the term ‘black box’ pop up. But what is an AI black box, and why should you care? And why do companies like ours make a point of radical transparency and explainability, with no black boxes? Today, we’ll explain what black box AI is and why you might want to steer clear.
What is an AI black box?
IBM provides a great explanation: “A black box AI is an AI system whose internal workings are a mystery to its users. Users can see the system’s inputs and outputs, but they can’t see what happens within the AI tool to produce those outputs. […] Many of the most advanced machine learning models available today, including large language models such as OpenAI’s ChatGPT and Meta’s Llama, are black box AIs. These artificial intelligence models are trained on massive data sets through complex deep learning processes, and even their own creators do not fully understand how they work.” In other words, when a system combines non-linear models, massive data sets and outputs that can’t be traced back to interpretable logic, you get a black box.
Why black box AI is a problem
Users can’t trust decisions they can’t understand. Imagine you go to your GP and she tells you, “You have cancer.” When you ask, “What!? How!?” and all the other perfectly understandable questions given the circumstances, she refuses to explain how she reached that diagnosis. You’d be sceptical, and you’d probably want a second opinion. The same goes for AI: every output that has to be independently verified costs time and erodes efficiency. It’s also hard to assign responsibility when a black box system gets something wrong, because you can’t follow the logic trail to see whether the training data, the algorithms or something else is to blame. On top of that, hidden logic can embed and amplify societal bias; just look at Grok. Meanwhile, emerging laws such as the EU AI Act are targeting exactly this lack of transparency. So black box AI is probably already on its way out.
What is radical transparency in AI?
That’s why we recommend starting your AI journey from a place of radical transparency, where you can see every step of the process and understand the logic from input to output. The benefits for trust, safety, collaboration and innovation are obvious. Transparency also makes AI more human-centred and future-proof by giving users more agency and understanding.
There are two parts to radical transparency: explainability and visibility.
- Explainability is the ability to describe how a model makes its decisions.
- Visibility is full transparency into the data, design and logic behind it.
Explanation techniques like LIME and SHAP, which attribute a model’s individual predictions to its input features, along with inherently interpretable models and open-source documentation, all have a part to play in radical transparency. And companies like ours have seen the writing on the wall for black boxes and have already built transparent AI systems.
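To make explainability concrete, here’s a minimal sketch of what a feature-level explanation looks like with SHAP, one of the tools mentioned above. It assumes the shap and scikit-learn packages are installed; the random-forest model and the diabetes dataset are illustrative stand-ins, not a recommendation of any particular stack:

```python
# A minimal explainability sketch using SHAP (assumes shap and scikit-learn
# are installed). The model and dataset are illustrative only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an otherwise opaque ensemble model on a public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes each prediction to the individual input features,
# turning the "why" behind one output into per-feature contribution scores.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # shape: (1, n_features)

# Show which features pushed this prediction up or down, largest first.
for name, value in sorted(
    zip(data.feature_names, shap_values[0]), key=lambda p: -abs(p[1])
):
    print(f"{name}: {value:+.2f}")
```

Each printed score says how much that input feature pushed this one prediction up or down relative to the model’s average output, which is exactly the kind of logic trail a black box denies you.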
Black box checklist
If you spot these red flags in an AI product or vendor, you might be walking into a black box mistake.
- Outputs are presented without explanation or insight.
- Users don’t know what data is prioritised in decision-making.
- IP protection or “trade secrets” are invoked to avoid explaining how the system works.
- No logs of how the system reached a decision, and users cannot correct outputs.
- No access to model cards, risk assessments or documentation of intended use.
- No responsible AI policy, bias mitigation or ethical guidelines.
- No willingness to allow external audits.
- Undisclosed, scraped or sensitive unowned data used to train models.
- Models are changed without notice.
- No accountability mechanism for harmful outputs.
It’s clear why we should all be demanding AI that’s open, auditable and understandable. Systems like ours should be the standard, not the exception. And when you’re looking for your next AI solution, use the checklist above to weed black boxes out of your shortlist and pick an AI you can trust instead.