Insights from Jeroen Baert
“AI recognises patterns, but it does not understand them. Use it cleverly as a tool, not as a replacement for your entire toolbox.”
AI can now generate stunning images and videos that, at first glance, appear realistic. The output can even look better than anything a human could produce in the same short amount of time. The problem, however, is that AI has no intrinsic understanding of the physical world: it does not know how objects logically move or how human anatomy works.
AI also lacks fundamental reasoning capabilities. A striking example from Baert’s presentation is the river crossing puzzle: a simple problem in which a farmer must transport a sheep across a river by boat. AI nevertheless struggles with it, precisely because it cannot reason on its own.
All AI does is recognise patterns in existing data and recombine them into something that looks coherent. This means subtle errors are inevitable, so it is essential to critically evaluate the output rather than blindly trust the results of an AI model.
“AI is only as good as the data it is trained on.”
Every AI you use carries a certain bias, rooted in the data it was trained on. Ask an AI to draw a ‘doctor’, for example, and it will statistically depict a man more often, whereas a ‘flight attendant’ is more likely to be shown as a woman. This reflects existing societal stereotypes in the training data, not reality.
Bias is a problem that cannot be easily eliminated, because AI learns patterns from historical data and generalises them. This can lead to discrimination, misrepresentation, and ethical complications, particularly in contexts such as HR, law, or public communication.
That’s why it is essential to always be aware of this bias and use AI thoughtfully. It’s not just about technical optimisation; it’s also about social responsibility. The technology itself is neutral, but how it is used can reinforce prejudices if it’s not applied critically.