An artist's illustration of artificial intelligence (AI) depicting language models that generate text, created by Wes Cockx as part of the Visualising AI project.
Photo by Google DeepMind on Pexels.com

The Ideal Compass for AI: Philosophy

AI should be viewed as more than a technical issue; it calls for the social sciences and human-centered thinking. It influences fundamental values like equality and ethics, and the philosophical questions answered in its design shape the outcomes of AI systems.


Treating AI as a purely technical issue and neglecting the perspective of social sciences and human-centered thinking is one of the gravest mistakes we could make.

AI has long been on the business world’s agenda—and lately, it has become the centerpiece of this column too. The reason is simple: AI is not just a technological matter. It is the foundational building block of the future we are constructing.
And it has two sharp edges. On one side, there are promises of efficiency and the potential to tackle pressing issues like climate and the environment. On the other, the risk of deepening cracks in core values like equality, ethics, justice, conscience, and inclusion.

Treating AI as a purely technical issue is a mistake; neglecting the perspective of the social sciences and human-centered thinking is another, equally grave one. The decisions we make today in designing these complex systems are laying the bricks of tomorrow, which is why the issue cannot be separated from age-old philosophy.

Philosophy Is Eating AI

In an article published in MIT Sloan Management Review titled "Philosophy Is Eating AI," Mark T. and Brian E. offer a spot-on definition of this moment: "For better or worse, philosophy is eating AI."
This might sound abstract, but it points to a concrete truth: AI systems are not merely lines of code. Every algorithm embodies a worldview. When you process data, you encode your worldview: you decide what to focus on, and you choose whom to center in your decisions.

Let’s unpack that. AI systems are always built on the answers to philosophical questions, whether consciously or not. These questions include:
• What are we aiming for? Is it just efficiency, or are we also pursuing a fairer order?
• How do we define truth? Which data do we collect, and which do we consider irrelevant?
• What is knowledge, and how do we trust it? What risks do we accept, and which uncertainties do we ignore?

These are fundamental philosophical questions.

Let’s give an example. Imagine you’re developing an AI that calculates credit scores. If your system relies solely on past payment data, you are making a philosophical assumption. You are suggesting that a person’s future can be completely assessed by their past numerical performance.
But a counterargument might say: “Human potential can’t be reduced to past data alone.”
So what you’re building is not just a simple algorithm—it’s a worldview about human worth. Whether you notice it or not, philosophy seeps into your system. Every algorithm carries a set of values, a list of priorities, and a map of meaning.
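
To make this concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the features, the numbers, and the model stand in for whatever a real scoring system would use. The point is that the philosophical assumption lives in the feature list, not in any single line of logic.

```python
# A minimal, hypothetical sketch: a credit-scoring model trained only on
# backward-looking payment data. Feature names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one applicant: [missed_payments_last_2_years, months_of_history].
# By choosing ONLY these features, we quietly assert that a person's future
# can be read off their past numerical performance.
X = np.array([
    [0, 36],
    [1, 24],
    [4, 18],
    [6, 12],
])
y = np.array([1, 1, 0, 0])  # 1 = repaid past loans, 0 = defaulted

model = LogisticRegression().fit(X, y)

# Nothing in the code says "the past fully determines the future" -- yet by
# excluding every other signal (changed circumstances, context, potential),
# that is exactly the worldview this model operationalizes.
applicant = np.array([[2, 20]])
print(model.predict_proba(applicant))  # probabilities for [default, repay]
```

Swapping in or leaving out a single column here is not a neutral engineering choice; it is a claim about what counts as evidence of human worth.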

Let’s look at more examples:
• A school implements an AI model to track and evaluate student success. The model considers only test scores, overlooking factors like learning difficulties, family background, language barriers, and social or physical disabilities. "Success" is thus reduced to a single number. How sound is that?
• An urban planning algorithm optimizes vehicle traffic but fails to account for elderly pedestrians, people with disabilities, and children on bicycles. Is that what we understand by a sustainable city? (The sketch after this list makes the point concrete.)
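
Here is a deliberately toy sketch of that second example. All the metrics and numbers are hypothetical; real traffic models are far richer. What it shows is simply that an optimizer can only care about what its cost function measures: whoever is absent from the objective is absent from the outcome.

```python
# A toy, hypothetical model of traffic-signal timing; every metric is invented.

def vehicle_delay(green_seconds: float) -> float:
    # Toy assumption: longer green phases for cars mean less vehicle delay.
    return 100.0 / green_seconds

def pedestrian_wait(green_seconds: float) -> float:
    # Toy assumption: those same long phases make people at crossings wait.
    return 1.5 * green_seconds

def cars_only_cost(green_seconds: float) -> float:
    # Pedestrians are literally absent from this objective.
    return vehicle_delay(green_seconds)

def inclusive_cost(green_seconds: float) -> float:
    # Making other road users visible to the optimizer changes the answer.
    return vehicle_delay(green_seconds) + pedestrian_wait(green_seconds)

candidates = [10, 20, 30, 60, 90]  # candidate green-phase lengths in seconds
print(min(candidates, key=cars_only_cost))   # 90: maximal green time for cars
print(min(candidates, key=inclusive_cost))   # 10: a balance, in this toy model
```

The two objectives run on the same "data" about the same street; only the definition of cost differs. That definition is where the philosophy hides.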

In short, every system we design makes an announcement: "This is how we define reality, this is how we measure success, and this is where we assign meaning."

This is where the real impact, and the real risk, of AI begins. Once a model is established, its outputs start to appear as objective truth. Yet it is always possible to include different data, ask different questions, and keep challenging assumptions.

Good Intentions Aren’t Enough

In my view, AI has now become a critical pillar of the sustainability discourse. But that discourse can’t be sustained with good intentions alone. If we mitigate the climate crisis but deepen other forms of injustice, we all lose.

That’s why it is vital to ask these questions at the very beginning of system design:

• What is the true purpose of this system?
• Through what lens does it define reality?
• Whose story is counted as data—and whose voice is left out?
• Which uncertainties are ignored?

At this point, some say, “Don’t philosophize.” But what we’re talking about isn’t an abstract intellectual exercise. On the contrary, it is the key to sincerely embracing sustainability goals and building lasting trust. We need to clarify our intentions and values. If we don’t, even the most advanced algorithms risk leading us to repeat old mistakes, only faster.

So What’s the Solution?

The solution is not to leave AI entirely in the hands of technologists. Social scientists must be invited to the table. A broader consensus must be built. Only then can we achieve a more just, rational, and effective vision for the future.

AI is one of our most powerful tools for a healthy future. But first, we must acknowledge that it is not just a technical transformation—but a series of societal and philosophical decisions.

Let’s end with a question:
Will we use AI to more efficiently and systematically repeat our old habits (and mistakes)?
Or will we dare to ask: What if there’s another way?
Will we strive to build a healthier, more livable world?

The answer is not in the hands of future generations.
It lies in the courage of those who are making decisions right now.


