“Modern man has conquered nature. But he remains a stranger to himself.” – Zygmunt Bauman
The AI age has brought a striking paradox. The limits of what we can do keep expanding, yet we seem to forget, more and more, why we do it and what the consequences are.
Data flows endlessly. Processing power grows. Models become more capable. We know the planet is warming. We measure its effects, track species loss, and calculate drought projections. But somehow, this knowledge does not translate into responsibility, because the missing link isn't data; it's conscience.
Sustainability Is Not a Technical Issue
For years, we’ve discussed the importance of sustainability. Yet we often treat it as a technical challenge: energy efficiency, carbon footprints, recycling systems…
These are vital, yes, but today's crisis goes deeper. It's not just a system failure; it's the result of a long chain of deliberate choices, choices shaped by a model of civilization that equates production with profit and life with productivity, that treats nature as raw material and consumerism as a formula for happiness.
As sociologist Ulrich Beck described, modernity is a machine that generates its own crises and then seeks to solve them. We created the climate crisis; now we are trying to solve it with artificial intelligence.
But here lies the problem: trying to fix foundational issues with the same mindset and values that created them is neither sincere nor effective. It risks leading us to a more “efficient” collapse.
The Capacity for Moral Action
AI offers us many solutions. It can forecast disasters, optimize irrigation, streamline logistics, regulate energy flow.
These are necessary and useful. But they are not enough. Because, as Hannah Arendt reminds us: “Technical instruments cannot replace moral decisions.”
Action isn't just about outcomes; it's about accountability. An algorithm may predict a flood, but what truly matters is who decides what happens in its aftermath, and whether those decisions are fair and just.
This is no longer just a question about nature—it’s a test of our humanity. Ultimately, it comes down to one thing: our capacity for ethical and moral action.
Who Writes the Code?
AI-supported systems can now help reduce carbon emissions. But real progress requires understanding what we're actually trying to fix, and why we produced those emissions in the first place. The answer lies not only in technology, but in our ethics and our sense of responsibility.
So, who develops these technologies? Who benefits?
For companies, this means navigating the fine line between ethics and profit.
For public actors, it means choosing between short-term interests and long-term societal good.
For individuals, it raises a fundamental dilemma: will we be passive consumers of the system, or active agents shaping it?
Pierre Bourdieu's concept of habitus explains how social structures shape our thoughts and actions. Today, it's not only social structures that do this; algorithms shape our habitus, too. Technology is more than a tool. It is a system that shapes who we are, how we think, and what we value.
So the values embedded in those algorithms, what gets prioritized and what gets left out, now have profound consequences.
The Burnout Society
The crisis we face is also cultural and existential. What do we live for? Which relationships carry meaning? Is the future only about productivity, or should it also preserve human values?
Byung-Chul Han’s idea of the burnout society is relevant here. Everything is faster, more efficient, more performance-driven. But meaning is eroding.
The problem isn't productivity; it's the lack of context. We have plenty of solutions, but no direction. Little action. Even less will.
In such an environment, smart sustainability is not a technical framework; it is a moral orientation. Yes, we are developing new technologies. But perhaps it is time to ask how we build a better life, too.
In Conclusion
Yes, technology is advancing. There’s no going back—nor should there be. But we still have a choice about where we go with it.
With AI, the real question becomes: How do we build a life that is more just, more meaningful, more livable?
Because today, what we need most is not more intelligence—but better direction.
Not more data—but more conscience.
Not more speed—but deeper values.
And so, perhaps the most radical question of our time is this: We have advanced our capacity for intelligence. But are we willing to evolve our humanity as well?
