
Bridging Research and Reality: Tommaso Colombo on AI That Matters

Tommaso Colombo leads the AI division at Spindox, where applied research meets client delivery. In this interview, he reflects on the cultural shift around AI, the risks of abstraction, and the value of hybrid thinking—reminding us that true innovation is never detached from reality.

[Image: a robot going down stairs]

We’re used to discussing science, technology, and business—but today’s guest might just be the most representative of all three worlds. Please welcome Tommaso Colombo, Head of AI Research at Spindox.

Thank you, I’m really glad to be here. It’s important to take time to reflect on innovation—not just how we do it, but also how we talk about it. Explaining things clearly to others can actually help us see them differently ourselves.

At Spindox, I lead the AI division, a team of over 50 people. We work on artificial intelligence, but it’s important to clarify what that means for us. The same team handles both industrial projects and applied research: from European-funded initiatives to regional or national research programs.

We believe this hybrid approach is essential. Real innovation happens when research and development solve concrete, real-world problems. But to know which problems matter, you need to work directly with clients on operational projects. That’s why our R&D efforts are always aligned with what clients need and value.

At the same time, working with Italian clients isn’t always easy. They often want innovation and AI delivered in two days. But complex technologies—and the change management they require—can’t be rushed. That’s where our R&D plays a key role: it lets us bring prototypes and solutions that are already well advanced to the table, accelerating time to market.

So yes, my job is to guide this team, but also to develop new opportunities—industrial, commercial, and research-based. I’m constantly working with clients to explore how AI can solve real problems and uncover untapped opportunities.

Let’s talk about the evolution of AI. From being a niche research topic to becoming a headline-driving technology, AI has gone mainstream. What’s changed in your work—and in your perception?

We’ve definitely reached a turning point. For years, we worked in relative quiet—developing algorithms, experimenting, publishing, applying AI to niche industrial use cases. Then, with the explosion of generative models and ChatGPT, everything changed.

Suddenly, everyone had an opinion about AI. Clients started asking questions. Boards wanted to understand what was coming. Decision-makers could no longer ignore it.

This has been both a challenge and an opportunity. The challenge is the usual one: expectation versus reality. Many believe that AI can do anything. But that’s simply not true. What we can do is often remarkable—but also very specific. Context, data quality, and objectives matter.

The opportunity, on the other hand, is enormous. We now have a cultural and strategic window where people are open to talking about intelligence—about how decisions are made, about what knowledge is, and what it isn’t.

It’s no longer just about technology. It’s about rethinking how we work, how we create, how we decide. And that’s a much deeper transformation.

You mentioned expectation versus reality. Let’s talk about risks—not just technical ones, but also cultural and strategic. What do you see as the biggest risk in how we’re approaching AI today?

The biggest risk is overconfidence.

There’s a kind of narrative seduction happening: the idea that AI is not only powerful, but also neutral, objective, and even infallible. That’s dangerous.

AI systems are trained on data. And data reflects our world, with all its biases, gaps, and contradictions. If you don’t understand that, you end up making decisions based on models that seem objective—but are actually just reproducing historical distortions.

Another risk is abstraction. People use AI without knowing how it works. They rely on outputs without understanding the process. That creates a disconnect.

That’s why I believe in hybrid thinking. AI doesn’t eliminate human decision-making—it changes it. It augments it. But for that to happen, we need to develop new skills: critical interpretation, model literacy, and contextual awareness.

AI is not a magic box. It’s a system that requires care, oversight, and constant questioning.

Let’s go back to your model. You lead a team that combines applied research and client delivery. How do you keep that balance? What’s the key to making research actually useful for business?

It’s all about integration. For research to be useful, it has to be connected to reality.

We don’t believe in innovation labs that are detached from operations. We want research that gets tested on the ground, with real users, real constraints, real feedback.

That’s why our team works closely with clients—not just in delivering solutions, but in co-designing them. We involve them early. We show them prototypes. We ask questions, even uncomfortable ones. And we bring back what we learn into our research process.

The other side is just as important: we need to protect time and space for deep thinking. Research isn’t just an accessory. It’s what allows us to anticipate, to explore, to prepare for what’s next.

So we alternate. Sometimes we’re hands-on, pragmatic, fast. Other times, we step back and ask: what’s emerging? What’s shifting? What are we not seeing yet?

That’s the only way to keep innovation alive—and honest.

One last question we always ask. When you face a difficult decision—one where data and logic aren’t enough—what do you rely on?

I rely on coherence.

For me, a good decision isn’t just about solving a problem. It’s about staying aligned—with your values, your vision, your context.

Sometimes, the data tells you one thing, but your experience tells you something else. Sometimes, the best choice isn’t the most efficient, but the one that builds trust or opens future possibilities.

So I ask myself: does this decision make sense not just today, but in the long run? Will it still make sense if the context shifts? Will I be able to explain it—to others, and to myself?

That’s what guides me. Not perfection, but coherence.