Is AI Really Just a Tool?

Nov 09, 2025

Is AI Really Just a Tool? A Conversation That Changed My Mind

I've been deep in an AI phase lately, reading everything I can get my hands on, experimenting with different platforms, and probably talking about it more than my friends would prefer. So when I started gushing to a friend about my latest AI discoveries, I wasn't entirely surprised by her response.


"I don't really believe in AI," she said. "I want to bring people closer together, make real connections…not take the human element out of everything."


Her words stuck with me because I realized she wasn't alone. Many people share this concern: that AI will somehow diminish our human connections and automate away the very interactions that make life meaningful.


My instinctive response was the one I'd given dozens of times before: "AI isn't inherently good or bad; it's just a tool. Like a hammer in your toolbox, you can use it to build something beautiful or to tear something down. The same goes for AI: you can use it to free up time from repetitive tasks so you can spend more quality time with family and friends, or you can use it in ways that isolate us further."


I felt confident in this analogy. It seemed to capture the neutrality of technology and place the responsibility squarely where it belongs: with us humans and how we choose to wield these capabilities.


But then I watched an interview with historian Yuval Noah Harari, and he said something that completely shifted my perspective: AI isn't just a tool because it's agentic. It learns and adapts on its own. A hammer will sit patiently in your toolbox until you pick it up and decide what to do with it. It won't wander off and start building—or destroying—something on its own.


AI is different. It observes, learns, and makes decisions within the parameters we set, but often in ways we didn't explicitly program or anticipate. It's less like a hammer and more like... what exactly?


This realization has left me wrestling with some fundamental questions about how we think about AI and how we approach its development and integration into our lives.
If AI isn't just a passive tool waiting for our direction, what does that mean for our responsibility as creators and users? What is responsible AI, and how do we know if we're doing enough?


How do we maintain human agency when working with systems that have their own form of agency? And perhaps most importantly: how do we ensure that as AI becomes more sophisticated and autonomous, it continues to serve human flourishing rather than replace human connection?


I don't have all the answers yet, but I think asking these questions is more important than clinging to oversimplified analogies. What do you think? How should we be thinking about AI's role in our lives and communities?