Technology & AI · Dec 12, 2025

5 Surprising Truths About AI (That Have Nothing to Do With Killer Robots)

Featuring insights from Rohit Malnkar, AI Strategist & Data Engineer

🎥 Watch the full conversation from The Curiosity Room podcast with Rohit Malnkar.

Introduction: Cutting Through the Hype

Artificial Intelligence. The term feels like it's everywhere, plastered on every new app and gadget, making us think of either a utopian future or a world run by self-aware machines. It can sound futuristic, mysterious, and maybe even a little scary. The constant hype makes it feel both incredibly complex and impossibly out of reach for the average person.

But what if the reality is far more understandable? What if, behind the technical jargon and marketing buzzwords, AI is built on principles we can all grasp? The truth is, many of the core ideas are surprisingly relatable.

To cut through the noise, we sat down with AI strategist and data engineer Rohit Malnkar. This article distills five of his most surprising and counter-intuitive takeaways to help you separate the reality from the hype.


1. It's Not Magic, It's Just Math with Good Marketing

The first and most important truth is that what we call "AI" is not a thinking brain or a digital soul. At its heart, it's a collection of advanced algorithms—step-by-step instructions—running on powerful computers. These algorithms are designed to analyze huge amounts of data and spot patterns far faster than any human could.

Many products labeled "AI-powered" are really just following a fancy set of rules built by a human developer. Even sophisticated tools like voice assistants reveal these limitations. As Malnkar noted, asking a simple yes-or-no question can result in a list of web results, making the experience feel like "dating someone who is emotionally unavailable" where you keep repeating yourself and they just don't get you. It's pattern recognition, not consciousness.

"if I have to tell you a proper naked truth it is just math and there is no magic involved right... it has just some statistics with good marketing"

2. New Tech, Same Old Fear: AI is in its "Vaccine Phase"

People tend to "resent things that they do not understand." History is filled with examples of major innovations that were initially met with fear and confusion before becoming indispensable. AI is simply the latest technology to go through this predictable cycle.

Consider these historical parallels Malnkar shared:

Vaccines: When first introduced, many people feared they would cause harm, with some even believing they might "turn into zombie."
Cars: It was once believed that if you "go faster than a horse it will cause problems to your lungs."
Credit Cards: The concept was so alien that people wondered if it was "plastic money or is it black magic."

In each case, what began with fear and confusion eventually led to widespread adoption. AI is currently in its "vaccine phase"—a period of misunderstanding and apprehension before it becomes a fully integrated and accepted part of our lives.

3. AI Learns Like a Child... or a Student Cramming for an Exam

So how does a machine "learn"? One of the simplest analogies is that of a parent teaching a child. Imagine a parent repeatedly showing a child an apple in different contexts—in a market, on a bus, hanging from a tree. Eventually, after seeing enough examples, the child learns to recognize an apple on its own.

AI does the exact same thing, but instead of fruits, it learns from thousands or millions of examples in a dataset to identify patterns. Whether it's the shape and color of a fruit or the characteristics of a spam email, the principle is the same.
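If you want to see that idea in code, here is a minimal sketch, assuming scikit-learn is installed; the tiny labeled dataset is invented for illustration, whereas real systems learn from thousands or millions of examples rather than four.

```python
# A minimal sketch of learning from labeled examples (toy data, invented
# for illustration). The model is shown emails with labels, the way a
# child is shown apples, and picks up the patterns on its own.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",         # spam
    "claim your free lottery win",  # spam
    "meeting agenda for tomorrow",  # not spam
    "lunch with the team today",    # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each email into word counts, then let the model learn which
# words tend to appear with which label.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

model = MultinomialNB()
model.fit(features, labels)

# Given a new email, the model applies the patterns it picked up.
new_email = ["free prize waiting for you"]
print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']
```

The model never "understands" what spam is; it simply learns which word patterns tend to show up with which label, exactly like the child and the apple.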

Another powerful analogy from Malnkar is that AI training is like a student cramming for an exam. The student's main goal isn't deep understanding, but to "memorize the pattern... and write the answer how I remembered it." The AI model does this by absorbing vast amounts of information to recognize patterns and produce a correct output.

The crucial difference?

"except unlike us AI doesn't forget everything the next morning."

4. Are You Using AI or Just Fancy Automation? Here's the Difference.

Confusing the two is understandable, especially when you remember our first truth: much of what's labeled "AI" is just math with good marketing. Many companies rebrand simple automation as AI, but as Malnkar explained, there's a fundamental difference. Think of automation as an "obedient servant." You give it a set of fixed, predefined rules, and it follows them perfectly every single time, without learning or adapting.

The Email Filter Example

🤖 Automation:

You create a simple rule: "If the subject line contains the word 'invoice,' move the email to the finance folder." This system is blindly obedient. If a colleague sends a prank email with "invoice" in the subject, it will still land in the finance folder. Automation is effective, but "clueless."

🧠 AI:

An AI-powered filter goes much further. It doesn't just look for a single keyword. It analyzes thousands of signals—sender behavior, past interactions, the tone of the message, timing—to form a judgment about whether an email is spam, important, or something else. It learns from your actions and adapts over time.
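To make the contrast concrete, here is a toy sketch (not from the podcast; the names and the feedback loop are invented for illustration) of a fixed rule sitting next to a filter that adjusts its own word scores whenever the user corrects it:

```python
# A toy contrast between a fixed rule and a filter that learns from
# user feedback. Everything here is invented for illustration.

def automation_filter(subject: str) -> str:
    # Fixed, predefined rule: blindly obedient, never changes.
    return "finance" if "invoice" in subject.lower() else "inbox"

class LearningFilter:
    """Keeps simple per-word scores and adjusts them from user feedback."""

    def __init__(self):
        self.word_scores = {}  # word -> how strongly it suggests "finance"

    def classify(self, subject: str) -> str:
        score = sum(self.word_scores.get(w, 0) for w in subject.lower().split())
        return "finance" if score > 0 else "inbox"

    def feedback(self, subject: str, correct_folder: str) -> None:
        # Learn from experience: nudge word scores toward the user's choice.
        adjustment = 1 if correct_folder == "finance" else -1
        for w in subject.lower().split():
            self.word_scores[w] = self.word_scores.get(w, 0) + adjustment

ai_filter = LearningFilter()
ai_filter.feedback("invoice for March services", "finance")
ai_filter.feedback("invoice joke from Dave", "inbox")   # the prank email
print(automation_filter("invoice joke from Dave"))      # -> finance (fooled)
print(ai_filter.classify("payment invoice for services"))  # -> finance
```

The fixed rule is fooled by the prank email every single time; the learning filter drifts toward whatever the user actually does with those messages.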

To put it another way, Malnkar likens automation to a chef who perfectly follows a recipe every time. AI, in contrast, is the chef who understands the principles of cooking, experiments with ingredients, burns a few dishes, and eventually invents a new dish after learning from both successes and failures.

"automation follows instructions and AI learns from experience."

5. The Real Danger Isn't Skynet—It's Outsourcing Your Brain

The most pressing risk of AI isn't a sci-fi scenario of robots taking over the world. The more immediate and realistic danger is that humans become too dependent on it, outsourcing their own critical thinking, judgment, and curiosity.

This pressure to adopt AI is often more about psychology than technology. As Malnkar explains, "the people who build AI tools want you to feel like you are falling behind... they will create this urgency." This projects a value onto AI that may not reflect its reality.

Malnkar illustrated this with a powerful story about a museum guide who hands a "priceless" artifact to a visitor, who then accidentally drops and breaks it. The visitor is overcome with guilt and shame, only to be told the artifact was a fake. The object had no intrinsic value; its value was based entirely on the perception projected onto it. The fear and hype around AI often work the same way.

The moment we stop using AI as a tool and start relying on it to make our decisions for us is the moment we lose something essentially human.

"the moment we outsource our judgment our curiosity our decision making to AI that's when we go from using AI as a tool to depending on like a crutch"

Conclusion: A Tool, Not a Crutch

When you strip away the marketing, AI is revealed to be what it has always been: a tool. It's an incredibly powerful tool, capable of processing data at a scale humans cannot, but a tool nonetheless. It is meant to augment our intelligence, not replace it.

The path forward is to use AI to "automate the boring bits" and speed up your work, but not to "switch off your own thinking." Let it assist you, not define you. As you navigate the ever-evolving landscape of technology, keep this final thought from Malnkar in mind, as it perfectly captures the true challenge ahead:

"the future isn't AI versus humans but it's humans who know how to use AI versus humans who don't."