Briefs


Illusion of insight

April 16, 2025

Google’s Gemini Deep Research suffers from the same core failure as OpenAI’s version, which we critiqued in February: it cannot deliver intelligence that matters. The problem isn’t the prose. It’s the purpose. These systems output language that sounds informed but consistently fail to identify the underlying objective, prioritize relevant evidence, or deliver usable conclusions.

As a test, we ran Gemini Deep Research on the topic of recent Chinese sanctions. It’s a topic our system is primed to tackle well. Here’s a representative sentence from its final output:

“Interestingly, China’s apparent willingness to invest in countries that are facing US sanctions, as noted in some analyses, could lead to shifts in geopolitical alliances and offer alternative economic development pathways for certain nations, potentially impacting the US’s global influence.”

Deep Research collapses under the weight of its own vagueness. The sentence is padded with qualifiers and clichés. “Interestingly” signals a lack of confidence. “Apparent willingness” restates the obvious. “Some analyses” offers no source or synthesis. The phrase “shifts in geopolitical alliances” ignores that we are already in a second Cold War. “Alternative economic development pathways” means nothing without examples. “Potentially impacting the US’s global influence” avoids the real question: how? In what domain? With what consequence?

Enterprise users don’t need summaries of what might be true. They need structured insight: what happened, why it matters, what happens next, and what to do about it. Gemini Deep Research—like its OpenAI cousin—misses the mark because it doesn’t know what it’s solving for. It has no embedded goal, no prioritization mechanism, and no ability to distinguish boilerplate from intelligence. It is, functionally, a language model that has been told to sound like an analyst.

The sentence above is a case study in what we warned about on February 17: models that lack motivation, instinct, and persistence will fail to produce actionable analysis. Gemini Deep Research picked sources first and foremost on the basis of technical availability, not analytical signal. It leaned heavily on academic papers and derivative think tank summaries while largely ignoring harder-to-reach but more relevant materials such as official customs notices, corporate filings, and policy briefings from foreign government agencies. That’s because it has no instinct for where signal resides.

Radiant Intel takes a different approach. Our system begins with clearly defined goals and a curated corpus of sources handpicked by experts. Our proprietary agentic architecture, encoded with decades of analyst expertise, ensures the model does not guess what matters: it follows human intent, bringing scale and speed to trusted methods rather than simulating them. Lastly, we ground the analysis in the specifics of each customer’s unique operational context, regulatory environment, and strategic priorities. The latest advances from OpenAI (such as longer context windows and lower inference costs) help us immensely, but they reinforce our thesis rather than replace it. The future of intelligence isn’t AI-only; it’s AI-scaled and human-guided.
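To make the contrast concrete, here is a minimal sketch, in Python, of what a goal-first, curated-corpus pipeline looks like. It is an illustration, not our production architecture: the source names, signal weights, and example objective are invented for demonstration, and the drafting step a model would perform is left as a comment.

    from dataclasses import dataclass, field

    # Illustrative only: analysts define the objective, the source list, and the
    # signal weights up front; the model never guesses what matters.

    @dataclass
    class Source:
        name: str             # e.g. an official customs notice or a corporate filing
        kind: str             # "primary" or "derivative"
        signal_weight: float  # assigned by a human analyst
        excerpt: str

    @dataclass
    class Brief:
        objective: str
        what_happened: list = field(default_factory=list)

    def build_brief(objective, sources, threshold=0.7):
        # Keep only sources an analyst rated above the signal threshold,
        # primary material ahead of derivative summaries.
        relevant = sorted(
            (s for s in sources if s.signal_weight >= threshold),
            key=lambda s: (s.kind != "primary", -s.signal_weight),
        )
        brief = Brief(objective=objective)
        brief.what_happened = [f"{s.name}: {s.excerpt}" for s in relevant]
        # A language model would draft "why it matters" and "what happens next"
        # here, constrained to the evidence above and reviewed by an analyst.
        return brief

    corpus = [
        Source("Official customs notice", "primary", 0.9, "New export-licensing rule announced."),
        Source("Think tank summary", "derivative", 0.4, "Sanctions may shift alliances."),
        Source("Corporate filing", "primary", 0.8, "Supplier discloses exposure to the new rule."),
    ]
    print(build_brief("Assess exposure to the new export controls", corpus).what_happened)

The point of the sketch is the ordering of responsibility: the objective and the weights come from people, and the model operates only inside that scaffold.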

Gemini Deep Research, like OpenAI’s earlier attempt, works well for casual users trying to explore a topic. But in the enterprise, where decisions depend on context, depth, specificity, and credibility, these models remain little more than sophisticated noise generators. The challenge isn’t linguistic; it’s epistemic.

Intelligence is about knowing what to ignore, not just what to include.
