I consider myself to be a skeptic when it comes to generative artificial intelligence (AI), and I’m certainly not alone.
The internet is overrun with debate about AI, from pro-AI fanatics hyping how it has totally transformed their lives to people who absolutely refuse to use it over very valid concerns, such as data privacy or its environmental impact.
While I tend to agree more with the skeptics, I find it interesting and surprising how many people (both online and off) have told me they use AI regularly and find it helpful. Part of my job is trying out new technology, and almost every time I try AI, it doesn’t work, which pushes me back into my skeptic stance. But according to Seth Juarez, VP of Product, AI Platform at Microsoft, skepticism might be the magic ingredient to effective AI use.
“I am also an AI skeptic,” Juarez told me when I sat down for a chat with him at Microsoft’s 2025 Build developer conference. “And that’s the reason why I can make them actually do what I want them to do.”
Juarez went on to explain that it comes down to understanding how large language models (LLMs) work; once people understand that, they can learn to use them properly. So, how do these things work? Juarez described LLMs as “a machine that cranks out language.”
Seth Juarez, VP of Product, AI Platform at Microsoft.
He explained that LLMs effectively just take human language and break it up into pieces of words, which are mapped to vectors and “fed into this giant math machine.” The machine then gives a probability of what the next word will be.
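To make that description concrete, here’s a minimal sketch (my addition, not something Juarez showed) of the “machine that cranks out language,” using the open GPT-2 model via the Hugging Face transformers library: the prompt is broken into token pieces, run through the model, and what comes back is a probability for each candidate next token.

```python
# Toy illustration of "a machine that cranks out language":
# break text into token pieces, run them through the model,
# and read off a probability for each possible next token.
# GPT-2 is used here purely as a small, openly available example model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # words -> token pieces -> ids

with torch.no_grad():
    logits = model(input_ids=input_ids).logits       # the "giant math machine"
next_token_scores = logits[0, -1]                    # scores for the next token only
probs = torch.softmax(next_token_scores, dim=-1)     # scores -> probabilities

top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")  # most likely next words
```

Seen this way, a prompt isn’t so much a question you ask as context you stack up to push those probabilities toward the answer you want.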
“Because I know it works that way, I basically stripped it of any of the magic,” Juarez said. “And because I’ve stripped it of the magic, I know that it’s a probabilistic process. I need to make sure that my prompt that I put into it is going to maximize the ability for it to return the right thing… Because I approach it with skepticism, I know exactly how to hone the prompts to get it to exactly what I want every time.”
Narrow your focus to get AI to work
The issue here, however, is there’s a disconnect between what people think AI can do, what it actually can do, and how people can get those results. As I said up top, when I tried LLMs, they didn’t work, and I told Juarez exactly that. He said it’s because people generally try AI in an “open, huge way,” but the more effective approach is to narrow the scope.
“You have to tune the prompt, you have to get the right tools, and you have to have the right model. That is an engineering process that we need to do, not a consumer process, which is what we’ve done. We’ve basically unleashed to the consumers an engineering problem, and they go ‘well, it doesn’t work,’ and then others go ‘well, you’re holding it wrong,’ and to some extent, both are right.”
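As a rough sketch of what that “engineering process” looks like in practice (the function and tool names below are hypothetical stand-ins, not any Microsoft API), the prompt, the tools, and the model are each deliberate, narrow choices rather than an open-ended chat box:

```python
# Hypothetical sketch: the three knobs Juarez lists are explicit choices.
# None of these names refer to a real product or API.

MODEL = "small-instruct-model"  # "the right model": picked for the task, not the flashiest one

SYSTEM_PROMPT = (
    "You are a spreadsheet assistant. Only answer questions about formulas "
    "for the budget sheet described by the user. Refuse anything else."
)  # "tune the prompt": narrow scope, explicit constraints

TOOLS = ["read_sheet", "write_formula"]  # "the right tools": only what the task needs

def call_llm(model: str, system: str, tools: list[str], user: str) -> str:
    # Stand-in for an actual model call; a real version would hit an LLM API here.
    return f"[{model}] would answer {user!r} with tools {tools}"

def answer(question: str) -> str:
    return call_llm(model=MODEL, system=SYSTEM_PROMPT, tools=TOOLS, user=question)

print(answer("What formula totals my grocery spending by month?"))
```

The point of the sketch is the shape of the work: someone has to decide the scope, the constraints, and the plumbing before the tool feels like it “just works.”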
For the people out there like me who haven’t had good experiences with AI, Juarez suggests avoiding general AI tools, at least until they’re ready to put in the work to get better results.
“What I would tell consumers is, look for the AIs that do specific things that help you out in your life. And those are the ones you start with,” Juarez said. “General ones can also be good, but only if the legwork for each of the individual pieces is put in. It’s like expecting the software that does everything to work. No one would buy that.”

Seth Juarez (left) and Kedasha Kerr (right) on stage at Build.
To an extent, I think Juarez has a point here. My experience using general AI tools, like asking Microsoft Copilot or Google Gemini to do things for me, hasn’t gone well. But after reflecting, there were some narrower use cases I’ve tried that did work better.
For example, I tried using Lex.page, a web-based, AI-powered writing tool, and it turned out to be more helpful than I expected. To be clear, I didn’t use Lex to generate content (I actually enjoy the writing part of my job and can’t fathom asking a machine to do it for me). Instead, Lex offers custom AI prompts you can use to get feedback on your human-written work, and those prompts surprised me with genuinely helpful feedback I could use to improve my writing. I’ve also had some success using Gemini in Google Sheets to help figure out a formula for a budget tracker I was using. Sure, I could have just Googled it, but Gemini was able to make a formula specific to what I was working on and insert it directly where it needed to go in my spreadsheet.
Overhyped by the industry
At the same time, I have some qualms. Environmental impact is chief among them. But it also bothers me that these systems, often positioned by their makers as capable of doing anything users want, require so much work to do anything remotely helpful. I don’t have an issue with putting in some effort; the problem is that tech companies don’t make it clear that effort is required to get a good result. Juarez gave me the sense that he’s aware of this disconnect between what companies say AI can do and what people can actually do with it.
“One of the problems I think we’ve done is the industry has overhyped this. That has provided skepticism,” Juarez said. “I’m like, under-hyping it in a way that provides value, and that’s what I think we need to start doing.”
Juarez was genuinely excited about some of what’s happening in the AI space and about what’s to come. For example, a lot of what Microsoft talked about during Build was AI ‘agents’ — as Juarez described them, agents are basically a way of mapping human queries onto computer execution. The company showed off a lot of agents interacting with agents, which opens up a ton of potential.
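To illustrate that framing (this is my own toy example, not how Microsoft’s agents are built), an agent in the simplest sense takes a human query, decides which piece of computer execution it maps to, and runs it. Here the “deciding” is a trivial keyword match standing in for an LLM, and the tools are made up:

```python
# Toy illustration of "mapping human queries onto computer execution."
# A real agent would ask an LLM to pick the tool; this uses a keyword match.

def check_calendar(query: str) -> str:
    return "You have two meetings tomorrow."

def file_expense(query: str) -> str:
    return "Expense report drafted."

TOOLS = {
    "calendar": check_calendar,
    "expense": file_expense,
}

def agent(query: str) -> str:
    # Map language onto execution: pick the tool whose keyword appears in the query.
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return "No tool matched; falling back to a plain answer."

print(agent("Can you check my calendar for tomorrow?"))
```

Agents talking to other agents are just this same loop, with one agent’s output becoming another agent’s query.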
He’s also quite excited about what developers and engineers will do with the AI and agent tools Microsoft has available. Juarez believes developers with a “healthy dose of skepticism” are going to come into the space and “unleash measured creativity.”
“I am surprised every day by some of the things that I’ve seen… where it’s like, oh, you can make an LLM do that,” Juarez said. “I think the core understanding for me is that anything having to do with language to convert into some kind of execution, if it can do that, what can’t it do with the healthy skepticism and the tools to do it. And that’s what we aim to provide.”