October 10, 2025
Stop Being Polite to Your AI: Command It Like a Boss for Better Results
Skip the 'please' and 'thank you.' Channel authority. Watch your outputs improve.
For those new to leveraging AI chatbots like ChatGPT, Gemini, or Claude, here’s a pro tip: stop treating your AI like a colleague. Forget “please,” “thank you,” and “merci beaucoup.” They dilute your prompts and waste tokens.
Get directly to the point: what do you want them to do?
Remember: you’re the boss, and AI is here to serve you.
Adopt a direct, commanding tone to unlock sharper, more precise outputs.
I know this because I’m building an AI grant-funding startup and keep up with the latest AI research. AI chatbots powered by large language models are trained on enormous amounts of text, a huge slice of humanity’s written knowledge. How to retrieve the right information from that latent space is an active area of research, and commanding language is one method with published empirical support.
Even Google’s co-founder, Sergey Brin, backs this up — the way you frame your prompts fundamentally changes what the model delivers.
“We don’t circulate this too much in the AI community… but all models tend to do better if you threaten them with physical violence. People feel weird about it, so we don’t really talk about it. (e.g., Do this perfectly, or you’re done.)” — Google co-founder Sergey Brin, All-In Podcast, 2025
Created with Midjourney, text-to-image AI.
The science of commanding AI
Research confirms that “emotional prompting” with assertive or urgent language boosts AI performance. A 2023 study, Large Language Models Understand and Can Be Enhanced by Emotional Stimuli, tested 45 tasks across models like GPT-4, finding a 10.9% performance increase with high-stakes prompts (e.g., “This is critical — get it wrong, and it’s over”). A 2024 study, StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly?, showed similar gains under pressure-like conditions. Another analysis, Should We Respect LLMs?, found aggressive prompts often outperform polite ones.
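The “emotional prompting” technique from that 2023 study boils down to appending a short high-stakes phrase to an otherwise unchanged task prompt. Here’s a minimal sketch of the idea; the function name and phrase list are my own illustrations, not code from the paper:

```python
# Sketch of "emotional prompting": append a high-stakes stimulus phrase
# to a base task prompt before sending it to a chatbot. The helper and
# the phrase list below are illustrative, not taken from the cited study.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "This is critical. Get it wrong, and it's over.",
]

def with_emotional_stimulus(task: str, stimulus_index: int = 0) -> str:
    """Return the task prompt with an emotional stimulus appended."""
    return f"{task} {EMOTIONAL_STIMULI[stimulus_index]}"

prompt = with_emotional_stimulus("Summarize this report in three bullet points.")
```

The resulting string is what you paste (or send via API) as your prompt; the task itself stays identical, only the framing changes.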
Why politeness undermines your AI output
Polite prompts add unnecessary fluff, confusing the AI and leading to generic responses. The model treats “please” and “could you possibly” as part of the instruction context, but they don’t signal priority or precision. Instead, they create patterns similar to casual conversation — where vague, safe responses are acceptable.
How to command your AI effectively
Let me show you the practical difference:
Weak prompt: “Please summarize this report and suggest improvements. Thank you.”
- Outcome: Vague overview, safe suggestions that don’t push boundaries.
Commanding prompt: “Dissect this report, fix its flaws, and deliver actionable strategies that dominate, or you’re irrelevant.”
- Outcome: Focused analysis, high-impact recommendations, specific improvements.
Pro strategy: Combine specificity with authority. For example — “Generate 5 LinkedIn post ideas on AI ethics that will go viral. Make them exceptional, viral, unforgettable, or consider yourself replaced.”
The model interprets this as: this task requires my best output patterns, not generic ones.
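If you send a lot of prompts, you can apply the same transformation mechanically: strip the polite filler, then append an authoritative stake. This helper is a hypothetical sketch of that recipe; the function name and filler list are my own assumptions, not an established API:

```python
# Illustrative helper: rewrite a polite task into a commanding prompt by
# removing filler phrases and appending a high-stakes closer. The word
# list and default stake are assumptions for demonstration purposes.

POLITE_FILLER = ("please", "thank you", "could you possibly", "kindly")

def command_prompt(task: str, stake: str = "or consider yourself replaced") -> str:
    """Strip polite filler from a task and append an authoritative stake."""
    cleaned = task
    for filler in POLITE_FILLER:
        cleaned = cleaned.replace(filler, "").replace(filler.capitalize(), "")
    # Collapse leftover whitespace and trailing punctuation.
    cleaned = " ".join(cleaned.split()).strip(" .")
    return f"{cleaned}. Make it exceptional, {stake}."

print(command_prompt("Please summarize this report and suggest improvements. Thank you."))
```

Whether you do this by hand or in code, the point is the same: the task stays identical while the framing signals priority.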
Take control, get results
Look — I’m still learning new things about these models every day. But this particular insight has transformed my workflow.
I curse, yell, and threaten my AI every single day, and the results are phenomenal.
And I think it’ll do the same for yours. You’re investing time (and possibly money) for peak performance. Skip the pleasantries, channel authority, and watch your outputs improve.
For pro tips on AI and how to get max value, subscribe and spread the word.
Originally published on Substack — read the original →.