
How AI & LLMs Like ChatGPT Are Redefining Work, Time & Human Contribution
Everyone seems to be rushing to use AI, and specifically LLMs like ChatGPT, Claude and Copilot, to gain better efficiency. Time-starved entrepreneurs, marketers, and business leaders are increasingly turning to AI to lighten the load. And with good reason – tools like ChatGPT have reshaped how we work, powering copywriting, ideation, research, and strategic thinking at breakneck speed. But here’s the uncomfortable truth: most people are using it badly.
When ChatGPT, or any other large language model (LLM), delivers a generic or average output, the root problem is almost always the same: the prompt was too vague, too short, or too open-ended. You get out what you put in. Garbage in, garbage out.
The reality is that LLMs are not mind readers. They are pattern recognisers. That means the quality, context, and clarity of your prompt determine the quality of your result. As when briefing a junior member of staff or a freelance writer, you need to give it direction. Otherwise, you’ll get a sea of clichés and fluff.
Mastering prompts is the key to your success. And that’s where my CREATE Prompt Formula™ comes in.
This 6-part framework helps you structure powerful prompts that give you back responses that are precise, relevant, and actionable. No more hand-holding. No more back-and-forth.
Whether you’re writing content, scripting video, coding tools, planning strategy, or doing deep research, the CREATE prompt formula is your blueprint for prompt mastery.
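If you like to work in code rather than the chat window, here’s a rough sketch of where we’re heading: all six CREATE parts assembled into one prompt and sent to a chat model. It assumes the official openai Python library; the model name and the placeholder text for each part are purely illustrative, not a fixed recipe.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The six CREATE parts, filled with illustrative placeholders
character   = "Act as a retired IFA writing for 40-year-old business owners."
request     = "Write a 1,000-word guide on catching up on pension savings."
examples    = "Match the tone of this line: 'Feeling behind on your pension? You're not alone.'"
adjustments = "If I ask for a shorter version, keep the key figures and cut the anecdotes."
output_type = "Deliver it as a blog post with subheadings and a three-point summary."
extras      = ("Before you generate your response, confirm you understand the brief "
               "and ask me any questions you need to complete the task successfully.")

prompt = "\n\n".join([character, request, examples, adjustments, output_type, extras])

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```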
Let’s break it down, letter by letter.
Every successful task starts with clear roles. If you ask a generalist to perform heart surgery, you’ll get a disaster. The same principle applies to AI. ChatGPT can simulate an expert in almost any field, but only if you tell it who it needs to be.
The “C” in the CREATE formula stands for Character. This is where you define the voice, expertise, perspective, or professional persona the AI should assume before generating a response. It’s your chance to set the tone and context.
Think of ChatGPT as a highly skilled actor on a digital stage. It has the scriptwriting chops of Aaron Sorkin, the legal analysis of a seasoned QC, and the empathy of a licensed therapist, but only if you give it a role to play.
Let’s look at how this plays out in practice.
By assigning a character, you force the model to filter its responses through that lens. The result? Better tone, better structure, and more aligned output, because the AI now has a frame of reference.
It’s the difference between asking:
“Write an article about pensions” vs “Act as a retired IFA writing a guide for 40-year-old business owners who feel behind on their pension savings.”
Same tool. Wildly different results.
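If you’re calling the model through the API rather than the chat window, the Character usually maps to the system message, so every answer is filtered through that persona. A minimal sketch, again assuming the openai Python library and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        # The Character: who the model should be before it writes a word
        {"role": "system", "content": (
            "You are a retired IFA writing a guide for 40-year-old business owners "
            "who feel behind on their pension savings."
        )},
        # The Request: what to actually do
        {"role": "user", "content": "Write an article about pensions."},
    ],
)
print(response.choices[0].message.content)
```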
The second letter in the CREATE formula is Request. This is the heartbeat of your prompt. If “Character” tells the AI who it’s being, the Request tells it what to do. And the more precise your request, the better the result.
Most people get weak responses from ChatGPT because they’re too vague. They say things like:
“Write a blog post.”
“Explain this topic.”
“Create some content for me.”
That’s like asking a chef to “make some food.” You’ll get something, but it won’t be tailored to your appetite, dietary needs, or occasion. You need to be clear, direct, and outcome-focused.
Here’s how to improve your requests:
❌ Weak Request:
“Tell me about marketing automation.”
✅ Strong Request:
“Write a 1,000-word blog for B2B SaaS founders explaining how marketing automation can reduce lead friction and shorten the sales cycle. Use a conversational tone and include 3 real-world examples.”
Notice the difference? You’ve now told the AI who it’s writing for, roughly how long it should be, what angle to take, what tone to use, and how many real-world examples to include.
This is prompt writing with purpose. When crafting your Request, ask yourself: what exactly do I want, who is it for, and what would a great result look like?
AI isn’t magic. It’s maths. And your Request is the equation that drives the output. The clearer your request, the better the result.
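If you write the same kinds of Requests again and again, it can help to treat them as a small template that always names the audience, length, tone, and proof points. The function below is a made-up illustration of that habit, not part of any tool:

```python
def build_request(topic: str, audience: str, length_words: int, tone: str, example_count: int) -> str:
    """Assemble a Request that spells out audience, length, tone, and proof points."""
    return (
        f"Write a {length_words}-word blog for {audience} explaining {topic}. "
        f"Use a {tone} tone and include {example_count} real-world examples."
    )

print(build_request(
    topic="how marketing automation can reduce lead friction and shorten the sales cycle",
    audience="B2B SaaS founders",
    length_words=1000,
    tone="conversational",
    example_count=3,
))
```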
If your request is the brief, then Examples are the blueprint.
Language models learn by pattern recognition. They understand structure, tone, and logic based on what they’ve been trained on. But when you give them your examples, you’re teaching them your version of the pattern – and that’s what sharpens the response.
Here’s the simple truth:
When you show an example, you instantly raise the quality of the output.
Let’s take two prompts and compare:
❌ Prompt Without Examples:
“Write a product description for a high-end coffee machine.”
✅ Prompt With Example:
“Write a product description for a high-end coffee machine. Here’s an example of the tone and structure I like: ‘Meet the Barista X1 – where precision meets passion. Crafted for those who take coffee seriously, this machine turns your kitchen into a café. Dual boilers. One-touch controls. Zero compromise.’
“Match this style. Keep it to 80 words.”
See what happened? You’ve just trained the AI in real time.
What makes a good example? One that’s short, concrete, and clearly shows the tone, structure, and length you want the output to match.
Even better, you can feed multiple examples. Start a prompt with:
“Here are three examples of past outputs I’ve liked. Use them to guide the tone and format.”
AI can learn from what you like. It’s fast, responsive, and highly coachable – but only if you take the time to show, not just tell.
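If you work through the API, “show, don’t tell” is usually done with few-shot prompting: you pass the outputs you liked as earlier turns in the conversation so the model picks up the pattern. A minimal sketch, reusing the Barista X1 copy as the liked example, with an assumed model name:

```python
from openai import OpenAI

client = OpenAI()

liked_example = (
    "Meet the Barista X1 – where precision meets passion. Crafted for those who take "
    "coffee seriously, this machine turns your kitchen into a café. Dual boilers. "
    "One-touch controls. Zero compromise."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "You write punchy, premium product descriptions."},
        # Few-shot: show the model an output you liked before asking for a new one
        {"role": "user", "content": "Write a product description for a high-end coffee machine."},
        {"role": "assistant", "content": liked_example},
        {"role": "user", "content": "Now write one for a high-end espresso grinder. "
                                     "Match this style. Keep it to 80 words."},
    ],
)
print(response.choices[0].message.content)
```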
Once you’ve given your character, request, and examples, you’re almost there. But even the best prompt needs room to manoeuvre – and that’s where Adjustments come in.
AI isn’t one-size-fits-all. Sometimes you need a shorter version, a warmer tone, a different angle, or a tighter structure.
And you shouldn’t have to start over.
The most powerful prompts build in flexibility. You teach the model how to refine based on your feedback, for example: “If I ask for a shorter version, keep the key statistics and cut the anecdotes.”
Think of it as a tuning dial, not a finished sculpture. You’re telling the model, “I might want to tweak this – and here’s how.”
This is how great creative directors work. They don’t just say, “Make it better.” They give specific adjustment criteria, and now you can do the same. With LLMs, you’re not stuck with version 1. Adjustments turn average responses into polished gold.
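Through the API, Adjustments are just extra turns in the same conversation: keep the history, append your feedback, and ask again. A minimal sketch with an assumed model name and illustrative feedback:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

# First pass
messages = [{"role": "user", "content": (
    "Write a 150-word intro about marketing automation for B2B SaaS founders."
)}]
draft = client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content

# Keep the draft in the history, then give specific adjustment criteria
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Make it 30% shorter, keep the key statistic, and open with a question."},
]
revised = client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content
print(revised)
```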
The “T” in the CREATE formula stands for Type of output – telling the AI what kind of output you expect.
This sounds obvious, but it’s one of the most overlooked – and most important – parts of prompt engineering. Why? Because language models like ChatGPT can write an essay, a tweet, a 20-slide presentation, a legal contract, or a poem. If you don’t define the output, you leave the door wide open – and that usually leads to bland, bloated, or broken results.
Here’s how to be clear – specify the output type: a blog post, an email, a bullet-point list, a slide outline, or a table.
The model needs format, tone, and length guidance to deliver precisely what you need.
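If the output needs to drop straight into another system, some chat models can also be asked to return JSON rather than prose. The sketch below assumes the openai Python library and a model that supports the json_object response format; the keys are illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; JSON mode needs a model that supports it
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Return a JSON object with the keys 'headline', 'body' (max 80 words, conversational tone) "
            "and 'cta' for a high-end coffee machine product page."
        ),
    }],
)
print(json.loads(response.choices[0].message.content))
```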
If you’re asking ChatGPT to export your work as a Word or PDF file, remember that generated files can occasionally truncate longer outputs. If your content runs over 1,200–1,500 words, it’s safer to ask the AI to generate the full text in the chat first – then export it yourself manually.
It’s better to get the perfect raw output in the chat, then worry about formatting later. Don’t let a file format compromise the substance of your content. Clarity on the type of output ensures you get something useful – not something you need to untangle.
The final “E” in the CREATE formula stands for Extras – the strategic finishing touches that dramatically improve the success rate of your prompt. At the heart of this step is a simple but powerful instruction:
“Before you generate your response, confirm you understand the brief and ask me any questions you need to complete the task successfully.”
This is where most people miss the opportunity to transform ChatGPT from a typing tool into a true collaborative partner.
Large Language Models (LLMs) are not mind readers. They’re pattern-matching machines trained on vast amounts of data. If your brief is vague, contradictory, or missing key information, the model will still try to generate an answer – but it might be wrong, incomplete, or way off the mark.
By instructing the AI to pause and reflect on the brief, you open the door to clarification and refinement.
These clarifying questions save time, reduce rewrites, and keep the output aligned with your goals.
If you were briefing a junior team member, you’d expect them to ask questions. Why treat AI any differently? Ending your prompt with a simple invitation to clarify turns ChatGPT into a more thoughtful, responsive assistant – and raises the quality of the output every single time.
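Scripted, that invitation to clarify becomes a two-step exchange: first ask for questions, then answer them and request the final output. A minimal sketch with an assumed model name and hard-coded answers for illustration:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

brief = (
    "Act as a B2B copywriter. Write a 600-word landing page for a marketing automation tool. "
    "Before you generate your response, confirm you understand the brief and ask me any "
    "questions you need to complete the task successfully. Do not write the page yet."
)

messages = [{"role": "user", "content": brief}]
questions = client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content
print("Model's questions:\n", questions)

# Answer the questions (hard-coded here for illustration), then ask for the final output
messages += [
    {"role": "assistant", "content": questions},
    {"role": "user", "content": (
        "Audience: operations leads at mid-size SaaS firms. Tone: plain-spoken and direct. "
        "Now write the page."
    )},
]
print(client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content)
```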
The rise of Large Language Models like ChatGPT is not the end of content creation – it’s the evolution of it. We’re no longer talking about basic chatbots or keyword-stuffed auto-responses. These tools are now capable of drafting business strategies, creative narratives, marketing campaigns, sermon outlines, legal disclaimers, and even coding frameworks – in seconds. But here’s the truth few are willing to admit…
The quality of AI output is still entirely dependent on human input.
Poor prompts lead to poor answers. That’s not the AI’s fault – it’s a communication problem.
The CREATE formula is designed to bridge that gap. It transforms a generic prompt into a structured brief that mirrors how you’d engage a professional copywriter, strategist, or analyst. It slows you down just long enough to think strategically and then equips the AI to do what it does best – generate with speed and scale.
As LLMs continue to evolve, we’ll likely see more capable models, deeper integration with the tools we already use, and even fully integrated AI colleagues embedded in your workflows.
But even then, the prompt remains king.
Those who learn to brief well will lead. Those who don’t will settle for average. So whether you’re a business owner, marketer, teacher, developer, or creator – take ownership of your prompts.
Use CREATE. And watch your productivity, clarity, and creative edge explode.
The future belongs to those who know how to ask the right questions. If you’re ready to build your next masterpiece, then start with a better prompt.