Tips for Advanced AI Prompting Techniques


  • 🦾Eric Nowoslawski

    Founder Growth Engine X | Clay Enterprise Partner

    46,375 followers

    Prompting tips from someone who spends probably $13k+ per month on OpenAI API calls. I'll break the tips into ChatGPT user-interface tips and API tips. My bias is of course toward outbound sales and cold email, because that's where we spend, and 100% of this spend is on 4o mini API calls.

    ChatGPT prompting tips:

    1. Use transcription as much as possible, either straight in the UI or with whisprflow(dot)ai (can't tag them for some reason). I personally get frustrated typing a prompt out; when I talk instead, I can add so much more detail.

    2. Got this one from Yash Tekriwal 🤔: when you're working on something complex, like a deep research request, something you want o3 to run, or a big data analysis, ask ChatGPT to give you any follow-up questions it has before it runs fully. This improves your prompt accuracy like crazy.

    3. I've found that o3 is pretty good at building simple automations in Make as well, so we ask it to output what we want in a format we can input into Make. Often we can build automations just by explaining what we need and then plugging in our logins in Make.

    API prompting tips:

    1. Throwing back to the ChatGPT UI: we will often create our complex prompts in the user interface first and then bring them into the API via Clay, asking ChatGPT along the way how to improve the prompt and help us think of edge cases. This can turn any team member into a prompting pro immediately.

    2. Examples are your best friend. Giving examples of what you want the output to be is how we keep our outputs in a consistent format and avoid putting "synergies" in every email we send. I tell the team: minimum 2 examples for single-line outputs, 4 examples for anything more complex, and 6 examples for industry tagging because that gets so odd. Save on costs by putting some real examples in your system prompt.

    3. Request the output in JSON. It keeps everything uniform and in the format you need.

    4. Speaking of JSON, ask the API to prove to you why it thinks what it thinks and then output the answer. Especially for company category tagging, I find this works really well; I see it greatly increase the accuracy of our results, for two reasons. First, if the AI has to take the extra second to prove why a company is an ecommerce brand, the results are demonstrably better (this is just a guess). Second, because LLMs basically work by predicting the next most likely word, if you have the model explain why it thinks something is a certain industry before it gives the output, the answer is much more likely to be correct.

    Anything else you've found?
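    The few-shot, JSON, and reason-before-answer tips above can be sketched as a prompt builder. This is my own illustration, not the author's actual Clay setup; the function name, example companies, and field names are all assumptions.

    ```python
    import json

    # Few-shot examples baked into the system prompt (tip 2). Each example's
    # "reasoning" field comes BEFORE "category", so the model justifies the
    # tag before committing to it (tip 4). Example data is illustrative only.
    EXAMPLES = [
        {"company": "Acme Skincare, a DTC cosmetics shop on Shopify",
         "reasoning": "Sells physical products directly to consumers online.",
         "category": "ecommerce"},
        {"company": "PipeMetrics, a SaaS dashboard for sales teams",
         "reasoning": "Subscription software product, no physical goods.",
         "category": "b2b_saas"},
    ]

    def build_messages(company_description: str) -> list[dict]:
        """Build chat messages for a JSON category-tagging call (tip 3)."""
        shots = "\n\n".join(
            f"Company: {ex['company']}\nAnswer: "
            + json.dumps({"reasoning": ex["reasoning"], "category": ex["category"]})
            for ex in EXAMPLES
        )
        system = (
            "You tag companies by industry. Reply with a JSON object with keys "
            '"reasoning" (why the category fits) and then "category". '
            "Examples:\n\n" + shots
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": f"Company: {company_description}\nAnswer:"},
        ]

    messages = build_messages("Glow & Go, an online store selling yoga mats")
    ```

    The `messages` list can then be passed to any chat-completions client; putting the examples in the system prompt keeps per-row user prompts short, which is where the cost saving the author mentions comes from.
    
    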

  • Jimi Gibson

    No‑Fluff Digital Strategies for Fed‑Up Owners | VP, Thrive Agency | Keynote Speaker

    2,861 followers

    Stuck with generic AI answers? Your prompts are to blame.

    Inside the AI mind: ChatGPT isn't clairvoyant; it's a pattern matcher trained on billions of words. Your prompt is the GPS signal: the clearer the directions, the closer you get to your destination. Mastering these hacks means you spend less time massaging outputs and more time using them to drive real business results.

    Hack 1 – Tame the T‑Rex
    What it is: Lock in format and length from the start.
    Pro tip: "In 3 bullet points, explain…"
    Why it matters: Vague prompts give you walls of text that need heavy editing. By specifying format up front, you force ChatGPT to sculpt its response into the shape you actually want, cutting your rewrite time in half.

    Hack 2 – Feed the Beast
    What it is: Supply rich context: background data, customer profiles, past examples.
    Pro tip: Begin with "Based on the text above, draft…"
    Why it matters: The AI only knows what you feed it. Without context, it fakes knowledge. By "feeding" it your specifics, you get custom, nuanced answers instead of generic guesswork.

    Hack 3 – Chain Its Thoughts
    What it is: Ask for step‑by‑step reasoning before the final answer.
    Pro tip: "Walk me through your thought process on…"
    Why it matters: You discover how the AI arrived at its conclusion, spotting gaps, bias, or hallucinations. This transparency lets you catch mistakes early and refine your prompt for more trustworthy insights.

    Hack 4 – Dress It Up
    What it is: Define tone, style, and word count as clearly as a dress code.
    Pro tip: "Write a friendly LinkedIn post under 100 words."
    Why it matters: You maintain brand consistency. Whether you need a snarky tweet or a formal memo, setting the "voice" keeps you from spending time rewriting blunt or off‑tone copy.

    Hack 5 – Play Pretend
    What it is: Assign a persona (expert, coach, critic) to shape the lens.
    Pro tip: "Act as a veteran UX designer and critique this homepage."
    Why it matters: Personas tap into specialized knowledge. Rather than a one‑size‑fits‑all answer, you get domain‑specific insight that feels like an expert consultation, no extra hire required.

    Hack 6 – Show Your Work
    What it is: Provide an example snippet or previous output as a style guide.
    Pro tip: Paste a 2‑line sample and add "Match this tone and structure."
    Why it matters: Examples anchor the AI's voice and structure. You get consistent, on‑brand content that matches your past successes, with no more tone drift or awkward phrasing.

    Hack 7 – Polish the Gem
    What it is: Iterate one element at a time: clarity, length, emphasis.
    Pro tip: Reply "Make this more concise" or "Expand point 2."
    Why it matters: Small, targeted tweaks compound into a polished result. Rather than starting over, you refine in place, saving time and ensuring each change builds on solid foundations.

    Marketing isn't magic, just third-grade math and psychology. DM "TruthBomb" for a no-BS audit of your digital marketing.
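    Several of these hacks compose naturally into a single prompt. A minimal sketch, assuming a plain string-building helper (the function and wording are mine, not the author's):

    ```python
    # Combine Hack 5 (persona), Hack 1 (format/length), and Hack 4 (tone)
    # into one prompt string. Purely illustrative; any chat client can
    # send the resulting string as the user message.

    def build_prompt(task: str, persona: str, fmt: str, tone: str) -> str:
        """Compose a prompt that pins down persona, format, and tone."""
        return (
            f"Act as {persona}. "   # Hack 5: Play Pretend
            f"{fmt}, {task} "       # Hack 1: Tame the T-Rex
            f"Keep the tone {tone}."  # Hack 4: Dress It Up
        )

    prompt = build_prompt(
        task="explain why our homepage bounce rate is high.",
        persona="a veteran UX designer",
        fmt="In 3 bullet points",
        tone="friendly and jargon-free",
    )
    ```

    Hacks 2 and 6 would slot in the same way: prepend the context document and paste a sample to match before the instruction.
    
    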

  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,500 followers

    In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs' performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google's Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques.

    Yet, while our X and LinkedIn feeds buzz with 'secret prompting tips', a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y

    In this AI Tidbits Deep Dive, I outline six of the best recent prompting methods:

    (1) EmotionPrompt – inspired by human psychology, this method uses emotional stimuli in prompts to gain performance enhancements
    (2) Optimization by PROmpting (OPRO) – a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the "Take a deep breath" instruction, which improved LLMs' performance by 9%.
    (3) Chain-of-Verification (CoVe) – Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy
    (4) System 2 Attention (S2A) – also from Meta, a prompting method that filters out irrelevant details before querying the LLM
    (5) Step-Back Prompting – encouraging LLMs to abstract queries for enhanced reasoning
    (6) Rephrase and Respond (RaR) – UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy

    Understanding the spectrum of available prompting strategies, and how to apply them in your app, can mean the difference between a production-ready app and a nascent project with untapped potential.

    Full blog post: https://lnkd.in/g7_6eP6y
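    Of the six methods listed, Chain-of-Verification is the most mechanical, so it lends itself to a sketch. The four steps (draft, plan verification questions, answer them independently, revise) follow the description above; the prompt wording is my own paraphrase, not the paper's templates, and `llm` stands for any prompt-to-text callable.

    ```python
    # Chain-of-Verification (CoVe) sketch: reduce hallucinations by making
    # the model check its own draft before answering. All prompt strings
    # here are illustrative paraphrases of the four steps.

    def chain_of_verification(llm, question: str, n_questions: int = 3) -> str:
        # Step 1: draft an initial answer.
        draft = llm(f"Answer the question: {question}")

        # Step 2: plan verification questions probing the draft's claims.
        plan = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"List {n_questions} short questions that would verify the draft's facts, one per line."
        )
        checks = [q.strip() for q in plan.splitlines() if q.strip()]

        # Step 3: answer each verification question independently
        # (the draft is withheld so its errors aren't parroted back).
        findings = [f"{q} -> {llm(q)}" for q in checks]

        # Step 4: produce a final answer revised against the findings.
        return llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Verification findings:\n" + "\n".join(findings) +
            "\nRewrite the answer, correcting anything the findings contradict."
        )

    # Demo with a canned stand-in LLM so the sketch runs without an API:
    def _stub(prompt: str) -> str:
        if prompt.startswith("Answer the question:"):
            return "Acme was founded in 1990 by J. Smith."
        if "one per line" in prompt:
            return "When was Acme founded?\nWho is J. Smith?"
        if "Rewrite the answer" in prompt:
            return "Acme was founded in 1990 by Jane Smith."
        return "Records show the founder was Jane Smith."

    result = chain_of_verification(_stub, "Who founded Acme?", n_questions=2)
    ```

    Because `llm` is injected, the same function works with any chat client wrapped as a prompt-to-text function; swapping the stub for a real model is a one-line change.
    
    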
