
Steal my Instructions to keep your LLM under control

LLMs run on tokens, and tokens = cost

So the more you throw at it, the more it costs

More tokens also hurt speed and accuracy

My exact prompt instructions are in the section below this one,

but first, here are 3 things we need to do to keep it tight 👇

1. Trim the fat

Cut long docs, remove junk data, and compress history

Don't send what you don’t need
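One way to sketch this trimming step in code: keep the system prompt, then keep only the most recent turns that fit a token budget. This is my illustration, not the author's tooling, and the 4-characters-per-token ratio is a rough approximation rather than a real tokenizer.

```python
# Rough sketch of "trim the fat": keep the system prompt plus the most
# recent messages that fit a token budget.

def approx_tokens(text: str) -> int:
    # Crude estimate: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    # Walk backwards so the newest turns survive first
    for m in reversed(rest):
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        used += cost
        kept.append(m)
    return system + list(reversed(kept))
```

For production, swap the estimate for a real tokenizer so the budget matches what the model actually bills.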

2. Set hard limits

Use max_tokens

Control the length of responses. Don’t let it ramble
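Here is where that cap lives in practice, assuming an OpenAI-style chat completions request body (the model name is a placeholder). No network call here; this just shows which field does the work.

```python
# Minimal sketch of a hard response cap via max_tokens,
# assuming an OpenAI-style request payload.

def build_request(prompt: str, cap: int = 150) -> dict:
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": cap,       # hard ceiling on response length
        "temperature": 0.2,      # lower temperature also curbs rambling
    }

req = build_request("Summarize this doc in 3 bullets", cap=100)
```

The response stops at the cap even mid-sentence, so pair it with an instruction like "answer in under N words" so the model aims short instead of getting cut off.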

3. Use system prompts smartly

Be clear about what you want

Instructions + Constraints
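The "Instructions + Constraints" split can be as simple as one composed system prompt. A minimal sketch; the specific wording and helper name are mine, not the author's exact setup.

```python
# Sketch of "instructions + constraints" composed into one system prompt.

def build_system_prompt(instructions: list[str], constraints: list[str]) -> str:
    lines = ["Instructions:"]
    lines += [f"- {item}" for item in instructions]
    lines += ["Constraints:"]
    lines += [f"- {item}" for item in constraints]
    return "\n".join(lines)

system_prompt = build_system_prompt(
    ["Answer in pointers", "Be practical, avoid generic fluff"],
    ["Be concise and precise", "Don't be verbose"],
)
```

Keeping the two sections separate makes it easy to tighten constraints without rewriting the task description.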

🚨 Here are a few of my instructions for you to steal 🚨 

Copy as is …

  • If you understood, say yes and wait for further instructions

  • Be concise and precise

  • Answer in pointers

  • Be practical, avoid generic fluff

  • Don't be verbose
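The five lines above drop straight into a system message. A minimal sketch of that, with a hypothetical user query for illustration:

```python
# The five copy-as-is instructions, joined into one system message.

STOLEN_INSTRUCTIONS = [
    "If you understood, say yes and wait for further instructions",
    "Be concise and precise",
    "Answer in pointers",
    "Be practical, avoid generic fluff",
    "Don't be verbose",
]

messages = [
    {"role": "system", "content": "\n".join(STOLEN_INSTRUCTIONS)},
    {"role": "user", "content": "Explain vector databases"},  # example query
]
```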

That’s it

Small tweaks = big savings

Got your own token hacks?

I’m listening, just hit reply to this email