Discussion about this post

Rasmus Edwards

Good piece Shmulik!

I’m definitely of the opinion that a lot of prompt engineering is complicated for the sake of it.

At the end of the day, one of the core value propositions of AI is democratizing access.

And if that’s one of the goals, why wouldn’t developers actively work against the need for prompts that are 5,000 words long?

Shouldn’t they work towards including as many users as possible, with as little “prompting knowledge” as possible?

Clarity, in my opinion, remains the most important aspect.

Daria Cupareanu

Prompt debloating is a useful strategy, as too often, complexity actually shows up when there’s not enough clarity. And I fully agree that the newer reasoning models don’t need the kind of heavy scaffolding we used to build around prompts.

But I also think prompt engineering still has a lot of power. Great prompts don’t just shape the output; they create a framework for how you think about the task, how you want the model to think, and what kind of result you want to shape.

And when it comes to context, I get the concern on the API side, but most average users don’t even perceive token costs. And even with APIs, I’ve run enough experiments to know this: the more relevant context I give, the better the output. Not to constrain the model, but to set the context well. That’s something both Anthropic and OpenAI emphasize, too.

I see simplicity not as shorter prompts, but clearer ones.

