Andrej Karpathy is backing a new term in LLM-based AI software development: "context engineering".
This term has long felt necessary. Every time I explain how we develop our Nethermind AuditAgent, one of the key aspects, besides applying domain expertise (web3 security) and using the best available AI models (from OpenAI, Anthropic, and Google) together with tooling for LLMs, is precisely context engineering.
There's a saying that "context is king," and it really is true. LLMs, whether huge frontier models or optimized small ones, are a powerful tool, but like any tool, in the wrong hands they produce far less promising results than they could when used correctly. Context management (or engineering) is a complex, poorly documented discipline that is constantly evolving; it emerged as an extension of prompt engineering, a concept that has already picked up some negative connotations.
Overall, Andrej listed the main aspects of context engineering (in the second screenshot), but on each specific task people reach excellent results largely through trial and error: repeatedly selecting the context elements that are actually needed at a given stage of problem-solving, collecting benchmarks for each stage, watching metrics, splitting datasets into test and validation sets, and so on.
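The trial-and-error loop described above can be sketched as a small evaluation harness. This is a minimal illustration, not AuditAgent's actual pipeline; the names (`build_context`, `run_llm`, the variant dicts) are hypothetical:

```python
import random

def split_dataset(examples, val_ratio=0.2, seed=42):
    """Shuffle labeled examples deterministically and split into train/validation."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_ratio))
    return shuffled[:cut], shuffled[cut:]

def evaluate_variant(variant, validation, run_llm):
    """Score one context recipe: fraction of validation cases answered correctly."""
    correct = 0
    for case in validation:
        context = variant["build_context"](case)
        answer = run_llm(context)
        correct += int(answer == case["expected"])
    return correct / len(validation)

def pick_best_variant(variants, validation, run_llm):
    """Compare several context-building strategies on the same validation split."""
    scores = {v["name"]: evaluate_variant(v, validation, run_llm) for v in variants}
    return max(scores, key=scores.get), scores
```

The point of the harness is that context recipes are compared on a held-out split with a fixed metric, rather than eyeballed on a couple of prompts.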
What do you think about "context engineering"?

25.6.2025
+1 for "context engineering" over "prompt engineering".
People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn't have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits.
On top of context engineering itself, an LLM app has to:
- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UIUX flows
- a lot more - guardrails, security, evals, parallelism, prefetching, ...
So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term "ChatGPT wrapper" is tired and really, really wrong.
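The "pack the context windows just right" step from the list above can be sketched roughly as a budgeted assembly of context pieces. This is a toy illustration: the priority order is an assumption, not a universal rule, and the word-count token estimate is a crude stand-in for a real tokenizer:

```python
def estimate_tokens(text):
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def pack_context(task, few_shots, retrieved_docs, history, budget=2000):
    """Fill the context window with the highest-priority pieces that fit.

    Priority order here (an assumption for illustration):
    task description > few-shot examples > retrieved docs > recent history.
    """
    parts = [task]
    used = estimate_tokens(task)
    for piece in few_shots + retrieved_docs + list(reversed(history)):
        cost = estimate_tokens(piece)
        if used + cost > budget:
            continue  # too much or too irrelevant hurts both cost and quality
        parts.append(piece)
        used += cost
    return "\n\n".join(parts), used
```

Skipping a piece that does not fit, instead of truncating it mid-way, mirrors the trade-off Karpathy describes: too little context starves the model, too much drives cost up and performance down.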