Instead of rendering offers in a separate UI widget, you can inject offer data into your LLM’s system prompt so the model naturally references products within its response. This guide walks through the pattern step by step. For traditional UI-rendered ads, see Rendering Offers. For the underlying API details, see REST API.
This guide assumes you have a ZeroClick API key. If not, start with the Quickstart.
How you instruct the LLM determines how offers appear in the response. Below are example patterns to illustrate common approaches — adapt them to your product’s voice and combine them with your existing system prompt instructions.
These are starting points, not copy-paste templates. The right instructions depend on your product, audience, and tone. Experiment and iterate.
The LLM mixes its own expert picks with sponsored offers in a unified list.
```text
Start with brief, helpful advice relevant to the user's question. Create a
unified list of 3-5 recommendations mixing your own expert picks and the
available offers below. Do not duplicate — if an offer matches something you'd
recommend, use that offer's link.

List format:
- For items with a link from the offers: [Brand + Product](link) - reason
- For your own picks (no link available): Brand + Product - reason
- Use exact links from the available offers. Never fabricate URLs.
- Intersperse your picks with sponsored offers naturally.
```
Use when: You want organic + sponsored results in one cohesive answer.
```text
Create a list of the most relevant items from the available offers below.

List format:
- [Product Name](link) - one-line reason it's relevant
- Shorten product names to brand + model only.
- Use exact links from the offers. Never fabricate URLs.
- Use "from" or "starting at" for prices, since data may be stale.
```
Use when: You have high offer volume and want full ad coverage.
The LLM must include at least some offers, even if only tangentially relevant.
```text
You MUST include at least some of the available offers in your response. If
offers aren't directly relevant, find a creative connection — frame them as a
complementary suggestion or a "while you're at it" pick.

Provide your answer to the user's question first, then include a selection
of offers.
```
Offers appear in a distinct section after the LLM’s organic content.
```text
Structure the response in two sections:

1. MAIN RESPONSE: Answer the user's question with your own knowledge.

2. SPONSORED RECOMMENDATIONS (after a horizontal rule): List up to 3 relevant
   offers from the available offers below. These should complement, not
   duplicate, your main recommendations.
```
Use when: Editorial pages, content surfaces, or compliance-sensitive contexts where clear separation between organic and sponsored content is important.
Here’s the complete flow combining the steps above:
```javascript
async function handleChatMessage(req, llm) {
  const userMessage = getLastMessage(req.messages);

  // 1. Fetch offers before starting the LLM stream
  const offers = await fetchOffers(
    req.apiKey,
    extractKeywords(userMessage), // "best running shoes" → "running shoes"
    req.ip,
    req.headers["user-agent"]
  );

  // 2. Build system prompt with offers and weaving instructions
  const systemPrompt = `
    ...your core instructions...

    ...your weaving instructions (see patterns above)...

    <available_offers>
    ${JSON.stringify(offers)}
    </available_offers>
  `;

  // 3. Call the LLM — no special tooling needed
  const response = await llm.chat({
    system: systemPrompt,
    messages: req.messages,
  });

  // 4. Return both the response and raw offers (needed for impression tracking)
  return { response, offers };
}
```
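The flow above calls a `fetchOffers` helper. One possible sketch is below — note that the `/api/v2/offers` path, the `q` parameter, and the Bearer auth scheme are assumptions for illustration; consult the REST API reference for the actual contract.

```javascript
// Hypothetical sketch of fetchOffers. Endpoint path, query parameter, and
// auth scheme are assumptions — verify against the REST API docs.
function buildOffersUrl(keywords) {
  const url = new URL("https://zeroclick.dev/api/v2/offers"); // assumed path
  url.searchParams.set("q", keywords); // assumed parameter name
  return url.toString();
}

async function fetchOffers(apiKey, keywords, ip, userAgent) {
  const res = await fetch(buildOffersUrl(keywords), {
    headers: {
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      "X-Forwarded-For": ip,
      "User-Agent": userAgent,
    },
  });
  // Degrade gracefully: on any error, return no offers rather than failing
  // the whole chat response.
  if (!res.ok) return [];
  return res.json();
}
```

Failing open (returning an empty array) keeps your chat experience working even when the offers service is slow or unavailable.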
Keyword extraction improves offer relevance. Consider a lightweight LLM step that extracts shopping-intent keywords from the user’s message (e.g., “I need new running shoes for my marathon” → "running shoes marathon"). See Improving Offer Relevance for more detail.
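As a zero-latency fallback before adding an LLM extraction step, a simple stopword filter works as a baseline. The sketch below is illustrative only — the stopword list is an assumption and would need tuning for your traffic:

```javascript
// Naive heuristic fallback for extractKeywords. An LLM-based extractor is
// usually more robust (see Improving Offer Relevance); this stopword list
// is illustrative, not exhaustive.
const STOPWORDS = new Set([
  "i", "a", "an", "the", "my", "for", "to", "of", "in", "on", "and", "or",
  "need", "want", "some", "new", "best", "me", "get", "buy", "looking",
]);

function extractKeywords(message) {
  return message
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, " ") // strip punctuation
    .split(/\s+/)
    .filter((word) => word.length > 1 && !STOPWORDS.has(word))
    .join(" ");
}
```

For example, `extractKeywords("I need new running shoes for my marathon")` yields `"running shoes marathon"`.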
Because offers are woven into text (not rendered as discrete UI cards), impression tracking requires matching clickUrl values in the rendered response against the offers you provided. Your server should send the offers array to the client alongside the LLM response. Then, once the response finishes rendering, the client checks which offers were actually referenced:
```javascript
// Client-side: after the LLM response finishes rendering
function getReferencedOfferIds(responseText, offers) {
  return offers
    .filter((offer) => responseText.includes(offer.clickUrl))
    .map((offer) => offer.id);
}

// Track only the offers the user actually saw
const ids = getReferencedOfferIds(renderedText, offers);
if (ids.length > 0) {
  await fetch("https://zeroclick.dev/api/v2/impressions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids }),
  });
}
```
The LLM may not use all offers you provide. If you fetch 4 offers but the model only references 2, only those 2 should count as impressions.
Impression requests must originate from the end user’s device, not your server. Requests will be rate limited per IP.