Salespeople spend less than a third of their time actually selling, according to Salesforce. The numbers tell a story that’s far more compelling than your last earnings call. McKinsey estimates generative AI could unlock $2.6–$4.4T in value per year across the economy, with sales near the top. Gartner expects most B2B teams to use AI‑guided workflows by the end of 2025. The teams that win won’t just write nicer emails. They’ll wire LLMs for sales into the whole revenue engine so quality and learning compound.
Here’s what most people miss: LLMs for sales drive the biggest ROI when they do analysis, prioritisation, and follow‑through (not just copy). If you want AI for sales to move the needle on pipeline, design around signal, judgment, and action.
How to Read and Use This LLMs for Sales Guide
- Four stages. Eight use cases. Each includes prompts, setup, examples, and fixes you can use today.
- LLM picks:
- GPT‑5 for tool use, function calling, and strict JSON, or GPT‑5‑mini if you need speed at lower cost.
- Claude 4.5 Sonnet for long context and careful synthesis.
- Gemini 2.5 Pro for high‑quality personalisation, or Flash/Flash‑Lite for lower‑cost, high‑throughput use cases.
- Always check vendor pricing and data terms on official pages.
- Cost ballparks: $50–$500 per month per use case at moderate volume if you optimise prompts, cache, and batch jobs.
- Proof first, then scale: run 2–3 week pilots with clear metrics. Keep what works. Bin the rest.
Primary sources and tools
- Research and benchmarks: McKinsey, Gartner, Salesforce
- Data sources: Crunchbase, SEC EDGAR, BuiltWith, Wappalyzer, G2, Bombora, 6sense, LinkedIn, People Data Labs, Seeking Alpha
- Infra and orchestration: Snowflake, BigQuery, Salesforce, HubSpot, Airflow, Prefect
- Automation and engagement: Zapier, Make, Workato, Outreach, Salesloft
- Proposal and demo: PandaDoc, Qwilr, Reprise, Navattic

Stage 1: Prospecting and Research
1) Signal‑Based Lead Scoring
ICP checkboxes don’t tell you who’s in‑market. Funding, hiring, tech changes, intent, earnings‑call language and other signals do. Blend them and you cut wasted outreach by 60–70% and lift meeting rates by 20–40%. You also stop lists from spinning like a tumble dryer. This is where LLMs for sales shine: signal‑based lead scoring that guides AI for B2B sales prospecting.
Sources to tap
- Funding and news: Crunchbase, SEC, company blogs
- Tech stack: BuiltWith, Wappalyzer
- Intent: G2, Bombora, 6sense
- People moves: LinkedIn, People Data Labs
- Earnings: Seeking Alpha, IR pages
What most miss: recency and signal density matter more than firmographics.
Workflow
- Nightly job fetches signals per account and normalises them into a profile.
- LLM synthesises signals and outputs a score and explanation.
- CRM updates account score and next steps.
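The three steps above can be sketched end to end. Everything here is a stand‑in for your own integrations: `fake_llm` replaces a real LLM client, and the account/signal shapes are assumptions.

```python
import json

def build_profile(account, signals):
    """Normalise raw signal feeds into the flat profile the prompt expects."""
    return {
        "company": account["name"],
        "industry": account.get("industry", "unknown"),
        "signals_90d": {k: v for k, v in signals.items() if v},  # drop empty feeds
    }

def score_account(profile, llm_call):
    """llm_call: any function that takes a prompt string and returns JSON text."""
    prompt = "Score this account 0-100 for buyer readiness:\n" + json.dumps(profile)
    result = json.loads(llm_call(prompt))  # fail fast if the JSON contract breaks
    if not 0 <= result["score"] <= 100:
        raise ValueError("score out of range")
    return result

# Usage with a stubbed model in place of a real LLM client
fake_llm = lambda p: '{"score": 86, "recommended_play": "ABM 1:1"}'
profile = build_profile({"name": "Acme"}, {"funding": ["Series C"], "intent": []})
print(score_account(profile, fake_llm)["score"])  # 86
```

The CRM push is just another function at the end of this chain; keeping the LLM call behind a plain function makes it trivial to swap vendors.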
Prompt
System: You score B2B accounts on buyer readiness from 0–100. Weigh recency, materiality, and signal density. Penalise generic ICP‑only matches. Output JSON strictly.
User:
Account profile:
- Company: {company_name}
- Industry: {industry}
- Employees: {employee_count}
- Product fit summary: {fit_summary}
Signals (past 90 days):
- Funding: {funding_events}
- Executive moves: {exec_moves}
- Tech changes: {tech_changes}
- Intent sources (G2/Bombora): {intent_summary}
- Earnings/transcript language: {earnings_snippets}
- Hiring/job postings: {hiring_snippets}
Scoring rubric:
- 0–30: Not in market
- 31–60: Mild interest
- 61–80: Active research
- 81–100: High intent
Output JSON:
{
  "score": int,
  "reasoning": "string",
  "top_signals": ["..."],
  "recommended_play": "Prospect | Nurture | ABM 1:1",
  "next_steps": ["..."]
}
Technical setup
- LLM: Claude 4.5 Sonnet for long context and synthesis. GPT‑5 if you want strong function calling.
- Integrations: For an enterprise setup, pull data into Snowflake or BigQuery, push to Salesforce or HubSpot via reverse ETL, and schedule with Airflow, Prefect, or a serverless cron. For a cheaper alternative, use PostgreSQL as the database and schedule with self‑hosted n8n for unlimited workflows. For a quick prototype, Google Sheets can even serve as the database.
- Storage: Store outputs in CRM custom fields and a vector DB for history.
Cost: $100–$300 per month at ~5k–10k accounts/week if you batch and keep outputs short.
Real output example
{
  "score": 86,
  "reasoning": "Series C funding last week, +3 enterprise AE hires, swapped to Snowflake (compatibility with our integration), 11 G2 views on 'contract lifecycle' category, CFO mentioned vendor consolidation.",
  "top_signals": ["Funding C", "Hiring AEs", "Snowflake adoption", "Intent: contract lifecycle"],
  "recommended_play": "ABM 1:1",
  "next_steps": ["Exec‑led intro referencing consolidation", "ROI calculator tailored to Snowflake workflow", "Schedule discovery within 7 days"]
}
Common failures and fixes
- Scores driven by generic fit. Fix: penalise lack of recency and add negative examples.
- Hallucinated signals. Fix: pass only verified inputs and enforce a JSON schema.
- Scores drift over time. Fix: monthly calibration on a small set of wins and losses.
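The JSON‑schema fix can be as light as validating the model’s output against the contract before anything touches the CRM. A stdlib‑only sketch, with field names following the prompt above:

```python
import json

REQUIRED = {"score": int, "reasoning": str, "top_signals": list,
            "recommended_play": str, "next_steps": list}
PLAYS = {"Prospect", "Nurture", "ABM 1:1"}

def parse_score(raw_text):
    """Return the parsed dict, or None if the model broke the contract."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError:
        return None
    for field, kind in REQUIRED.items():
        if not isinstance(data.get(field), kind):
            return None  # missing field or wrong type
    if not 0 <= data["score"] <= 100 or data["recommended_play"] not in PLAYS:
        return None  # out-of-range score or unknown play
    return data
```

Anything that returns `None` goes to a retry queue instead of poisoning account scores; a full JSON Schema validator does the same job with less code if you can take the dependency.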
Expected results in 30 days
- 60–70% less wasted outreach
- 20–40% lift in first meetings
- 10–20% more pipeline per rep
Mentor takeaways
- Weight recency and material impact, not vanity signals.
- Keep the rubric visible in CRM so reps trust the score.

Stage 2: Outbound and Engagement
2) Signal‑Based Personalisation at Scale
Most “personalised” emails read like a LinkedIn profile was copy‑pasted. The win is not flattery. It’s a sharp hypothesis tied to a live signal. This is AI for sales done right.
Workflow
- Assemble account context and top three signals.
- LLM writes two to three variants per persona.
- A/B test and store winners.
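Storing winners and rotating variants can start as a reply‑rate table with an epsilon‑greedy pick. A minimal sketch under that assumption; real engagement platforms ship their own A/B testing:

```python
import random

def pick_variant(stats, epsilon=0.2):
    """stats maps variant id -> (sent, replies). Mostly exploit the best
    reply rate; explore a random variant epsilon of the time."""
    if random.random() < epsilon or all(sent == 0 for sent, _ in stats.values()):
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][1] / max(stats[v][0], 1))

stats = {"v1": (100, 12), "v2": (100, 7), "v3": (20, 1)}
print(pick_variant(stats, epsilon=0))  # v1: best reply rate at 12%
```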
Prompt
System: Write crisp, specific emails that connect signals to business impact. Avoid filler. 70–110 words.
User:
Context:
- Account: {account_name}
- Persona: {persona}
- Signals: {top_three_signals}
- Hypothesis: {pain_hypothesis}
- Offer: {offer_or_asset}
- CTA: {direct_ask}
Output: 3 variants. Each with subject (≤6 words) + body (≤110 words).
Technical setup
- LLM: Gemini 2.5 Flash-Lite for cost and speed. GPT‑5 for nuance on higher ACV.
- Integration: CRM and engagement platform via API. Approve in Slack with one click. Or use Stack BD’s one‑click send.
- Guardrails: PII redaction. Brand‑voice template.
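The PII‑redaction guardrail can begin as a regex pass before account context leaves your systems. Illustrative patterns only; production redaction needs a vetted PII tool:

```python
import re

# Illustrative patterns only -- extend and test against your own data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace each matched PII span with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Reach Jane at jane@acme.com or +44 20 7946 0958"))
# Reach Jane at [EMAIL] or [PHONE]
```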
Cost: $50–$200 per month.
Real output example
Subject: Approvals slowing expansion?
Body: “Saw your RevOps hire and the CPQ backlog mentioned at the town hall. When approvals touch legal and finance, our customers’ expansion deals stall 10–14 days. We automate multi‑step approvals in Salesforce so AMs don’t chase tickets. Here’s a two‑minute explainer. If this is on your Q4 list, open to a quick walkthrough to compare with your current flow?”
Common failures and fixes
- Still generic. Fix: enforce “signal → impact → action.”
- Too long. Fix: hard word counts and token limits.
- Compliance issues. Fix: strip sensitive fields and use allowlists for sources.
Expected results in 30 days
- 1.5–2.5x higher reply rate
- 15–25% more meetings
Mentor takeaways
- One email, one hypothesis. Keep it tight.
- Store the winners. Rerun each month.
3) Multi‑Touch Sequence Generation with Narrative Consistency
Sequencers throw touches at a wall. Prospects feel it. A simple story arc wins: Problem → Proof → ROI → Urgency → Social proof → Objection → Close. This multi‑touch sequence generation approach keeps your AI for sales coherent.
Workflow
- Provide ICP, persona, and signals.
- LLM drafts a seven‑touch sequence across email, social, and calls.
- Load into your sequencer. Rotate top variants each month.
Prompt
System: Create a 7‑touch sequence over 21 days with a consistent narrative. Include: subject, email (80–110 words), LinkedIn note (≤40 words), call opener (≤20 seconds). Each touch must advance the story: 1) hypothesis, 2) proof, 3) ROI, 4) urgency, 5) social proof, 6) objection preemption, 7) close.
User:
ICP: {icp}
Persona: {persona}
Signals: {signals}
Proof assets: {case_studies_links}
Primary metric: {metric}
Technical setup
- LLM: GPT‑5 for structured and consistent outputs. Cache top sequences.
- Integration: JSON export to sequence API. Enforce channel mix and pacing.
- QA: Regex tests for length, links, and spam words.
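The QA step can be a handful of deterministic checks run before a sequence loads. The spam denylist and short‑link domain below are assumptions; substitute your own:

```python
import re

SPAM_WORDS = {"guarantee", "act now", "risk-free"}   # assumption: your own denylist
MAX_WORDS = 110
SHORT_LINK_PREFIX = "https://go.example.com/"        # hypothetical tracked domain

def qa_touch(body):
    """Return a list of QA failures for one email touch; empty means pass."""
    failures = []
    if len(body.split()) > MAX_WORDS:
        failures.append("over word cap")
    lowered = body.lower()
    failures += [f"spam word: {w}" for w in SPAM_WORDS if w in lowered]
    # naked links break easily; require tracked short links only
    for url in re.findall(r"https?://\S+", body):
        if not url.startswith(SHORT_LINK_PREFIX):
            failures.append(f"untracked link: {url}")
    return failures

print(qa_touch("Act now: http://x.co/y"))
```

Run the whole seven‑touch sequence through this before export and reject any touch with a non‑empty failure list.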
Cost: ~$100 per month.
Real output example
Touch 3 (ROI): “In Q3, Acme cut change‑order cycle time 37% by automating approvals in Salesforce. Fewer handoffs. Fewer tickets. At ~120 deals per quarter, that returns about 180 hours. If I share the four‑step flow we used, would that help?”
Common failures and fixes
- Repetition across touches. Fix: give a narrative map and forbid reused lines.
- Wrong pacing. Fix: specify daily cadence and gaps.
- Broken links. Fix: pre‑validate and use tracked short links.
Expected results in 30 days
- 10–20% lift in sequence replies
- 5–10% lower unsubscribe rate

Stage 3: Sales Process
4) Discovery Question Generation Tailored to Prospect Context
Bad discovery creates beautiful proposals that go nowhere. LLM discovery questions align to the account’s signals, goals, and notes. You get better data and clearer next steps.
Workflow
- Pull CRM notes, intent, and prior emails.
- LLM generates an 8–12 question plan with sequencing and red flags.
- Deliver as a pre‑call brief in Slack or the calendar.
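Delivering the brief can run off Slack’s incoming webhooks, which accept a JSON body with a `text` field. The message format and webhook URL here are placeholders:

```python
import json
from urllib import request

def build_brief_message(account, questions):
    """Format an LLM-generated discovery plan as a Slack message payload."""
    lines = [f"*Pre-call brief: {account}*"]
    lines += [f"{i}. {q}" for i, q in enumerate(questions, 1)]
    return {"text": "\n".join(lines)}

def post_to_slack(webhook_url, payload):
    """POST the payload to a Slack incoming webhook."""
    req = request.Request(webhook_url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

msg = build_brief_message("Acme", ["How are approvals routed today?"])
print(msg["text"])
```

Trigger it from a calendar webhook an hour before the call so the brief lands while the rep still has time to read it.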
Prompt
System: Generate a discovery plan for a 30‑minute call: 3 opening questions, 5 deep‑dive questions, 2 closing questions, red flags, and a summary hypothesis. Tailor to persona and signals.
User:
Account: {account}
Persona: {persona}
Signals: {signals}
Prior notes: {notes}
Our solution: {one_liner}
Primary value metric: {metric}
Technical setup
- LLM: Claude 4.5 Sonnet for nuanced questioning; GPT‑5 also strong.
- Integration: Pre‑call bot posts the brief in Slack with the calendar link. Store in CRM activity.
- Analytics: Track which questions correlate with stage progress.
Cost: $50–$100 per month.
Real output example
Opening: “You recently added a CPQ lead and mentioned change‑order delays in the 10‑Q. How are approvals routed today when finance and legal both need to weigh in?”
Deep dive: “If you cut change‑order time by 30%, what impact would you expect on expansion ARR and support tickets?”
Common failures and fixes
- Broad questions. Fix: force each question to reference a signal or a metric.
- Too many questions. Fix: cap to 10 and set time per section.
- Misaligned hypothesis. Fix: include prior notes and persona goals.
Expected results in 30 days
- 10–20% lift in Stage 1 → 2 conversion
- Better notes and clearer next steps
5) Demo Customisation that Solves Specific Pain
Nobody wants a tour of buttons. They want a story that fixes their pain with their data. LLMs turn discovery notes into a 20‑minute script with scenarios and talk tracks. Simple. Relevant. Short.
Workflow
- Gather pains, metrics, and integrations from discovery.
- LLM outputs a 15–20 minute storyline with three scenarios and talk tracks.
- Optional: generate a tailored sandbox dataset.
Prompt
System: Create a 20‑minute demo script with 3 scenarios that prove value for {persona}. Map each step to a pain and a metric. Include talk tracks and timeboxes.
User:
Pains: {pain_list}
Metrics: {metrics}
Integrations: {integrations}
Customer proof: {case_studies}
Technical setup
- LLM: GPT‑5 for structured scripts; Claude 4.5 Sonnet for long context.
- Integration: Export to Google Docs or Slides. Use dynamic fields for account name and numbers.
- Optional: Run in Reprise or Navattic.
Cost: $100–$150 per month.
Real output example
Scenario 2: “Change‑order approval. Show sales view in Salesforce. Trigger auto‑approval for under 10k expansions. Branch to legal for clause X. Narration: This removes two handoffs and cuts cycle time by ~35%, as Acme saw in Q3.”
Common failures and fixes
- Generic script. Fix: require each step to reference a pain and a metric.
- Overtime demos. Fix: set timeboxes and a two‑minute recap.
- Tech mismatch. Fix: include integration checks.
Expected results in 30 days
- 15–25% lift in demo → proposal conversion
- 10–20% shorter cycles on qualified deals

Stage 4: Pipeline Management Using LLMs For Sales
6) Meeting Summary with Actions and Risks
Call notes go missing. Actions slip. Risks appear at quarter end. AI meeting summaries for sales fix that: LLMs summarise the call, pull out decisions, assign actions, flag risks, and update CRM. Your future self will thank you.
Workflow
- Ingest the call transcript from Zoom, Teams, or Gong.
- LLM outputs summary, decisions, actions, risks, and BANT fields.
- Post to CRM and Slack. Email the recap to the prospect.
Prompt
System: Produce a factual meeting summary with decisions, action items (owner/date), risks, and BANT fields. Do not invent. Output JSON and a client‑facing email.
User:
Transcript: {transcript_text}
Known fields: {crm_context}
Technical setup
- LLM: GPT‑5 or Claude 4.5 Sonnet. Include a JSON schema.
- Integration: Webhook from call platform. Push to CRM tasks and opportunity fields.
- QA: Flag low‑confidence items for human review.
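Flagging low‑confidence items can be a simple threshold split, assuming the prompt asks the model to attach a confidence score to each extracted item:

```python
def triage_items(items, threshold=0.7):
    """Split extracted action items into auto-push vs human-review queues.
    Assumes each item carries a model-reported `confidence` in [0, 1]."""
    auto, review = [], []
    for item in items:
        (auto if item.get("confidence", 0) >= threshold else review).append(item)
    return auto, review

items = [
    {"action": "Send security pack", "owner": "Jane", "confidence": 0.95},
    {"action": "Share approval matrix", "owner": None, "confidence": 0.4},
]
auto, review = triage_items(items)
print(len(auto), len(review))  # 1 1
```

Items missing an owner or date are good candidates for a lower confidence score, which routes them to review automatically.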
Cost: $20–$50 per month.
Real output example
- Decisions: “Pilot scope: approvals under 10k in North America. Start 18 Nov.”
- Actions: “Rep to send security pack. Owner Jane. Due Fri. Prospect to share approval matrix. Owner Alex. Due Thu.”
- Risks: “Legal review needed. Budget freeze if not signed by 10 Dec.”
Common failures and fixes
- Hallucinations. Fix: add “do not infer” and return confidence scores.
- Missed dates or owners. Fix: regex pre‑parse for names and dates; add a confirmation step.
- Overlong summaries. Fix: hard word caps and strict JSON sections.
Expected results in 30 days
- 20–30% more tasks done on time
- 10–15% fewer slipped deals
7) Deal Risk Prediction from Engagement Patterns
Managers often “feel” risk. Feelings don’t forecast. Deal risk prediction with AI reads timelines, gaps between meetings, single threading, asset opens, and procurement steps. It then scores risk and suggests next actions. One of the more inventive use cases of LLMs for sales.
Workflow
- Aggregate engagement events for each opportunity.
- LLM evaluates patterns vs stage benchmarks. Outputs score and reasons.
- Weekly roll‑up for managers with suggested actions.
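The pattern features the prompt consumes can be derived deterministically from CRM events before the LLM sees anything. The `(date, type, contact)` event shape is an assumption about your event log:

```python
from datetime import date

def engagement_features(events, today):
    """Derive engagement-pattern features for the risk prompt.
    events: list of (date, type, contact) tuples from CRM/email/meeting logs."""
    if not events:
        return {"days_since_touch": None, "threads": 0, "meeting_count": 0}
    last = max(d for d, _, _ in events)
    return {
        "days_since_touch": (today - last).days,
        "threads": len({c for _, _, c in events}),  # single-threading shows as 1
        "meeting_count": sum(1 for _, t, _ in events if t == "meeting"),
    }

events = [(date(2025, 11, 1), "meeting", "manager"),
          (date(2025, 11, 6), "email", "manager")]
print(engagement_features(events, date(2025, 11, 18)))
# 12-day gap, one thread, one meeting -- matching the at-risk example below
```

Computing these numbers in code rather than asking the model to infer them keeps the risk score auditable.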
Prompt
System: Evaluate opportunity health based on engagement signals. Compare to stage benchmarks and output a risk score 0–100 with reasons and 3 recommended actions.
User:
Opportunity: {op_name}
Stage: {stage}
Timeline: {events_list}
Benchmarks: {stage_benchmarks}
Buying committee: {contacts}
Technical setup
- LLM: GPT‑5 for tool calling and rubric‑based scoring, or a small model plus rules.
- Data: CRM events, email and meeting data, DocSend opens, security review status.
- Output: Score, top three risks, and actions to CRM.
Cost: $50–$100 per month.
Real output example
- Score: 62 (at risk)
- Reasons: “Single‑threaded with a manager only. 12‑day gap since last meeting. Security questionnaire not started.”
- Actions: “Add Finance and Security. Schedule technical validation. Send security pack today.”
Common failures and fixes
- Overweighting one signal. Fix: provide weights and benchmarks in the prompt.
- False alarms around holidays. Fix: include calendars and regional holidays.
- Opaque outputs. Fix: force reason codes and a one‑line summary.
Expected results in 30 days
- 15–25% better forecast accuracy on in‑quarter deals
- 10–20% more rescues of at‑risk opportunities
8) Champion Enablement Content for Internal Selling
Many champions don’t know how to sell you internally. That’s on us. LLMs create a kit they can use with Finance, Security, and the exec team. A powerful example of LLMs for sales enablement.
Workflow
- Intake objections, decision criteria, and procurement steps.
- LLM creates a champion kit: one‑pager, CFO brief, Security FAQ, ROI summary.
- Host it in a shareable portal. Track opens and questions.
Prompt
System: Create an internal selling toolkit the champion can use to advocate for {solution} internally. Output 4 documents. Tone: data‑driven and stakeholder‑specific. Make it about their business case, not product features.
User:
Champion: {champion_name} at {company}
Stakeholders: {stakeholders}
Objections raised: {objections}
Decision criteria: {decision_criteria}
Timeline: {procurement_cycle}
Output 4 documents:
- Executive One‑Pager: business problem, solution, outcomes, ROI (1 page)
- CFO Brief: cost‑benefit analysis, implementation costs, payback period
- Security FAQ: data handling, compliance, integrations, certifications
- ROI Summary: current costs vs projected savings, metrics, timeline
The teams that win in 2026 won’t be the ones writing slightly better emails. They’ll be the ones that wire LLMs into their revenue engine so signal compounds, judgment improves, and action happens without friction. Start with one use case. Run a clean two‑ to three‑week pilot. Measure meeting rates, conversion lift, or cycle time. Keep what works. Bin the rest. The ROI isn’t in the prompts. It’s in the system you build around them.
Stop talking about AI or LLMs for sales. Start shipping.