15 min read

Coaching Scorecards Explained

Learn what makes an effective scorecard in Attention

What you’ll learn

Complete Guide to Scorecard Creation in Attention

Scorecards are one of Attention's most powerful features for ensuring consistent execution, coaching your team, and tracking methodology adherence.

What Are Scorecards?

Scorecards automatically evaluate sales conversations against your criteria using AI. They help you:

  • ✅ Enforce methodology (MEDDIC, SPICED, BANT, etc.)
  • 📊 Measure performance objectively across all reps
  • 🎯 Identify coaching opportunities at scale
  • 📈 Track improvement over time
  • 🚨 Flag at-risk deals early
  • 🏆 Recognize top performers with data

Step-by-Step Scorecard Creation

Step 1: Define Your Objective

Before building, answer:

  • What call type? (Discovery, Demo, Negotiation, QBR, etc.)
  • What methodology? (MEDDIC, SPICED, custom framework)
  • What behaviors matter? (Questions asked, objection handling, next steps)
  • Who is this for? (All reps, specific team, specific role)

Step 2: Choose Your Scorecard Items

A good scorecard has 6-12 items. More than that becomes overwhelming.

Item Categories to Consider:

A. Methodology Adherence

  • MEDDIC components (Metrics, Economic Buyer, Decision Criteria, etc.)
  • SPICED elements (Situation, Pain, Impact, Critical Event, Decision)
  • BANT criteria (Budget, Authority, Need, Timeline)

B. Conversation Skills

  • Active listening (talk-to-listen ratio; a quick way to compute it is sketched after these categories)
  • Question quality (open-ended vs. closed)
  • Objection handling
  • Rapport building

C. Process Execution

  • Agenda setting
  • Discovery recap
  • Next steps confirmation
  • Multi-threading (engaging multiple stakeholders)

D. Business Acumen

  • ROI/value quantification
  • Pain identification
  • Impact articulation
  • Competitive differentiation

E. Outcome Indicators

  • Prospect engagement level
  • Commitment secured
  • Information gathered
  • Deal advancement
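
The talk-to-listen ratio mentioned under Conversation Skills is easy to compute once you have speaker-labeled transcript segments. A minimal sketch in Python; the (speaker, seconds) segment format is an assumption for illustration, not Attention's transcript schema:

def talk_to_listen_ratio(segments, rep="rep"):
    # segments: list of (speaker, seconds) pairs from a diarized transcript
    rep_time = sum(sec for speaker, sec in segments if speaker == rep)
    other_time = sum(sec for speaker, sec in segments if speaker != rep)
    return rep_time / other_time if other_time else float("inf")

# Hypothetical call: the rep talks 70s, the prospect 155s
segments = [("rep", 40), ("prospect", 95), ("rep", 30), ("prospect", 60)]
print(f"{talk_to_listen_ratio(segments):.2f}")  # 0.45 (rep listens more than talks)

A ratio below 1.0 means the prospect did most of the talking, which is usually what you want on a discovery call.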

Step 3: Choose Item Types

For each scorecard item, select the appropriate type:

Pass/Fail (Boolean)

  • Best for: Binary criteria
  • Examples:
    • "Were next steps confirmed?" (Yes/No)
    • "Was budget discussed?" (Yes/No)
    • "Did rep set an agenda?" (Yes/No)

Numeric Scale (1-5)

  • Best for: Quality/degree assessment
  • Examples:
    • "How well did rep identify pain?" (1=Poor, 5=Excellent)
    • "Quality of questions asked" (1-5)
    • "Prospect engagement level" (1-5)

Scoring Guide:

5 = Exceptional - Best practice example
4 = Strong - Above expectations
3 = Adequate - Meets minimum standard
2 = Needs Improvement - Below expectations
1 = Poor - Significant gap

Descriptive/Text

  • Best for: Capturing qualitative insights
  • Examples:
    • "What were the top 3 pain points mentioned?"
    • "Who are the key stakeholders identified?"
    • "What competitors were mentioned?"

Step 4: Set Weights/Importance

Not all items are equally important. Assign weights (a rollup calculation is sketched after this list):

HIGH Weight (Critical):

  • Items that directly predict deal success
  • Core methodology components
  • Deal-advancing behaviors
  • Examples: Pain identification, next steps, budget discussion

MEDIUM Weight (Important):

  • Supporting behaviors
  • Quality indicators
  • Examples: Question quality, objection handling, engagement

STANDARD Weight (Good to Have):

  • Process items
  • Nice-to-haves
  • Examples: Agenda setting, time management
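
To see how weights change an overall score, here is a minimal rollup sketch in Python. The 3/2/1 multipliers for High/Medium/Standard are an assumption for illustration; Attention's actual weighting math may differ:

WEIGHTS = {"High": 3, "Medium": 2, "Standard": 1}  # assumed multipliers

def overall_score(items):
    # items: (weight_label, score_fraction) pairs; score_fraction is
    # 1.0/0.0 for Pass/Fail, or (score - 1) / 4 for a 1-5 item
    earned = sum(WEIGHTS[w] * s for w, s in items)
    possible = sum(WEIGHTS[w] for w, _ in items)
    return earned / possible

items = [("High", 1.0), ("High", 0.0), ("Medium", 0.75), ("Standard", 1.0)]
print(f"{overall_score(items):.0%}")  # 61%: the failed High item drags the score hard

Note how a single failed High-weight item pulls the total down far more than a weak Standard item would; that is the point of weighting.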

Step 5: Write Clear Criteria

For each item, provide detailed instructions so the AI knows what to look for.

Example 1: Pain Identification

❌ Vague Criteria:"Did they talk about pain?"

✅ Clear Criteria:

Evaluate how well the rep uncovered specific business pain points.

Look for:
- Open-ended questions about challenges (e.g., "What's your biggest challenge with X?")
- Follow-up probing questions (e.g., "Tell me more about that")
- Prospect articulating specific problems in their own words
- Multiple pain points discussed (not just one)

Scoring:
5 = 3+ specific pain points clearly articulated by prospect with business impact
4 = 2 specific pain points with some detail
3 = 1-2 pain points mentioned but surface-level
2 = Generic problems or rep did most of the talking
1 = No meaningful pain discussion

Evidence: Provide direct quotes showing pain points discussed.

Example 2: Next Steps Confirmed

❌ Vague Criteria: "Were next steps set?"

✅ Clear Criteria:

Evaluate whether clear, mutual next steps were established.

Pass if ALL of these are present:
- Specific action items identified
- Owners assigned (who will do what)
- Dates/timeline confirmed
- Both parties verbally committed

Fail if:
- Vague "we'll follow up" with no specifics
- Only rep committed, prospect didn't confirm
- No timeline established
- No clear action items

Evidence: Extract the exact next steps agreed upon.

Example 3: Budget Discussion

❌ Vague Criteria: "Was budget mentioned?"

✅ Clear Criteria:

Assess whether budget/investment was meaningfully discussed.

Pass if ANY of these occurred:
- Specific budget range or amount mentioned
- Budget approval process discussed
- Who controls budget identified
- Timeline for budget allocation discussed
- Investment priority/urgency established

Fail if:
- No budget discussion at all
- Rep mentioned pricing but prospect didn't engage
- Prospect deflected and rep didn't circle back

N/A if:
- Too early in process (first discovery call)
- Explicitly scheduled for later conversation

Evidence: Quote the budget-related discussion.

Step 6: Build in Attention UI

Navigation:

  1. Log into Attention
  2. Go to Scorecards section
  3. Click "Create New Scorecard"

Basic Settings:

Scorecard Name:

  • Be specific and descriptive
  • Examples: "Discovery Call Scorecard - MEDDIC", "Product Demo Scorecard", "Executive QBR Scorecard"

Interaction Type:

  • Calls - Only score phone/video calls
  • All - Score calls, emails, chats (if applicable)

Team Assignment:

  • All Teams - Apply to entire organization
  • Specific Teams - Select which teams/departments

Enable/Disable:

  • Start with "Enabled" to begin scoring immediately
  • Or "Disabled" to test first

Adding Items:

For each scorecard item:

  1. Click "Add Item"
  2. Enter Title:
    • Clear, concise name
    • Example: "Pain Identification", "Budget Discussion", "Next Steps Confirmed"
  3. Select Type:
    • Pass/Fail
    • Numeric (1-5)
    • Descriptive
  4. Set Weight:
    • High, Medium, or Standard
  5. Add Instructions:
    • Paste your detailed criteria (from Step 5)
    • Be as specific as possible
    • Include examples of what "good" looks like
  6. Expert Mode (Optional):
    • Toggle on for advanced AI prompting
    • Allows more sophisticated evaluation logic
    • Use when you need complex conditional scoring
  7. Save Item
  8. Repeat for all items (6-12 recommended)

Expert Mode Deep Dive:

Expert mode gives you more control over AI evaluation.

When to Use Expert Mode:

  • Complex scoring logic
  • Conditional evaluation (if X then Y)
  • Need to extract specific data formats
  • Want more nuanced scoring

Expert Mode Example:

You are evaluating a sales discovery call for pain identification.

CONTEXT:
This is a B2B SaaS sales call. The rep should be uncovering business pain points that our solution can address.

EVALUATION CRITERIA:
1. Count the number of open-ended questions the rep asked about challenges
2. Identify specific pain points articulated by the PROSPECT (not the rep)
3. Assess whether the rep probed deeper with follow-up questions
4. Determine if pain was quantified (time, money, resources)

SCORING RUBRIC:
5 = Exceptional
  - 5+ open-ended questions about challenges
  - 3+ specific pain points clearly stated by prospect
  - Rep asked follow-up questions on at least 2 pain points
  - At least 1 pain point was quantified with business impact

4 = Strong
  - 3-4 open-ended questions
  - 2 specific pain points from prospect
  - Some follow-up probing
  - Attempted to quantify impact

3 = Adequate
  - 2 open-ended questions
  - 1-2 pain points mentioned
  - Minimal follow-up
  - No quantification

2 = Needs Improvement
  - 1 or fewer open-ended questions
  - Generic pain points
  - No follow-up probing
  - Rep talked more than prospect

1 = Poor
  - No meaningful pain discovery
  - Rep pitched features without understanding needs
  - Prospect barely spoke

OUTPUT FORMAT:
Score: [1-5]
Evidence: [Direct quotes from transcript showing pain discussion]
Reasoning: [Brief explanation of score]

IMPORTANT:
- Only count pain points that the PROSPECT stated, not what the rep suggested
- Look for phrases like "our biggest challenge is...", "we struggle with...", "it's costing us..."
- Discount leading questions where rep puts words in prospect's mouth
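
If you consume expert-mode results downstream, the Score/Evidence/Reasoning output format above is straightforward to parse. A minimal sketch, assuming the model follows that three-field layout exactly:

import re

def parse_evaluation(text):
    # Pull each labeled field up to the next label (or end of text)
    fields = {}
    for key in ("Score", "Evidence", "Reasoning"):
        m = re.search(rf"{key}:\s*(.+?)(?=\n(?:Score|Evidence|Reasoning):|\Z)",
                      text, re.DOTALL)
        fields[key.lower()] = m.group(1).strip() if m else None
    if fields["score"]:
        fields["score"] = int(fields["score"])  # "4" -> 4
    return fields

sample = 'Score: 4\nEvidence: "Our biggest challenge is churn..."\nReasoning: Two pains, some probing.'
print(parse_evaluation(sample)["score"])  # 4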

Step 7: Test Your Scorecard

Before rolling out to the team:

  1. Select 5-10 past calls of varying quality (good, bad, and average)
  2. Run the scorecard on those calls
  3. Review the results:
    • Is the AI scoring accurately?
    • Is the evidence field showing relevant quotes?
    • Are scores aligned with your expectations?
  4. Refine criteria based on results
  5. Test again until accuracy is high (a simple agreement check is sketched below)
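
One way to make "accuracy is high" concrete: score the test calls manually first, then measure how often the AI lands within a point of your reviewers. A small sketch; the one-point tolerance is a judgment call, not an Attention default:

def agreement_rate(ai_scores, human_scores, tolerance=1):
    # Fraction of test calls where AI and reviewer are within `tolerance`
    pairs = list(zip(ai_scores, human_scores))
    agree = sum(1 for ai, human in pairs if abs(ai - human) <= tolerance)
    return agree / len(pairs)

ai_scores = [4, 2, 5, 3, 1, 4, 3]     # AI 1-5 scores on the test calls
human_scores = [4, 3, 5, 2, 3, 5, 3]  # your manual scores on the same calls
print(f"{agreement_rate(ai_scores, human_scores):.0%} within 1 point")  # 86% within 1 point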

Common Issues & Fixes:

  • AI too lenient (scores too high) → Add stricter criteria, provide negative examples
  • AI too harsh (scores too low) → Clarify what "adequate" looks like, adjust rubric
  • Wrong evidence extracted → Be more specific about what quotes to pull
  • Inconsistent scoring → Add more detailed scoring rubric with examples

Step 8: Set Up Notifications

Automate alerts based on scorecard results:

Slack Notifications:

Low Score Alerts:

Trigger: Scorecard score < 60%
Action: Send to sales manager
Message: "⚠️ Low scorecard alert: [Rep Name] - [Call Title] scored 45%. Review needed."
Include: Link to call, scorecard breakdown

Missing Critical Items:

Trigger: "Next Steps" = Fail OR "Budget Discussion" = Fail
Action: Send to rep and manager
Message: "🚨 Critical item missing: No next steps confirmed on [Call Title]"

High Performance:

Trigger: Scorecard score > 85%
Action: Send to team channel
Message: "🏆 Excellent call by [Rep Name]! Scored 92% on [Call Type]. Great work!"

Weekly Summary:

Trigger: Every Monday 9am
Action: Send to sales leadership
Message: "📊 Weekly Scorecard Summary:
- Team Average: 73%
- Top Performer: Sarah (88%)
- Most Common Gap: Budget Discussion (42% pass rate)
- Calls Scored: 47"
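
For context, here is roughly what the low-score trigger would look like if you scripted it yourself against a Slack incoming webhook. Attention configures these alerts in the UI; the threshold, field names, and webhook URL below are placeholders:

import requests  # pip install requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def alert_if_low(rep_name, call_title, score, call_url, threshold=0.60):
    if score >= threshold:
        return  # only fire below the threshold
    message = (f"⚠️ Low scorecard alert: {rep_name} - {call_title} "
               f"scored {score:.0%}. Review needed.\n{call_url}")
    # Slack incoming webhooks accept a JSON body with a "text" field
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

alert_if_low("Alex", "Acme Discovery Call", 0.45, "https://app.example.com/calls/123")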

Step 9: Roll Out to Team

Communication Plan:

1. Announce the Scorecard (1 week before):

Subject: New Discovery Call Scorecard - Launching Next Week

Team,

We're launching a new Discovery Call Scorecard to help us:
- Ensure consistent execution of our MEDDIC methodology
- Identify coaching opportunities
- Celebrate great calls

What it measures: [List items]
How it works: [Brief explanation]
Why it matters: [Benefits to reps]

This is a COACHING tool, not a punishment tool. We'll use it to help everyone improve.

Questions? Let's discuss in Friday's team meeting.

2. Training Session:

  • Walk through each scorecard item
  • Show examples of high vs. low scores
  • Address concerns
  • Emphasize developmental purpose

3. Pilot Period (2 weeks):

  • Score calls but don't take action yet
  • Gather feedback from reps
  • Refine criteria if needed

4. Full Launch:

  • Begin using for coaching
  • Review scores in 1-on-1s
  • Track trends over time

Coaching with Scorecards:

Weekly 1-on-1 Structure:

  1. Review scorecard trends (not individual calls)
  2. Identify patterns (e.g., consistently low on objection handling)
  3. Listen to evidence (specific call moments)
  4. Role play improvement techniques
  5. Set goals for next week
  6. Celebrate wins (high scores, improvement)

Monthly Team Review:

  • Share aggregate data (no individual call-outs)
  • Highlight top performers
  • Address common gaps
  • Share best practice snippets

Pre-Built Scorecard Templates

Here are complete templates you can implement immediately:

Template 1: MEDDIC Discovery Scorecard

Name: MEDDIC Discovery Scorecard
Type: Calls
Items: 8

1. Metrics (Numeric 1-5, High Weight)
  "Did rep identify quantifiable metrics/KPIs the prospect cares about?"
 
2. Economic Buyer (Pass/Fail, High Weight)
  "Was the economic buyer (budget holder) identified?"
 
3. Decision Criteria (Pass/Fail, High Weight)
  "Were the prospect's decision criteria discussed?"
 
4. Decision Process (Pass/Fail, High Weight)
  "Was the decision-making process and timeline uncovered?"
 
5. Identify Pain (Numeric 1-5, High Weight)
  "How well did rep uncover specific business pain points?"
 
6. Champion (Pass/Fail, Medium Weight)
  "Was a potential champion (internal advocate) identified?"
 
7. Next Steps (Pass/Fail, High Weight)
  "Were clear next steps with dates confirmed?"
 
8. Question Quality (Numeric 1-5, Medium Weight)
  "Quality of discovery questions asked (open-ended, probing)"

Template 2: Product Demo Scorecard

Name: Product Demo Scorecard
Type: Calls
Items: 10

1. Discovery Recap (Numeric 1-5, Medium)
  "Did rep recap pain points before demoing?"
 
2. Demo Customization (Numeric 1-5, High)
  "Was demo tailored to their specific use case?"
 
3. Feature-to-Benefit (Numeric 1-5, High)
  "Did rep connect features to business outcomes?"
 
4. Prospect Engagement (Numeric 1-5, Medium)
  "How engaged was the prospect during demo?"
 
5. Questions Asked (Pass/Fail, Standard)
  "Did rep ask 3+ questions during demo?"
 
6. Objection Handling (Numeric 1-5, Medium)
  "How well did rep handle concerns/objections?"
 
7. ROI Discussion (Pass/Fail, High)
  "Was potential ROI or value quantified?"
 
8. Stakeholder Alignment (Pass/Fail, Medium)
  "Did rep address multiple personas' needs?"
 
9. Next Steps (Pass/Fail, High)
  "Were specific next steps confirmed?"
 
10. Time Management (Numeric 1-5, Standard)
   "Did rep manage time effectively?"
