Clarifying a Vague Feature Request with a Product Manager.
English for Software Engineers and IT Teams. Lesson 2.
Product has a request: “Can we make the dashboard faster?” At B2 level, the challenge is not understanding the words, but narrowing the request into something the team can build without guesswork. In this lesson you join a short kick-off call with Mia, the Product Manager, and you practise asking targeted questions that uncover the real goal, the user impact, and the success criteria.
You will work with a model dialogue where an engineer handles ambiguity politely, checks assumptions, and avoids sounding like they are blocking the work. You will build phrases for clarifying what “faster” means (latency, load time, throughput), and for confirming priorities and constraints. Your end-of-lesson artefact is a short written summary: a clear problem statement plus a list of agreed next steps and open questions to post in the ticket or meeting notes.
1. Joining the kick-off call about “faster dashboard”.
Today you’re in a very common Northbank Digital situation: Product says, “Can we make the dashboard faster?” and everyone nods… but no-one actually knows what “faster” means yet. Your job as the engineer is to narrow the request into something measurable and buildable, without sounding like you’re blocking the work.
In this first block, you’ll listen to a short kick-off call with Mia, the Product Manager. While you listen, focus on three things: one, what problem Mia is trying to solve; two, what “faster” might mean in real terms; and three, which questions the engineer asks to turn a vague request into a clear next step.
After the audio, you’ll answer a few questions. Don’t worry about perfect words: the goal is to show you understood the intent and the key details. Then we’ll reuse the best phrases throughout the lesson until you can write a clean, ticket-ready summary at the end.
Situation.
You’re on a short kick-off call with Mia (Product Manager). She has a request from stakeholders:
> “Can we make the dashboard faster?”
At B2 level, the challenge is not vocabulary. The challenge is turning a vague request into measurable success criteria.
What “faster” could mean (and why it matters).
In engineering conversations, “faster” can point to different performance problems. If you don’t clarify this early, you risk building the wrong thing.
Here are three common meanings:
Load time: how long until the dashboard first appears (for example, from clicking the link to seeing the page).
Responsiveness after load: how quickly charts update when the user changes filters.
Throughput / scalability: how well the system behaves when many users access the dashboard at the same time.
Your goal in the call.
You are not there to debate or to reject the request. Your goal is to ask targeted questions that uncover:
User group (who is affected most).
Impact (what pain it creates and what it costs the business).
Success criteria (a number or a clear definition of “good”).
Constraints (deadline, environment, tech limits, “must not break” areas).
Listening focus.
As you listen in a moment, try to catch:
One question that clarifies the meaning of “faster”.
One question that clarifies business impact.
One phrase where the engineer checks understanding (for alignment).
You’ll answer right after the audio.
Practice & Feedback
Listen to the short kick-off call. Then answer in 3–6 bullet points.
What does Mia mean by “faster” (based on what you know so far)?
Which users are most affected?
What success criteria or target numbers are mentioned (if any)?
What are the next steps the engineer suggests?
Write your answers as if you’re taking quick notes for yourself during the call. Keep your wording simple, but be specific. If something is not confirmed, say that clearly (for example: “Not confirmed yet”).
2. Noticing high-value clarifying questions and alignment checks.
Now that you’ve heard the call, let’s mine it for language you can reuse. The key skill here is sounding curious and constructive, not interrogative. Notice how the engineer doesn’t jump straight into solutions. Instead, they first define the problem: what exactly needs to be faster, for whom, and by how much.
In real teams, this is what prevents rework. If you can get even one measurable target and one clear user group, you’ve already improved the quality of the ticket dramatically.
In this block, we’ll look at a few question patterns and a simple structure: clarify the meaning, clarify the impact, clarify constraints, then confirm alignment. You’ll do a short rewriting task to turn blunt or vague questions into diplomatic, targeted ones. Aim for natural, calm English that you could genuinely say on a call with Product.
The four moves that make you sound helpful (not difficult).
A good clarification sequence usually follows this order:
Clarify the word (what does “faster” mean?).
Clarify the impact (who is affected and what happens?).
Clarify constraints (deadlines, demos, “must not break”, scope).
Confirm alignment (summarise what you’ve heard and check you’ve got it right).
This sequence feels collaborative because it shows you’re trying to deliver the right outcome, not block the request.
Phrase patterns to reuse (with examples).
Below are strong B2 patterns you can lift into calls, tickets, and Slack.
| Purpose | Useful language | Example in our situation |
| --- | --- | --- |
| Define “faster” | “When you say ‘faster’, what exactly are we optimising for?” | “When you say ‘faster’, are we talking about load time or responsiveness after it loads?” |
| Offer options (so it’s easier to answer) | “Is the main issue X, or Y?” | “Is the main issue initial load time, or the filter interactions?” |
| Ask for a measurable target | “Do you have a target number we can aim for?” | “Do you have a target number for time to first meaningful view?” |
| Focus on the right users | “Which user group is most affected?” | “Is it everyone, or mainly enterprise accounts with large datasets?” |
| Explore business impact | “What’s the business impact if we don’t address this now?” | “What happens if we don’t improve this before the QBR?” |
| Check constraints | “Are there any constraints we should be aware of?” | “Is there a deadline, or any area we must not change?” |
| Confirm alignment | “Just to check I’ve understood: the goal is…” | “Just to check I’ve understood: we’re aiming for under three seconds for enterprise dashboards.” |
Common tone problem: sounding like you’re pushing back.
Compare these two versions:
Too blunt: “We can’t do that. What do you even mean by faster?”
Better: “When you say ‘faster’, what exactly are we optimising for?”
The second version keeps the relationship positive while still getting the clarity you need.
Mini task (you’ll do this next).
You’ll rewrite a few questions so they sound calm, professional, and specific. Keep the meaning, improve the tone.
Practice & Feedback
Rewrite the five questions below so they sound like a helpful engineer speaking to Mia in the same kick-off call. Keep each one as a single sentence.
Tips:
Use at least three phrases from the table above.
Add specific options (load time vs responsiveness, enterprise vs all users) to make your questions easier to answer.
Keep a collaborative tone (no sarcasm, no blame).
After rewriting, add one extra question of your own that would be useful in this situation (for example, about constraints or priority).
Rewrite these blunt/vague questions:
"What do you mean by faster?"
"Who is complaining?"
"How fast do you want it?"
"Why is this important right now?"
"Do we have a deadline or not?"
3. Pinning down success criteria and trade-offs politely.
Let’s go one level deeper: success criteria and trade-offs. Product often starts with a feeling, like “it’s slow”, but engineers need a definition. The tricky part is that asking for numbers can sound like you’re refusing to help. So we’ll practise framing those questions as a way to deliver value.
We’ll also add one more professional move: mentioning trade-offs without sounding negative. At B2 level, you can say, “If we prioritise X, we may need to de-prioritise Y,” which is a calm way to manage scope.
In this block you’ll work with a short set of engineer lines from the same call. Your task is to choose the best follow-up questions and then write two of your own. Focus on measurable language: targets, baselines, and what “good” looks like. Imagine you’re preparing to investigate and you need clarity before you spend time optimising the wrong thing.
From “complaints” to a measurable goal.
A request like “make it faster” becomes actionable when you can answer these questions:
Baseline: What is the current performance (even roughly)?
Target: What number is “good enough”?
Scope: Which pages, which endpoints, which user segments?
Constraints: Deadline, release windows, areas we must not change.
You don’t need all of these in the first call, but you should leave with a clear direction and next steps.
Useful performance wording (B2, non-jargony).
You can be technical without overwhelming Product:
“initial load time” (easy to understand)
“responsiveness after it loads” (also clear)
“time to first meaningful view” (more precise, still stakeholder-friendly)
“under three seconds” / “within five seconds” (simple targets)
Trade-offs without drama.
When scope is unclear, you can introduce prioritisation gently:
“If we prioritise X, we may need to de-prioritise Y.”
“We can aim for an improvement by the QBR, even if it’s not perfect.”
“Let’s capture the open questions and come back with options.”
This language signals progress while acknowledging limits.
Mini decision practice.
Imagine Mia answers quickly and you have to choose your next question.
Scenario: Mia says, “It feels slow for enterprise customers.”
What’s the best next move?
Ask for a target number?
Ask for the most painful user flow?
Ask about constraints like the QBR?
In real calls, you often combine two of these: for example, you can ask for a target and then confirm the deadline.
Your task next.
You’ll select strong follow-ups and write two of your own. Keep them short, clear, and collaborative.
Practice & Feedback
You are the engineer on the same call with Mia. Choose the best 4 follow-up questions from the list (write the numbers). Then write two original questions of your own.
Your questions should help you define success criteria and constraints for “faster dashboard”. At least one of your original questions must include an alignment check (for example, “Just to check I’ve understood…”). At least one must address priorities or trade-offs (for example, “If we prioritise X…”).
Write in full sentences, as if you’re speaking on a call.
Follow-up question options:
"Can you send me the full codebase so I can investigate?"
"Do you have a target number we can aim for, like time to first meaningful view?"
"Which user group is most affected: everyone, or mainly enterprise accounts with large datasets?"
"Are there any constraints we should be aware of, like an upcoming demo or release?"
"Can you guarantee the backend isn’t the problem?"
"What would a good outcome look like for you in two weeks?"
"Could you tell the stakeholders to stop using the dashboard for now?"
"What’s the business impact if we don’t address this now?"
"So basically you want everything to be faster everywhere, right?"
"If we prioritise dashboard load time, are you comfortable de-prioritising other UI improvements for this sprint?"
4. Turning the call into ticket-ready notes.
So far you’ve practised what to ask. Now let’s practise what to write afterwards, because this is where good engineering communication really shows. A kick-off call is only useful if the team can act on it later. That means capturing the problem statement, the current understanding of success, and the open questions.
In this block you’ll read a draft ticket comment from an engineer. It’s realistic: it’s quite good, but it has a few gaps and a couple of vague phrases. Your job is to review it like you would in a real team, and improve it.
As you read, look for three categories: what is already clear, what is missing, and what is phrased too vaguely. Then you’ll rewrite a short section so it’s more measurable and more aligned with Mia’s goal. This is exactly the kind of writing that stops back-and-forth in Slack.
Why written follow-up matters.
After a call, people remember different things. A clear ticket comment reduces ambiguity by making three things explicit:
What we heard (current shared understanding).
What we’re aiming for (success criteria and constraints).
What we still need to confirm (open questions + next steps).
When you write it well, you sound organised and collaborative, and you protect the team from scope creep.
A practical structure you can reuse.
A simple template that works well in Jira or meeting notes:
Problem statement: what’s slow, for whom, and what happens.
Goal / success criteria: ideally a target number or a clear definition.
Constraints / timeline: anything time-based (for example QBR in two weeks).
Next steps: what you will check, and what Product will confirm.
Open questions: clearly listed, so they don’t get lost.
What “good” looks like.
Compare these two:
Vague: “We should improve performance soon.”
Ticket-ready: “Goal: reduce initial dashboard load time for enterprise accounts to ~3 seconds (TBC) before the QBR in two weeks.”
Notice the difference: it names who is affected, what should improve, a measurable target, and why it matters now.
Your reading task.
On the next panel you’ll read a draft comment. It includes some strong phrases, but also some unclear bits.
As you read, ask yourself:
Is “faster” defined (load vs responsiveness)?
Are affected users clearly named?
Is there any target number or measurable goal?
Are next steps and open questions separated?
Then you’ll improve it with more precise wording.
Practice & Feedback
Read the draft Jira comment. Then do two things:
Write three short bullets: what is already clear and useful in the comment.
Rewrite the Goal / Success criteria part (2–4 sentences) so it is more measurable and aligned with Mia’s needs.
Keep the same situation: enterprise users, initial load time, and the QBR in two weeks. If you add something that is not confirmed, label it clearly as TBC (to be confirmed). Use at least two phrases from our chunk bank (for example, “Just to check I’ve understood…” or “So the success criteria would be…”).
Draft Jira comment (post-call):
"Quick summary from the kick-off with Mia. The dashboard feels slow and we should make it faster. It seems worse for enterprise customers. We probably need to look at load time and maybe the API.
Goal / Success criteria: Improve dashboard performance so it feels better. Ideally it should load quicker for users.
Constraints: There is a customer meeting coming up soon.
Next steps: I’ll investigate current metrics and see what the bottlenecks are. Mia will check with stakeholders on expectations.
Open questions: What does faster mean exactly? Which dashboards are the worst?"
5. Slack chat simulation with Mia to confirm details.
Next, we’ll practise the real-world follow-up that often happens after the call: a quick Slack chat with Product. This is where you confirm the last missing pieces, such as the target metric, which dashboard pages are in scope, and whether a partial improvement is acceptable.
The challenge in chat is being concise but still polite and precise. You don’t want a long essay, but you also don’t want a vague, “Any updates?” message. You’ll use the same patterns we’ve been practising: offer options, ask for a target number, and do a short alignment check.
In the activity, you’ll have a mini conversation with Mia. You’ll write your messages as the engineer, and Mia will respond. Keep each message short, like 1–3 sentences. Aim for calm, helpful energy. You’re moving the work forward, not defending yourself.
Chat is not a different skill: it’s the same skill, just shorter.
In Slack, you still need:
a clear question
enough context so Mia can answer quickly
a polite, collaborative tone
Quick message patterns that work well.
Here are compact patterns that sound professional:
“Quick check: when you say ‘faster’, are we optimising for initial load time, or responsiveness after it loads?”
“Do you have a target number we can aim for (for example, time to first meaningful view)?”
“Which user group is most affected: everyone, or mainly enterprise accounts?”
“Are there any constraints we should be aware of, like the QBR in two weeks?”
“Just to check I’ve understood: the goal is…, ideally … seconds. Is that right?”
How to mention trade-offs without sounding negative.
In chat, keep it neutral and practical:
“If we prioritise X for the QBR, we may need to de-prioritise Y for now. Are you OK with that?”
Notice how this invites a decision instead of complaining.
Your simulation.
You’ll have a short back-and-forth with Mia.
Context: You already had the call. You’re now writing in the ticket thread in Slack to confirm specifics.
Try to achieve these outcomes in the chat:
Confirm what “faster” means.
Confirm a target or at least a “good outcome” definition.
Confirm constraints and what “good enough for the QBR” means.
End with a next step and ownership (who will do what).
Practice & Feedback
Write a chat-style conversation with Mia.
Format it like this:
You: ...
Mia: ...
Write 4 messages from you (the engineer). After each of your messages, also write what you think Mia would reply (so the conversation has 8 short lines in total).
Keep each line to 1–3 sentences. Use at least four phrases from the chunk bank (for example: “When you say ‘faster’…”, “Do you have a target number we can aim for?”, “Are there any constraints we should be aware of?”, “Just to check I’ve understood…”, “If we prioritise X, we may need to de-prioritise Y.”). End the chat with a clear next step and agreement to capture it in the ticket.
Chat context reminders:
The initial complaint is: "The dashboard feels slow."
It’s mainly enterprise customers with large datasets.
There is a QBR in two weeks.
A rough target mentioned in the call was "under three seconds" (not fully confirmed).
Your goal: confirm details and leave with clear next steps and open questions.
6. Final artefact: post a clear summary in the ticket.
You’ve done the hard part: you clarified meaning, impact, constraints, and you practised confirming details in chat. Now you’ll produce the end-of-lesson artefact: a short written summary you could genuinely paste into a Jira ticket or meeting notes.
Your summary needs to do three jobs at once. First, it should state the problem clearly and calmly, without blame. Second, it should capture the success criteria as far as they’re known, including anything that is still to be confirmed. Third, it should show the team what happens next: who will do what, and what questions are still open.
Aim for a compact, readable structure. Imagine someone joins the project tomorrow and reads only your comment. They should understand what “faster dashboard” means, why it matters, and what the next actions are. That is professional engineering communication at B2 level: clarity under ambiguity.
Your ticket comment: simple, measurable, actionable.
A strong summary comment is not long. It is structured.
Use this format (you can copy it):
Problem statement (2–3 sentences)
What is slow?
For whom?
What is the user impact?
Success criteria (2–4 bullet points)
Prefer a number, but you can also define “good outcome”.
Label anything uncertain as TBC.
Constraints / priority (1–2 bullets)
Include the QBR timing and what “good enough” means.
Next steps (3–5 bullets with owners)
Make actions explicit.
Open questions (2–5 bullets)
Only include questions that matter for delivery.
Language to recycle (use it naturally).
Try to include some of these phrases in your own words:
“When you say ‘faster’, what exactly are we optimising for?”
“Do you have a target number we can aim for?”
“Are there any constraints we should be aware of?”
“Just to check I’ve understood: the goal is…”
“So the success criteria would be…”
“If we prioritise X, we may need to de-prioritise Y.”
“Let’s capture the open questions and come back with options.”
“I’ll summarise this in the ticket so we’re aligned.”
Mini rubric (self-check before you post).
Before you submit your comment, check:
Specific: Do you say load time vs responsiveness?
Measurable: Is there at least one metric or target (even if TBC)?
Scoped: Do you name which users and (if known) which dashboards?
Actionable: Are next steps and owners clear?
Tone: Does it sound calm and collaborative?
Now write your final comment.
Practice & Feedback
Write your final Jira/ticket comment (about 140–200 words). Use the structure on the screen:
Problem statement
Success criteria
Constraints / priority
Next steps (with owners)
Open questions
Keep it in the same scenario with Mia: enterprise customers, slow initial dashboard load, QBR in two weeks, and a possible target of under three seconds (mark as TBC if needed). Include at least five phrases from the chunk bank, but make the comment sound natural rather than forced.
When you finish, read it once and check the mini rubric: specific, measurable, scoped, actionable, and calm tone.
Key facts you can safely include:
Stakeholder request: "Make the dashboard faster."
Current understanding: main issue is initial load time.
Most affected users: enterprise customers with large datasets.
Timeline: customer QBR in two weeks; improvement is valuable even if not perfect.
Target: "under three seconds" was mentioned, but can be marked TBC.
Optional (mark TBC if you include): which specific dashboards are worst; current baseline metrics; whether API latency is a suspected contributor.