English for Software Engineers and IT Teams. Lesson 11.
During an incident, technical work is only half the job. The other half is communication that maintains trust. In this lesson you will write stakeholder-safe updates for a live incident at Northbank Digital. You need to be factual, time-based, and clear about next steps, without overpromising or sharing internal speculation.
You will study two model updates: one written for engineers in the incident channel, and one written for non-technical stakeholders. You will notice how the wording changes: more detail for the team, more emphasis on impact and reassurance for stakeholders. You will practise setting an update cadence, naming what is being investigated, and stating what users should do (if anything). You will also practise apologising appropriately: accountable but not dramatic.
Your artefact is a three-part update set: internal status, stakeholder update, and a short recovery message once service stabilises, including what comes next.
1. The incident starts: internal status update.
You’re on duty at Northbank Digital, and we’ve got a live incident. In moments like this, the technical work is obviously urgent, but clear communication is what keeps the team coordinated and reduces panic. In this lesson, you’ll practise writing updates that are factual, time-based, and calm.
In this first block, we’ll stay inside the incident channel. Your audience is engineers: SRE, backend, and support. That means you can include operational detail, but you still need structure: what’s happening, who is affected, what we’re doing, and when the next update will be. Notice as well how careful language avoids guessing. You can share a ‘likely cause’ as long as you label it as unconfirmed.
Listen to the model update and focus on three things: the time stamp, the impact statement, and the next update time. Then you’ll answer a few quick questions and write a short internal update of your own.
Situation: Live incident at Northbank Digital.
It’s 09:42 (UK time). Our B2B SaaS platform is experiencing elevated 5xx errors on the public API. Customers are reporting failed exports and intermittent timeouts in the dashboard. The incident channel is active, and people need a clear, steady update they can act on.
In the short audio you’ll hear in this block, the incident lead posts an internal update for engineers. This kind of update should be:
Time-based: “as of 09:45” rather than “currently”.
Factual: what we know, what we don’t know yet.
Action-oriented: what’s being investigated and by whom.
Cadenced: when the next update will land.
What “good” looks like (internal update).
A strong internal update usually contains:
Status + time: “Status update as of 09:45”.
Impact: what’s broken and how severe it is (briefly).
What we’re doing: investigation thread(s) and mitigation.
Next update time: a clear commitment to communicate.
Language you can reuse today.
Here are a few high-value phrases from the lesson chunk bank that fit this moment:
“Status update as of [time]: …”
“We’re investigating an issue affecting…”
“Current impact: …”
“Mitigation in progress: …”
“We’ve identified a likely cause, but we’re still confirming.”
“We’ll provide the next update by [time].”
In the activity below, you’ll show you understood the model update, then you’ll draft a short internal message with the same structure.
Practice & Feedback
Listen to the internal incident update. Then do two things:
Answer the comprehension questions in short bullet points (you can write one line per question):
What time is the update?
What is the current impact?
What are the investigation threads (what are people checking)?
When is the next update promised?
Write your own one-paragraph internal update (3–5 sentences) for the incident channel, using the same situation. Include a time stamp, impact, what’s in progress, and the next update time. Keep it calm and factual.
2. Same incident, different audience: stakeholder update.
Now we’ll take the same incident, but change the audience. Internally, engineers want operational detail: what you’re checking, what metrics are moving, what mitigation is in flight. Stakeholders, however, usually need something different: impact in plain language, reassurance that it’s being handled, and clear expectation setting. If you include too much technical detail, it can create confusion or unnecessary alarm.
In this block you’ll read a model stakeholder update. As you read, look for what has been removed compared with an engineering update: there are fewer systems named, fewer hypotheses, and less “inside baseball”. Also notice what gets added: a clear apology, guidance for affected users (if any), and a next update time so people don’t chase you in DMs.
Your job is to spot the differences and then rewrite a few lines so they sound stakeholder-safe: factual, calm, and easy to scan.
Internal vs stakeholder-safe: what changes?
You’re still in the same incident. The difference is who is reading.
Engineers need enough detail to coordinate investigation and mitigation.
Non-technical stakeholders (Customer Success leadership, Sales, senior management, sometimes customers) need impact + status + next steps, with minimal speculation.
A stakeholder-safe update is not “less honest”. It’s more intentional: you share what they need to make decisions, without turning the update into a debugging transcript.
Model stakeholder update (read it like a mini press release).
As you read, notice:
Plain impact language: “some users may see errors” rather than “elevated 5xx”.
No guesswork: you don’t mention the “likely cause” unless it’s confirmed.
Clear expectation setting: next update time, what users should do.
Appropriate apology: accountable, not dramatic.
Useful patterns to borrow.
You can recycle the lesson phrases, but tailor them:
“Status update as of [time]: …” (still useful!)
“Current impact: …” (translate into user language)
“At this stage, we expect…” (careful, no overpromising)
“We’ll provide the next update by [time].” (very stakeholder-friendly)
“If you are affected, please…” (only if you have real guidance)
In the activity, you’ll answer a couple of questions about the model and then rewrite two lines into a stakeholder-safe version.
Practice & Feedback
Read the stakeholder update. Then:
Answer these questions (2–4 short sentences total):
What is the impact described, in plain English?
What commitment is made about the next update?
Rewrite the two “too technical” lines below into stakeholder-safe English:
“We’re seeing elevated 5xx and timeouts on the API tier.”
“Likely cause is connection churn after the deploy.”
Keep your rewrites calm, factual, and non-speculative. You can say you are investigating, but don’t guess.
Stakeholder update draft (09:50).
Status update as of 09:50 (UK time): We’re investigating a service issue affecting some customers using the Northbank platform.
Current impact: Some users may experience errors or slow responses when loading the dashboard or running exports.
What we’re doing: Engineering is actively investigating and applying mitigations to restore normal service.
Next update: We’ll provide the next update by 10:10.
We’re sorry for the disruption this is causing. If you are affected, please retry after a few minutes. We’ll share further guidance if needed.
3. Set an update cadence and structure every message.
Let’s make your updates easier to write under pressure. The trick is to use a repeatable template, so you’re not inventing the structure each time. When you have a cadence, you also reduce noise: people stop asking “any update?” because they know one is coming. That’s good for focus, and it’s good for trust.
In this block, we’ll work with a simple pattern: status as of [time], current impact, what’s being done, and next update time. You’ll also see a helpful extra line that often improves clarity: what users should do, if anything. If there’s no action, it’s perfectly fine to say “No action required at this stage.”
You’ll practise by filling in a semi-complete update. I’ll be looking for: clear times, measurable impact where possible, and language that avoids false certainty. Keep it short, like something you’d genuinely post in Slack or an incident tool.
The cadence habit: fewer messages, better messages.
When incidents run hot, communication can become chaotic: people post fragments, repeat rumours, and ask the same questions in DMs. A simple update cadence is a professional way to take control.
A cadence means:
You commit to a rhythm (e.g., every 20 minutes).
Each update is structured the same way.
If you don’t have new information, you still update: “No material change; investigation continues; next update at…”
A practical template you can reuse.
Below is a lightweight template that works for both internal and stakeholder updates. The only thing that changes is the amount of detail.
Line 1: “Status update as of [time]: …” (anchors the message in time)
Line 2: “Current impact: …” (stops ambiguity and rumour)
Line 3: “Mitigation / investigation: …” (shows action without over-explaining)
Line 4: “Next update: …” (sets expectations and reduces pings)
Line 5 (optional): “User action: …” (prevents support churn and repeated questions)
Examples of “no change” language (very useful!).
If you’re still investigating, you can say:
“No material change since 09:45; mitigation continues and we’re monitoring.”
“We’re still confirming the root cause; we’ll update again by 10:25.”
“Service remains degraded for some users; we’re continuing to apply mitigations.”
Notice how these phrases are honest, calm, and time-based.
In the activity, you’ll complete a draft update and choose an appropriate cadence.
Practice & Feedback
Complete the incident update draft below. Write 5–7 lines, each starting with one of the labels given (Status / Current impact / Mitigation in progress / Next update / User action).
Use UK time and keep the incident the same (API errors, dashboard exports). Choose a realistic cadence: for example, 15–25 minutes. If you are not sure what users should do, it’s OK to write “No action required at this stage.”
Aim for calm, factual language. Avoid promises like “will be fixed in 10 minutes” unless you’re certain.
Draft to complete.
Status update as of 10:05: …
Current impact: …
Mitigation in progress: …
Next update: …
User action: …
4. Apologise appropriately and manage uncertainty.
Incidents are stressful, and it’s tempting to sound either too dramatic or too defensive. Strong incident communication sits in the middle: you acknowledge the disruption, you show ownership, and you stay measured. That’s where trust comes from.
In this block we’ll focus on two things: apologising appropriately and signalling uncertainty responsibly. The key is to separate facts from hypotheses. You can say “we’ve identified a likely cause, but we’re still confirming” internally. For stakeholders, you often say “we’re investigating the cause” until it’s confirmed.
We’ll also practise expectation setting. Phrases like “At this stage, we expect…” are useful, but only when you keep them careful and conditional. If you’re not sure, say so, and give a next update time instead of a promise.
Listen to the short lines. Then you’ll rewrite a few risky sentences into calm, accountable English that would be safe to send during a live incident.
Accountability without drama.
A good apology during an incident should be:
Clear: one sentence is often enough.
Accountable: you don’t blame users or other teams.
Not emotional: avoid “we’re devastated” or “this is unacceptable” unless you’re writing a formal statement.
A reliable, professional line is:
> “We’re sorry for the disruption this is causing.”
You can also add reassurance:
> “We’re actively investigating and working to restore normal service.”
How to talk about uncertainty (without sounding vague).
Uncertainty is normal. What matters is labelling it.
Useful patterns:
“What we know so far is…”
“We don’t know yet whether…”
“We’ve identified a likely cause, but we’re still confirming.”
“At this stage, we expect…” (conditional, not a promise)
Avoid these common traps.
Here are risky phrases and why they’re risky:
“It should be fixed soon.” (No time anchor; sounds casual.)
“This won’t happen again.” (Overpromising.)
“We think it’s the database.” (Sounds like guessing; label as unconfirmed.)
“It’s not our fault.” (Erodes trust.)
Better alternatives.
“Service remains degraded for some users; mitigation is in progress.”
“We’re still confirming the root cause; next update by 10:30.”
“We believe X may be contributing, and we’re validating.” (internal)
In the activity you’ll upgrade a few sentences so they sound calm, credible, and safe.
Practice & Feedback
Listen to the short sentences from an incident thread. Some are good, some are risky.
Your task:
Choose two sentences that are risky for stakeholders and explain briefly (one short line each) why they are risky.
Rewrite three of the sentences into stakeholder-safe English. Keep them short (one sentence each), calm, and time-based where possible.
Stay in the same incident: API errors affecting dashboard and exports.
5. Live Slack simulation: answer questions in the incident thread.
Now let’s simulate the messy part: people asking questions in Slack while you’re trying to keep the channel calm and useful. This is where strong incident communicators stand out. You don’t ignore questions, but you also don’t get pulled into speculation. You give what you can: facts, what’s being investigated, what’s next, and when the next update will arrive.
In this block, you’ll write as the incident lead in the internal channel. I’m going to play three short messages you receive from colleagues: one from Customer Support, one from an engineer, and one from an engineering manager. Your replies should be short and structured. If you don’t know something, say you’re confirming, and commit to an update time.
Think of your replies as “mini-updates”: each one should reduce uncertainty, not increase it. Use the phrases you’ve seen: current impact, mitigation in progress, still confirming, next update by. Keep the tone steady and professional.
Chat behaviour during incidents (internal).
In an active incident thread, your goal is to keep communication:
Centralised (in the channel, not in private messages)
Calm (no blame, no panic)
Useful (facts + actions + time)
Patterns that work well in Slack replies.
You can reply to questions without writing an essay. Try these building blocks:
“As of [time], current impact is…”
“We’re investigating [area]; mitigation is in progress.”
“We don’t have confirmation yet on [X].”
“Next update by [time].”
“If you’re speaking to customers, please use this wording: …”
A quick note on “customer-facing wording”.
Even in the internal channel, Support will ask what they should tell customers. Give them a short, safe line they can paste:
> “We’re aware of an issue affecting some users. Engineering is investigating and we’ll share another update by [time].”
Keep it generic unless you’re 100% sure.
In the activity, you’ll respond to three Slack messages. Treat it like a real incident thread: short, calm, and time-based.
Practice & Feedback
You are the incident lead in #incidents-api. Listen to the three incoming Slack messages.
Write three separate replies, labelled A), B), and C). Each reply should be 2–4 short sentences.
Requirements:
Include at least one time reference in each reply (either “as of 10:15” or “next update by 10:30”).
Avoid speculation. If you don’t know, say you’re confirming.
For message A (Support), include a one-sentence “pasteable” line they can use with customers.
Keep the incident consistent: API errors affecting dashboard and exports.
6. Final artefact: internal, stakeholder, and recovery messages.
Time to produce the deliverable for this lesson: a three-part set of updates you could genuinely send at work. You’ll write one internal status update for engineers, one stakeholder-safe update, and then a short recovery message once service stabilises. This mirrors real incident comms: you keep the team aligned, you keep stakeholders informed, and when things improve you close the loop with what happens next.
As you write, keep the discipline you practised: time stamps, impact, what’s being done, and the next update time. For the stakeholder update, keep the language plain and avoid mentioning an unconfirmed root cause. For the recovery message, don’t declare victory too early. Say that metrics are trending back to normal and you’re monitoring closely. Then set expectations: a post-incident review will follow.
When you finish, read your three messages and ask yourself: could someone act on this without asking you another question? That’s the standard we’re aiming for.
Your three-part update set (copy-and-paste ready).
You’re still in the same Northbank Digital incident. Assume it’s now 10:32 UK time.
The error rate has dropped after mitigation.
Some customers may still see occasional slow responses.
The team is monitoring and validating stability.
Your job is to write three messages with the right audience fit.
1) Internal status update (engineers).
Include a little more operational detail: what’s being monitored, what mitigation happened, and what you’re still confirming.
Include:
“Status update as of …”
“Current impact: …”
“Mitigation in progress / completed: …”
“Next update by …”
2) Stakeholder update (non-technical).
Translate the impact and keep it safe. Do not add internal speculation.
Include:
Plain-English impact
Reassurance: investigating / mitigating
Next update time
One-sentence apology (appropriate, not dramatic)
3) Recovery message (once service stabilises).
This is the “we’re coming out of it” message.
Useful chunk-bank lines to reuse:
“Service is recovering and metrics are trending back to normal.”
“We’ll continue monitoring closely.”
“A full post-incident review will follow.”
Mini rubric (self-check).
Before you send, quickly check:
Time-based: do all three messages include clear times?
Audience fit: internal has a bit more detail; stakeholder is plain and safe.
Expectation setting: next update time is present where relevant.
Tone: calm, accountable, no blame, no overpromising.
Now write your three messages in the activity.
Practice & Feedback
Write your final three-part update set. Format exactly like this:
1) Internal update (engineers): 4–6 sentences.
2) Stakeholder update: 4–6 sentences.
3) Recovery message: 3–5 sentences.
Keep the incident consistent (API errors affecting dashboard and exports). Use UK time. Assume the internal update is at 10:32, the stakeholder update is at 10:35, and the recovery message is at 10:55.
Include “next update by …” in messages 1 and 2. In the recovery message, instead of a cadence, focus on monitoring and what comes next (for example, post-incident review).
Reference phrases (use any that fit).
Status update as of [time]: …
We’re investigating an issue affecting…
Current impact: …
Mitigation in progress: …
We’ve identified a likely cause, but we’re still confirming. (internal)
We’ll provide the next update by [time].
We’re sorry for the disruption this is causing.
At this stage, we expect… (be careful)
If you are affected, please…
Service is recovering and metrics are trending back to normal.