Better Communications Reporting Starts with an Objective
If you work in communications, you’ve probably lived some version of this scene:
A stakeholder leans back, squints at your report and says, “So… what’s the main KPI here?”

They don’t mean to be difficult. They’re looking for a simple way to judge whether communications is working. One metric. One scoreboard. Something tidy they can repeat to their boss without breaking a sweat.
And honestly? Fair.
The problem is that communications doesn’t have one universal scoreboard. If you work in media relations, it’s easy to just count media hits, but communications serves different objectives—different ones depending on what the organization needs most at the moment. So it doesn’t even make sense to debate “media coverage vs. conversions.” The better question is:
What are we trying to change—and what evidence would convince a reasonable person that it’s changing?
Once you start there, the KPI conversation gets a lot easier. And a lot less weird.
The Reporting Trap Communicators Keep Falling Into
A lot of communications reporting, particularly in media relations, still looks like a “stuff we did” list. Here are the hits. Here are the impressions. Here are the top outlets. Here’s a sentiment chart that suggests math can read the room.
Then comes the inevitable follow-up: “OK… but did any of this matter?”
That question isn’t annoying. It’s the question.
The problem isn’t that media coverage is irrelevant. The problem is that we often present media coverage as if it automatically equals impact. It doesn’t. Coverage is an output. A useful one, often. But an output nonetheless.
Media coverage matters because it can do things: build legitimacy, shift perception, change what people search for, reduce confusion, improve compliance with guidance, build public trust, drive participation in programs, strengthen donor confidence, help a community understand a decision or protect a reputation when things get spicy. But you only get to claim those benefits if you can connect the work to an objective and show evidence that things moved in the direction you intended.
That connection is what most measurement in communications is missing.
The Simplest Model That Keeps You Honest (and Makes Your Reporting Clearer)
If you want a measurement approach that works across industries—and doesn’t require you to invent “comms math”—use this mental model:
Outputs → Outcomes → Impact
Outputs are what communications directly produces: media coverage, briefings, interviews, stakeholder materials, web and social content, speaking opportunities, contributed pieces, strong backlinks, spokesperson prep.
Outcomes are what changes because those outputs exist: improved message pull-through, a shift in share of voice, branded search lift, higher-quality traffic, better-informed stakeholders, fewer repeat questions to frontline staff, more people using the right forms, fewer misconceptions spreading, more community partners referencing your materials, and so on.
Impact is the business result leadership ultimately cares about: better uptake of public services, increased attendance at health clinics, improved vaccination appointment completion, safer behaviour during emergencies, stronger trust and credibility, improved relationships with Indigenous partners and community groups, successful recruitment into a program, higher participation in public consultations, fewer FOI headaches caused by misunderstandings, reduced reputational damage, better staff morale after a tough issue, or fundraising success for a nonprofit.
Here’s the key: Communications typically has strong control over outputs, meaningful influence over outcomes, and shared influence over impact. That’s not a weakness. That’s how trust and public understanding work. The mistake is pretending communications can claim direct ownership of impact in the same neat way an ad campaign can.
When you’re clear about which layer you’re reporting—and you’re honest about it—the whole “prove ROI” conversation gets calmer. You can show results without overclaiming. You can be persuasive without being slippery.
What Should You Measure? Start with the Goal, Not the Metric
A lot of KPI drama happens because we pick metrics first and then try to reverse-engineer a story that makes them sound important.
Flip it.
Pick the primary objective—one primary objective—and then choose a small set of metrics that actually match it.
If the goal is awareness or consideration, media relations and visibility metrics make sense. But “visibility” doesn’t mean “anywhere and everywhere.” It means in the places that matter to the audience you’re trying to influence, in a context that helps them understand what you stand for. In those cases, measures like quality placement volume, a disciplined share-of-voice view, and branded search trends can tell a coherent story over time.
If the goal is credibility, the most important question becomes: Are you being treated as a serious source? Not “how many hits did we get,” but “what kind of hits, in what outlets, saying what about us?” This is where message pull-through, quote quality and the relevance of the coverage matter more than reach numbers. You’re measuring whether the public, stakeholders or peer institutions are granting you legitimacy—not just noticing you.
If the goal is engagement and participation—very common in government, higher ed, nonprofits and healthcare—the measurement needs to reflect that. Maybe you’re trying to increase attendance at a public information session. Maybe you’re trying to get more people to complete a survey or participate in a consultation. Maybe you’re trying to increase uptake of a program, reduce no-shows at clinics, get people to follow new guidance or recruit participants for a research study.

In those cases, your indicators might include click-throughs to official info pages, completion rates for key forms, attendance numbers, repeat attendance, fewer “how do I…” calls to service desks, or improvements in the quality of questions you’re receiving (a subtle but very real sign people are better informed).
If the goal is discoverability, digital communications and media relations overlap. Earned media and authoritative web mentions can improve how your organization is represented across the web, strengthen credibility signals and make it easier for people to find the right information quickly—especially during high-stakes moments. This doesn’t mean you should become an SEO department. It does mean you can track signals like referring domain growth and the strength and relevance of earned links, particularly when misinformation or confusion is part of the risk landscape.
And if the goal is reputation defence or crisis stability, measuring communications by “coverage volume” can be actively misleading. The question is whether an issue escalated or stabilized, whether misinformation spread or got corrected, whether stakeholders felt informed and reassured, and whether response speed and clarity helped contain risk. In that world, “success” can look like fewer headlines, not more.
Notice what’s happening: We didn’t pick one “main KPI.” We built a measurement approach that fits the job we’re doing.
That’s the happy middle ground between “coverage is everything” and “only conversions matter now.”
How to Make Communications More Measurable Without Pretending It’s Ads
You can make measurement stronger with a few practical habits.
First, use tracking where you can. Not everywhere, not obsessively, but thoughtfully. UTMs for placements you control (contributed content, partner posts, newsletters). QR codes for print and events. Dedicated landing pages for campaigns that are big enough to justify them.
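If your team builds tagged links by hand, a small helper keeps the parameters consistent. This is a minimal sketch: the UTM parameter names (utm_source, utm_medium, utm_campaign) are the standard ones recognized by common web analytics tools, but the example URL and naming conventions here are hypothetical—adapt them to whatever taxonomy your analytics setup already uses.

```python
from urllib.parse import urlencode

def utm_url(base_url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters so visits from a specific
    placement can be identified in web analytics."""
    params = urlencode({
        "utm_source": source,      # where the link lives, e.g. a partner newsletter
        "utm_medium": medium,      # channel type, e.g. "email" or "earned"
        "utm_campaign": campaign,  # the initiative you're reporting on
    })
    return f"{base_url}?{params}"

# Hypothetical example: a contributed piece linking to a program page
link = utm_url("https://example.org/program", "partner-newsletter", "email", "spring-consultation")
print(link)
```

The payoff isn’t the link itself—it’s that every placement in your report can be traced to its own slice of traffic, using the same naming scheme every time.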
Second, measure “assists,” not just last click. In public-sector and institutional work, the “conversion” is often a behaviour: people using the right service, showing up to something or understanding a decision well enough not to panic. Communications influence can show up as fewer repeat questions, fewer errors in applications, fewer people showing up at the wrong place at the wrong time or fewer staff hours spent clarifying basics. Those are real outcomes, even if they don’t fit neatly in a funnel.
Third, use lift thinking. Before/after windows. Trendlines. Comparative periods. Benchmarks where it makes sense. You’re not claiming communications “caused” every movement. You’re showing directional evidence that your work coincides with shifts in the outcomes you care about.
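The arithmetic behind lift thinking is deliberately simple. A sketch, using hypothetical numbers, of the before/after comparison described above:

```python
def percent_lift(before: float, after: float) -> float:
    """Directional lift between two comparable periods.
    Positive means the metric moved up after the work; this shows
    correlation in time, not proof that communications caused it."""
    if before == 0:
        raise ValueError("Need a non-zero baseline period to compare against.")
    return (after - before) / before * 100

# Hypothetical: branded searches in the four weeks before vs. after a campaign
lift = percent_lift(before=1200, after=1500)
print(f"{lift:.1f}% lift")  # 25.0% lift
```

The comment in the code is the important part: you report the number as directional evidence alongside the comparison window, not as a causal claim.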
And yes—qualitative evidence still matters. It just needs to be handled responsibly. Community partner feedback, stakeholder emails that show improved understanding, journalists framing your position accurately, front-line staff reporting fewer misconceptions, or people using your language back to you—these are real signals. They shouldn’t replace data. But they can support the story in ways that raw counts can’t.
A Practical Reporting Structure That Won’t Make People Hate Your Reports
If you want your reporting to feel more strategic immediately, structure it like this:
Start with the objective (one sentence). Explain the strategy (two sentences). Report a small set of metrics that match the objective (not a buffet). Explain what changed and why. End with what you’re going to do next.
That’s it. Simple. Calm. Defensible.
The real measurement skill in communications is knowing what you’re trying to change, choosing indicators that match that goal, and telling the truth about what you can prove versus what you can reasonably infer.
Do that well, and the conversation shifts. You stop being the person defending “comms value” with vague claims and pretty charts. You become the person who can say, with confidence:
“Here’s what we’re trying to accomplish. Here’s how we’re measuring it. Here’s what’s moving. And here’s what we’re doing next.”