8 Best Practices for Collecting Data for Your Annual Report
We love sharing best practices for developing surveys because data and metrics are our jam. Without reliable data, you can’t measure real impact—only activity. Whether you’re preparing for your annual report or simply trying to understand how your programs are working, the quality of your surveys matters.
Below, we’ve laid out eight best practices for developing surveys and collecting data, with examples pulled from a fictional nonprofit called CivsAct, whose mission is to encourage more people to vote, attend local council meetings, and actively shape their communities.
1. Start With Your Mission, Not Your Metrics
Before deciding what to track, clarify why you’re tracking it. Effective data collection begins with alignment to your mission, values, or strategic pillars. The goal isn’t to gather a ton of numbers—it’s to prove you’re fulfilling your mission, not just staying busy.
Example: CivsAct has three core pillars: confidence, access, and action. Their surveys ask direct Likert scale questions about each pillar, allowing them to measure true mission fulfillment instead of surface-level outputs.
Confidence: “After participating in this program, I feel confident engaging with local government or speaking at a public meeting.” (Strongly Disagree → Strongly Agree)
Access: “This program provided me with new tools, resources, or connections to participate in civic life.” (Strongly Disagree → Strongly Agree)
Action: “As a result of this program, I am more likely to vote, attend a community meeting, or contact an elected official.” (Strongly Disagree → Strongly Agree)
By centering questions around their pillars and using a consistent scale, CivsAct gathers data that directly reflects their mission and allows for meaningful comparison over time.
2. Be Consistent and Use One Survey
Your organization has one mission—and every program should support it. That means you don’t need a unique survey for every event or department. Instead, create one core, mission-aligned survey tool and use it consistently across programs. Doing this makes your data cohesive, comparable, and meaningful year over year.
Example: CivsAct built a single survey around its core pillars: confidence, access, and action. Whether they’re running a voter registration drive or a youth leadership workshop, they use the same set of core questions. If a program can’t connect back to the survey questions, it’s a red flag that the program may not align with the mission.
3. Design for Completion (Short + Simple)
Survey fatigue is real. People are more likely to complete surveys that are short (10 questions max), clear, and respectful of their time. Behavioral research shows completion rates drop dramatically as surveys get longer.
Example: CivsAct keeps their surveys to a few questions, most using a Likert scale (e.g., “Strongly Agree” to “Strongly Disagree”) for easy completion.
They end with one open-ended question like “What was the most valuable takeaway from today’s event?” This balance gets them actionable data without exhausting participants.
The open-ended responses can then be used as quotes or testimonials, and the Likert-scale responses can be used as data. We like to think of it like a Google review: five stars plus a comment to support the rating.
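For readers who tally results in a script rather than by hand, here is a minimal sketch of that idea in Python. It assumes a conventional 1–5 coding of the Likert scale and uses hypothetical question names and responses; it is not CivsAct's actual tooling.

```python
# Minimal sketch: summarizing Likert responses the way you'd summarize star ratings.
# Assumes a conventional 1-5 coding (Strongly Disagree = 1 ... Strongly Agree = 5).
# Question names and responses are hypothetical.

from statistics import mean

LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = [
    {"confidence": "Agree", "access": "Strongly Agree", "action": "Agree",
     "takeaway": "I finally know how to find my council meeting schedule."},
    {"confidence": "Strongly Agree", "access": "Agree", "action": "Strongly Agree",
     "takeaway": "I registered two neighbors to vote."},
]

# Average each mission pillar on the shared 1-5 scale.
for pillar in ("confidence", "access", "action"):
    scores = [LIKERT[r[pillar]] for r in responses]
    print(f"{pillar}: average {mean(scores):.1f} / 5 (n={len(scores)})")

# Open-ended answers become the quotes and testimonials for the report.
quotes = [r["takeaway"] for r in responses if r["takeaway"]]
```

Averaging within each pillar keeps scores comparable across programs and years, which is exactly what the single-survey approach in tip 2 makes possible.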
4. Use the Outputs vs. Outcomes Lens
Outputs tell you what you did. Outcomes tell you what changed. Great impact reporting answers: What can participants now do because of your program?
Read our Nonprofit Hive Feature on Outputs vs. Outcomes here.
5. Make It About Them, Not About You
Impact data should reflect participants’ experiences and growth, not just your performance. Asking people to share their own progress (Attendee-Centered Design) makes your data richer and more useful.
When surveys focus too heavily on internal metrics (e.g., “How satisfied were you with the speaker?” or “Was the venue comfortable?”), you end up measuring logistics instead of true impact. While operational feedback can be helpful, it doesn’t tell you whether your mission is being fulfilled.
Example: Instead of focusing on logistics (“Was the event room comfortable?”), CivsAct asks:
“What’s one new skill or piece of knowledge you gained?”
“How has your confidence in taking civic action changed since the program?”
This participant-centered approach identifies the “bright spots” to build on in future programming.
6. Follow Up Immediately—and Then Again Later
The best feedback comes in two waves:
Immediately – when experiences are fresh.
1–3 months later – when lasting change can be assessed.
Good survey data isn’t a one-and-done effort. To track true impact, you have to follow the outcomes—and those often reveal themselves well down the timeline.
Example: CivsAct sends a quick survey within 24 hours of each event to capture initial reactions: “As a result of this program, I am more likely to vote, attend a community meeting, or contact an elected official.” (Strongly Disagree → Strongly Agree). They use automated, segmented email campaigns to pair thank-you notes with surveys.
Then, 90 days later, they send a second follow-up asking, “Since attending our training, have you voted, contacted an elected official, or attended a public meeting?” (Yes or No). This second survey uncovers whether their programs created sustained civic action.
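If it helps to see the arithmetic, here is a minimal sketch of how the two waves roll up into report-ready percentages. The counts are hypothetical; this is just the calculation, not a specific survey tool's output.

```python
# Minimal sketch: turning the two survey waves into headline numbers.
# All counts below are hypothetical.

immediate_agree = 42   # chose Agree/Strongly Agree within 24 hours of the event
immediate_total = 50
followup_yes = 18      # answered "Yes" at the 90-day follow-up
followup_total = 30    # not everyone responds to the second wave

intent_rate = 100 * immediate_agree / immediate_total
sustained_rate = 100 * followup_yes / followup_total

print(f"Immediate intent: {intent_rate:.0f}% said they were more likely to act")
print(f"Sustained action at 90 days: {sustained_rate:.0f}% reported voting, "
      "contacting an official, or attending a meeting")
```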
7. Frame Questions Positively
The way you ask a question influences the way participants remember the experience. Positive framing reinforces constructive memories and creates forward-looking energy. This approach reflects a strength-based evaluation model.
There is a time to ask for constructive feedback, but it’s not from your entire general audience right after they’ve completed your program.
You want to make sure your constituents remember your programs positively (improving chances of word-of-mouth promotion) by asking them to reflect positively.
Example: CivsAct asks “What inspired you at this event?” rather than “Please offer any suggestions for the future.”
This small shift prompts participants to focus on what was valuable. When open-ended questions invite criticism too early or too broadly, attendees may fixate on minor negatives that color their overall memory of the event.
Bonus: Those “bright spots” are often the best starting point for improving and evolving future programming.
8. Measure Word-of-Mouth Impact (Net Promoter Score)
Including a Net Promoter Score (NPS) question is one of the simplest and most reliable ways to gauge overall satisfaction and brand affinity. It asks:
“How likely are you to recommend this program to a friend or colleague?”
From a psychological standpoint, asking whether someone would recommend a program taps into both their personal experience and their willingness to publicly associate with it.
Example: CivsAct’s NPS question helps them understand who their strongest advocates are. They use an NPS calculator to measure their score and track it annually.
A rising NPS suggests that participants are not only satisfied but likely to spread the word—a powerful form of organic growth.
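If you'd rather compute the score yourself than rely on an online calculator, the standard NPS arithmetic is simple: on the usual 0–10 recommendation scale, 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch with hypothetical ratings:

```python
# Minimal NPS sketch using the standard 0-10 scale:
# promoters score 9-10, detractors score 0-6, passives 7-8.
# NPS = %promoters - %detractors, giving a number from -100 to +100.

def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses to "How likely are you to recommend this program?"
ratings = [10, 9, 9, 8, 7, 10, 6, 9, 10, 5]
print(f"NPS: {net_promoter_score(ratings):+.0f}")  # prints +40 for this sample
```

Tracking the same calculation annually, as CivsAct does, is what makes a rising or falling score meaningful.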
Bonus: Be Transparent When the Data Isn’t What You Hoped
Not every data point will be glowing, and that’s okay. Being honest about gaps or challenges builds credibility.
Example: CivsAct saw that participants’ confidence wasn’t where they wanted it to be.
Instead of ignoring it, they shared the insight with stakeholders and used it to develop a new peer-mentorship strategy. Their transparency helped build trust with funders and their community.
Final Thought
Annual reports aren’t just about numbers; they’re about stories backed by data. By keeping your surveys mission-aligned, participant-centered, and outcome-focused, you’ll tell a clearer, more compelling story of your real-world impact.
Want us to walk you through this?
(Disclaimer: This content was developed with the assistance of AI tools for editing and refinement. All concepts, strategies, and examples are original and based on prior experience and research. AI was used to enhance clarity, structure, and readability.)