learndirectPathways

Collecting and Analysing Research Data

Podcast episode 57: Collecting and Analysing Research Data. Alex and Sam explore key concepts from the Pearson BTEC Level 4 HNC in Leadership and Management. Full transcript included.

Episode 57 of 80
Unit 6: Marketing Essentials
Pearson BTEC Level 4 HTQ
Hosts: Alex & Sam

Key Takeaways

  • Data collection must follow the methods specified in the PMP; deviating without justification weakens research validity and makes it harder to draw reliable conclusions that can be defended in the project report.
  • Quantitative data is summarised using descriptive statistics (frequency, percentage, mean, median) and presented visually with charts and tables; qualitative data is analysed through thematic coding, grouping responses into themes that answer the research questions.
  • Triangulation strengthens research findings by using multiple data sources or methods to check consistency; a survey finding supported by interview evidence and corroborated by secondary data is significantly more robust than a finding resting on a single source.
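The quantitative summary described in the takeaways above can be sketched in a few lines of Python. The Likert-scale responses here are a hypothetical sample for illustration, not data from the episode:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical Likert-scale survey responses (1 = strongly disagree, 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Frequency: how many respondents chose each rating
frequency = Counter(responses)

# Percentage: each rating's share of all responses
percentage = {rating: 100 * count / len(responses) for rating, count in frequency.items()}

print("Frequency:", dict(sorted(frequency.items())))
print("Mean:", round(mean(responses), 1))
print("Median:", median(responses))
```

The same four measures (frequency, percentage, mean, median) are what a spreadsheet's COUNTIF and AVERAGE functions would give you; the point is that the summary, not the raw responses, goes into the project report.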

Full Transcript

How do you collect reliable data for a research project?

Alex: Welcome to the Leadership and Management podcast. I'm Alex, and today Sam and I are at a pivotal point in the project management journey. The planning is done. Now we're in execution: collecting data and analysing it to produce meaningful findings.

Sam: This is where projects succeed or fail. You can have a beautifully crafted plan, but if the data collection is sloppy or the analysis is superficial, your conclusions won't stand up. Weak data produces weak conclusions, no matter how polished the presentation.

Why is pilot testing a questionnaire important before full deployment?

Alex: Let's start with executing the data collection. You've got your methods planned. What disciplines matter most when you're actually in the field?

Sam: For questionnaires, pilot testing is non-negotiable. You send your questionnaire to three to five people before the main distribution, which reveals ambiguous questions, broken links, timing problems, and technical issues that are invisible to the person who wrote it. Then you send the questionnaire with clear instructions: what's the purpose, how long will it take, how will responses be used, and importantly, that participation is confidential. Set a hard deadline. And send a follow-up reminder, because response rates typically double with just one reminder. Monitor your responses and don't close the survey too early.

What is the difference between quantitative and qualitative data analysis?

Alex: For interviews, what's the biggest discipline challenge?

What types of chart are best suited to different kinds of data?

Alex: Data visualisation is an important part of communicating quantitative findings. Are there rules about which chart type to use when?

Sam: There are some clear guidelines. Bar charts are best for comparing categories, for example, agreement levels across different departments. Line charts show trends over time. Pie charts work for showing composition, how a total breaks down into parts, but avoid them if you have more than five or six segments because they become impossible to read. Scatter plots show relationships between two variables. The overriding principle is: choose the chart that makes the data clear, not the chart that looks impressive. A well-chosen simple chart beats a complex one that confuses the reader.
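The chart-selection guidelines above can be captured as a simple lookup. The purpose labels in this sketch are my own shorthand, not terms from the episode:

```python
def choose_chart(purpose: str, segments: int = 0) -> str:
    """Suggest a chart type following the guidelines discussed in the episode."""
    if purpose == "compare categories":
        return "bar chart"
    if purpose == "trend over time":
        return "line chart"
    if purpose == "composition":
        # Pie charts become unreadable beyond five or six segments
        return "pie chart" if segments <= 6 else "bar chart"
    if purpose == "relationship between two variables":
        return "scatter plot"
    return "table"  # fall back to the simplest clear presentation

print(choose_chart("composition", segments=8))  # too many segments for a pie
```

Falling back to a bar chart (or a plain table) when a pie would be cluttered follows the overriding principle: clarity over visual impressiveness.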

How do you avoid confirmation bias when analysing research findings?

Alex: There's also an important principle around objectivity. Confirmation bias is a real risk in project research.

Sam: It's one of the most common errors. Confirmation bias is when you unconsciously favour data that supports what you already believe and discount data that challenges it. The safeguard is triangulation: comparing findings from different sources and methods. If your survey and your interviews both point to the same conclusion, that's strong evidence. If they contradict each other, that's not a problem to hide; it's an interesting finding in itself that deserves exploration. Acknowledging limitations and contradictions in your data strengthens your credibility. It shows that you're being honest about what the data can and cannot tell you.
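The triangulation safeguard described above reduces to a simple consistency check across sources. The source names and findings in this sketch are illustrative assumptions, not from the episode:

```python
def triangulate(findings: dict) -> str:
    """Compare the conclusion each source supports.

    All sources agreeing is strong, corroborated evidence; disagreement is
    not a problem to hide but a finding that deserves exploration.
    """
    conclusions = set(findings.values())
    if len(conclusions) == 1:
        return "corroborated"
    return "contradiction: explore it, don't hide it"

# Hypothetical example: survey, interviews, and secondary data all agree
findings = {
    "survey": "staff want flexible hours",
    "interviews": "staff want flexible hours",
    "secondary data": "staff want flexible hours",
}
print(triangulate(findings))
```

In practice the comparison is a judgement call rather than an exact string match, but the structure is the same: line up what each method says and treat agreement and disagreement as evidence in their own right.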

Alex: A thought to close with: think about a decision in your organisation that was made without gathering data first, or where the data that was gathered was interpreted selectively. What would rigorous, unbiased analysis have revealed? And what might have been decided differently?