
3. Evaluation Description

3.1 Evaluation Scope

This scheduled evaluation was conducted under the Public Health Agency of Canada/Health Canada approved Five-Year Evaluation Plan 2013-2014 to 2017-2018, in accordance with the requirements of the Financial Administration Act.

The evaluation focused specifically on activities funded by the Federal Initiative. As such, the following activities were considered out of scope: activities related to tuberculosis, hepatitis B and C, and sexually transmitted infections; the Canadian HIV Vaccine Initiative; and any efficiencies gained from integrating HIV/AIDS activities with other communicable disease areas.

Data collection activities took place between April and August 2013. The evaluation examined activities conducted by all four federal partners involved in the Federal Initiative for the period 2008-09 to 2012-13, including activities implemented through the partner departments' regional offices.

3.2 Evaluation Issues

The evaluation issues were aligned with the Treasury Board of Canada's Policy on Evaluation (2009) and addressed the five core issues under the two themes of relevance and performance, as shown in Appendix 3. For each core issue, specific questions were developed based on program considerations; these questions guided the evaluation process.

3.3 Evaluation Approach and Design

An outcome-based evaluation approach was used to assess progress toward the achievement of expected outcomes and to identify lessons learned.

The Treasury Board's Policy on Evaluation (2009) outlines the core issues of relevance and performance to be addressed in evaluations of federal programs. The evaluation was designed so that the data collection methods would meet the objectives and requirements of the Policy. A non-experimental design was used: there was neither random assignment of sample groups for inclusion in the evaluation nor a control group against which to compare the sample. Because the design was non-experimental, the evaluation relied on correlational evidence of effect and could not establish causation. The evaluation was therefore designed to demonstrate the likely contributions of the program to the expected outcomes, rather than direct causal links between the program and those outcomes.

3.4 Data Collection and Analysis Methods

Evaluators collected and analyzed data from multiple sources:

  • program performance data
  • document and file review
  • literature review
  • financial data review
  • key informant interviews (see Appendix 4 for more detailed information).

Data were analyzed by triangulating information gathered from the different sources and methods listed above. This included:

  • systematic compilation, review and summarization of data to illustrate key findings
  • thematic analysis of qualitative data
  • trend analysis of comparable data over time
  • comparative analysis of data from disparate sources to validate summary findings.

3.5 Limitations and Mitigation Strategies

Challenges are inherent in any evaluation, and limitations can affect evaluation findings. Mitigation strategies were therefore used to ensure that the data collected would support a credible evaluation report with evidence-based conclusions and recommendations.

The following table outlines the limitations, their impact or potential impact on the evaluation, and the mitigation strategies employed to limit these impacts.

Table 1: Limitations and Mitigation Strategies

Limitation: Limited primary data collection

Impact/potential impact:
  • Reliance on review of program documents and secondary data; primary data collection was limited to key informant interviews.
  • Direct beneficiaries of the funding were not consulted as part of primary data collection. Interviews with these stakeholders could have provided greater insight into the performance of the Initiative.

Mitigation strategies:
  • Triangulation methods (literature review and key informant interviews) were used to corroborate key findings.
  • The Federal Initiative has assessed its activities at various points in time, including by gathering information directly from users of Initiative-funded activities; the evaluation team used these data.
  • Through literature and document reviews, as well as key informant interviews, the evaluators were able to gain a general understanding of these issues.

Limitation: Limited quality and/or availability of financial data

Impact/potential impact:
  • Limited ability to assess efficiency and economy.

Mitigation strategy:
  • Other data collection methods assisted in assessing economy and efficiency.

Limitation: Limitations in performance data (lack of benchmarks, baselines and targets; output data stronger than outcome data)

Impact/potential impact:
  • Without benchmarks and targets, success in achieving outcomes was difficult to demonstrate.
  • Although a large amount of performance measurement information was available, outcome measures were less available than output and activity measures, which at times limited the ability to assess evidence of outcome achievement.

Mitigation strategy:
  • Performance reports were used to their fullest extent and provided indications of success in achieving outcomes. Where information was lacking, triangulation of evidence from the literature review, document review and key informant interviews helped to validate findings and provide additional evidence of outcome achievement.

Despite the limitations outlined above, the Evaluation Directorate is confident that the findings accurately reflect and assess the activities of the Federal Initiative over the five-year period.
