Evaluation of the Habitat Stewardship Program for Species at Risk

3.0 Evaluation Design

The following sections outline the purpose and scope of the evaluation and the data collection approach and methods used.

3.1 Purpose and Scope

The evaluation examined the overall effectiveness of the HSP, focusing on issues of relevance, success and cost-effectiveness. It addressed the extent to which there is a need for the federal government to deliver this program, whether it has been successful in achieving its intended outcomes relating to the protection of species at risk and their habitats, and whether the most appropriate, cost-effective and efficient means have been used to achieve outcomes.

While the previous evaluation conducted in 2004 focused on the HSP headquarters as well as the Quebec and Atlantic regions, the current evaluation concentrated on the results and relevance of the HSP from 2004–2005 to 2007–2008, considering program operations in all five Environment Canada regions.

The following evaluation issues were addressed:

  - Relevance
  - Success/Impact
  - Cost-effectiveness
  - Design and Delivery

A matrix mapping each evaluation question to the related indicators, information sources and methods of enquiry is presented in Annex 2.

3.2 Evaluation Approach and Methodology

This section describes the methods that were used to conduct the evaluation of the HSP as well as limitations of the evaluation.


3.2.1 Methods

Document and Data Review – The evaluation team reviewed key documents including relevant legislation, Speeches from the Throne, federal budgets, reports on plans and priorities (RPPs), departmental performance reports (DPRs), the HSP Results-based Management and Accountability Framework (RMAF), contribution call letters, project proposals and past evaluations, reviews and research reports. A complete list of documents reviewed is presented in Annex 3.

The evaluation team also reviewed the content of the HSP Online Tracking System and analysed the quantitative administrative data (e.g., number of proposals accepted or rejected, average amount of funded contributions, etc.) extracted from the Tracking System.
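
To illustrate the kind of tabulation performed on the Tracking System extract, a minimal sketch follows. The file name and column names (proposal_id, status, amount_funded) are hypothetical, as the actual extract format is not documented here.

    import csv
    from statistics import mean

    # Hypothetical extract from the HSP Online Tracking System; the file
    # name and columns (proposal_id, status, amount_funded) are assumptions.
    with open("hsp_tracking_extract.csv", newline="") as f:
        proposals = list(csv.DictReader(f))

    accepted = [p for p in proposals if p["status"] == "accepted"]
    rejected = [p for p in proposals if p["status"] == "rejected"]

    print("Proposals accepted:", len(accepted))
    print("Proposals rejected:", len(rejected))
    print("Average funded contribution: $%.2f"
          % mean(float(p["amount_funded"]) for p in accepted))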

This data collection method addressed evaluation questions 1 to 10.

Key Informant Interviews – A total of 50 key stakeholders from Headquarters and the five regions were interviewed.

The semi-structured interviews were conducted in person in the National Capital Region and by phone with respondents located in the regions, using an interview guide tailored for each category of key informant.

This data collection method addressed evaluation questions 1 to 10.

Survey – An online survey of funding applicants was conducted by an external survey firm between January 19 and February 10, 2009. The survey invitation was sent to the entire population (311) of organizations and individuals who had submitted a funding application to the HSP between 2004–2005 and 2007–2008. A total of 38 invitations were returned because of invalid email addresses; these stakeholders were called, informed of the study and asked for a valid address, and 11 were reached and a new address recorded. As a result, 284 applicants received the survey invitation and 130 completed the questionnaire, for a response rate of 46 percent. The margin of error for a sample of this size is +/- 6.4 percent, 19 times out of 20.
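
These figures can be checked with the standard margin-of-error formula for a proportion, applying a finite population correction for the small population. The sketch below assumes the conservative p = 0.5 and uses the 284 invited applicants as the population base; the computed value of about 6.3 percent is within rounding of the reported 6.4.

    import math

    N, n = 284, 130     # invited applicants and completed questionnaires
    z, p = 1.96, 0.5    # 95% confidence ("19 times out of 20"), conservative proportion

    response_rate = n / N                        # 130/284 = 0.458, i.e., 46%
    fpc = math.sqrt((N - n) / (N - 1))           # finite population correction
    moe = z * math.sqrt(p * (1 - p) / n) * fpc   # = 0.063; the reported 6.4%
                                                 # likely reflects rounding

    print(f"Response rate: {response_rate:.0%}")   # Response rate: 46%
    print(f"Margin of error: +/- {moe:.1%}")       # Margin of error: +/- 6.3%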

Respondents were routed to a version of the questionnaire based on administrative data indicating whether they had at least one completed project in the project database (Stream 1), one or more cancelled projects but no completed projects (Stream 2), or one or more rejected projects but no completed projects (Stream 3).
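
A minimal sketch of that routing rule follows, assuming each applicant's project records reduce to a list of status strings; the field values and the precedence for applicants with both cancelled and rejected projects are assumptions.

    def assign_stream(statuses):
        """Route an applicant to a survey stream from project statuses.

        `statuses` is a list such as ["completed", "rejected"]; the
        precedence for mixed cancelled/rejected cases is an assumption.
        """
        if "completed" in statuses:
            return 1   # Stream 1: at least one completed project
        if "cancelled" in statuses:
            return 2   # Stream 2: cancelled project(s), none completed
        if "rejected" in statuses:
            return 3   # Stream 3: rejected project(s), none completed
        return None    # no qualifying record

    print(assign_stream(["completed", "rejected"]))   # -> 1
    print(assign_stream(["cancelled"]))               # -> 2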

Table 2 outlines how the final survey sample compares to the overall population of HSP funding applicants across streams and regions. As can be seen, the sample of responding organizations closely mirrors the overall population for this study across both region and stream. While the percentage of respondents from Stream 3 is lower than the population percentage, this is not surprising, given that these respondents did not receive program funds.

Table 2. Comparison between Population and Sample by Stream and Region

                            Population (N=311)     Sample (n=130)
                              n        %             n        %
  Stream
    Stream 1                 251       81           113       87
    Stream 2                  24        8             8        6
    Stream 3                  36       12             9        7
  Region
    Pacific and Yukon         91       29            33       25
    Prairie and Northern      50       16            20       15
    Ontario                   63       20            33       25
    Quebec                    57       18            22       17
    Atlantic                  50       16            22       17
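
As a quick consistency check, the percentages in Table 2 can be recomputed directly from the counts; the sketch below reproduces the table's population and sample shares.

    # Counts taken from Table 2: {category: (population n, sample n)}.
    table2 = {
        "Stream 1": (251, 113), "Stream 2": (24, 8), "Stream 3": (36, 9),
        "Pacific and Yukon": (91, 33), "Prairie and Northern": (50, 20),
        "Ontario": (63, 33), "Quebec": (57, 22), "Atlantic": (50, 22),
    }
    N_POP, N_SAMPLE = 311, 130

    for category, (pop, samp) in table2.items():
        print(f"{category:22s} population {pop / N_POP:3.0%}   sample {samp / N_SAMPLE:3.0%}")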

Of the survey respondents, 28 percent focused their answers on an aquatic species project and 71 percent on a terrestrial or multiple-habitat species project.

This data collection method addressed evaluation questions 1, 2, 3, 4, 5, 7, 8, 9 and 10.

Case studies – Five case studies of HSP-funded projects were conducted to explore areas requiring additional investigation. Topics were identified through the key informant interviews, which both flagged issues requiring more targeted attention and pointed to the projects that would provide the most valuable information on those issues. It must be noted that, due to their non-representative nature, findings from specific case studies cannot be generalized to the rest of the program. Rather, they were used to illustrate, explore in depth and help interpret findings from other lines of evidence.

The five topics selected for case study analysis are the following:

  1. Measurement of outcomes of outreach projects: Prairie Conservation Action Plan (Saskatchewan).
  2. Longer-term project investment: Plan de gestion durable du Mont-Rougemont (Quebec).
  3. Longer-term project investment: Integrated Ecosystem Management Related to the Recovery of the Endangered Eastern Loggerhead Shrike and the Constructive Conservation of Associated Short Grassland Species (Ontario).
  4. Freshwater species at risk: White Sturgeon (Pacific).
  5. Marine species at risk: North Atlantic Right Whale (Atlantic).

Each case study collected tombstone information on the project(s) being examined (e.g., profile of the organization(s), project objectives, outcome measurement activities and measured outcomes) and focused on addressing three to five core, case-specific questions. Data collection methods included three to four interviews with key stakeholders and a review of available documentation on each case.

An outline of the case study projects, core questions and key findings is presented in Annex 4.

This data collection method addressed evaluation questions 2, 3, 4, 7 and 10.


3.2.2 Limitations

There were four foreseeable constraints to this evaluation. First, the evaluation sought to gather information not only from past and current contribution applicants, but also from a sample of individuals and groups who participate in other habitat stewardship efforts yet are not involved in the HSP. The input of individuals who have neither benefited from nor been involved in delivering the program would provide an external perspective on the program’s relevance, help explore possible limitations to the program’s reach, and offer external views on the program’s perceived impacts to date. While a list of all successful and unsuccessful HSP funding applicants was available through the program’s online tracking system, no similar list existed for eligible recipients who had never participated in the program. Identifying and contacting those potential recipients therefore presented a challenge.

The approach taken was to use a snowball method through the planned key informant interviews to identify others who could apply for HSP funding but do not. From there, the evaluation team intended to determine the best way to contact and gather information from these individuals and organizations. However, as this approach yielded too few potential additional interviewees, the evaluation team instead relied on interviews with experts6 to gain an external perspective on the program’s relevance and impacts, as well as on a thorough examination of the program’s funding allocation processes to assess the adequacy of its outreach mechanisms.

A second limitation was the extent to which Canadians’ support for species-at-risk conservation – a component of the fourth immediate outcome addressed by Evaluation Question 3 – could be accurately measured. Such measurement is difficult in the absence of baseline data. In addition, there was no available database of participants in HSP-funded projects, further limiting the evaluators’ ability to directly measure attitudinal or behavioural changes. While some HSP-funded projects had administered questionnaires to participants following outreach, education and extension activities, such practices were not applied consistently across projects and did not address attitudinal or behavioural change. Achievement of this immediate outcome was therefore measured indirectly through interviews with HSP funding applicants.

Third, the scope of this evaluation did not include on-the-ground measurement of the biological impacts of HSP project activities. Rather, evaluators relied on the self-reported project outcomes contained in the HSP tracking system, on external research reports, and on testimony from key informants and survey respondents to assess the success of the program in meeting its immediate and intermediate outcomes.

Finally, the evaluators’ ability to report on the achievement of program outcomes was limited by uncertainty about the accuracy of some of the performance data addressing immediate outcomes and by the limited data demonstrating the achievement of intermediate outcomes.


6 Experts refer to SAR Recovery Team members and technical reviewers of HSP project proposals.
