Saturday 21 January 2012

Assignment #2

Evaluation Focus
In preparing this assignment I had two points of focus: to determine the effectiveness of the program in meeting its goal of promoting regular physical activity in order to lower the incidence of diabetes in First Nations women, and to assess the effectiveness of the program's implementation.  The evaluation targets these two areas in order to align with the goals and details provided in the program description.  The program overview explicitly states that the program is both a means and an end to address these health concerns within the First Nations population in Saskatchewan.  In this vein, I believe that a mix of the Scriven, Naturalistic and CIPP models would be appropriate.

Justification
In order to assess the effectiveness of the program in reducing rates of diabetes in First Nations women, and ultimately to optimize the program, it is important to gather information on both its end results and its implementation process.  The CIPP model focuses on providing feedback by assessing the context, input, process and product of a program.  In this way, the CIPP model meets the need to assess the effectiveness of the program in meeting its goals as both a means and an end to the health concerns the program targets.  The Scriven model also emphasizes a thorough understanding of the program goals and, as such, serves as a good theoretical base for this evaluation.  The Naturalistic model fits specifically with one focus area of this evaluation: gathering feedback from the participants themselves.  The following plan details the incorporation of these models, and further justification is provided throughout.

Focus Questions
1.      What is the primary goal of the program?
a.       Does the primary goal of the program match the implementation process?
b.      Are there any pitfalls in the implementation that prevent it from meeting the needs of the participants and the goals of the program?
2.      Is targeting GDM in First Nations women an effective way to lower rates of diabetes among First Nations populations?
3.      Did the physical/recreational activity lower the incidence of GDM among participants?
4.      Did the implementation process of the program optimize the probability of success in achieving the end (lowering the incidence of GDM among participants)?
a.       Was participant attendance consistent enough to support conclusions about reductions in GDM rates?
b.      Was the drop-in format optimal for encouraging participants to attend?
c.       Did offering services free of charge (child care, snacks, bus passes) provide an incentive for participants?  Did it help to raise and sustain participation?
d.      Did the choice of venue help to encourage participation?
e.       Did the nature of the activities (flexible, shaped by participant input) encourage and foster participation?
f.       Did the inclusion of an Elder in the decision-making process aid in providing the desired input and perspective?

Information Gathering
Existing data would need to be gathered to connect the incidence of diabetes among the First Nations population with GDM in pregnant women.  This may prove to be a limitation of the evaluation because such information may be difficult to acquire.  Nevertheless, it is still an important part of the context component of the evaluation, as prescribed by the CIPP model.
In order to evaluate program success in relation to the end goal, follow-up information on the participants’ health status would need to be collected, and the number of women who developed GDM would need to be determined.  In addition to acquiring data on the women within the program, existing data on rates of GDM in First Nations populations would need to be acquired in order to determine whether the program lowered rates compared with women who did not participate.
Evaluation of the implementation process of the program would be based on feedback from participants, program organizers and facilitators, gathered through interviews and surveys.  This information would serve as the input component of the CIPP model.  Staffing and approaches to the implementation of the program would be examined and assessed.  Surveys would be completed by the program organizers and facilitators and would take the form of scaled-response questions.  Since the aim is to assess the effectiveness of the program elements rather than to gather human-interest information, a scaled-response survey is appropriate for gleaning such information.  A Naturalistic approach would be most conducive to acquiring feedback from participants, best gathered through open-ended responses within a focus group setting.  A focus group would be appropriate in this context because the women might be more likely to provide detailed and honest responses when interacting with their peers than in a one-on-one interview with a stranger.  Both formats would include questions focusing on the structure and delivery model of the program itself.

Data Analysis
The data collected on the occurrence of GDM in participants would be compared with data from a sample of women who did not participate in the program.  This analysis would indicate what effect the program had on actually lowering the rate of GDM.  For this comparison to take place, a comparison sample would need to be identified.  Comparing the data of women within the program with that of women who did not participate would provide a solid basis for determining the effectiveness of the program in meeting its end goal.  One limitation of the analysis is that the reliability of the program data depends directly on the consistency of participant attendance.  The data and conclusions drawn from the comparison would aid in determining the end result (Product) of the program.
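To make the comparison concrete, the sketch below shows one way the GDM rate among participants could be compared with the rate in the non-participant sample, using a simple two-proportion z-test.  This is only an illustration: the function name and the counts in the usage example are hypothetical, and the real figures would come from the follow-up data described above.

from math import sqrt
from statistics import NormalDist

def compare_gdm_rates(gdm_program, n_program, gdm_comparison, n_comparison):
    # Incidence of GDM in each group.
    p_program = gdm_program / n_program
    p_comparison = gdm_comparison / n_comparison
    # Pooled proportion and standard error for the two-proportion z-test.
    pooled = (gdm_program + gdm_comparison) / (n_program + n_comparison)
    se = sqrt(pooled * (1 - pooled) * (1 / n_program + 1 / n_comparison))
    z = (p_program - p_comparison) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p_program, p_comparison, z, p_value

# Hypothetical counts, for illustration only: 4 of 40 participants and
# 12 of 60 women in the comparison sample developed GDM.
p1, p2, z, p_value = compare_gdm_rates(4, 40, 12, 60)
print(f"Program rate {p1:.1%}, comparison rate {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")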
The information collected from the focus groups and surveys would be analyzed in order to identify any common threads that appear.  The key points from the focus groups would indicate the effectiveness of the different components of program implementation (Input, Process).

Using the Information
The information and conclusions from the program evaluation would take the form of a final report to satisfy the needs of the program developers and facilitators.  In addition to the final report, it would also be useful and appropriate to create a less formal summary to present to program participants.

Saturday 14 January 2012

ECUR809.3 Assignment #1

After some struggles selecting a completed evaluation to review and analyze, I finally decided upon one that concerns a particular special education program.  The evaluation, "An Evaluation of the Structured Success Program: From the Students' Perspective," centers upon an alternate education program offered and implemented in Saskatoon.  Once I stumbled upon the article, it piqued my interest because the Structured Success Program is also offered within the school division I am currently working for and is run out of the building where I teach.  The program is an alternate program for students who have serious behavioral and/or emotional disorders.  Students who are unable to operate within the regular classroom environment, and whose behaviors are a significant disruption to the learning environment of their classmates, are placed in the Structured Success Program.  The goal of the program is to teach and model appropriate behavior and social skills so that the students will eventually be integrated back into the regular classroom setting.
Although the evaluation does focus somewhat on overall student perception of the Structured Success Program, the title is a little misleading.  A considerable amount of the evaluation is focused on the students’ perception of their classroom environment and its relationship with their self-concept.  The evaluation aimed to examine these two perceptions and then compare the findings with results from the regular classroom.  The aim of the comparison was to fill a gap in the research on the relationship between classroom environment and self-concept for students with B/ED.  This aim still links with the overall purpose of the evaluation, but the connection between the two is not explicitly stated.  The comparison is made, it seems, to get an overall understanding of students’ perceptions of their classroom environment and their self-concept, as well as their general impressions of the program.
The methods used to evaluate students’ perceptions and impressions of the Structured Success Program were all student-completed surveys and questionnaires.  To determine the students’ perceptions of their classroom environment and self-concept, the CES (Classroom Environment Scale) and MSCS (Multidimensional Self-Concept Scale) were used.  More specifically, 17 students currently enrolled in the Structured Success Program (S2P) from five schools in Saskatoon participated in the study.  The subjects were all males ranging from 11 to 14 years of age, with eleven enrolled full time in the program and six only part time.  The length of time these students had been enrolled in the program also varied, from 3 to 27 months.  The data collection for the evaluation was completed in two phases.  During phase one, participants completed the two self-report questionnaires (CES and MSCS).  The results from these two tests were then compared with results from students in regular classroom environments.  In the second phase, students participated in semi-structured interviews with the author of the evaluation.  The interviews included some limited-response questions as well as more open-ended questions that required more detailed responses from the participants.  The aim of the second phase was to ascertain the general perceptions and sentiments of students in relation to their enrollment in and experiences within the program.
Perhaps the most evident strength of the methods employed is simply that they match the intended purpose: to provide information on students’ perceptions of their classroom environment, their self-concept and the program in general.  The goal was to provide some information on how regular classroom students’ perceptions of environment and self-concept compare with those of students with B/ED, and administering the CES and MSCS serves that purpose.  The evaluation also included an overview of the similarities and differences found by comparing the two sets of results.  The comparison is presented in a clear, concise manner and the reasons for the variances in responses are also explained.  The nature of the interview process also aligns with the goal of providing insight into general perceptions of the program.  The open-ended nature of the interview questions allows students to provide detailed, personal responses about their experiences in the program.  In considering alternative formats for the open-ended responses, the evaluator also notes, rightly so, that a focus group format would likely have proved ineffective with this group of students.
Although the overarching purpose of the evaluation seems to be achieved at first glance, there are some weaknesses and limitations.  The nature of S2P students lends itself to general inaccuracies when attempting to establish empirical data on which to base conclusions.  The fact that these students usually harbor a general resentment of testing and mandatory work means there is a possibility that students are not really thinking about the questions as they answer them, but simply trying to “get it over with.”  The evaluation itself provides evidence to suggest that this may very well be the case.  The interview phase of the data collection included only 5 of the 17 participants.  The evaluator could not get the other students to participate in this section of the evaluation and also had to offer a $10 incentive just to get those 5 to complete the interview.  The concern here is an obvious one: how can you draw conclusions about a program from the responses of only 5 of 17 participants?  For that matter, 17 participants isn’t exactly a large number to base conclusions upon either.  The small number of participants in the data collection process is an obvious weakness, and one that the evaluator acknowledges as well.
The number of participants is a concern, but in addition, the participants were all males.  This is a further limitation, as it seems plausible that the perceptions of females in the program may differ from those of their male counterparts.  Beyond the low numbers in the interview portion of the evaluation, there is also little information offered about the purpose of the interviews or what the information will be used for.  The evaluator states that the interview process provided students with the opportunity to offer their opinions and experiences concerning the program, but to what end?  What will this information be used for?  These questions are not addressed.  The conclusion notes that understanding students’ perceptions within special education programs is needed because it may have implications for the future development and redevelopment of those programs.  It seems then that, despite some limitations, this evaluation still provides research and conclusions that shed light on an underrepresented group of students.


Source:
Da Silva, Trudy A. "An Evaluation of the Structured Success Program: From the Students' Point of View." Canadian Journal of School Psychology 18 (2003): 129.  http://cjs.sagepub.com/content/18/1-2/129.