Identifying Your Audience
There are five possible audiences you want to prepare for, each with its own interests:
- Administrators and policy makers
- Parents and community
- Other educators
- Researchers
- Yourselves
As you pull your evaluation findings together, think about who will likely be asking questions about your program and who you want to present your findings to. We will now talk about the kinds of information those audiences will most likely be interested in.
Making Your Case
When you present your evaluation to an audience, you do so to show that what you're doing works. That means making a clear statement about what your program is and what it has achieved. This is where you pull together so much of what this Toolkit is about.
Be prepared to describe your program to an audience using the information we recommended in your Evaluation Notebook. Explain the program design, the relative use of the two languages, and how much each language is used at each grade level. Describe your approach to bilingual literacy development. Provide information on who the students are, especially by native language group, and any other characteristics you think your audience needs to know (e.g., free lunch participation and special education services).
In Section 2, we encouraged you to list your program's goals and objectives. Those are the targets that guide everything you do. Show your audience your goals and objectives and why they are important. Explain the advantages of bilingualism and biliteracy in the modern world and the simple logic of promoting bilingualism and biliteracy through student interaction. Show your academic achievement and language proficiency targets.
Show and explain your data. Use the kinds of tables and graphs we discussed in Section 7. Make your data and your outcomes straightforward and comprehensible, and use meaningful comparisons. Show how your students are doing against the adopted state standards: how many of them are meeting the standards, and how many more are meeting them year by year. Compare your students to other similar students and show how your program students are doing better, or how they are doing just as well, with the added benefit of bilingualism and biliteracy. You can also compare your student outcomes to outcomes reported in research studies. Present the results of your surveys, which ideally demonstrate how satisfied your students, staff, and parents are with the program. Make your evaluation a powerful tool for public relations.
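If your student data live in a spreadsheet or CSV file, a few lines of analysis code can produce these year-by-year comparisons directly. The sketch below is a minimal illustration in Python with pandas; the file name and column names (year, group, met_standard) are hypothetical stand-ins for however your own records are organized.

```python
import pandas as pd
import matplotlib.pyplot as plt

# One row per student per year; met_standard is 1 if the student met the
# state standard on that year's assessment, 0 otherwise; group distinguishes
# program students from a comparison group. (Hypothetical file and columns.)
scores = pd.read_csv("student_outcomes.csv")

# Percent of students meeting the standard, by year and group.
summary = (
    scores.groupby(["year", "group"])["met_standard"]
          .mean()
          .mul(100)
          .round(1)
          .unstack("group")
)
print(summary)  # a small table you can paste into a report

# The same comparison as a simple line graph.
summary.plot(marker="o")
plt.ylabel("% meeting state standard")
plt.title("Students Meeting State Standards by Year")
plt.show()
```

The same table, with one row per year and one column per comparison group, works equally well as a handout for a school board or as a slide in a parent presentation.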
Matching Information to the Audience's Interests
Administrators and policy makers can be a tough audience. In an age of accountability and sanctions, they will likely have two kinds of questions:
- Is your program getting good enough results to keep us out of trouble?
- Does your program cost more in dollars and resources than a different model might?
Administrators and policy makers will be focused on the kinds of accountability data that NCLB has brought upon us. Therefore, they will be most interested in results from the state-mandated assessments for Adequate Yearly Progress (AYP) and in English learners' progress toward English proficiency pursuant to NCLB Title III and state-adopted Annual Measurable Achievement Objectives. Dual language educators are excited about fostering bilingualism, biliteracy and biculturalism, but those are not NCLB goals for which school districts and administrators are held accountable. Of course, that is why this Toolkit was developed in the first place: to meet the evaluation needs of dual language programs in a way that NCLB accountability does not.
Therefore, you need to keep conscientious track of your program participants, especially if your program is a strand within a school. Let's say your school is not meeting AYP for the ELL disaggregated subgroup, but the English learners in your program are making the expected achievement gains. You want to highlight that fact to your administrators and school board; otherwise, your program data get lost in the school-wide data, and your positive results never emerge from the larger picture. Use your data to defend your program!
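As a concrete illustration, the sketch below pulls your program's English learners out of the school-wide ELL subgroup, assuming your student file carries flags for ELL status and program participation. The file name and column names (ell, in_program, met_standard) are hypothetical; adapt them to your own records.

```python
import pandas as pd

# One row per student school-wide. (Hypothetical file and columns.)
students = pd.read_csv("school_students.csv")

# Limit to the ELL subgroup, then compare the whole subgroup against the
# dual language program strand within it.
ells = students[students["ell"] == 1]
school_wide = ells["met_standard"].mean() * 100
program_only = ells.loc[ells["in_program"] == 1, "met_standard"].mean() * 100

print(f"School-wide ELL subgroup meeting standard: {school_wide:.1f}%")
print(f"Dual language program ELLs meeting standard: {program_only:.1f}%")
```

The point of the in_program flag is exactly the point of this paragraph: without it, there is no way to separate your strand's results from the school-wide subgroup numbers.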
Sometimes your program may not be getting better results than other programs in the school and district, but its results are at least as good. That too is an important fact. Policy makers and administrators are sensitive to parents' demands, and parents put their children in dual language programs because they value the goals of bilingualism, biliteracy and biculturalism. Therefore, if your students are achieving comparably to other students in your school or district, though not necessarily better, you will want to highlight three things:
- The outcomes of dual language program students are comparable to those of other students; therefore, the program has no negative effects.
- Students in the program are making progress toward bilingualism and biliteracy, as shown in the data from your non-English measures (e.g., SOLOM, Spanish achievement assessment, etc.).
- Parents support the program because of the benefits they see, as shown in your attitudinal surveys. (That's one reason we recommended those parent surveys in Section 3.)
Of course administrators and policy makers want to know about costs. There is a persistent perception that, for some reason, bilingual education costs more. The Evaluation Notebook we recommended in Section 1 can list the personnel, materials and training that have gone into your program. You may be able to assign some dollar figures, or you may point out that in terms of teachers, materials, etc., you use the same kinds of resources anyone else does; you just use them differently. You may purchase a Spanish reading series for certain grades instead of an English one, but just about everyone uses a reading series.
You may invest more money into testing since you are assessing in two languages. Whatever resources you use, however much you spend, be prepared to present that information and forestall the argument that "it's too expensive."
Parents and community members will be less interested in the specific achievement targets of NCLB than will policy makers and administrators, but they will care about academic achievement, and parents will particularly care about their own children's academic achievement.
In making the case to parents and community for expanding a program, recruiting for a program, or just reporting on a program, you will want to use the same kinds of achievement data you present to administrators and school boards. You will want to show that overall, the achievement results look good in comparison to other schools or programs. However, you will need something else. For each parent, you will want the achievement data for that parent's child.
Many parents may come from a generation before standards-based assessment, and they will think in norm-referenced terms: Is my child on grade level? Is my child above average? You will want to have that child's test scores on hand. In Section 4 we advised against entering grade equivalents and percentiles in the program database because those scores cannot easily be statistically analyzed. However, those scores probably appear on the printed report that the test publisher sends back to the school, so make sure the individual scores are in the students' files to show them to parents who ask.
Parents who place their children in dual language programs care about bilingualism, biliteracy and biculturalism. Therefore be prepared to show your findings about progress in language proficiency—both oral and written—and academic achievement in the non-English language. And again, be prepared to show the achievement evidence for the individual student to the parents.
Make sure you orient parents to the meaning of the data you present, including what the test scores and achievement levels mean. For the language proficiency assessments, try to couch scores in terms of what a student can do functionally with the language. Above all, show the parents tangible evidence of what their children can do in the language. That is why in Section 3 we recommended different kinds of evidence, including projects, portfolios and student performances: Look what your child can write in each language. Look what your child can say in each language. Look at the kinds of things your child can read in each language. Make it tangible, make it communicative.
Be sure to make a presentation to parents and community members about the surveys you have conducted with them. If community members are asked to take the time to participate in a focus group or fill out a survey, they are entitled to know what became of the information they provided.
Other educators can be part of many different types of audiences, but typically they are the kind of people who go to a conference to learn or come to your school to see your program in action and decide if they want to do something similar. Their basic question is, "How do you do it?" They will want to see the same outcome data as administrators and policy makers because they want some assurance that it can work, and they will want data they can take home with them to make their own arguments: "If they did it, so can we."
Nevertheless, "How do you do it?" may be the question they are most interested in. In Section 1 we said that among other things, evaluation has the purpose of "ensuring that teachers understand the model and are implementing it correctly." That's why the program design in the Evaluation Notebook is so important. It specifies what "implementing it correctly" means. That information, plus good outcome data, provides other educators the information they need to understand your program and possibly replicate it at their own sites.
Researchers will be interested in all of the above—outcome data in academic achievement and language proficiency, disaggregated information on those outcomes by different student groups, and clear information on the program definition and implementation. Researchers may also have more interest than the other groups in such student variables as ethnicity, SES, family background, etc. We have talked about recording this kind of information in the database in Section 4.
Researchers supplement the evaluation questions of "What is it?" (implementation) and "Does it work?" (outcomes) with questions such as, "Does it work the same way for different groups of students?"
Researchers want good implementation data so they can compare different kinds of programs. They want to see whether a dual language program differs from a late-exit transitional program, and whether both differ from a structured English immersion (SEI) or English mainstream program for ELL students. They want to know whether different approaches to reading development (e.g., initial literacy in the non-English language for everyone vs. in the native language first for everyone) have different outcomes.
They want consistently good outcome data so they can make those program comparisons: between two-way immersion (TWI) and Developmental Bilingual (DB) programs, and between 90:10 and 50:50 TWI programs.
And they want to know in detail who the students are so they can answer such questions as:
- Do TWI programs work as well for low SES English speakers as they do for higher SES English speakers?
- Do low SES students show better reading achievement if their initial literacy is conducted in their native language vs. the second language vs. both languages?
- What are the respective contributions of strong native language literacy and target language proficiency in reading achievement in the target language?
Don't worry. You don't need to be able to answer these questions. However, if you maintain the kind of data we have recommended, and if you disaggregate your data in the ways we have suggested, you will provide researchers a great deal of information that will help them answer their questions, and help dual language programs as well. If you maintain good program fidelity, and if you keep good, clean data, researchers may want to collaborate with you to answer these important research questions. That can be beneficial for both of you.
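As an illustration of what that disaggregation can look like in practice, here is a minimal Python sketch that breaks an achievement measure out by native language and SES. The file name and column names (native_language, ses, reading_score) are hypothetical stand-ins for whatever your database actually records.

```python
import pandas as pd

# One row per program student, with background characteristics and outcomes.
# (Hypothetical file and columns.)
students = pd.read_csv("program_students.csv")

# Number of students and mean reading achievement for each combination of
# native language and SES group.
breakdown = (
    students.groupby(["native_language", "ses"])["reading_score"]
            .agg(["count", "mean"])
            .round(1)
)
print(breakdown)
```

Reporting the count alongside each mean matters: it tells a researcher (and you) whether a group difference rests on thirty students or on three.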
You may be the most important audience of your own report. In Section 3 we talked about strategies to document the correct implementation of your program as you designed it. You can use that documentation to interpret your outcome findings and put it to the most important purpose of all in program evaluation: program improvement.
You can use your implementation and outcome findings to draw the following conclusions and make the corresponding decisions:
- Outcomes were good, and we implemented the program as planned. We can pat ourselves on the back and keep doing what we were doing.
- Outcomes were not so good, and we implemented the program as planned. Consider redesigning the parts of the program that showed the least favorable results.
- Outcomes were not so good, and we did not implement the program as planned. Look at the implementation data and figure out where the implementation fell apart, and do something about changing it.
Of course findings will rarely be all good or all bad. They will be spotty, good in some areas, not so good in others. Outcomes may be more favorable for some student groups than for others. It will be easier to detect real differences in outcomes among groups of students (by primary language, ethnicity, SES, etc.) if you keep good data on those student characteristics. Then, if you do find important differences, you're in a better position to find out why and make appropriate program adjustments.