The WERA Educational Journal
Volume 5 Number 2
August 2013
Inside This Issue

Editor's Corner -Karen Banks ... 2

WaKIDS ... 3-16
  WaKIDS Assessment Data: State Overview -Kathe Taylor
  Analyzing and Using WaKIDS Assessment Data: One Approach from the Edmonds School District -Nancy Katims and Lara Drew
  WaKIDS Implementation in Tacoma -Minh-Anh Hodge and Pat Cummings

Mitigating Passivity in the Primary Grades: Effects of Teacher Behaviors -Dorothy E. Munson ... 17-21

The Neglected Benefits of Testing: Implications for Classroom and Self-Directed Learning -Olusola O. Adesope and Dominic A. Trevisan ... 22-26

How Education Funding Relates to Student Outcomes -Peter J. Bylsma ... 27-34

Stupid Excel Tricks: Calculated Fields -Pat Cummings ... 35-39

Book Reviews ... 41-53
  Inventing Kindergarten by Norman Brosterman -Reviewed by Andrea Meld ... 42-45
  Charter Schools and the Corporate Makeover of Public Education: What's at Stake? by Michael Fabricant and Michelle Fine -Reviewed by Jill Hearne ... 46-48
  Embedded Formative Assessment by Dylan Wiliam -Reviewed by Nancy Katims ... 49-50
  The New Jim Crow: Mass Incarceration in the Age of Colorblindness by Michelle Alexander -Reviewed by Karen Banks ... 51
  The Signal and the Noise: Why So Many Predictions Fail - but Some Don't by Nate Silver -Reviewed by Pat Cummings ... 52
The Washington Educational Research Association
Editor's Corner
by Karen Banks
This issue of WEJ differs somewhat from our original
conception, which was to focus primarily on early
childhood (P-3) issues. With a number of critical
issues facing our state, the timing seemed appropriate
to broaden our focus to also include some legal and
policy issues raised by events in late 2012, both in the
state courts and due to the state election results.
The original P-3 theme of this issue is certainly
reflected in a very practical section on how districts
are using the WaKIDS inventory, as well as an
extended book review of Norm Brosterman's Inventing
Kindergarten, and an article on young students'
active versus passive behaviors in classrooms.
However, in addition, readers will find a succinct,
timely article on school finance in Washington, an
extended book review on charter schools, and a book
review of Michelle Alexander's The New Jim Crow:
Mass Incarceration in the Age of Colorblindness.
Perhaps you will wonder, as I did, if Washington voters
last fall were thinking about how much money we
were spending prosecuting people for possession of
small amounts of marijuana. Or were they thinking of
how prosecution of those cases was affecting the
social fabric, particularly in minority communities?
Finally, articles on the benefits of testing, and on
using calculated fields in pivot tables (for Excel files)
will be of particular interest to readers who work in
applied settings or who manipulate data on a regular
basis. It may be hard to focus on this point while you
are still recovering from the flurry of end-of-year
testing, but there are ways that classroom tests can
help students retain and organize information!
Note: There is a new email address for the journal,
although the old email address will still work. The
new one is [email protected].
-Karen Banks, Editor
WEJ Needs Peer Reviewers
WEJ really needs your help. We are
urgently seeking individuals with
diverse areas of expertise to serve
as peer reviewers of WEJ
submissions. Reviewers can be
based in schools or school districts,
academic settings such as colleges
and universities, community
organizations, professional
associations, non-governmental
organizations, consulting roles, and
international, national, state, or
local agencies. You could also be
retired from one or more of these
entities.
If you are willing to serve as a peer
reviewer, please send an email to
[email protected]. Include your
area(s) of expertise and interest,
along with your current position and
organization (or your retirement
status!).
Unless you want to be contacted
more often, you will not be asked to
review more than one article per
year, and the more likely frequency
is once every two or three years,
depending on the topics of articles
submitted.
ABOUT THIS SECTION
State-funded, full-day kindergarten has been declared part of the basic education for Washington students.
In addition, the state of Washington has mandated expansion of the Washington Kindergarten Inventory of
Developing Skills (WaKIDS) to serve all state-funded, full-day kindergarten students. While full-day
kindergarten requirements are being phased in across the state over the next few years, approximately one-fourth of state kindergarteners participated in the WaKIDS assessment in 2012, and that percentage is
expected to increase each year through 2017. WEJ requested this three-part series of articles. The series
starts with an overview of WaKIDS, with particular attention to the GOLD® data gathered in Fall 2012. The
next two articles show how two different school districts implemented WaKIDS and have used the data from
it. These district-level perspectives on using the data may help readers from other school districts that are
already participating in WaKIDS, along with those districts coming on board over the next few years.
Researchers may be interested in knowing the types of data that will be available.
WaKIDS Assessment Data: State Overview
by Kathe Taylor
The Washington Kindergarten Inventory of Developing
Skills (WaKIDS) is a process for engaging key adults in a
child’s life in a conversation about how to help that
child be successful in school. The process, intended
to smooth transitions, inform instruction, and advise
policy development, includes three interrelated
components:
• Family Connection, where teachers welcome families as partners in their child's education. Teachers meet individually with families at the beginning of the school year, providing an opportunity for teachers to get to know the children and families, and for families to get to know their child's teacher.
• Early Learning Collaboration, where early learning providers and K-12 school staff share information about children entering the school system.
• Whole Child Assessment, where teachers gather information about students' strengths in six areas of development and learning that have been shown by research to be predictive of school success: social emotional, physical, cognitive, and language development; literacy; and math.
This article focuses on the third, “whole child
assessment” component. It provides an overview of
the state-initiated Washington Kindergarten Inventory
of Developing Skills (WaKIDS) and the 2012 baseline
assessment results that were published on the
Washington State Report Card earlier this year. Two
districts, Edmonds and Tacoma, describe their
strategies for analyzing, sharing, and integrating the
WaKIDS data.
Development of WaKIDS
Most large-scale initiatives have many beginnings.
Although the 2012-13 requirement (RCW 28A.655.080)
to implement WaKIDS in state-funded, full-day
kindergartens may be new, the germ of the WaKIDS
idea has been developing over many years.
Almost a decade ago, the Office of Superintendent of
Public Instruction (OSPI) sponsored a survey of
kindergarten teachers’ perceptions of kindergartners’
preparedness for school. Findings from that survey
continue to be referenced as an early call-to-action
(Pavelchek, 2005).
In 2006, the Department of Early Learning and the
public-private partnership Thrive by Five were
established, and the Washington Learns Steering
Committee urged Washington to develop a “world
class, learner-focused, seamless education” system.
The report stressed the importance of early learning
and included as one strategy the development and
implementation of a kindergarten readiness assessment
tool to "help teachers, parents and caregivers
understand the social and academic development of
kindergartners" (Washington Learns, p. 24).
Several years later, the Washington State Legislature
initiated the development of the Washington
Kindergarten Inventory of Developing Skills (WaKIDS)
in the 2009 legislative session by providing funds to
the Department of Early Learning (DEL). The
appropriation directed DEL to conduct a pilot study, in
consultation with OSPI, for the development of a
kindergarten inventory to be administered to students
at the beginning of the school year.
In Fall 2010, three “whole child” kindergarten
assessments were piloted: Teaching Strategies GOLD®,
Work Sampling System (WSS; Pearson), and
Developing Skills Checklist (DSC; CTB/McGraw-Hill). A
total of 116 teachers participated in the pilot,
including some from the Tacoma School District, which
is profiled later in this series.
The University of Washington (UW) Department of
Childcare Quality and Early Learning provided guidance
for the pilot, obtaining feedback from the teachers
administering the three different assessments,
analyzing the results, and issuing a report (Joseph et
al., 2011). A review committee composed of early
learning providers, kindergarten teachers, and
representatives of DEL, OSPI, Thrive by Five
Washington, and the Bill and Melinda Gates Foundation
(Gates Foundation) reviewed the results and
recommended that WaKIDS consist of three major
components: family connection, early learning
collaboration, and whole child assessment. The
committee also recommended that GOLD be used as
In 2011, GOLD was piloted in volunteer schools, and
the Department of Early Learning was awarded a Race
to the Top Early Learning Challenge grant that
included resources for training teachers to implement
a kindergarten entry assessment.
Teaching Strategies GOLD®
GOLD is a comprehensive, whole-child, observational
assessment. Although Teaching Strategies plans to
expand the developmental progressions beyond
kindergarten, the tool is currently reliable and valid
for use with children from birth through kindergarten.
GOLD is not a screening tool, but its focus on
children’s strengths and its use in the first weeks of
school may help teachers understand more quickly
whether further assessment may be needed.
GOLD assesses progressions of development and
learning on 38 objectives. Each progression identifies
the behavioral and learning indicators that serve as
development markers for children. Some, but not all,
objectives have “dimensions” that fine-tune and
further define the objective. For instance, in the area
of social emotional development, there is an
objective to “regulate own emotions and behaviors,”
and a dimension to “follow limits and expectations.”
By contrast, the mathematics objective, “compares
and measures,” stands alone.
Recognizing that observational assessment places
higher demands on teachers’ time than other forms of
state assessments, Washington developed a customized
version of GOLD with fewer (19) objectives.
These high leverage objectives and
dimensions are connected to Washington’s state
standards (including the Common Core State
Standards) and early learning guidelines. Most of
Washington’s Early Childhood Education Assistance
Programs (ECEAP), many Washington Head Start
Programs, and some private providers are also using
GOLD; most are assessing all of the objectives and
dimensions.
WaKIDS Baseline Year: Fall 2012 Implementation
Washington law (RCW 28A.150.315) incorporates state-funded, full-day kindergarten into basic education,
and requires that funding be phased in until full
statewide implementation is achieved in 2017-18.
Funding to full-day kindergartens is directed first to
schools with the highest percentages of students on
free and reduced lunch. The 2012 Legislature made
WaKIDS mandatory in all state-funded, full-day
kindergartens unless a district received a waiver for
implementation.
Over 100 districts, 300 schools,
1,000 teachers, and 21,000 children participated in
the 2012-2013 implementation.
The large number of high-poverty schools (see Figure
1) skews the demographics of the baseline sample for
the Fall 2012 results, when compared with the
demographic characteristics of kindergartners
statewide. This occurred even though some other
schools also volunteered to participate in the Fall 2012
implementation of WaKIDS. All data were collected in
the first seven weeks of the school year.
Figure 1: Demographics of Kindergartners Participating in Fall 2012 WaKIDS Implementation

Demographic                         WaKIDS    Statewide K
American Indian or Alaska Native      1.8%       1.3%
Asian                                 4.7%       6.2%
Black/African American                6.9%       4.4%
Hispanic                             38.4%      24.2%
Native Hawaiian/Pacific Islander      1.2%       1.0%
White                                34.2%      54.9%
Two or More Races                     5.7%       7.9%
Not Provided                          7.1%       0.0%
Male                                 51.5%      51.8%
Female                               48.5%      48.2%
Special Ed                            8.3%       9.2%
Bilingual                            30.3%      18.5%
Free-Reduced Lunch                   68.9%      48.3%
Total Students                       21,811     83,255
Findings
The use of a common kindergarten assessment is new
to Washington, so this baseline year of 2012 data is
the first time that the state has been able to look at
kindergarten data across districts and regions. The
school, district, ESD and state data are posted on the
OSPI Washington State Report Card website, and can
be disaggregated by gender, race, ethnicity, special
education, limited English, and low income.
Three themes are present in the data:
1. Skill levels of the Fall 2012 kindergartners
assessed varied across the six areas of
development assessed. Children were most likely
to demonstrate the widely held expectations of
entering kindergartners in physical development
and least likely in mathematics (see Figure 2).
2. Skill levels of the Fall 2012 kindergartners
assessed varied within the six areas of
development assessed. In Washington, children
must be five or older to enter kindergarten.
Although most kindergartners demonstrated the
characteristics expected of children 4 and older,
some students (16% to 33%, depending on the area
of development) demonstrated the characteristics
of children 3 years old and younger (see Figure 3).
3. The opportunity gap (Carter et al., 2013) is
evident in the first few weeks of kindergarten;
some groups of children were more likely to have
developed the widely held expectations of
entering kindergartners than others.
Figure 2: Percentages of Fall 2012 Students Demonstrating the Characteristics Expected of Entering Kindergartners
Figure 3: Skill Levels of Fall 2012 Students Within Each Area of Development
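Both findings above are simple tabulations of students' rated levels. The sketch below is a hedged illustration of that arithmetic; the ratings are invented, not the published Fall 2012 results, and the three-band coding ("K", "4+", "3-") is an assumption standing in for GOLD's finer-grained progressions:

```python
from collections import Counter

# Invented GOLD-style ratings: each student's level in an area is coded by
# the age band whose characteristics they demonstrated ("K" = entering
# kindergartner, "4+" = age 4 and older, "3-" = age 3 and younger).
ratings = {
    "Physical":    ["K", "K", "4+", "K", "3-", "K"],
    "Mathematics": ["4+", "3-", "K", "3-", "4+", "K"],
}

for area, levels in ratings.items():
    counts = Counter(levels)
    n = len(levels)
    meeting = 100 * counts["K"] / n
    younger = 100 * counts["3-"] / n
    print(f"{area}: {meeting:.0f}% at entering-K level, "
          f"{younger:.0f}% showing characteristics of age 3 or younger")
```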
Next Steps
Now that the urgency to "get data" has temporarily subsided, the strength and future of this initiative lie in what we do with the data. OSPI, DEL and Thrive by Five are committed independently and collectively to engaging their respective communities in discussions of the WaKIDS data so that it can be used strategically to inform instruction, enable families to support students' learning, address opportunity gaps, increase students' preparation for kindergarten, and influence policy development.

In addition to publishing the data on the State Report Card, OSPI has created a parent-friendly document entitled "Characteristics of Entering Kindergartners" to help communities understand what types of strengths are being assessed. This handout is available on the WaKIDS website.

Similarly, thanks to the efforts of 38 ECEAP providers and 25 Head Start programs, Washington is poised to look at cross-sector data, as well. Early Learning Coalitions and Education Service Districts (ESDs), under the leadership of Thrive by Five, have begun to execute their plans for using data about children's strengths as a way to start conversations about strategies for increasing students' preparedness for kindergarten, and for individualizing the ways teachers help students learn. The intricacies of data-sharing across site licenses and sectors are a work in progress, but the work is moving.

Thanks to the efforts of over 1,000 kindergarten teachers, Washington has, for the first time, a statewide look at the characteristics of entering kindergartners across schools and districts. In this first year, most districts are still learning how to make sense of this new source of information, but some are ahead of the curve. In the next section of this article, two school districts share their approaches to using the data.

The 2012 data offer a baseline to observe trends and study impacts longitudinally, as the system continues to expand to serve all state-funded, full-day kindergarten students.
In the meantime, we are working on the following:
• Improving implementation. Extensive feedback from teachers and principals has helped to identify ways to streamline and adapt the assessment process. Improvements were made in Fall 2012; more improvements will be made in time for Fall 2013 implementation.
• Seeking policy solutions. As of this writing, the 2013 Legislature is considering bills that would allow up to five days for the Family Connection component of WaKIDS.
• Making explicit the relationship of WaKIDS to other state initiatives, such as the Common Core State Standards (and all other state standards) and the Teacher/Principal Evaluation Project (TPEP).
• Improving reliability. The University of Washington is conducting a reliability and concurrent validity study; results will be available this spring. WaKIDS training is being restructured to include the opportunity for teachers to complete their inter-rater reliability (IRR) certification, which was voluntary in 2012, when 174 teachers (approximately 20%) received their IRR certificate.
Summary
The development of WaKIDS provides a common
language for K-12 educators, early learning providers,
families and other community members to discuss
children’s development and learning strengths, and to
identify strategies that will help all children be
prepared to succeed in school. Collecting the data is
only the first action step. Using the data to inform
practice is the next. The following Edmonds and
Tacoma School District profiles describe what some of
those next steps look like at the local levels.
References:
Carter, P. L., & Welner, K. G. (Eds.). (2013). Closing the Opportunity Gap: What America Must Do to Give Every Child An Even Chance. New York: Oxford University Press.
Joseph, G. E., Cevasco, M., Nolen, E., & Stull, S. (June 2011). WaKIDS Pilot—Second Report. Childcare Quality and Early Learning Center for Research and Training, University of Washington, Seattle, WA. Retrieved from http://www.k12.wa.us/WaKIDS
Pavelchek, D. (November 2005). Student Readiness for Kindergarten. Office of Superintendent of Public Instruction, Olympia, WA. Retrieved from http://www.k12.wa.us/WaKIDS/Resources
Washington Learns. (November 2006). State of Washington, Office of the Governor. Retrieved from http://www.washingtonlearns.wa.gov
Author:
Kathe Taylor, Ph.D., is Director of Early Learning
Assessment at OSPI. Contact Kathe concerning this article
at [email protected], or at Office of Superintendent
of Public Instruction, Old Capitol Building, PO Box 47200,
Olympia, WA 98504-7200. Phone: 360-725-6153
Analyzing and Using WaKIDS Assessment Data: One Approach from the Edmonds School District
by Nancy Katims and Lara Drew
"I need to reflect on where my kids are presently and what I need to do to move them forward." This comment from a kindergarten teacher in the Edmonds School District succinctly sums up one of the major goals of the early learning work underway in our district. With support from an Early Learning grant from the Bill and Melinda Gates Foundation, the Edmonds School District has been developing processes and structures to help our kindergarten teachers be able to achieve this goal.

Background
Our work began in earnest in the 2010-11 school year with a focus on building meaningful connections between our Prekindergarten (PreK) community and our K-3 teachers. We constructed a professional development sequence of activities focused on the development of writing in the early years, based on the work of David Matteson (Matteson & Freeman, 2005). Teachers from the PreK-3 spectrum met together to develop a common understanding of the developmental learning sequence across this age range and to share strategies for helping to move student learning forward. Grade level/school teams (including PreK) formed Professional Learning Communities (PLCs) to deepen the understandings that were introduced in large group meetings. School district staff forged strategic connections with our PreK providers in a variety of ways, including groups of teachers and administrators visiting classrooms across the PreK-3 continuum, observing lessons, and debriefing each observation.

In 2011-12, six of our highest needs schools (out of 22 elementary schools in the district) officially adopted the state's WaKIDS program. Because of our earlier development of PreK connections in our community, we had an established basis for the Early Learning Collaboration component of WaKIDS. Additionally, we had previously used a Jump Start model of kindergarteners spending 1-2 weeks in August at their new elementary school, so we were able to fairly easily implement the Family Connection component of WaKIDS. It was the "Whole Child Assessment" component of WaKIDS that turned out to have the most bumps in the road.

Similar to Tacoma's first year experience, we listened carefully to our teachers to determine what issues they were facing with the assessment and how we could help provide support to them. Some of the issues dealt with non-educational aspects such as data entry. Other aspects were more troubling, such as teachers finding it difficult to collect all the evidence required in the Teaching Strategies GOLD (TSG) Assessment while providing the level of instruction to which they were accustomed. The concept of an observational "whole child" assessment was relatively new for our teachers. We were fortunate to have funds from our Gates grant to provide paid time for our teachers for professional development, to meet to discuss these issues, and to work with district staff to try to find solutions. We honestly spent most of our time working on implementation issues rather than on the use of the assessment data to help meet students' needs.

The 2012-13 school year marked some important additions to our model:
• We instituted a delayed school start for kindergarteners in September, which enhanced our implementation of the Family Connection component of WaKIDS.
• A seventh high-need school adopted WaKIDS, bringing our participants to a total of 22 teachers and 519 students.
• But perhaps most important, we addressed the need to help our teachers begin to make use of their TSG data to inform their instruction. In November, we held a half-day data analysis workshop with all of our WaKIDS teachers and principals with the goal of identifying trends and patterns in our fall kindergarten data and considering implications for our instruction. A secondary goal was to share our findings with PreK teachers to guide their instruction in ways that would promote future kindergarten readiness.

Analyzing the Data
We structured our workshop into three levels of data analysis: district, school, and classroom. We used only reports from the TSG website, to show our participants the reports readily available to them. Several of our teachers admitted that they had not looked at any of these reports after having given the assessment the previous year.
District Analyses
We started with a one-page district summary of
performance by developmental area from the district
Snapshot Report, as shown in Figure 1.
Figure 1: Edmonds District Snapshot by Developmental Area Summary
At first blush, the district results were rather
disheartening. However, an important understanding
is that the top performance level of the TSG Assessment
is “meeting” kindergarten readiness. TSG does not
measure “exceeding” for kindergarten-age students.
(The one student appearing as “exceeding” is below
kindergarten age.) Another important understanding is
that a student who is just one point below kindergarten
readiness level will appear as "below." In addition, the
students in our WaKIDS classrooms are those who attend
schools with the highest percentages of students who
qualify for free/reduced-price meals and who are ELL.
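A minimal sketch of the cutoff logic just described, assuming hypothetical scores and readiness floors rather than the actual TSG scoring bands: any student even one point below the kindergarten-readiness level counts as "below," and there is no "exceeding" level for kindergarten-age students.

```python
from collections import Counter

# Hypothetical raw scores and kindergarten-readiness floors per area
# (illustrative values only; not the real TSG bands).
readiness_floor = {"Social Emotional": 18, "Physical": 14, "Literacy": 12}
scores = {
    "Social Emotional": [19, 17, 21, 18],
    "Physical": [15, 13, 14, 16],
    "Literacy": [11, 12, 14, 9],
}

for area, values in scores.items():
    floor = readiness_floor[area]
    # A score even one point under the floor is counted as "below".
    levels = Counter("meeting" if v >= floor else "below" for v in values)
    pct = 100 * levels["meeting"] / len(values)
    print(f"{area}: {pct:.0f}% meeting ({levels['below']} students below)")
```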
With these understandings in mind, participants
examined the data to answer the following questions:
1. Based on the percentage of students meeting expectations for each area, which area(s) appear to be the strongest for our incoming K students? Why do you think so?
2. Based on these percentages, which area(s) appear to be of greater need for our incoming K students? Why do you think so?

The participants came to the following consensus:
District Strengths (by Area)
• Literacy
• Physical (Fine Motor)
District Areas of Greatest Need (by Area)
• Language
• Physical (Gross Motor)
• Cognitive
To delve more deeply into student performance by objective, we then used TSG assessment reports available in the
same format for each level, so that once the participants learned how to read the format of the first (district) report,
they could generalize easily to the school and then classroom report, making comparisons when appropriate. The
reports that we used were:
• For district and school data – Snapshot by Dimension (within the Snapshot Report)
• For classroom data – Class Profile Report
For each of the six TSG areas, the Snapshot by Dimension report shows the number and percentage of students who
scored at each performance level on each objective/dimension within that area. A purple band denotes the level
expected for kindergarten. An example of this report appears in Figure 2.
Figure 2: Example from the Edmonds District Snapshot by Dimension Report
The participants then examined the district Snapshot by Dimension report and answered questions similar to the first set
of questions, but this time identifying objectives/dimensions of strength and need for the district, as shown below.
District Strengths (by Objective/Dimension)
• Social Emotional
  o Interacts with peers
  o Makes friends
• Physical (gross/fine motor)
  o Demonstrates balancing skills
  o Uses hands/fingers
• Language
  o Speaks clearly
  o Uses expanding expressive vocabulary
• Cognitive
  o Shows flexibility in thinking
  o Shows curiosity and motivation
• Mathematics
  o Demonstrates knowledge of patterns
  o Connects numerals with their quantities (1-5)
• Literacy
  o Writes name
  o Interacts during read-aloud/book conversations
  o Identifies/names letters
  o Uses print concepts
  o Notices and discriminates alliteration

District Areas of Need (by Objective/Dimension)
• Social Emotional
  o Takes care of own needs
  o Follows limits/expectations
• Physical (gross/fine motor)
  o Throws/catches
  o Uses writing/drawing tools
• Language
  o Engages in conversation
  o Uses conventional grammar
  o Uses social rules of language
• Cognitive
  o Uses classification skills
• Mathematics
  o Counts - rote (1-20)
• Literacy
  o Writes to convey meaning
  o Uses emergent reading skills
  o Notices and discriminates smaller and smaller units of sound
  o Uses letter-sound knowledge
  o Notices and discriminates rhyme
After analyzing the district results in terms of
objectives by strength and need, participants
answered the following: For the objectives/dimensions
of need identified, what strategies have you found to
be helpful in moving children up the developmental
continuum? Participants used the WaKIDS Progressions
of Development & Learning as a reference for the
continuum.
School Analyses
The Snapshot Report for each school is in the same
format as that of the district, showing the number and
percentage of students at that school scoring in each
performance level for each objective/dimension.
Each school team examined their school’s Snapshot
Report, guided by the following questions:
1. When comparing your school results to the
district results, what is similar? What is
different?
2. What objectives/dimensions are areas of
strength for the K students at your school?
Why do you think so?
3. What objectives/dimensions are areas of
concern for the K students at your school? Why
do you think so?
Each school team also addressed implications of their
data for their school, guided by the following
questions:
1. What structures are in place at our school to
help us address the areas of concern?
2. Who are the personnel at our school whom we
should involve in our efforts to address the
areas of concern (e.g., PE teachers?
Psychologist? Learning support teacher? ELL
teacher?)
3. Which strategies suggested in the earlier
activity do we already use at our school?
4. Which strategies suggested in the earlier
activity have we not tried at our school?
5. What other strategies might work for us to
address our areas of concern?
Class Analyses
The Class Profile report is in the same format as the
Snapshot by Dimension, but rather than showing
number and percentage of students in each cell, it
lists the names of students in that class who scored at
each level for each objective/dimension.
As illustrated in Figure 3 below, the information in the
Class Profile report provides visible groupings of
students with similar levels of kindergarten readiness,
and therefore, similar needs.
Figure 3: Example from a Class Profile Report
Each teacher examined their own Class Profile Report,
guided by the following questions:
1. What patterns do you see in terms of students
scoring in the lower performance levels (e.g.,
does it seem to be the same students in the lower
performance levels across the
objectives/dimensions or students showing
different profiles of needs)?
2. To what extent are these students already being
served by Title I/LAP, IEP, or ELL services?
3. To what extent are these students the ones
previously identified as part of your school’s
kindergarten “watch list”?
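The grouping that question 1 asks teachers to scan for can be approximated from a plain roster. Here is a sketch with hypothetical student names, levels, and a single dimension; the TSG Class Profile report generates this view automatically:

```python
from collections import defaultdict

# Hypothetical ratings for one objective/dimension (1-2 = below,
# 3 = approaching, 4 = kindergarten-ready). Names are invented.
class_ratings = {"Ana": 4, "Ben": 2, "Chris": 3, "Dee": 2, "Eli": 4, "Fay": 1}

by_level = defaultdict(list)
for student, level in class_ratings.items():
    by_level[level].append(student)

# Lowest level first, mirroring one row of a Class Profile report.
for level in sorted(by_level):
    print(f"Level {level}: {', '.join(by_level[level])}")
```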
Using the Data
As a result of the data analysis workshop, school and
classroom follow-up activities were intended to
address the needs identified in the current
kindergarten classes at each school. The district,
however, has undertaken two other significant uses of
the data.
The first significant use of the data was to share the
results with our PreK community. Soon after the
district’s WaKIDS data analysis workshop, district staff
hosted an evening event for prekindergarten teachers
and providers in the community as well as for our
WaKIDS teachers. The event provided a structured set
of interactions in which district staff shared the
district-level results from the data analysis workshop,
and participants worked in cross-grade groups to
address the following topics:
• What can we celebrate?
• Where do we need to focus our instruction?
• What does this mean for kindergarten readiness?
• Where can we support each other?
• How can we increase the PreK-3 alignment?
The second significant use of the data is the
expectation that each of our WaKIDS teachers has
identified 1-5 students in their class who did not meet
benchmark on multiple TSG dimensions and will
monitor the progress of these target students through
the year. The teachers will use the TSG assessment to
re-assess each of their target students on the
student’s deficit dimensions in February/March and
then again in May/June on those dimensions that are
still below benchmark.
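That selection rule lends itself to a simple filter. A hedged sketch with invented data, reading "multiple" as two or more dimensions below benchmark (an assumption; the article does not pin down a number):

```python
# Invented fall results: True means the student met benchmark on that
# TSG dimension; dimension names are illustrative.
fall_results = {
    "Ana":   {"writes_name": True,  "counts_rote": False, "rhyme": False},
    "Ben":   {"writes_name": True,  "counts_rote": True,  "rhyme": True},
    "Chris": {"writes_name": False, "counts_rote": False, "rhyme": False},
}

MIN_DEFICITS = 2  # assumption: "multiple" means at least two dimensions

for student, dims in fall_results.items():
    deficits = [name for name, met in dims.items() if not met]
    if len(deficits) >= MIN_DEFICITS:
        print(f"{student}: re-assess in Feb/Mar on {', '.join(deficits)}")
```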
Next Steps
The district is willing to support all schools that want
to implement WaKIDS in the coming year. District staff
as well as staff from the WaKIDS schools have met
with K teachers and principals from across the district
to share the benefits and challenges of implementing
WaKIDS. In addition to the current seven schools, all
of whom want to continue to implement WaKIDS, an
additional eight schools have indicated some level of
interest in implementing it next year.
Another step the district is taking is to conduct a
survey of our current WaKIDS teachers to ascertain
ways for streamlining the assessment. For each TSG
objective/dimension, teachers are being asked to
indicate how they think the dimension is best
measured, with the following options:
• Through observation during regular instruction
• By using a district-developed task
• By using an activity that is part of our curriculum
• By using another assessment regularly given to kindergarteners
The teachers are also being asked for each dimension
whether it would be best to assess it during Jump
Start or after school has begun.
Conclusion
The “whole child” assessment component of WaKIDS
has definitely been challenging to implement
effectively.
However, involving teachers and
principals in a comprehensive data analysis activity
geared toward better understanding the needs of each
student has proved to be a key element in assuring
that the assessment results will be used to inform
instruction. We hope that continuing to help teachers
with analyzing and using their WaKIDS data will
address teachers' universal observation that "I
need to reflect on where my kids are presently and
what I need to do to move them forward.”
References:
Matteson, D. M., & Freeman, D. K. (2005). Assessing
and teaching beginning writers: Every picture tells
a story. Katonah, NY: Richard C. Owen Publishers,
Inc.
Authors:
Nancy Katims, Ph.D. is the Director of Assessment,
Research, and Evaluation for the Edmonds School District,
and a Past President of WERA. Lara Drew is the Executive
Director of Student Learning for the Edmonds School
District. For questions about this article, contact Nancy at
[email protected] or at the Edmonds School
District, 20420 68th Avenue West, Lynnwood, WA 98036
(425) 431-7302.
WaKIDS Implementation in Tacoma
by Minh-Anh Hodge and Pat Cummings
In Tacoma, our journey with WaKIDS began in the
2010/2011 school year when 17 classroom teachers
bravely volunteered to be part of our first pilot. That
year, we had a sample of 400 students that were
assessed using WaKIDS. As a follow-up, in spring of
2011, we conducted a structured focus group with our
"pilot" teachers and learned valuable lessons on how
to improve the implementation for the following year.
Clear recommendations about teacher training,
improving the efficiency of testing strategies,
supporting better organization, and refining the
adequacy of training for the assessment were all
lessons learned from the pilot.
In August of 2011 many of our pilot teachers became
our trainers as Tacoma geared up for a 100%
implementation of WaKIDS. We provided training for
over 100 kindergarten teachers in preparation for
testing during September and October of 2012. We
also created a SharePoint website, accessible by our
kindergarten teachers, that contained all of the
training materials (videos and PowerPoints) and
contact information in support of the WaKIDS
initiative. In the fall of 2012 we tested 2,565 students
with a 99% participation rate by our teachers… quite a
remarkable effort on the part of our kindergarten
teachers.
Later that fall, after the testing, we convened all of
the kindergarten teachers for a follow-up meeting on
how the testing went and how we could improve for
the next year. Suggestions included:
• trying to improve the usability and performance of the GOLD Teaching Strategies website,
• better coordination with the early childhood providers,
• improved sharing between teachers on various aspects of the assessment process, and
• district support with time and training for future implementations.
The teachers voiced strong support for the parent
connection aspect of WaKIDS, which allowed our
teachers the opportunity to get information about
children prior to the first day of kindergarten.
We also did a lot of number crunching at the district
level. This included correlating WaKIDS with our
standards-based report card, and a detailed analysis
of results by school, gender, poverty, and ethnicity.
For information about our data analyses please refer
to http://tinyurl.com/TPSWaKIDS.
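As a sketch of what "correlating WaKIDS with our standards-based report card" could look like computationally (the paired scores below are invented; Tacoma's actual analyses are at the link above), a simple correlation over matched student scores suffices. Note that statistics.correlation requires Python 3.10 or later:

```python
from statistics import correlation  # available in Python 3.10+

# Invented paired scores for the same students: a WaKIDS Literacy rating
# and the first standards-based report card reading mark.
wakids_literacy = [3, 2, 4, 1, 3, 4, 2, 3]
report_card_reading = [3, 2, 4, 2, 3, 4, 1, 3]

r = correlation(wakids_literacy, report_card_reading)
print(f"Pearson r = {r:.2f}")  # values near 1.0 suggest the measures agree
```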
Among the surprises found in our data: our Social
and Physical domains were equal to the state;
however, Tacoma showed 8 to 10 percentage point
increases over the state (for characteristics of entering
kindergartners) in the domains of Language,
Cognitive, Literacy, and Math. It's important to note
that WaKIDS was not administered to all Washington
kindergartners but rather to a subset of students more
typically receiving free and reduced lunch.
FIGURE 1: Percent of Students Who Demonstrate Characteristics of Entering Kindergartners By Domain - Tacoma and Washington State
In general, three out of four students demonstrated the characteristics of entering kindergartners in the domains of
Social, Physical, and Language. Four out of five students met these characteristics in the Cognitive and Literacy
areas, but only two out of three students met the characteristics in Math. Tacoma is a very diverse district of 37
elementary schools, with poverty rates ranging from the low teens to the high 90s. When we looked at WaKIDS results for
each school, it was apparent that our low poverty schools showed 95% of all students demonstrating the
characteristics of entering kindergarteners, while in our high poverty schools this number was less than half. This
appears to support the face validity of the instrument and the scoring reliability of our teachers.
Looking at the effects of poverty, the results reflect a wide gap in readiness (averaging a 17 percentage point difference).
The poverty group showed the greatest gaps in the areas of Math (25%) and Language (22%).
FIGURE 2: Percent of Students Who Demonstrate Characteristics of Entering Kindergartners By Domain And Poverty
Analyses also reflected a gender gap, with females showing the advantage. The average difference was
around 7 percentage points. For this demographic group the Social area showed the biggest gap (11%).
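The gap figures quoted above reduce to domain-by-domain differences in the percent of students demonstrating the characteristics of entering kindergartners. A small sketch with illustrative numbers (not Tacoma's published values):

```python
# Percent demonstrating entering-kindergartner characteristics, by domain
# and subgroup (illustrative numbers only).
non_poverty = {"Social": 80, "Physical": 82, "Language": 78,
               "Cognitive": 85, "Literacy": 86, "Math": 75}
poverty = {"Social": 68, "Physical": 70, "Language": 56,
           "Cognitive": 66, "Literacy": 70, "Math": 50}

gaps = {domain: non_poverty[domain] - poverty[domain] for domain in non_poverty}
avg_gap = sum(gaps.values()) / len(gaps)
widest = max(gaps, key=gaps.get)
print(f"Average gap: {avg_gap:.0f} points; widest in {widest} ({gaps[widest]} points)")
```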
FIGURE 3: Percent of Students Who Demonstrate Characteristics of Entering Kindergartners By Domain And Gender
Tacoma plans to use this data in a number of ways.
First, we hope to improve the calibration and accuracy
of our teachers using the instrument to ensure that it
is a valid snapshot of entering kindergarten students.
This will prove invaluable as we support and enhance
our early childhood programs across the district.
Second, we want to continue to correlate this
screening instrument with our standards-based report
card. It's important to note that WaKIDS serves a
different purpose than the report card; however, it
should work in concert with other evaluation systems
across the district. Third, we hope to improve the
efficiency of administering WaKIDS. This is a valuable
but time-consuming initiative, which places a strain on
our kindergarten staff. We hope to continue to
support these educators by a variety of methods to
improve the implementation of the screening
instrument.
One of the Tacoma Public Schools strategic plan goals
relates to early learning: "We will focus on early
assessment and intervention at the pre-K-12 grade
levels to ensure early academic success." One of the
major lessons of the WaKIDS initiative is the diversity
of the basic foundational skills of entering
kindergartners. By a variety of measures this gap is
already well-established in kindergarten and first
grade. If we're ever to truly close the gap, it will
require concerted effort and resources prior to the
child entering kindergarten. Our hope is that the
WaKIDS screening will prove to be an invaluable tool in
monitoring the impact of our early childhood
initiatives and continue to support and highlight the
prominence of early learning as it relates to later
success in school.
Authors:
Minh-Anh Hodge, Ed.D. is the Director of ELL and
Early Learning in the Tacoma Public Schools.
Pat Cummings is Director of Research and Evaluation
in the Tacoma Public Schools.
For more information about this article, readers can
contact Minh-Anh at [email protected], or
contact Pat at [email protected].
Mitigating Passivity in the Primary Grades: Effects of Teacher Behaviors
by Dorothy E. Munson
The purpose of this research was to gain insights into
teacher behaviors that help mitigate student passivity
in the primary grades. Data were gathered as field
notes and reflections during observations in K-3
classrooms during the 2009-10 and 2010-11 academic
years.
Using an ethnographic approach to the
fieldwork during this study helped with understanding
and describing the classrooms, the teachers and
students, and their interactions. Analyzing the data
revealed that certain teacher behaviors did affect
student passivity. Teachers who had positive, caring,
and encouraging relationships with their students,
responded favorably and equitably to both female and
male students, fostered student autonomy and
self-regulation, clearly communicated high expectations
for students’ social and academic successes, and
created and maintained welcoming classroom
environments had few instances of student passivity
in their classrooms. The present study is one
component of a larger project focused on
understanding the development, maintenance, and
mitigation of passivity in students from kindergarten
through graduate school.
What happens to change some students who enter
kindergarten as curious, active, assertive, and
engaged learners into students who are passive,
tentative, or hesitant in their behaviors as they move
up through the grades?
Teachers who exhibit
behaviors that promote students seeing themselves as
capable, liked by their teachers, and supported in
their efforts to develop autonomy and self-regulation
are less likely to have students who become passive
and disengage from school (Birch & Ladd, 1998;
Blatchford, Baines, Rubie-Davies, Bassett, & Chowne,
2006; Slavin, Hurley, & Chamberlain, 2003). Student
passivity, a reluctance or hesitance to engage in
classroom interactions and activities, is often born out
of social and academic experiences that have resulted
in criticism and perceived failures.
Negative
experiences that lead students to see themselves as
less able than their peers, less liked by their teachers,
or less supported in working toward autonomy and
self-regulation can lead to student discouragement
and disengagement (Bandura, 1994; Bronfenbrenner,
1993; Good, Slavings, Harel, & Emerson, 1987). This
study examines how teacher behaviors in the primary
grades may affect student passivity.
A Model of Student Passivity and Related Variables
Good (1981) proposed one of the earliest conceptual
models of student passivity. His model of passivity
addressed students' willingness or unwillingness to
engage in “many interactions in the classroom [that]
are initiated by students (they raise hands, approach
the teacher, ask questions, make comments, and so
on)” (p. 415). Good's model emphasized teachers’
differing behaviors toward students and students’
responses to discouraging teacher behaviors:
It seems that a good strategy for students who
face such conditions would be not to volunteer or
not to respond when called on. Students would
appear to be discouraged from taking risks and
chances under such an instructional system. To
the extent that students are motivated to reduce
risks…it seems that students would become more
passive (Good, 1981, p. 418).
In this context, student passivity is not the same as
student apathy (which implies a lack of caring or
interest). Passivity is a hesitancy to actively engage in
the classroom in order to avoid criticism and
unsuccessful participation, which, according to Good,
is consistent with students wanting to “maintain a
sense of self-respect” (1981, p. 419). Children who
have school-based experiences that train them to be
passive are less likely to develop a positive sense of
self-efficacy as a student, which is key to seeing
oneself as capable, competent, and able to persist in
the face of challenges (Bandura, 1994).
School is one of the most important microsystems in a
child’s life with students and teachers exerting
reciprocal influences on each other (Bronfenbrenner,
1993; Good, Slavings, Harel, & Emerson, 1987).
Ideally, these reciprocal influences will support
students’ and teachers’ successes in learning and
teaching while building positive relationships.
According to Bandura, “During the crucial formative
period of children's lives, the school functions as the
primary setting for the cultivation and social
validation of cognitive competencies” (1994, p. 77).
Additional variables related to teacher behaviors and
beliefs seemed to influence student passivity. These
included: the effect of teacher-child relationships in
the early years of education on the degree of
"school-liking, classroom participation, and academic
competence" in later years (Birch & Ladd, 1998); the
different ways teachers respond to male and female
students in the classroom (Leaper & Smith, 2004);
teacher expectations for academic achievement
(Alvidrez & Weinstein, 1999; Good, 1981); varied
instructional strategies, classroom environments, and
practices (Aulls, 2002; Blatchford, Baines,
Rubie-Davies, Bassett, & Chowne, 2006); motivating style
and support or inhibition of learner autonomy (Reeve,
Bolt, & Cai, 1999); and fostered dependency on the
teacher (Meece, Blumenfeld, & Hoyle, 1988). These
variables, the model of passivity proposed by Good
(1981), and the concepts articulated by Bandura
(1994) regarding self-efficacy, provided a context or
“nest” (Wolcott, 2009) for this study of the problem of
student passivity in the primary grades.
Method
This is an ethnographic study based on two
consecutive years (2009-10 and 2010-11) of classroom
observations by the author of this article. Data were
recorded as field notes and reflections about what was
observed; these were recorded during the actual
observations. The goal of this research was to
understand and describe what was observed in the
context of the salient concepts and variables in the
research literature (Wolcott, 2009).
A total of 82 observations were conducted in K-12
classrooms, with 22 of these occurring in K-3
classrooms, which were the focus for the current
study. Approximately 172 girls and 169 boys, along
with two male teachers and 11 female teachers, were
observed in primary grade classrooms. Several of the
teachers were observed more than once.
Mitigating Passivity: Teacher Behaviors
Relationships with students
During most (15/22; 68%) classroom observations, the
positive, caring, and encouraging relationships
teachers had with their students were obvious.
Teachers in these classrooms smiled, laughed with
their students, listened carefully to students, spent
much of their time in close proximity to their
students, were affectionate, and encouraged their
students to join in social and academic activities. The
students in these classrooms seemed to reciprocate
the teachers’ positive feelings and were rarely
passive. Examples of these relationships included:
In a second grade classroom, the teacher moved
throughout the room and “took a knee” next to
each pod of desks. The teacher said “How are you
guys doing?”, “Good job buddy.”, “Keep going
there…”, and “Sweet! Good for you!”.
This
teacher also ruffled one boy’s hair, patted a girl’s
back, and smiled when a little boy did a happy
dance in his chair when his team solved a
crossword puzzle.
During a reading lesson in a second and third
grades combination class, the children who
wanted to read aloud were encouraged to do so by
the teacher. The teacher sat close to the students
and read right along with them. The teacher and
the students seemed relaxed and focused on the
story. If a student did not want to read aloud,
they simply said “No” and the class moved right
on. The teacher smiled directly at each student
whether they chose to read aloud or not. The
students also approached the teacher and would
gently touch her arm or just stand close to her
during the reading lesson. Again, the teacher
would smile.
In stark contrast to these positive interactions is an
illustrative example of the way a teacher related to
the students in a different classroom:
A first grade student, after unsuccessfully trying
to get the teacher’s attention by raising her hand
at her desk and waiting for the teacher to
respond, slowly walked toward the teacher. The
teacher looked at the student and said loudly: “If
you need me, raise your hand! Don’t follow me
around. Go sit down!” The student returned to
her seat and put her head down on her desk.
During several classroom observations (4/22; 18%) the
teachers’ relationships with their students were at
times positive, caring, and encouraging, but lacked
consistency. In the remaining classroom observations
(3/22; 14%) the teachers’ relationships with their
students were primarily negative, uncaring, and
discouraging. In the classrooms where the teachers’
relationships with the students were negative,
students were much more passive, withdrawn, and
hesitant to engage in academic or social activities.
Responses to female and male students
In this study, teachers who responded to both female
and male students in favorable and equitable ways
had fewer passive students. In 17/22 (77%) of the
observations, teachers responded to boys and girls in
much the same ways. Teachers in 3/22 (14%) of the
classroom observations responded more frequently
and favorably to boys and 2/22 (9%) of the teachers
gave preferential treatment to girls. Teachers who
were equitable in calling on both boys and girls to
answer or ask questions mitigated student passivity.
Classrooms in which the students’ gender was an
important factor in whether or not they would be
called on and listened to had a higher rate of student
passivity:
During a third grade lesson on the character trait
of the month the teacher paused while reading a
story about honesty to ask students questions. Of
the 13 female students and 11 male students in
the classroom, nine boys and 10 girls quickly
raised their hands and appeared eager to answer
the teacher’s first question. The teacher called
on one boy and then asked three more questions
and called only on boys to answer. During this
question and answer time, six girls eventually put
their hands down and did not raise them again.
The teacher returned to reading the story and at
the end of the story asked, “What’s the moral of
the story?" At this point the same nine boys raised
their hands and only three girls did. The teacher
called only on boys with their hands raised. One
girl kept her hand up until the end of the lesson
even though the teacher never called on her and
when the lesson ended this girl frowned and shook
her head.
Not all instances of gender inequity in calling on
students involved teachers calling on boys more than
girls. In some classrooms, girls were called on more
than boys, even when the number of boys and girls in
the classroom was almost the same. Teachers who
interacted with their students in ways that were free
of gender stereotypes and biases also had fewer
instances of student passivity. Examples of freedom
from gender stereotypes and biases were:
During center time in a kindergarten classroom,
the students were playing house. One girl said to
a boy, “Honey, I’m going out, you stay home and
watch the baby” and the teacher smiled at the
students playing house. At another center one boy
said to three other boys, “C’mon, it’s time to go
fishing brothers”. When a female student said “I
want to go fishing”, the boy leading the fishing
expedition looked at the teacher and the teacher
smiled and nodded. He then said to the girl,
“Let’s all go fishing together! C’mon with us.”
In P.E., library, and story time in multiple
classrooms boys openly displayed appropriate
physical affection for other boys. Some boys
hugged each other, held hands, patted each
other’s backs, and put their arms around each
other. Some girls displayed appropriate physical
affection for each other, but more often they
would sit or stand very close together, face each
other, and talk. Teachers who accepted and
affirmed appropriate displays of affection by boys
and girls equally had fewer passive students.
During small group instruction with 3rd grade boys,
students used highlighters to correct their work on
math stories. One boy asked for a pink highlighter
and another boy said, “PINK?” while he wrinkled
his nose and shook his head. The teacher said,
“YES, pink! What’s wrong with that? Not a single
thing!”. A third boy in the group said, “Well, I
like purple…” and picked up a purple marker.
Fostering student autonomy and self-regulation
Teachers who taught their students behaviors and
procedures designed to enhance student autonomy
and self-regulation mitigated student passivity. The
following happened in a second and third grades
combination class:
The author of this article was sitting at a
kidney-shaped table with several students during quiet
reading time. After 25 minutes, two girls sitting
on the floor reading to each other near the table
began talking rather loudly. One girl from the
table walked over to the girls on the floor and
whispered, "Hey you guys…can you be a little
more quieter please?" The girls sitting on the floor
said, "Oh sorry", and quieted themselves.
However, examples of student passivity in the
following classroom were common:
In a second grade classroom, all the children were
required to be silent and put their heads down on
their desks before the teacher would call on them
one-by-one to line up inside the classroom before
going to lunch or recess. This teacher could be
heard through the door yelling at the students:
“Silence! No one will go to lunch (or recess) until
everyone is silent!”, “I won’t call anyone’s name
until everyone’s head is down on his or her desk
and everyone is silent!”
Most teachers, 15/22 (68%), encouraged and supported
student autonomy and self-regulation whenever
possible. These teachers also helped their students
see themselves and other students as capable of
helping each other and solving problems.
High expectations for students’ successes
Teachers who expressed deep beliefs in the abilities of
their students to succeed were extremely successful in
mitigating passivity. Sixty-four percent of teachers
(14/22) routinely articulated high expectations for
their students.
One teacher provided multiple
examples of this:
“You second graders are so smart!” “Let’s listen
to each other’s ideas about multiplication…come
on everyone! We want to hear from everyone.”
(Math)
"OK, we love this word! It's nice and hard!"
(Spelling)
“I’m looking for beautiful work. Everyone can do
beautiful work. How can that happen?” To which
the students responded in chorus, “Everyone do
their best!”, and the teacher replied, “You are in
charge of your pencil! You can make it help you
do your best!” (Writing)
“I’m watching for students who can be really good
demonstrators of our science investigations so
please practice, and I know you will get really
good at these investigations.” (Science)
The students in this teacher’s classroom were also
expected to treat each other well and behave in
socially appropriate ways. Students in this teacher’s
class and in similarly functioning classrooms were
almost never passive.
However, there were also times when teachers
contributed to student passivity. Instances during
which teachers’ behaviors and beliefs appeared to
elicit student passivity almost always occurred during
whole class instructional and transition times during
the regular school day, rather than during informal
interactions with individual students or small groups of
students before or after school. It seemed that, when
faced with some of the unique challenges associated
with teaching classes of students in the primary
grades, several teachers unfortunately resorted to
behaviors that contributed to student passivity in their
classrooms. Based on insights gained from this study,
this author is hopeful that supports and interventions
can be put in place to help teachers who contributed
to student passivity learn ways to effectively mitigate
passivity in the young children they teach.
Welcoming classroom environments
All but one (21/22; 95%) of the classroom observations
occurred in environments that were clearly created in
an attempt to be welcoming. It was clear that the
teachers in this study worked to create classroom
environments that appeared colorful, inviting, and
well organized. However, not all of these teachers
were able to behave in ways or hold beliefs about
their students that mitigated student passivity.
Limitations and Implications

One limitation of this study was that a single researcher will see and hear some important events during classroom observations while failing to see and hear others.
An additional limitation was that the
interpretations and understandings of the data, while
contextualized in research that preceded this study,
were undoubtedly influenced by the researcher’s
interests, experiences, and potential biases.
Discussion
The findings from this study, coupled with the
relevant research literature, provide support for the
five teacher-related factors highlighted here being key
influences on student passivity in the primary grades.
The teacher-related factors which emerged as notable were:

• teachers' relationships with students;
• teachers' responses to female and male students;
• the ways in which teachers did or did not foster student autonomy and self-regulation;
• the degree to which teachers expressed high expectations for students' successes; and
• teachers' creation of welcoming classroom environments.
Teachers who successfully mitigated student passivity
exhibited multiple behaviors and beliefs identified as
helpful to their students’ views of themselves as
likeable, capable, and autonomous in developmentally
appropriate ways.
During the majority of the classroom observations, teachers behaved in ways and
expressed beliefs that were supportive of students
staying engaged and involved in classroom activities.
An important implication of this research is that
student passivity is a present and concerning
phenomenon worthy of additional study. The focus of
the next phase of this project will be the study of the
data gathered in grades 4-8.
References:
Alvidrez, J., & Weinstein, R. (1999, December). Early
teacher perceptions and later student academic
achievement. Journal of Educational Psychology,
91(4), 731-746.
Aulls, M. (2002, September). The contributions of co-occurring forms of classroom discourse and
academic activities to curriculum events and
instruction. Journal of Educational Psychology,
94(3), 520-538.
Bandura, A. (1994). Self-efficacy. In V. S.
Ramachaudran (Ed.), Encyclopedia of human
behavior (Vol. 4, pp. 71-81). New York: Academic Press.
Birch, S., & Ladd, G. (1998, September). Children's
interpersonal behaviors and the teacher-child
relationship. Developmental Psychology, 34(5),
934-946.
Blatchford, P., Baines, E., Rubie-Davies, C., Bassett,
P., & Chowne, A. (2006, November). The effect of
a new approach to group work on pupil-pupil and
teacher-pupil interactions. Journal of Educational
Psychology, 98(4), 750-765.
Bronfenbrenner, U. (1993). The ecology of cognitive
development: Research models and fugitive
findings. In R. H. Wozniak & K.W. Fisher (Eds.),
Development in context: Acting and thinking in
specific environments. Hillsdale, NJ: Erlbaum.
Ellis, A.K. (2005). Research on educational
innovations (4th ed.). Larchmont, NY: Eye on
Education.
Good, T. L. (1981). Teacher expectations and student
perceptions: A decade of research. Educational
Leadership, 38(5), 415 - 422.
Good, T. L., Slavings, R. L., Harel, K. H., & Emerson,
H. (1987). Student passivity: A study of question
asking in K-12 classrooms. Sociology of Education,
60(3), 181-199.
Leaper, C., & Smith, T. (2004, November). A meta-analytic review of gender variations in children's language use: Talkativeness, affiliative speech, and assertive speech. Developmental Psychology, 40(6), 993-1027.
Meece, J., Blumenfeld, P., & Hoyle, R. (1988,
December). Students' goal orientations and
cognitive engagement in classroom activities.
Journal of Educational Psychology, 80(4), 514-523.
Reeve, J., Bolt, E., & Cai, Y. (1999, September).
Autonomy-supportive teachers: How they teach
and motivate students. Journal of Educational
Psychology, 91(3), 537-548.
Slavin, R. E, Hurley, E. A., & Chamberlain, A. (2003).
Cooperative learning and achievement: Theory and
research. In W. M. Reynolds & G. E. Miller (Eds.),
Handbook of psychology: Educational psychology
(Vol. 7, pp.177–198). New York: Wiley.
Wolcott, H. F. (2009). Writing up qualitative research (3rd ed.). Newbury Park, CA: Sage Publications.
Author:
Dorothy E. Munson, Ph.D., is an Assistant Professor of
Psychology at Eastern Washington University.
Correspondence concerning this article should be
addressed to Dorothy E. Munson, Department of
Psychology, 135 Martin Hall, Eastern Washington
University, Cheney, WA 99004.
E-mail: [email protected].
The Neglected Benefits of Testing: Implications for Classroom and Self-Directed
Learning
by Olusola O. Adesope and Dominic A. Trevisan
Many policy makers and educators view tests as
summative assessment tools for measuring students’
mastery of skills and knowledge. Increasingly, tests
are used to make high-stakes decisions such as school
accountability and associated funding, merit pay for
teachers based on student performance, student
admission into colleges and other career decisions.
Regrettably, the heavy emphasis on tests to assess
mastery or for other high stakes decisions often
obscures a very important function of tests that is
central to the goals of education: promotion of
learning. This article takes a fresh approach to the
contemporary debates surrounding testing in the
United States. Succinctly, the article reviews over
1,600 studies on the benefits of various types of
practice tests for improving student learning. The
summary concludes with important implications for
educators and policymakers.
Tests are ubiquitous; millions of students in both K-12
and college settings take tests every week.
Regrettably, many of these students and their
teachers do not know that tests can be a strategy to
facilitate learning. Although teachers and students in
other parts of the world have used tests long before
the turn of the 19th century, the testing movement in
the United States really began at the beginning of the
20th century with the development of the Stanford-Binet Intelligence Scales. Since then, tests have been
used extensively in this country for many different
purposes.
Sadly, the summative assessment culture that has
emerged in many parts of the world, especially in the
United States, has undermined one of the most potent
uses of tests; i.e., as learning tools. For example,
many policy makers and educators view tests mainly
as tools to assess mastery (Marsh, Roediger, Bjork, and
Bjork 2007). The enactment of the No Child Left
Behind Act in 2001 (Guisbond & Neill, 2004; Misco,
2008) and subsequent trends have largely focused on
extensive use of tests for accountability purposes,
thus compounding the use and misuse of tests for
high-stakes decisions. Increasingly, tests are used to
make high-stakes decisions related to school
accountability and associated funding, merit pay for
teachers based on student performance on
standardized tests, student admission into colleges
and other career decisions. Now more than ever,
testing issues such as scoring, bias, reliability and
validity attract both media debate and scholarly
attention. High-stakes decisions often create high-stakes debates, so it is not surprising that a recent Kappan article by Buck, Ritter, Jensen, and Rose (2010) reported that, in a review of three policy-oriented education journals, 90% of articles were critical of testing rather than favorable.
Meanwhile, "non-testing" learning strategies such as
restudying (rereading) or rehearsal have been used by
students, but such strategies have been found to be
less effective than the use of practice tests as a
strategy for learning (Rohrer & Pashler, 2010). In this
context, the term testing effect is used to describe
the improvement in learning when students take a
practice test on a studied material before the final
test. For the purposes of this review, practice tests
are defined as tests used as a study strategy to
prepare for a final graded examination. The testing
effect is observed when students attempt to retrieve
information from memory, either while studying or
preparing for an upcoming examination. This learning
strategy is known as retrieval practice.
The emphasis on the use of tests for summative
purposes obscures an important function of tests that
is central to the goals of education: identifying gaps in
learning and designing instruction to remediate such
gaps. Paul Black and Dylan Wiliam, in their seminal
article on “Inside the black box: Raising standards
through classroom assessment", argued that good
assessment practices need to promote formative
classroom assessment and ultimately improve learning
and long-term retention of information (Black &
Wiliam, 1998). In recent years, many educators have
argued for the benefits of formative classroom
assessment (1) to help students understand the
learning expectations set forth by their teachers, (2)
to monitor their own progress toward meeting these
expectations, and (3) for teachers to facilitate the
instructional process so that students take ownership
for their learning (Popham, 2008; Shepard, 2005;
Stiggins & DuFour, 2009). Unique to the testing effect is the idea that tests can contribute to learning rather than only being used to measure learning. The idea of using tests for this
purpose is not a predominant view of many
policymakers and educators. The remainder of this
article delineates major findings from over 1,600
testing effect studies that we reviewed, and describes
important implications for educators and policymakers.
Major Findings from Testing Effect Review
A systematic approach consistent with well-established review protocols was followed throughout
the review process. First, extensive searches were
done on several online databases and then criteria
were developed to capture all relevant studies
investigating the use of practice tests. More
importantly, studies were included in the review if
they examined the effects of taking a practice test
compared with restudying or using other learning
strategies. Additionally, to be included in the review,
studies needed to report measurable learning
outcomes and provide sufficient information in the
article for effect sizes to be extracted.
Only 89 studies met all criteria for inclusion in the
review. These 89 studies were then coded and Cohen’s
d effect sizes were extracted. An effect size is a standardized estimate of the difference between the two groups being compared: here, the difference in mean final-test scores between students who took a practice (self) test and those who did not, divided by the pooled standard deviation of the two groups. Typically, an effect size of d = .2 is considered a small effect and d = .5 a moderate effect, while an effect size that approaches d = .8 is considered a large effect (Cohen, 1988).
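For readers who want to see the arithmetic, below is a minimal Python sketch of how such an effect size is computed from two groups' summary statistics. The numbers are hypothetical, chosen only to reproduce a d of .63; the function name and inputs are illustrative, not drawn from the reviewed studies.

    import math

    def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        # Pooled standard deviation of the practice-test (t) and
        # restudy/control (c) groups, then the standardized mean difference.
        pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                              / (n_t + n_c - 2))
        return (mean_t - mean_c) / pooled_sd

    # Hypothetical final-exam scores (percent correct), for illustration only.
    print(round(cohens_d(78.0, 10.0, 40, 71.7, 10.0, 40), 2))  # 0.63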
The overwhelming majority of testing effect studies
found practice tests to be superior to all other study
techniques (average effect size, d = .63). This means that, on average, students who used a practice test to study for the final exam generally outperformed those who did not, with an increase in percentile rank from the 50th to the 74th percentile. Thus, overall, practice tests translate into a gain of 24 percentile points: the average student who used practice tests scored higher than roughly three quarters of the students who did not use practice tests to study for the final examination.
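The percentile gains reported in this article follow directly from d under an assumption of normally distributed scores: the average student in the practice-test group lands at the percentile given by the standard normal CDF evaluated at d. A small sketch (ours, not the review authors' code) checking the gains quoted here:

    from math import erf, sqrt

    def percentile_gain(d):
        # Standard normal CDF at d, expressed as percentile points above 50.
        percentile = 0.5 * (1.0 + erf(d / sqrt(2.0)))
        return round(percentile * 100 - 50)

    for d in (0.63, 0.99, 1.21, 0.71, 1.26):
        print(d, "->", percentile_gain(d), "points")
    # Yields gains of about 24, 34, 39, 26, and 40 points, matching the
    # figures reported in this article to within rounding.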
Our review found benefits for practice tests in diverse
populations. For example, practice tests were found
to benefit preschoolers as well as elementary (grades
1-5, d = .66) and middle and high school (grades 6 -12,
d = .99) students. This suggests that practice tests are
beneficial to students in all grades. However, middle
and high school students seem to enjoy the greatest
benefits with a percentile gain of 34 percentile points.
This is not surprising, since these students typically
have higher working-memory capacity than elementary school students. Here, we describe a few
specific examples to demonstrate the usefulness of
retrieval practice in a wide array of distinct
populations. In a classic study, Spitzer (1939) used
over 3,600 sixth-graders from various schools in Iowa
to identify optimal forms of instruction. Taking a
practice test was found to be beneficial for retention
on a final test. Similarly, the benefits of practice tests
have been shown with older populations including
advanced medical students (Kromann, Bohnstedt,
Jensen, & Ringsted, 2010) and older adults with
Multiple Sclerosis, a neurological disease associated
with memory dysfunction (Sumowski, Chiaravalloti, & DeLuca, 2010).
Results from several other studies show that practice
tests are beneficial for learning across different
domains of study with various learning materials. For
example, practice tests proved to be useful in foreign
language learning (Karpicke & Roediger 2008), science
learning (Roediger & Karpicke 2006) and history
(Carpenter, Pashler, & Cepeda 2009; Roediger &
Karpicke 2006). Considering the wide applicability of
practice tests for learning across different domains,
we should encourage teachers and students to use
them to maximize learning benefits.
Our analysis shows that students and teachers alike
can use all forms of practice tests to promote
learning. Although all test formats (multiple choice,
short answer, free recall, etc.) enhance learning,
multiple choice practice tests produce the most
benefits, especially when accompanied with feedback
(d = 1.21) producing a percentile gain of 38 percentile
points. More importantly, practice tests can lessen
student test anxiety and improve their own ability to
assess their understanding of a particular topic. For
example, in a recent study, 55% of students surveyed
reported feeling better at assessing their own learning
after taking some practice tests or quizzes (McDaniel,
Agarwal, Huelser, McDermott, & Roediger 2011).
The results of research also show increased benefits
for practice tests when final tests are delayed; i.e.,
when there has been a time gap between the practice
test and the summative test. Therefore, students can
be encouraged to use practice tests after each
lecture, even though the final exam may not occur
until much later. In fact, researchers have found that
the use of practice tests “increased the degree of
learning in comparison with restudying and reduced
the rate of forgetting” (Pashler, Rohrer, Cepeda, &
Carpenter 2007, p. 192). Practice tests were shown to
improve both retention (d = .68) and transfer-based (d
= 1.26) outcomes of learning. In fact, practice tests
were found to produce the largest learning benefits
for transfer-based outcomes, producing a percentile
gain of 40 percentile points. Typically, transfer-based
outcomes are favored by educators because they help
students develop conceptual understanding and
promote life-long learning beyond the classroom.
Not to be confused with transfer of learning, transfer-appropriate processing describes the similarity or
alignment of practice tests and final tests (see Squires
2012). The result shows that test performance is
enhanced when the question format of practice tests
matches the final test format (d = .71), producing a
percentile gain of 26 percentile points. In other words,
performance on a final test is enhanced when there is
a match between the way learners cognitively process
the material when they use practice tests and the way
they are required to retrieve the material during the
final test.
A number of other important findings emerged from
our review. While middle and high school students are
expected to learn large amounts of material, they may
not receive advice from their teachers on effective
study strategies. Karpicke and Roediger (2008)
reported that students predicted their own retention
of information would be higher after restudying than
with retrieval practice, although final test results
disproved their assumptions. Generally, findings show
that many students do not know that the use of
practice tests (e.g., quizzing themselves with
flashcards) could be used as a technique or as a study
strategy (Karpicke, Butler & Roediger, 2009; Kornell &
Bjork, 2009). Therefore, teachers should be
encouraged to show students how to use practice tests
effectively to leverage the benefits of the testing
effect. In addition, teachers should be encouraged to
offer their students a variety of test-taking strategies
to relieve test-anxiety, including taking online
practice tests when available (Salend, 2012).
Frequent Classroom Quizzes
Thus far, we have discussed the usefulness of practice
tests as a tool for self-directed study. Next we
highlight findings showing that frequent classroom
quizzes greatly benefit student learning (Mayer et al.,
2009). For example, researchers successfully
implemented a "test-enhanced learning" program in a
middle school over three years (Agarwal, Roediger,
McDaniel & McDermott 2010). They found that
students who took pre-tests before a teacher's lesson,
post-tests after the lesson, and review tests a few
days later strongly and consistently outperformed
students who did not take such practice tests before
the final exam. The authors of this study pointed out
that practice tests using educationally-relevant
materials are especially helpful for learning subjects
heavy in facts such as social studies, history, science,
mathematics, and subjects related to language.
Furthermore, frequent classroom quizzing allows
students to use corrected quizzes to assess what they
do and do not know, guiding their personal study
efforts for subsequent exams.
Other benefits of frequent classroom testing have
been observed. For example, students who take
frequent quizzes are more engaged and forced to
actively participate (Mayer et al. 2009). In a survey,
students in a classroom with frequent quizzes reported
studying more regularly to prepare for quizzes,
learning more in those classes than in classes not
requiring frequent quizzes, and enjoying the class
more (Leeming, 2002). Some teachers are hesitant to
implement additional examinations in their classroom
because they create an additional grading burden.
However, computerized technologies such as
"Scantrons" and "clickers" minimize or eliminate
grading time. In fact, clickers (also called personal
response systems) are now commonly used in large
classes to promote student-teacher interaction and
allow immediate feedback on classroom tests (Duncan,
2005). In the next section, we highlight the major
practical implications and recommendations emerging
from our extensive review of studies examining the
effectiveness of practice tests.
Practical Implications of Using Practice Tests for
Learning
A number of major practical implications emerged
from this review. We offer the following practical
implications and recommendations based on the
findings. These implications and recommendations are
not exhaustive and should not be touted as a panacea for all educational problems. Rather, we
encourage students, teachers and stakeholders to
carefully adopt these recommendations with
consideration for varying contexts, populations and
individual differences.
Implication 1: Teachers may use well-crafted multiple
choice tests to gauge their students’ prior knowledge
before the start of instruction. This can help teachers
identify students' misconceptions and plan instruction to correct them. Multiple choice questions
can also help teachers assess students’ knowledge
after instruction.
Implication 2: Practice tests can be used as a
personal tool to assess mastery of material. Students
can use flash-cards or corrected quizzes to evaluate
which topic areas need increased study efforts.
Implication 3: Students are encouraged to use
practice tests after each lesson, even though the
final examination may not occur until much later. In
fact, the effects of practice tests are more
pronounced when the final tests (or exams) are
presented at a later date. Thus, practice tests reduce
the rate of forgetting.
In fact, students are
encouraged to practice repeated self-testing until the
final examination is administered.
Implication 4: Good alignment between practice tests
and the final examination enhances learning. Students are
encouraged to use the same test formats that will be
used in the final exam for their studying and practice
tests. Therefore, teachers should divulge final exam
formats (e.g., multiple choice, essay, etc.) to their
students and encourage them to practice (or self-test)
using such formats.
Implication 5: Practice tests can be used by all
students irrespective of their grade levels. We
observed that it is rare for an intervention to be effective for learning across all ages. The fact that practice
tests work for preschoolers, elementary, secondary
and postsecondary students presents a unique
opportunity to deploy practice tests for use with
different students.
Implication 6: Practice tests can be used in different
domains of study with various learning materials. For
example, students may use practice tests to learn
word pairs, or use feedback on tests to overcome
scientific misconceptions.
Implication 7: Practice tests can be used for both
retention and transfer-based outcomes although
practice tests are more beneficial for facilitating
transfer of learning. Since knowledge transfer is the
ultimate goal of learning, teachers and students are
encouraged to use practice tests to develop higher-order thinking skills.
Conclusion
This article takes a fresh approach to the
contemporary debates surrounding testing in our
nation. Our comprehensive, evidence-based review
highlights the benefits of tests for improving student
learning. This work provides a platform to help policy
makers and educational stakeholders rethink the
purpose and methods of testing in the classroom,
particularly as a means for study. When stakeholders
understand the benefits of tests as formative, self-regulated learning tools, and not just summative
assessments for making high-stakes decisions,
sustainable gains may be made in student learning
across the nation. The result may be that student
performance on high-stakes achievement tests will
improve along with learning and engagement in the
classroom.
References:
Agarwal, P. K., Roediger, H. L., McDaniel, M. A., &
McDermott, K. B. (2010). Improving student learning
through the use of classroom quizzes: Three years of
evidence from the Columbia Middle School project.
Poster presented at the 2010 Annual Conference of
the Society for Research on Educational Effectiveness,
Washington, DC.
Black, P., & Wiliam, D. (1998). Inside the Black
Box: Raising Standards Through Classroom
Assessment. Phi Delta Kappan, 80, 139-148.
Buck, S., Ritter, G. W., Jensen, N. C., & Rose, C. P.
(2010). Teachers say the most interesting things:
an alternative view of testing. Phi Delta Kappan,
91, 50-54.
Carpenter, S. K., Pashler, H. & Cepeda, N. J. (2009).
Using tests to enhance 8th-grade students’
retention of U.S. history facts. Applied Cognitive
Psychology, 23, 760-771. doi: 10.1002/acp.1507.
Cohen, J. (1988). Statistical Power Analysis for the
Behavioral Sciences. 2nd ed. Hillsdale NJ:
Lawrence Erlbaum.
Duncan, D. (2005). Clickers in the classroom: How to
enhance science teaching using classroom response
systems. San Francisco: Pearson/Addison-Wesley.
Fritz, C. O., Morris, P. E., Nolan, D., & Singleton, J.
(2007). Expanding retrieval practice: An effective
aid to preschool children's learning. The Quarterly
Journal of Experimental Psychology, 60, 991-1004.
doi:10.1080/17470210600823595.
Guisbond, L., & Neill, M. (2004). Failing our children:
No Child Left Behind undermines quality and equity
in education. The Clearing House: A Journal of
Educational Strategies, Issues and Ideas, 78, 12-16.
Karpicke, J. D., Butler, A. C., & Roediger, H. L.
(2009). Metacognitive strategies in student
learning: Do students practice retrieval when they
study on their own? Memory, 17, 471– 479. doi:
10.1080/09658210802647009.
Karpicke, J. D., & Roediger, H. L. (2008). The critical
importance of retrieval for learning. Science, 319,
966 –968. doi:10.1126/science.1152408.
Kornell, N., & Bjork, R. A. (2009). A stability bias in
human memory: Overestimating remembering and
underestimating learning. Journal of Experimental
Psychology: General, 138, 449-468. doi:
10.1037/a0017350
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves
long-term retention. Psychological Science, 17,
249-255.
Rohrer, D., & Pashler, H. (2010). Recent research on
human learning challenges conventional
instructional strategies. Educational Researcher,
39, 406-412.
Salend, S. J., (2012). Teaching students not to sweat
the test. Phi Delta Kappan, 93, 20-25.
Kromann, C. B., Bohnstedt, C., Jensen, M. L., &
Ringsted, C. (2010). The testing effect on skills
might last 6 months. Advances in Health Sciences
Education, 15, 395-401. doi:10.1111/j.1365-2923.2008.03245.x
Spitzer, H. F. (1939). Studies in retention. Journal of
Educational Psychology, 30, 641-656.
Leeming, F. C. (2002). The exam-a-day procedure
improves performance in psychology classes.
Teaching of Psychology, 29, 210–212.
Stiggins, R., & DuFour, R. (2009). Maximizing the
power of formative assessments. Phi Delta Kappan,
90, 640–644
Marsh, E. J., Roediger, H. L., Bjork, R. A., & Bjork, E.
L. (2007). The memorial consequences of multiple-choice testing. Psychonomic Bulletin & Review, 6,
194-199.
Sumowski, J. F., Chiaravalloti, N., & DeLuca, J.
(2010). Retrieval practice improves memory in
multiple sclerosis: Clinical application of the
testing effect. Neuropsychology, 24, 267-272.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K.,
Bimber, B., Chun, D., Bulger, M., Campbell, J.,
Knight, A. & Zhang, H. (2009). Clickers in college
classrooms: Fostering learning with questioning
methods in large lecture classes. Contemporary
Educational Psychology, 34, 51-57.
Squires, D. (2012). Curriculum alignment research
suggests that alignment can improve student
achievement. The Clearing House: A Journal of
Educational Strategies, Issues and Ideas, 85, 129-135.
Shepard, L. A. (2005). Formative assessment: Caveat
emptor. ETS Invitational Conference, New York.
Authors:
McDaniel, M. A., Agarwal, P. K., Huelser, B. J.,
McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science
classroom: The effects of quiz frequency and
placement. Journal of Educational Psychology,
103, 399-414. doi: 10.1037/a0021782.
Misco, T. (2008). Was that a result of my teaching? A
brief exploration of value-added assessment. The
Clearing House: A Journal of Educational
Strategies, Issues and Ideas, 82, 11-14.
Pashler, H., Rohrer, D., Cepeda, N. J., & Carpenter, S.
K. (2007). Enhancing learning and retarding
forgetting: Choices and consequences. Psychonomic
Bulletin and Review, 14, 187-193.
Popham, W. J. (2008). Transformative Assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Olusola O. Adesope, Ph.D., is an Assistant Professor of Educational Psychology at Washington State University, Pullman.
Dominic Trevisan, M.A. is a graduate of Simon Fraser
University, Burnaby.
Correspondence concerning this work should be sent to Olusola O. Adesope, Department of Educational Leadership & Counseling Psychology, College of Education, Washington State University-Pullman, WA, 99164-2114. E-mail: [email protected]

Acknowledgment: This research was funded by the Faculty Funding Award of the College of Education, Washington State University. The opinions expressed are those of the authors and not the sponsor. All errors and omissions are those of the authors.
How Education Funding Relates to Student Outcomes
by Peter J. Bylsma
The McCleary v. Washington Supreme Court decision
requires the Legislature to increase funding for K-12
education. Some legislators resist providing more
funding because they believe additional funds do not
produce better student outcomes and that more
reforms and accountability are needed before
meeting the Court’s mandate. This study examines
the relationship between the amount of funds available to states nationwide and to districts in Washington state, and student achievement as measured by aggregated results on the NAEP and WASL assessments. The
analyses found that funding has a positive correlation
with student achievement – as funding increases, test
scores increase – both nationwide and in Washington
state. In addition, the analyses found that compared
to other states, Washington educators provide a
significant “bang for the buck” – we have less than
the national average in funding but have better than
average NAEP scores. Finally, when the wealth of
states is factored into the analyses using per capita
income (adjusted for cost of living), Washington
would need to spend an additional $2.9 billion per
year to meet the average level of funding based on
the state’s relative capacity to fund education.
Introduction
In January 2012 the Washington Supreme Court
unanimously affirmed a lower court’s McCleary v.
Washington decision that the state was not meeting
its constitutional mandate to provide ample funding
for basic education. As standards-based school reforms
evolved in the 1990s with higher expectations for all
students, district local levies were used more and
more to fund some basic education programs. Use of
local levies for this purpose was ruled unconstitutional, just as it was in 1978 when the
Court struck down the state’s education funding
system in the “Doran” ruling (Stallings, 2010; National
Education Access Network, 2013). The Court gave the
state until 2018 to provide adequate funding.1
Despite this mandate, some legislators resist the idea
of providing additional funding without taking
measures to increase educational productivity,
especially while the public is averse to paying higher
taxes and the economy slowly digs itself out of the
Great Recession. A State Senate hearing and a related
editorial cited a national study as a basis to conclude
that additional funding does not produce better
student outcomes (Boser, 2011; Litzow, 2013; TVW, 2013).
Education advocates cite the myriad reforms that have
been implemented after expectations rose when HB
1209 passed in 1992. They also note that student
outcomes (e.g., test scores and graduation rates) have
improved substantially over the past 15 years despite
increased levels of students who come from low-income homes and do not speak English. They also
mention the recent reforms reshaping the K-12
landscape. These include more rigorous Common Core
standards and assessments, stronger teacher and
principal evaluation systems, mandatory state
intervention in schools that consistently have very low
student outcomes, and charter schools.
Much research has been done to analyze the
relationship between expenditures and student
outcomes (Ladd & Hansen, 1999; Hanushek, 2001).
Most research has found a positive relationship
between funding levels and test scores. But there is
also general agreement that additional funding must
be well spent to make significant improvements in
student outcomes. There are many examples of high
spending, low achieving schools and districts, although
in many of these cases, circumstances outside
educators’ control have a strong influence on student
outcomes (Berliner & Biddle, 1995; Rothstein, 1993,
1998, 2004, 2012; Bracey, 2004, 2005; Cuban, 2004;
Barton & Coley, 2007).
Objectives and Methodology
This study answers the following questions:
1. What is the relationship between state and
district expenditure levels and student outcomes,
and how does Washington compare to other
states in terms of student outcomes relative to
expenditure levels?
2. To what extent does the state have the ability to provide additional K-12 funding compared to what other states provide?
1 OSPI summarized the issues in the McCleary v. State of Washington case in its Publication No. 13-0020 (Nov. 13, 2012), found at http://www.k12.wa.us/Communications/HotTopics/HotTopic-McCleary.pdf. The Court's opinion is available on the State Supreme Court Web site.
Two sets of expenditure data were used to answer these questions. When making state-to-state comparisons, total per pupil current expenditures (excluding capital expenses) from all sources (federal, state, local) from school year 2009-10, as reported in Education Week's Quality Counts (2013), were used.2 These data were adjusted based on a state's cost of living using the Comparable Wage Index from 2005 to make the data comparable across states.3 Analyses comparing states did not include data from the District of Columbia (DC), but national averages included DC data. When comparing district expenditures within Washington state, data reported by the Center for American Progress (Boser, 2011) were used from school year 2007-08, which were the most recent at the time of the report. The Center's study adjusted the data for regional cost of living and three types of student need: percent low income, ELL, and special education. These three student groups receive additional funding in the state formula, so this adjustment is needed to ensure expenditure data are comparable across districts.4 Regression analyses to determine the trend lines were weighted by student enrollment.
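To make these two adjustments concrete, here is a minimal Python sketch with entirely hypothetical values: expenditures are divided by the Comparable Wage Index (CWI) to put them in cost-comparable dollars, and the trend line is fit with enrollment weights. (np.polyfit applies its weights to the residuals, so passing the square root of enrollment weights the squared errors by enrollment.)

    import numpy as np

    # Hypothetical records: per pupil spending, cost index, outcome, enrollment.
    spending = np.array([8200.0, 9500.0, 11000.0, 12500.0])
    cwi = np.array([0.95, 1.00, 1.08, 1.12])      # Comparable Wage Index
    outcome = np.array([55.0, 60.0, 63.0, 67.0])  # avg. percent meeting standard
    enroll = np.array([1200, 5400, 22000, 9800])  # student enrollment

    adjusted = spending / cwi  # spending in cost-comparable dollars

    # Enrollment-weighted trend line (slope, intercept).
    slope, intercept = np.polyfit(adjusted, outcome, 1, w=np.sqrt(enroll))
    print(round(slope, 4), round(intercept, 1))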
Two sets of student outcome data were used. When comparing states to each other, the average scale score from the grade 4 and 8 tests in reading and math from the 2011 National Assessment of Educational Progress (NAEP) was used. NAEP administers the same assessments across the nation, so results are comparable for all states. Results for 2011 are the most recent available.5 When analyzing student outcomes in Washington, the average percent meeting standard on the 2008 math and reading WASL in grades 4, 8, and 10 was used. The 2008 results were used because the expenditure data available from the Center for American Progress were from that year.

To measure states' ability to fund K-12 education, the analysis used state per capita personal income data for 2010 as reported by the Commerce Department's Bureau of Economic Analysis.6 These income data were then adjusted using the 2002 Comparable Wage Index to make the analysis comparable across states. Appendix A provides the state data used in the analyses.

2 Expenditure data were from the U.S. Department of Education.
3 Cost adjustment data are available from the Education Finance Statistics Center at the National Center for Education Statistics (http://nces.ed.gov/edfin/adjustments.asp).
4 Making these adjustments is standard practice in school finance research. The Center used a weight of 1.4 for low income (FRL) students, 1.4 for English-language learners, and 2.1 for students in special education. The Comparable Wage Index from 2005 was used in the Center's analysis. More details about the study's methodology and the data used are found on the Center's website.
5 NAEP scores can be downloaded at http://nces.ed.gov/nationsreportcard/naepdata/dataset.aspx. More information about NAEP can be found at http://nces.ed.gov/nationsreportcard/.
6 These data are available at http://www.bea.gov/newsreleases/regional/spi/2011/pdf/spi0311.pdf.

Results

Relationship between Expenditures and Student Outcomes

When looking at data for all 50 states, the relationship between spending and student outcomes is positive (see Figure 1). When the cost adjustments are made to total current expenditures for all states, the trend line shows that as spending increases, student achievement increases. The correlation is strong and statistically significant.

FIGURE 1: State Education Spending and Student Performance
Washington compares very favorably with other states. Despite our low level of K-12 spending, Washington students achieve above the national average. Using an average of the reading and math NAEP scores in grades 4 and 8 from 2011, Washington's average scale score was 255, which is above the national average of 252.8 and well above the trend line based on our adjusted per pupil spending ($9,145). These two indicators, taken together, show that Washington has a high-performing, low-spending K-12 education system. In other words, our educators are providing more bang for the buck compared to most other states. While the expenditure data are adjusted to make state spending comparable, more complex analysis should still be conducted to take into account differences in student demographics (e.g., percent low income, ELL, special education).
When comparing districts within Washington state, the relationship between spending and outcomes is more controversial. A study by the Center for American Progress (Boser, 2011) showed that as spending increases among Washington districts, WASL scores do not increase. Figure 2 reproduces the Center's analysis. The Senate Early Learning and K-12 Education Committee recently used these data to argue that higher spending districts do not have better student outcomes, with the implication that additional K-12 funding may not be justified without additional reforms and accountability (TVW, 2013).
FIGURE 2: Center for American Progress Analysis of Washington Districts. Y-axis: Average Percent Meeting Standard on Math and Reading WASL (Grades 4, 8, 10), Spring 2008. X-axis: 2007-08 Per Pupil Expenditures, Adjusted for Cost of Living and Student Need (Pct. Low Income, ELL, Special Ed).
The Center’s study was co-sponsored by the U.S.
Chamber of Commerce and Frederick Hess of the
American Enterprise Institute, a conservative
think-tank based in Washington, DC. The study’s
methodology was sound in many ways.
• It conducted separate analyses for each state and did not compare states because of differences in states' testing systems.
• It combined test scores from multiple grades and subjects to give a more comprehensive measure of student performance. For Washington, the Center averaged the percentage of students meeting the state's standard on the reading and mathematics WASL in grades 4, 8, and 10 from Spring 2008.
• It used "current" expenditures from 2007-08 (excluding capital costs) obtained from an authoritative source (National Center for Education Statistics/U.S. Department of Education).
• It adjusted expenditure data for districts' cost of living in 2005 (the most recent available) using the commonly used Comparable Wage Index.
• It adjusted current expenditures for differences in student demographics (percent ELL, low income, and special education) using appropriate weights (although some percentages reported were incorrect and made districts appear more inefficient).
However, the Center’s overall conclusion is based on
a flaw in its analysis. Specifically, the Center did
not consider the impact that small districts had on
the results due to diseconomies of scale. The Center
excluded some very small districts from its analysis
(those with fewer than 250 students), but many
small districts were still included. Moreover, the
Center did not control for transportation costs which
can be very high among rural districts and can make
small districts appear inefficient. When districts
with fewer than 1,000 students are excluded from
the analysis (less than 6% of all Washington students
attend these districts), there is a statistically
significant positive correlation between spending
and student achievement (see Figure 3), and the
trend is similar to that shown in Figure 1. While
there is still a range in efficiency among states and
the districts in Washington, more funding is
associated with better student outcomes.
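The reanalysis step is simple to reproduce in outline: drop the small districts, then recompute the spending-outcome correlation. Below is a sketch with hypothetical district records standing in for the NCES data; the column names are illustrative.

    import pandas as pd

    districts = pd.DataFrame({           # hypothetical district records
        "enrollment": [300, 650, 1800, 5200, 21000],
        "adj_spending": [14500, 13200, 9800, 10400, 11200],
        "wasl_pct": [52.0, 55.0, 58.0, 61.0, 66.0],
    })

    # Exclude districts under 1,000 students (diseconomies of scale and high
    # transportation costs make them look inefficient), then correlate.
    big = districts[districts["enrollment"] >= 1000]
    print(round(big["adj_spending"].corr(big["wasl_pct"]), 2))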
FIGURE 3: New Analysis of Washington Districts. Y-axis: Average Percent Meeting Standard on Math and Reading WASL (Grades 4, 8, 10), Spring 2008. X-axis: 2007-08 Per Pupil Expenditures, Adjusted for Cost of Living and Student Need (Pct. Low Income, ELL, Special Ed).
Comparison of States’ Ability to Fund K-12 Education
Washington is one of the worst in the nation in funding K-12 education based on its ability to pay. After adjusting
expenditures for the cost of living in different states, Washington spent $9,145 per student during the 2009-10
school year, which ranked 41st in the nation in per pupil spending and 23% below the national average of $11,824 per
student. When considering the amount of K-12 spending compared to state per capita income, a measure of a state’s
ability to pay for public services (adjusted for a state’s cost of living), the situation is worse. Washington’s per
capita income ranked 16th in the nation and 3% above the national average. Compared to national averages,
Washington was one of the few states with higher incomes but lower K-12 spending (see Figure 4). New Hampshire,
which had the 3rd highest NAEP scores, has about the same per capita income as Washington. It spends $14,045 per
student, which is $4,900 more than Washington.
FIGURE 4: Per Capita Income and Per Pupil Expenditures (Adjusted for Cost of Living)
The distance between where Washington is shown in Figure 4 and where it would be if it funded education at an average level based on its relative wealth (the distance from its spot to the trend line directly above it) is about $2,855 more per student (we would need to spend about $12,000 per student). With K-12 enrollment of slightly more than 1 million students, about $2.9 billion more funding was needed during the 2009-10 school year for Washington to spend the average amount based on its per capita income.
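A quick back-of-the-envelope check of the $2.9 billion figure, using only the numbers reported above (the enrollment value is approximate):

    # Trend-line spending level minus actual spending, per student.
    gap_per_student = 12000 - 9145          # about $2,855
    enrollment = 1_010_000                  # "slightly more than 1 million"
    print(gap_per_student * enrollment)     # about 2.9 billion dollars per year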
It makes sense that when state wealth increases, more is spent on K-12 education, as shown by the figure's trend line that slopes upward.7 But that is not the case in Washington. The distance between Washington's per pupil spending and the trend line is one of the largest in the nation. Appendix A shows the ratio between per capita income and per student education spending (both adjusted for differences in regional costs). Washington ranked 45th in the nation. While we have the fiscal capacity to spend much more, we have been unwilling to do so.
7 The correlation would change little if three high-spending outliers (resource-rich Wyoming and Alaska, and Vermont) were excluded from the analysis because these states have relatively few students.
Conclusion
Our state's education system is worthy of the taxpayer’s
investment, not only because it is the paramount duty of
the state, but also because Washington’s system is one of
the most efficient in the country. The analyses show that
as funding increases, student outcomes increase. In order
to help all students meet more rigorous standards and pass
harder assessments, the state's K-12 education system will need
more resources. Increasing taxes to pay more for
education may not be popular, but Washington’s funding
of K-12 education is far below what other states are
providing, even though we have the ability to do so. The
Supreme Court is now holding the Legislature accountable
to do its job and provide ample funding for basic
education. Time will tell how legislators respond to that
increased accountability.
References:
Barton, P. and Coley, R. (2007). The Family: America’s
Smallest School. Princeton, NJ: Educational Testing
Service.
Berliner, D. and Biddle, B. (1995). The Manufactured
Crisis: Myths, Fraud, and the Attack on America's
Schools. Cambridge, MA: Perseus Books.
Boser, U. (2011). Return on Educational Investment: A
District-by-District Evaluation of U.S. Educational
Productivity. Washington, DC: Center for American
Progress.
http://www.americanprogress.org/issues/education/report/
Bracey, G. W. (2004). Setting the Record Straight:
Responses to Misconceptions About Public Education in
the U.S. Portsmouth, NH: Heinemann.
Bracey, G. W. (2005). “Oh, Those NAEP Achievement
Levels,” in Principal Leadership, September 2005, 6(1).
Reston, VA: National Association for Secondary School
Principals.
Cuban, L. (2004). “A Solution That Lost Its Problem:
Centralized Policymaking and Classroom Gains,” in
Who’s in Charge Here?, Epstein, N. (Ed.). Washington,
DC: Brookings.
Education Week (2013). Quality Counts 2013. Bethesda,
MD: Editorial Projects in Education.
http://www.edweek.org/ew/toc/2013/01/10/
Hanushek, E. (2001). Efficiency and Equity in Education.
Cambridge, MA: National Bureau of Economic Research.
Page 33/August 2013
Page 33De/M
Ladd, H. and Hansen, J., Eds. (1999). Making Money
Matter: Financing America’s Schools. Washington,
DC: National Academy Press.
Litzow, S. (2013). "School funding should be tied to improvement in student learning," in Seattle Times, February 7.

National Education Access Network (2013). School Funding Cases in Washington, downloaded from http://schoolfunding.info/2012/01/
Rothstein, R. (1993). “The Myth of Public School
Failure," in The American Prospect, 13(Spring), 20-34.
Rothstein, R. (1998). The Way We Were?: The Myths
and Realities of America's Student Achievement.
New York, NY: Century Foundation Press.
Rothstein, R. (2004). Class and Schools: Using Social,
Economic, and Educational Reform to Close the
Black-White Achievement Gap. Washington, DC:
Economic Policy Institute.
Rothstein, R. (2012). “Addressing unfair expectations
for the next wave of educators,” in Economic
Policy Institute Blog. May 15, 2012.
Stallings, D. (2010). Washington State’s Duty to Fund
K-12 Schools: Where the Legislature Went Wrong
and What It Should Do to Meet Its Constitutional
Obligation. Seattle, WA: Washington Law Review
Association.
TVW (2013). Senate Early Learning & K-12 Education Committee hearing, January 16.
Author:
Peter J. Bylsma, Ed.D. and MPA, is the Director of
Assessment and Student Information in the Renton
School District and is the Past President of the
Washington Educational Research Association.
He can be contacted about this article at
[email protected]
Appendix A: State Data Used in the Analyses
Stupid Excel Tricks: Calculated Fields
by Pat Cummings
A friend recently showed me a pivot table feature that has transformed how I display data. This may seem a bit
challenging at first, but if you stick with me I guarantee you will find ways to apply this “trick” to many data display
opportunities. This article is an introduction to the powerful “calculated field” in the Pivot Table.
Working With 1s and 0s
In this example, I use one of those ubiquitous data sets that display "1" and "0" for test scores. For example, we have MSP or HSPE "meeting standard" (1) and "not meeting standard" (0). Recently I crunched numbers for the WaKIDS assessment, which summarized student performance in six domains. Each of our kindergarten students received a score of "Met" (to be converted to 1) or "Not Met" (to be converted to 0). They also could have received a score of "Domain Not Completed" (neither a 1 nor a 0, but considered "blank"). Here is how the data were originally displayed in the Excel file and how they looked when the file was converted to "1", "0", and blanks:
FIGURE 1: Original Data organized using Microsoft Excel
When you convert the file (Met = 1, Not Met = 0, Domain Not Completed = blank), it looks like this:
FIGURE 2: Data Converted to 1, 0, and Blanks
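For readers who script this step rather than doing it by hand, here is a minimal pandas sketch of the same recoding. The column name and values are modeled on the figures above; the DataFrame itself is hypothetical.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "Student": ["A", "B", "C", "D"],
        "SocialEmotional": ["Met", "Not Met", "Domain Not Completed", "Met"],
    })

    # Met -> 1, Not Met -> 0, Domain Not Completed -> blank (NaN).
    recode = {"Met": 1, "Not Met": 0, "Domain Not Completed": np.nan}
    df["SocialEmotional"] = df["SocialEmotional"].map(recode)
    print(df)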
Now Insert a Pivot Table (Chart)
If we take these data and convert them to a pivot chart (stacked) for the Social Emotional domain, it will look like Figure 3. (You might have to right-click on the chart and add data labels, or change the type of chart from your default to the 100% stacked bar.)
FIGURE 3: Pivot Table and Stacked Bar Chart
So we have 2,510 students displayed in the Social Emotional domain: 1,897 met standard (1) and 613 did not meet standard (0). We could tweak the chart to show the values as a % of Row Total, and the count (1,897) would convert to a percent (75.6%).
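That "% of Row Total" number is easy to sanity-check outside the pivot table: with 1/0/blank coding, the percent meeting standard is just the mean of the non-blank values, since pandas skips blanks (NaN) by default. A tiny sketch with made-up values:

    import numpy as np
    import pandas as pd

    scores = pd.Series([1, 0, 1, np.nan, 1, 1, 0])   # 1/0/blank coding
    print(f"{scores.mean():.1%} met standard")       # blanks are excluded
    print(scores.value_counts().to_dict())           # counts of 1s and 0s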
Now The Calculated Field
Here is the problem. I can change the pivot table from displaying the Social Emotional domain to another domain and display a similar chart. But what I would like is one chart with all six domains, not six separate pivot charts. Try as you might by changing the Values and Fields, you cannot get there from here. Trust me, I have spent hours trying. Enter my friend and local Excel guru who works down the hall, William Rainey. He looked at my data and said, "That's easy to do… just use a 'ONE' calculated field in your pivot table."

Our goal was to create a simple chart that shows the percentage of students who Met expectations in each domain. We want to calculate the percentage without including the students with "Domain Not Completed." In order to calculate the denominator accurately, we will add six new columns to our table that contain a "1" value for the non-blank cells and still have a blank for "Domain Not Completed." Then we can use these values to create a "calculated field".
First, we go back to the data and add 6 columns, one for each domain. In the new columns, if the original column has a "1" or "0" we insert "1"; if it is blank, we leave it blank. Here is how the 6 new columns would look (using the naming convention "S-1" for Social Emotional, "P-1" for Physical, etc.):
FIGURE 4: Data with Additional Columns to Indicate Non-Blank Scores in Col. D-I
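The helper columns are also easy to generate in script. This hypothetical sketch builds one "ONE" column per domain: 1 where the domain has any score, blank where it was not completed (two domains shown for brevity).

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({   # 1/0/blank domain scores, as in Figure 2
        "SocialEmotional": [1, 0, np.nan, 1],
        "Physical": [1, np.nan, 0, 1],
    })

    # One helper column per domain: 1 if scored, blank (NaN) otherwise.
    for col in ["SocialEmotional", "Physical"]:
        df[col[0] + "-1"] = np.where(df[col].notna(), 1, np.nan)
    print(df)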
Once we have added the 6 extra "ONE" fields, we need to go back and reselect our pivot table range to include these extra columns. Note that you will want to choose the option to put the pivot table in a New Worksheet. Trust me—it's much easier to do the next step that way.
FIGURE 5: New Pivot Table Containing Original and New Columns in the Selected Range
ADDING CALCULATED FIELDS
Our next step is to add a "calculated field" for each of the six domains. In EXCEL 2010, you do it the following way from within the PivotTable Tools: 1) Options, 2) Fields, Items, & Sets, and 3) Calculated Field:
FIGURE 6: Adding the Calculated Fields
1) Options
2) Fields, Items, & Sets
3) Calculated Field
(Note to Users of EXCEL 2007: A section at the end of this article shows how to find the Calculated Fields. It is
slightly different than in EXCEL 2010.)
Next we add a calculated field for each of our six domains. These fields have to have unique names, so we use the convention of adding an "x" before the domain name (e.g., "xSocialEmotional"). The formula will use the original domain column as the numerator and the new "ONE" column as the denominator:
FIGURE 7: Creating a Unique Name and Formula For Calculated Fields
1) Unique Name for Calculated Field
2) Formula: =SocialEmotional/'S-1'
After you have added a calculated field for each of the six domains and selected these as the fields to display, your
pivot table should look something like this:
FIGURE 8: Pivot Table After You Have Created Calculated Fields
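Under the hood, each calculated field is just a ratio of two sums: the met count divided by the scored count, computed within whatever rows the pivot table is grouping. Here is a hedged pandas equivalent of the same arithmetic, with hypothetical data:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "School": ["X", "X", "Y", "Y"],
        "SocialEmotional": [1, 0, 1, np.nan],
        "S-1": [1, 1, 1, np.nan],
    })

    # Same formula as the calculated field: SUM(SocialEmotional) / SUM('S-1').
    sums = df.groupby("School").sum()
    print(sums["SocialEmotional"] / sums["S-1"])   # X: 0.50, Y: 1.00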
The last step is to go back and revisit the pivot charts now that we have added six new calculated fields to our data set (xSocialEmotional, xPhysical, xLanguage, etc.). Our new pivot chart can now display all six domains on a single chart when we build the bar chart with these "ONE" calculated fields. (We will delete the legend, but I wanted you to see it here.)
FIGURE 9: New Chart Showing All Six Domains
Cleaning things up a bit and adding some slicers makes for a wonderful analysis tool. Here is a final WaKIDS chart using the calculated field trick with slicers added (see previous posts on slicers). Oh, my! This is powerful data mining:
FIGURE 10: New Chart With Slicers Added
Note to EXCEL 2007 Users: Find "calculated fields" by going to Pivot Table Tools, then Tools, then Formulas.
Author:
Pat Cummings is Director of Research and Evaluation in the Tacoma Public Schools. He can be reached at
[email protected]
BOOK REVIEWS
This issue includes two expanded book reviews, with one of the reviews covering a book about kindergarten
and the other covering a book about charter schools. Both are particularly relevant to changes occurring in
our state, as explained in the descriptions below. In addition, there are three other reviews, which are
focused on formative assessment, the disproportionate impact of some of our public policies on African American individuals and communities, and the accuracy of statistical predictions.
Andrea Meld, Ph.D., is an Assessment Analyst for Kent School District and edits The Standard Deviation. She
attended kindergarten at K.T. Murphy Elementary in Stamford, CT. In this issue she provides the history of
kindergarten as outlined in Inventing Kindergarten, by Norman Brosterman.
Jill Hearne, Ph.D., an Emeritus member of WERA, provides a review of Charter Schools and the Corporate
Makeover of Public Education, by Michael Fabricant and Michelle Fine. The book provides an analysis of
the history, research, and policy development surrounding the charter school movement. It also addresses
the lack of demonstrable achievement benefits for charter schools and the alarming equity issues they are
already creating.
Nancy Katims, Ph.D., Director of Assessment, Research, and Evaluation for the Edmonds School District,
reviews Embedded Formative Assessment by Dylan Wiliam. The reviewer notes that Wiliam's two stated purposes are to provide (1) simple, practical ideas about changes every teacher can make to develop his or her teaching practice, and (2) the evidence that these changes will result in improved outcomes for learners.
Karen Banks, educational research consultant and editor of WEJ, has reviewed Michelle Alexander's
provocative book The New Jim Crow: Mass Incarceration in the Age of Colorblindness. The author points
out through meticulous reporting of statistics that there is a huge human and economic toll of incarcerating
large segments of the population. Such policies ensure segregation from society and make many types of
employment impossible for individuals who commit certain minor offenses.
Pat Cummings, Director of Research and Evaluation for Tacoma Public Schools, reviews The Signal and the
Noise: Why So Many Predictions Fail — but Some Don't, by Nate Silver. The review focuses on the accuracy with which the author predicted the 2012 presidential election, the book's relevance to various aspects of education, and how educators might use data to make more accurate and relevant predictions.
Thank-you, Donnita Hawkins!
Donnita has served as book review editor for the last year and is leaving that role as she assumes the role of
Director of Secondary Special Education in North Thurston Public Schools. We will miss her, but are excited
about her new opportunity!
Inventing Kindergarten, by Norman Brosterman
Reviewed by: Andrea Meld
When I first contemplated writing this as an expanded review, a title for the review itself popped into my head: Kindergarten and the "Lost Paradise of Childhood." Much has changed since kindergarten became commonplace, and not all of those changes were for the better.
At first glance, kindergarten appears simple. However,
the recent spotlight on early learning brings with it
several policy debates:
1. Should kindergarten be full-day or half-day? (Kauerz, June 2010; Lee et al., 2005)
2. Academic or developmental? (Russell, 2008)
3. Should kindergarten be aligned to Common Core Standards? (Meisels, Nov. 29, 2011; NAEYC, 2011)
4. More like work or like play? (Dougherty, 2012)
5. Can programs, such as WaKIDS, help reduce achievement gaps?

FIGURE 1: Mondrian-inspired design, and a German stamp honoring artist Joseph Albers. Brosterman noted a striking similarity to craftwork produced in Froebel's kindergartens.
These are important questions, outlined to provide
context for this book review. Often overlooked in this
discussion is the original purpose of kindergarten, as
envisioned by its founder, Friedrich Froebel. This is
the approach taken by Norman Brosterman, in his
charming book, Inventing Kindergarten (1997).
Oversized like a children’s book or an art book, with a
bright red cover, Inventing Kindergarten documents
the history of kindergarten. Open the book, and a wealth of photographs and colorful illustrations unfolds.
FIGURE 2: Fallingwater, by Frank Lloyd Wright, and a house designed by Le Corbusier. Both bear a likeness to the blocks used in a Froebelian kindergarten.
Brosterman connects the Victorian experience of kindergarten, which used a wide variety of geometric learning activities, with the Modern Art movement that began at the turn of the 20th century, and provides visual and historic evidence linking these two seemingly disparate worlds. Artists such as Piet Mondrian, Paul Klee, and Joseph Albers and architects including Frank Lloyd Wright and Le Corbusier all attended a prototype of kindergarten that made use of materials known as "gifts" to foster spatial reasoning and geometric skills (see Figures 1 and 2).
Brosterman describes the kindergarten experience that most of us remember as a weak version of the original kindergarten that came into being in the mid-19th century, the dream and invention of a visionary German crystallographer and educational reformer, Friedrich Froebel (shown as a young man in Figure 3).
"Unfortunately, kindergarten for us, and for most of
the generations born in this century, was a distortion,
a diluted version of what originated as a radical and
highly spiritual system of abstract-design activities
intended to teach the recognition and appreciation of
natural harmony. Kindergarten’s universal, perfect,
alternative language of geometric form cultivated
children’s innate ability to observe, reason, express,
and create." (Brosterman, 1997, p. 12).
The word kindergarten itself translates to "children's garden," and the name was literal: in the Froebelian version, children were actually given their own garden plots to tend and nurture (Brosterman, p. 37), and the teachers who were trained in these methods were called "kindergartners." Froebel's book, Paradise of Childhood: A Practical Guide to Kindergartners, was later published by Milton Bradley in 1869.
Froebel believed that young girls and boys, rich and
poor, should attend the same kindergarten classrooms,
a radical notion in his time. A kindergarten day might
start with singing and a chatting circle. The children
would then be seated at communal worktables to
engage in block or tile design, paper folding or
weaving. More physical activities, such as gymnastics,
nature walks or gardening would follow. The purpose of instruction was to develop in young children "...an uninhibited curiosity and genuine respect for nature, family, and society; a reasoning and creative child who would later have few problems learning the three Rs or anything else, while gracefully incorporating all manner of diverse knowledge and experience into a unified and supple life." (Brosterman, p. 39)
FIGURE 3: Friedrich Froebel (1782-1852)

A Kindergarten Timeline

1840: Froebel opens the first kindergarten in Blankenburg, Germany.
1848: Over 50 kindergartens have been established in Germany.
1849: Froebel meets with Bertha von Bülow, who will later have an important role in training kindergarten teachers.
1851: The England Infant Garden Place, the first kindergarten in England, opens.
1856: Margarethe Schurz establishes the first American kindergarten in Watertown, WI.
1861: Elizabeth Palmer Peabody opens a kindergarten in Boston.
1869: Milton Bradley publishes Froebel's Paradise of Childhood and produces Froebel's gifts and other educational materials.
1876: Anna Lloyd Wright attends the Philadelphia Centennial, where kindergarten is being showcased, and purchases Froebel materials for her son, Frank.
1877: The first kindergarten opens in Japan (Figure 4).
FIGURE 4: Kindergarten picnic in Tokyo, undated photograph
1883: Kindergartens are established in all St. Louis public schools and in Toronto (Figure 5).

FIGURE 5: Student teachers working with kindergarten children, Toronto, 1898

1920: Kindergartens are included in most American public schools.
Why did the original idea of kindergarten change?
Brosterman does not provide a definitive answer, but
speculates that the world has changed dramatically
since the dawn of the Industrial Age, and that children
today may be vastly different from their Victorian
counterparts. Over time, many of the nuances and
details of this approach faded as newer generations of
teachers learned from books and manuals rather than
from Froebel himself or his early followers.
Brosterman (p. 40) notes:
The real kindergarten is long gone, the
gifts have been transformed, the
education objective for what is left of the
occupations have been lost or corrupted,
the world today is radically different than
it was at the beginning of the Industrial
Revolution, and superficially, at least,
today’s children may not be comparable
to their 19th century peers. Also gone is
the crucial element which is always most
difficult to maintain – a cadre of qualified
kindergarten teachers versed in the
subtleties of the system.
Although we can’t go back to the past, are there
things that we might learn from Froebel’s approach
that we could apply to the kindergarten and other
early learning experiences of children today? I
encourage you to read Inventing Kindergarten to find
out.
References:

Brosterman, N. (1997). Inventing Kindergarten. New York, NY: Harry N. Abrams. Original photography by Kiyoshi Tagashi. ISBN 0-8109-3526-0 (clothbound).

Dougherty, J. (2012). Kindergarten: The Changes from Play to Work. Posted on course syllabus for Education 300: Education Reform, Past and Present, Trinity College, Hartford, CT. http://commons.trincoll.edu/edreform/2012/05/

Kauerz, K. (June 2010). PreK-3rd: Putting Full-Day Kindergarten in the Middle. PreK-3rd Policy Action Brief No. Four. Foundation for Child Development. Available at http://fcd-us.org/sites/default/files/

Lee, V., Burkam, D. T., Ready, D., Honigman, J., and Meisels, S. (Rev. 2005). Full-Day vs. Half-Day Kindergarten: In Which Program Do Children Learn More? An earlier version was presented at the American Sociological Association, Anaheim, CA, August 2002. Available at http://school.elps.k12.mi.us/kindergarten-study/

Meisels, S. (Nov. 29, 2011). Common Core standards pose dilemmas for early childhood. Posted in "The Answer Sheet" by Valerie Strauss, Washington Post. Available at http://www.washingtonpost.com/blogs/answer-sheet

National Association for the Education of Young Children (2011). The Common Core State Standards: Caution and Opportunity for Early Childhood Education. Available online at http://www.naeyc.org/files/naeyc/

Russell, J. (2008). Defining Kindergarten Education in an Era of Accountability. http://www.californiakindergartenassociation.org/
If you would like to order the book, the information is as follows: Inventing Kindergarten, by Norman Brosterman. The prices from various online retailers start at around $50, plus shipping. ISBN: 0810935260.
Sources For Figures
All figures in this review were downloaded from Wikimedia Commons as public domain and freely licensed educational media images, available at http://commons.wikimedia.org/wiki/Main_Page.
Reviewer:
Andrea Meld, Ph.D., is an Assessment Analyst for Kent
School District, and outgoing editor of The Standard
Deviation.
She can be reached at [email protected].
Charter Schools and the Corporate Makeover of Public Education: What’s at Stake?
By Michael Fabricant and Michelle Fine
Reviewed by: Jill Hearne
So, what's at stake? This book by Michael Fabricant and Michelle Fine is an analysis of the history, research, and policy development surrounding the charter school movement. The book examines the present evidence regarding charter school effectiveness and consequent social effects on communities and student achievement. The authors provide substantial evidence of how the charter school movement has been "corralled away from democracy and toward privatization of public education." (p. ix)
Chapter 2 relates the history of the charter school movement, which in the 1980s was educator- and community-designed, based very much on local control and often supported by the American Federation of Teachers. Charter schools were considered inspirations and models of educational reform. One hallmark of charter schools was a strong multicultural and social justice curriculum.
Charter school ideology took a right turn in the 1990s, increasingly casting teachers and unions as causes of, not solutions to, educational decline. Policy support for this change in focus came from a growing number of foundations, state advocacy groups, and charter management organizations (CMOs). Increased media negativity and a focus on high-stakes testing resulted in a great increase in charters as a solution and alternative to public education.
Corporate educational entrepreneurs collaborated
with conservative groups like Democrats for Education
Reform to discredit public schools and market charters
in poor communities. Once an alternative within
public education, charter schools became an
alternative against public education.
In 2012, there were more than 5,000 charter schools across 40 states educating more than 1.5 million students. Laws and funding vary by state, and the result is a wide range of charter schools. For example, these schools now include everything from free-market charters to small community-based schools, unique (usually elementary) models, and even commercial companies and franchises.
It is this last group that repudiates the original precept that innovation and grassroots creativity are the advantages of charter schools, as these CMOs have set curricula and often prescribe instructional methodology. States differ on charters' autonomy, contract length, and accountability. Ironically, the authors cite studies showing that most CMOs offer no curricular freedom or building-level decision-making. The authors also report an inverse relationship between "the rapid proliferation of charters and rates of student achievement." (p.33)
In Chapter 3, Fabricant and Fine analyze the multiple
studies of charter schools as of 2012 using five
criteria: Student Achievement; Equity; Parent
Engagement; Experience, Quality and Retention of
Educators; and Innovation.
Key points of the synthesized data conclude that charter schools do not deliver on the dream, but rather serve as potentially destructive long-term forces against democracy. Indeed, multiple data sets confirm that public schools are modestly outperforming charters. (p.61)
Student Achievement: "Most charters do as well as or less well than traditional public schools. Only 17% of charter schools outperform public schools." (p.38)

Equity: Every published study of charter admissions and recruitment documents under-enrollment of English language learners and students in special education. Studies from Detroit and Minneapolis indicate that charters are more racially segregated than other public schools.

Parent Engagement: Some individual charter schools appear to be quite committed to parental engagement, but some evidence suggests that higher levels of parental satisfaction diminish over time.

Experience, Quality, and Retention of Educators: Charter educators tend to be less experienced, less qualified, and less well paid than their traditional school counterparts.

Innovation: Some innovation has been reported in exemplary charter schools, but there is no evidence of widespread curricular or pedagogical innovation.
In Minnesota, after two decades of experience, charter schools still perform worse on average than comparable traditional public schools. More troubling, however, is the authors' report that "charters continue to stratify students by race, class, and possibly language, and are more racially isolated than traditional public schools in virtually every state and large metropolitan areas in the country." (p.46)
Charters also provide students with a choice of classmates. As the level of integration of a community increases, the level of white students enrolling in a predominantly white charter also increases. (p.47) The authors posit that most charter school seniors go on to college because it is "a self-cleansing system." (p.48) Attrition rates, even at exemplary charter schools such as KIPP, reach 60% between 5th grade and 8th grade. In particular, students with lower entry test scores, non-Caucasian students, and those living in poverty were more likely to leave charter schools.
The authors conclude that “charters are much more
likely to selectively admit, maintain and remove
students. Local public schools have to pick up the
pieces.” (p.51)
Chapter 4 examines the influence of private philanthropy on charter school policy. The early social justice movement, which spawned charter schools, has evolved into the current corporate thrust toward models that result in selective admission and retention.
Led by major foundations such as Gates, Walton, Broad, and Dell, proponents argue that deregulation of education is the only alternative to the ineffective work of the bureaucratic system of schooling. Charter schools are seen by proponents as the solution for transforming a deregulated educational marketplace into academic excellence for all. Such is the influence of these philanthropists that national public policy, through Race to the Top, is supporting charter school expansion.
This philosophy holds that entrepreneurial reformers and creative market forces will align to produce effective schooling practices. As noted in Chapter 3, this promise has not been borne out. Fabricant and Fine hold that charter policy hides "a profound failure of political will…to support the kinds of budgets, taxations, and targeted investment needed to revive public education as a key element of social and economic development and racial justice in the poorest communities." (p.87) Further, as charter schools proliferate, oversight decreases and their achievement is reduced.
Chapter 5 examines in depth the public schools’
failure to address the issues of race and poverty and
highlights charter schools’ exacerbation of this failure.
The authors report multiple examples of charter
schools’ failure to provide academic improvement for
children from low income backgrounds.
In New Orleans, the Institute on Race and Poverty found that charter schools skim the most motivated students through selective admissions, disciplinary actions, and transportation policies, as well as marketing and recruitment. (p.94) In Chicago, the Renaissance Project showed gains in student achievement, but closer examination showed that they were not the same students. Many schools that were closed and reopened as charter schools were not accessible to neighborhood children. Tracking 5,445 K-8 students who had been displaced through school closures revealed that most students were transferred to equally weak schools: public, charter, and for-profit contract schools. One year later, no significant improvements in mathematics or reading scores could be found. (p.97) In New York, Polling for Justice (PFJ) mapped and surveyed school closures and charter openings in the context of "zones of dispossession," areas of high dropout rates and negative youth-police interaction. In these areas, charter schools were opening, but not for neighborhood children. Selective admission criteria further constrained the choices these displaced students had. (p.102)
In Chapter 6, Fabricant and Fine set forth a discussion of investment strategies, in opposition to charter schools, to support low-income children, innovation, access, and accountability. Key policies include support for equitable funding across schools and districts, school-community transformation projects, strategic investment, oversight and accountability for public charter schools, and increased progressive activism on the part of parents and unions.
Proponents of charter schools claim that the flexibility and freedom to innovate and implement new, effective approaches will result in increased academic success, particularly among students of color. Examples such as the Harlem Children's Zone and the Green Dot network of charter schools do show impressive differences in the academic performance of students. But much has been written about exemplary public schools as well. Exemplars exist in both models, but in the final analysis, a number of studies have indicated that the academic outcomes of charter schools are not significantly different from those of public schools. In the main, achievement scores are equivalent or slightly lower for charter schools. (p.38)
It is ironic that the early promise of charter schools, "freed from red tape and formulaic practice," has developed into corporate models of prescribed curriculum and assessments, steep per-pupil back-office management fees, and indifferent student achievement.
The authors conclude, after extensively reporting on
existing studies and reports over several decades, that
“the efforts of individual parents to secure a quality
education for their children through charter schooling
is both rational and escalating, but as a collective
strategy it is a delusion." (p.35)
Fabricant and Fine wind up with the challenge:

Either we are prepared to struggle for a future built on a rock-solid foundation of a well-funded education system available for all children or we all suffer in the quicksand of shifting resources from a starved public education system to privatized alternatives. Nothing less than the very future of public schooling and a larger democratic culture and politics is at stake. (p.130)
Reference:

Fabricant, M. and Fine, M. (2012). Charter Schools and the Corporate Makeover of Public Education: What's at Stake? New York, NY: Teachers College Press.
If you would like to order this book, the information is
as follows: Charter Schools and the Corporate Makeover
of Public Education: What's at Stake? by Michael
Fabricant and Michelle Fine, 2012. Teachers College
Press, 1234 Amsterdam Avenue, New York, NY 10027.
ISBN: 978-0-8077-5285-2.
Reviewer:
Jill Hearne, Ph.D., is a WERA Emeritus member and a former principal and central office administrator. She currently consults in the area of accountability and assessment. She has worked at the local, state, and national levels in school effectiveness and has written extensively about equity and assessment. Additionally, Dr. Hearne has taught school administration, including curriculum and assessment, multicultural education, and organizational change, at several northwest universities. She divides her time between the Pacific Northwest and her sailboat, currently in Trinidad. Jill can be reached via email at: [email protected]
Embedded Formative Assessment by Dylan Wiliam
Reviewed by: Nancy Katims
In every current instructional framework, teacher
evaluation system, or simple list of factors positively
associated with student achievement, the use of
student assessments for making instructional decisions
is a key element. Most educators would classify the
use of assessment for instructional decision making as
“formative assessment.”
And yet, formative
assessment can and does take many forms, from
interpreting the results of the state assessment to
plan for next year’s instruction, to using benchmark
interim assessments in a pre-post fashion, to groups of
teachers using common unit assessments they have
developed collaboratively.
Dylan Wiliam points out that while all of the above
examples can be considered formative assessment if
the results are used to move student learning forward,
the most powerful type of formative assessment for
positively impacting student learning is formative
assessment implemented as a process. The word
“embedded” in the title of his book is significant.
Wiliam not only makes a research-based case for
teachers to embed formative assessment into their
instruction on a minute-by-minute daily basis, he also
paints a very concrete picture of what strategies and
techniques a teacher can use.
Wiliam begins his book by persuading the reader that
improving educational achievement is critical for the
good of society, and that the way to accomplish this
objective is by improving the quality of all teachers.
He follows this with an analysis of some popular
professional development topics, explaining why these
have failed. This leads to his main thesis – that the
body of research “that shows a large impact on
student achievement across different subjects, across
different age groups, and across different countries . .
. is the research on formative assessment” (p. 33).
After summarizing the history of formative assessment
and describing the elements that must be in place for
assessment to improve learning, Wiliam launches into
the main part of his book – a chapter on each of the
five strategies that comprise his conceptualization of
the formative assessment process:
1. Clarifying, sharing, and understanding learning
intentions and criteria for success
2. Engineering effective classroom discussions,
activities, and learning tasks that elicit evidence
of learning
3. Providing feedback that moves learning forward
4. Activating learners as instructional resources for
one another
5. Activating learners as the owners of their own
learning
Each of Wiliam’s five “strategy” chapters is laced with
descriptions of research supporting and clarifying that
strategy, issues associated with implementing the
strategy, and practical techniques teachers can use to
embed the strategy throughout their instruction. In
addition, Wiliam sprinkles each chapter with engaging
real-life examples that deepen our understanding of
the research, issues, and techniques associated with
each formative assessment strategy. At the end of the
book, Wiliam treats us to a helpful list of 53
techniques and the page numbers in the book where
each technique is described.
This book is very much like listening to Dylan Wiliam
speak – engaging, smart, logical, persuasive, and
tightly packed with content. Each time I re-read parts
of it, I learn something new or find something that
connects to another topic on which I am working.
What is not in the book is advice on how to weave
these five strategies together into a seamless use of
the formative assessment process in one’s classroom
or across all classrooms in a school. Perhaps that will
be in a sequel.
In his introduction, Wiliam states that his two
purposes are to provide (1) simple, practical ideas
about changes every teacher can make to develop his
or her teaching practice, and (2) the evidence that
these changes will result in improved outcomes for
learners.
Wiliam definitely achieves these two
purposes. For anyone who cares about increasing
student achievement, whether as an administrator
interested in providing concrete feedback to teachers to improve classroom practices, or as a teacher interested in
improving learning for all their students (or simply to score well on their teacher evaluation), this book provides the
guidance and practical techniques to achieve these ends.
Reference:
Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, IN: Solution Tree Press.
If you would like to order the book, the information is as follows: Wiliam, Dylan (2011). Embedded Formative
Assessment. Solution Tree Press, 162 pages, $29.95. ISBN: 978-1-935249-33-7.
Reviewer:
Nancy Katims, Ph.D., is the Director of Assessment, Research, and Evaluation for the Edmonds School District. She is also the immediate Past President of WERA (and a frequent contributor to WEJ with useful tips for practitioners in local districts). Contact her at [email protected].
The New Jim Crow: Mass Incarceration in the Age of Colorblindness by Michelle Alexander
Reviewed by: Karen Banks
What were Washington voters thinking when they
voted to legalize small amounts of marijuana
possession for personal use? Perhaps they realized
that the "War on Drugs" has been extremely costly,
both in dollars and in the toll on human lives. That
human toll includes both immediate and long-term
impacts of incarcerating people for drug offenses,
even for small quantities of drugs, along with the
related violence that we saw explode previously in the
1920s during prohibition. Policymakers may have
honestly believed the "War on Drugs" was the correct
direction for our country, but they had clearly
forgotten that alcohol prohibition led to a large
increase in organized crime for the purpose of
importing, distributing, or manufacturing alcohol.
Meanwhile, turning drug users into criminals has taken
a toll on communities left struggling with violence,
the families of those incarcerated, and finally, the
individual drug user.
This human toll is the focus of Michelle Alexander's
new book, The New Jim Crow: Mass Incarceration in
the Age of Colorblindness. The author contends that
a new caste system has evolved in America. The roots
of this caste system stem from our history of slavery,
but it was advanced by social policy initiatives such as
the "War on Drugs" and "Get Tough On Crime."
After the Civil War, Jim Crow laws emerged to prevent
African Americans from voting and to segregate them
from whites-only neighborhoods and housing, schools,
hospitals, bathrooms, and restaurants. The author
contends that America's incarceration of African Americans, especially for drug possession, has the
effect of continuing to deny them their basic civil
rights.
She backs this up with a slew of data. For example,
we have more African Americans in jail than we had
enslaved in 1850. Also, almost all of these
incarcerated people were unable to afford sufficient
legal representation to adequately defend themselves.
She describes in detail the indirect but real costs to
the men’s families and their communities. The burden
of their convictions lives on long after their departure
from jail, as they are denied the rights to vote, to serve on juries, and to be free of discrimination in employment, housing, education, and public benefits.
Some criticism has emerged concerning this book. The
NY Times, while generally lauding the book, states
that, “while drug offenders make up less than 25
percent of the nation’s total prison population …
violent offenders — who receive little mention in The
New Jim Crow — make up a much larger share.” The
article goes on, “Even if every single one of these drug
offenders were released tomorrow the United States
would still have the world’s largest prison system.”
Alexander's rejoinder: this discussion ignores the entirety of the issue, that not only the 2.4 million incarcerated are affected, but also the 4.8 million who have served their 'time' and are now released or under non-prison supervision. Their lives, as well as their families and communities, have been permanently affected. (This reviewer would remind readers that we have historical data showing an increase in violent crime due to prohibition.)
While some may disagree with Alexander’s
conclusions, she provides extensive data for readers to
use in making up their own minds and for shedding
light on this huge American social issue.
Reference:
Alexander, M. (2012) The New Jim Crow: Mass
Incarceration in the Age of Colorblindness. New York,
NY: The New Press.
If you would like to order the book, the information is
as follows. The New Jim Crow: Mass Incarceration in
the Age of Colorblindness, by Michelle Alexander.
Available from various sources, including the publisher
at $19.95, and other online used and new booksellers.
ISBN: 978-1-59558-103-0.
Reviewer:
Karen Banks, Ph.D., is the editor of WEJ and a
consultant in the areas of district improvement and
program evaluation. As a (former) assistant
superintendent, she focused on developing fairer
measures of school performance and supporting
schools with better access to data, while striving to
create equitable opportunities and outcomes for all
students. She can be reached via email at
[email protected].
The Signal and the Noise: Why So Many Predictions Fail – but Some Don't by Nate Silver
Reviewed by: Pat Cummings
I wonder how many educational statisticians entered the field via the childhood route of crunching baseball statistics, batting averages, and on-base percentages. For Nate Silver, author of the New York Times political blog "FiveThirtyEight" and the new book The Signal and the Noise, the 10,000 hours of practice he spent easily scanning and processing the stats on the backs of baseball cards were not simply a misplaced youthful diversion.
Just how good a statistician is Nate Silver? Well,
despite being ridiculed by the pundits, in 2012 he
outdid even his 2008 presidential election predictions.
In 2008, his statistical model missed only in predicting
the outcome for Indiana (which went to Obama by
0.1%). Otherwise, he correctly called 49 out of 50
states. In the 2012 elections, Silver's model correctly
predicted 50 out of 50 states. In baseball terms, that
is a perfect game.
The Signal and the Noise looks at the world of predictions, separating what is true "signal" from what distracts us, i.e., "noise." Silver examines a variety of fields (weather forecasting, the stock market, earthquakes, politics, and professional poker) and why most of their predictions fail. It usually is the result of "overconfidence." Silver calls this the "prediction paradox," where we mistake more confident predictions for more accurate ones.
And what does The Signal and the Noise have to do with educational research? We are often asked to separate the signal from the noise in many areas of research. For example, what is the impact of a new reading intervention program, as opposed to student demographic features, in predicting whether reading skills will improve? How confident are we in our predictions? What data do we have that truly will separate the facts from our opinions? What is the proper balance between curiosity and skepticism in helping us make sound predictions? Silver responds:
"The more eagerly we commit to scrutinizing and
testing our theories, the more readily we accept that
our knowledge of the world is uncertain, the more
willingly we acknowledge that perfect prediction is
impossible, the less we will live in fear of our
failures, and the more freedom we will have to let
our minds flow freely. By knowing more about what
we don't know, we may get a few more predictions
right."
This book is not for everyone. Some may find that Silver's constant attraction to Bayes's theorem (which concerns conditional probability: the probability that a theory or hypothesis is true if some event has happened) leads to something less than a page-turner. If you want a real page-turner, then I would recommend Gone Girl by Gillian Flynn…but that is another book review. For those of us who enjoy the world of deciphering the back of a baseball card and/or educational statistics, this book is highly recommended.
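For readers meeting Bayes's theorem for the first time, a minimal sketch in Python may help; the probabilities below are invented for the example and are not taken from Silver's book.

# Bayes's theorem: P(H | E) = P(E | H) * P(H) / P(E)
prior = 0.10            # P(H): prior belief that a new reading program works
p_gain_if_works = 0.75  # P(E | H): chance of an observed score gain if it works
p_gain_if_not = 0.20    # P(E | not H): chance of a gain from noise alone

# Total probability of observing the evidence either way.
p_gain = p_gain_if_works * prior + p_gain_if_not * (1 - prior)

posterior = p_gain_if_works * prior / p_gain
print(round(posterior, 2))  # 0.29: one promising result should update us, not convince us

This is the sense in which Silver asks us to treat predictions as beliefs to be revised, not verdicts.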
Reference:
Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail – but Some Don't. New York, NY: Penguin Press.
If you would like to order this book, it is available in
hardback from the publisher for $27.95, and in both
paperback and hardback from various used and new
booksellers. ISBN: 978-1-59420-411-1.
Reviewer:
Pat Cummings is Director of Research and Evaluation
in the Tacoma Public Schools. He can be reached at
[email protected]
The WERA Educational Journal
Editor
Karen Banks, Ph.D.
Consultant in Research, Evaluation & Assessment
Washington Educational Research Association
PO Box 15822
Seattle, WA 98115
www.wera-web.org
360-891-3333
[email protected]
Book Review Editor (Through May 2013)
Donnita Hawkins, Ed.D.
North Thurston School District
Technical Consultants
Patrick Cummings, Tacoma School District
The WERA Educational Journal Advisory Board:
Pam Farr, Ph.D., Shelton School District
Jill Hearne, Ph.D., Educational Consultant
Peter Hendrickson, Ph.D., WEJ Editor Emeritus
Duncan MacQuarrie, Ed.D., Council of Chief State School Officers
Jack Monpas-Huber, Ed.D., WERA Board Liaison
Brian Rick, Bellingham Public Schools
Bruce Denton, Mukilteo Public Schools
Kathryn Sprigg, Ph.D., Highline School District
Cathy Taylor, Ph.D., University of Washington
Michael Trevisan, Ph.D., Washington State University

Editorial Assistant
Andrea Meld, Ph.D., Kent School District

Layout Designer
Michelle Sekulich, The Boeing Company

Executive Secretary
Sharon Rockwood, WERA

The WERA Educational Journal is published twice a year as a peer-reviewed online journal. Submissions are welcomed from WERA members and others. Kindly submit articles for consideration using APA format. Submission deadlines are October 1st and March 1st. Publication dates are normally in December and June.
The following individuals served as reviewers for Volume 5 of the WEJ. In addition to my personal thanks, I know that all the authors are appreciative of the hard work reviewers do so that we can continue to improve the WEJ. Our thanks to all of you!
David Anderson
John Bash
Linda Elman
Peter Hendrickson
Carolyn Lint
Hilary Loeb
Benjamin Jones
Andrea Meld
Duncan MacQuarrie
Juliana Muli
Peter Scott