As states moved to implement new, more rigorous college/career-readiness standards in English/language arts and mathematics, they faced a challenge: how would they assess student progress on the new standards? Writing high quality assessment items that truly measure student mastery of the standards would be no small task. It would be both time consuming and expensive.
In Kentucky, Senate Bill 1 (2009) mandated new standards and aligned assessments beginning in 2011-12, so the Kentucky Department of Education contracted with vendors to provide end-of-year tests for students in grades 3-8, along with an on-demand writing test and end-of-course exams in Algebra II, English II, Biology and U.S. History at the high school level. The majority of these tests were traditional, multiple-choice, fill-in-the-bubble tests that, while valid and reliable, were more narrowly focused than the standards demanded.
Meanwhile, in 2010, the U.S. Department of Education (USED) awarded $330 million to two assessment consortia to develop a new generation of tests designed to provide ongoing feedback to teachers during the course of the school year, measure annual student growth, and more accurately gauge students’ understanding and application of the standards. Through the consortia, states would benefit from having their dollars used in highly leveraged ways to support goals that would not otherwise be achieved without an infusion of federal funding.
Based on their applications, the Partnership for Assessment of Readiness for College and Careers (PARCC) planned to test students’ ability to read complex text, complete research projects, excel at classroom speaking and listening assignments, and work with digital media. The SMARTER Balanced Assessment Consortium (SBAC) would test students using computer adaptive technology that would ask students tailored questions based on their previous answers.
The consortia would develop periodic assessments throughout the school year to inform students, parents and teachers about whether students were on track.
The requirements of the grant provided that the consortia “…make all assessment content (i.e., assessments and assessment items) developed with funds from this competition freely available to the States, technology platform providers and others that request it for the purposes of administering assessments, provided they comply with the consortium or state requirements for test and item security.”
This provision was designed to ensure that content developed with public funds was widely available – including to states that were not part of grantee consortia. Initially, Kentucky was a participating state in each consortium, meaning we were monitoring but not leading the work. Eventually, due to capacity issues and a potential conflict of interest should either or both consortia bid on Kentucky’s testing contract, the state withdrew from both.
Now, in an effort to save millions of dollars, the Kentucky Department of Education is seeking access to consortia-developed assessment items at the end of the 2014-15 school year so that we may enhance Kentucky’s assessment item pool for the 2015-16 state assessments. Of course, before any new items are added to state K-PREP tests, they would move through the normal state review process.
It is my understanding, however, that several states have already contacted the consortia to request access to assessment items and have either been denied access or told they would have to pay for it. Both conditions appear to violate the program requirements of the publicly funded grant.
So, the question is, who owns the assessment items and the consortia-developed assessments? Are they yours (the consortia’s), mine (the states’) or ours (the federal government’s)?
I have written Secretary of Education Arne Duncan to ask for clarification. Kentucky and several other states anxiously await his response. Stay tuned.