Glossary

Alternate assessment:

A test used to evaluate the performance of students who are determined to be unable to participate in general state assessments even with accommodations or modifications. Alternate assessments provide a mechanism for students with the most significant cognitive disabilities, and for other students with disabilities who may need alternate ways to access assessments, to be included in an educational accountability system.

Assessment Targets:

In the Smarter Balanced Assessment System, assessment targets describe the expectations for what the items and tasks will assess.

Bias:

Bias in assessment exists when one or more items on an assessment offend or unfairly penalize students because of personal characteristics such as race, gender, socioeconomic status, or religion. Bias in assessment results in systematically lower or higher scores for identifiable groups of students.

California Alternate Performance Assessment (CAPA):

CAPA is an alternate assessment component of the Standardized Testing and Reporting (STAR) Program. The CAPA measures test takers’ achievement of California’s content standards for English–language arts, mathematics, and science. It is for students with an individualized education program (IEP) who have significant cognitive disabilities and who have been determined to be unable to take the California Standards Tests or the California Modified Assessment with accommodations or modifications.

California English-Language Development Standards
(CA ELD standards):

These are the standards that describe the key knowledge, skills, and abilities in English language development for English learners. The CA ELD standards are aligned with the CA CCSS in English–language arts, but they do not replace the CA CCSS for identified English learners. The three parts of the standards (Interacting in Meaningful Ways, Learning About How English Works, and Using Foundational Literacy Skills) should be interpreted as complementary and interrelated dimensions of what must be addressed in a rigorous instructional program.

California English Language Development Test (CELDT):

The CELDT is a California test administered annually that measures limited English-proficient (LEP) students’ achievement of CA ELD Standards for kindergarten through grade twelve (K–12). Three purposes for the CELDT are specified in state law: (1) identifying students as LEP; (2) determining the level of English-language proficiency for students who are LEP; and (3) assessing the progress of LEP students in acquiring the skills of listening, speaking, reading, and writing in English.

California High School Exit Examination (CAHSEE):

The high school exit examination for California contains two parts: English–language arts and mathematics. Passing both parts of the CAHSEE is currently a requirement that all students in California public schools, except eligible students with a disability, must satisfy in order to earn a California high school diploma. Students take the CAHSEE beginning in grade ten. The purpose of the CAHSEE is to improve student achievement in high school and help ensure that students who graduate can demonstrate grade-level competencies in the content standards for reading, writing, and mathematics.

California Modified Assessment (CMA):

An alternate assessment and a component of the STAR Program, the CMA measures test takers’ achievement of California’s content standards for English–language arts, mathematics, and science on the basis of modified achievement standards for eligible students who have an IEP and meet additional CMA eligibility criteria. IEP teams are responsible for making the decision on whether an identified special education student participates in the CMA.

California Standards Tests (CSTs):

The CSTs have been the cornerstone of the STAR Program. The CSTs were designed to measure how well students in grades two through eleven are achieving California’s content standards for English–language arts, mathematics, science, and history–social science.

Common Core State Standards (CCSS):

CCSS are English–language arts/literacy and mathematics standards developed collaboratively by the National Governors Association, the Council of Chief State School Officers, teachers, school administrators, and other experts. The CCSS define the knowledge and skills students should acquire during their K–12 education in order to graduate from high school with the ability to succeed in entry-level, credit-bearing academic college courses and/or in workforce training programs. California adopted the CA CCSS in 2010.

Constructed-response method/item:

Constructed-response (CR) items prompt students to generate a text or numerical response, providing evidence of their knowledge or understanding of a given assessment target. Short constructed-response items may require test takers to enter a single word, phrase, sentence, number, or set of numbers, whereas extended constructed-response items require more elaborated answers and explanations of reasoning. Constructed-response items allow students to demonstrate their use of complex thinking skills, supporting the development of 21st century learning skills and college and career readiness.

Criterion-referenced assessment:

Criterion-referenced assessment is a type of assessment designed to provide evidence of student performance in terms of clearly defined learning tasks/targets, such as content standards. The California STAR Program and the CalMAPP assessment system are designed to provide criterion-referenced data specifically in regard to student learning of the skills and knowledge delineated in the content standards.

Diagnostic assessments:

As defined in California Education Code (EC) Section 60603, diagnostic assessments are interim assessments of the current level of achievement of a pupil that serve both of the following purposes: (1) the identification of particular academic standards or skills a pupil has or has not yet achieved; and (2) the identification of possible reasons that a pupil has not yet achieved particular academic standards or skills.

English learner:

An English learner is a student in K–12 who, based on an objective language proficiency assessment, has not developed listening, speaking, reading, and writing proficiencies in English sufficient for full access to and participation in the regular school program. State and federal laws require that local educational agencies (LEAs) administer a state test of English language proficiency to newly enrolled students whose primary language is not English and to English learners as an annual assessment. Since 2001, this test for California’s public school students has been the CELDT.

English Learner Proficiency Assessment for California (ELPAC):

The ELPAC is the California English language proficiency assessment currently under development. It will be aligned to the 2012 ELD Standards for California Public Schools as well as the CA CCSS. The goal for the system is to maximize assessment information on language development to support English learners’ access to the CA CCSS.

Fairness in assessment:

An assessment is considered fair when its results yield score interpretations that are valid and reliable for all students taking the test. Regardless of race, national origin, gender, or disability, academic assessments must measure the same content knowledge for all students who take the assessment. Resulting scores must not systematically underestimate or overestimate the knowledge of students in a particular group.

Feedback:

Feedback is information provided to learners that is designed to improve learning. Effective, meaningful feedback helps students see where they are now in relation to a specific learning target or targets and where they need to be.

Formative assessment practices:

As defined in California Education Code (EC) Section 60603(i), “formative assessment” refers to assessment tools and processes that are embedded in instruction and used by teachers and pupils to provide timely feedback for purposes of adjusting instruction to improve learning.

High-quality assessment:

As defined in EC Section 60603(j), a high-quality assessment is an assessment designed to measure a pupil’s knowledge of, understanding of, and ability to apply critical concepts through the use of a variety of item types and formats, including, but not limited to, items that allow for open-ended responses and items that require the completion of performance-based tasks. A high-quality assessment system is designed to give the multiple users of assessment results accurate data that meet their specific informational needs.

Individualized Education Program (IEP):

An IEP is a written plan that is designed by an LEA team (required membership is designated in federal law) to meet the unique educational needs of a student with disabilities, as defined by federal regulations. The IEP must be tailored to the individual student’s needs as identified by the evaluation process and should describe how the student learns, how the student best demonstrates what is learned, and what teachers and service providers must do to help the student learn more effectively. It must also state how the student will participate in the state assessment system, as determined by the IEP team in accordance with guidelines provided by the California Department of Education (CDE).

Individuals with Disabilities Education Act (IDEA):

This is a federal law to ensure that appropriate services are provided to students with disabilities throughout the nation. The IDEA governs how states and public agencies provide early intervention, special education, and related services to eligible infants, toddlers, children, and youths with disabilities. The law provides detailed regulations and guidelines regarding eligibility, procedures, and service delivery.

Interim assessment:

As defined in EC Section 60603(k), an interim assessment is an assessment that is given at regular and specified intervals throughout the school year, is designed to evaluate a pupil’s knowledge and skills relative to a specific set of academic standards, and produces results that can be aggregated by course, grade level, school, or LEA in order to inform teachers and administrators at the pupil, classroom, school, and LEA levels. These assessments may also be referred to as benchmark assessments in some districts and schools and are typically used as a progress-monitoring tool two to four times during the school year.

Learning Progression (or continuum):

The term Learning Progression refers to a curriculum map or a graphic or narrative description of the skills and knowledge students need to learn, presented in the sequence in which students typically progress from novice to more expert performance.

Local Educational Agency (LEA):

LEA refers to a government agency that supervises local public primary and secondary schools in the delivery of instructional and educational services. LEAs include school districts, county offices of education, state special schools, and independent public charter schools.

National Center and State Collaborative (NCSC):

A project led by 5 centers and 27 states (18 core states and 9 Tier II states) to build an alternate assessment based on alternate achievement standards (AA-AAS) for students with the most significant cognitive disabilities. California is currently a Tier II participating member of this collaborative.

Norm-referenced assessment:

Norm-referenced assessment refers to an assessment designed to provide information that is interpreted in terms of the individual student’s relative performance/position within a defined group. For example, Susan’s keyboarding accuracy was better than that of 90% of the class members.

Percentile score/rank:

A percentile score/rank is a score generated from a norm-referenced assessment that indicates the student’s relative ranking in a norm group by stating the percentage of students that scored the same or a lower score. For example, a test score that is greater than 75% of the scores of people taking the test is described as at the 75th percentile.
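
To make the arithmetic concrete, the short Python sketch below (an illustration only, using a made-up norm group, not part of any state scoring system) computes a percentile rank as the percentage of scores in a norm group that are the same as or lower than a given student’s score, following the definition above.

    def percentile_rank(student_score, norm_group_scores):
        """Percentage of norm-group scores at or below the student's score."""
        at_or_below = sum(1 for s in norm_group_scores if s <= student_score)
        return 100 * at_or_below / len(norm_group_scores)

    # Example with a small, hypothetical norm group of raw scores.
    norm_group = [28, 31, 35, 38, 40, 41, 42, 45, 47, 50]
    print(percentile_rank(42, norm_group))  # 70.0 -> roughly the 70th percentile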

Portfolios:

Classroom portfolios are an organizing tool for telling a story about student learning. Artifacts are collected, based on analysis and reflection, to communicate and demonstrate student learning. Portfolios are not themselves a form of assessment, but they can be used as a tool in the assessment process to organize and display artifacts that document growth, achievement, and/or competence.

Performance-based assessment method/item type:   

Performance tasks (PTs) measure a student’s ability to integrate knowledge and skills across multiple standards—a key component of college and career readiness. Performance tasks are used to measure capacities such as depth of understanding, research skills, and complex analysis, which cannot be adequately assessed with selected or constructed-response items. These item types challenge students to apply their knowledge and skills to respond to complex real-world problems.

Primary language assessment:

A primary language assessment is an assessment administered in a student’s primary language, explicitly designed for native speakers or second language learners of that language. The Standards-based Tests in Spanish (part of the STAR Program) are an example of a primary language assessment designed for Spanish-speaking English learners.

Summative assessment:

An assessment administered at the conclusion of a unit of instruction (chapter, unit, end of grade level/course) to comprehensively assess student learning across multiple content standards is considered a summative assessment. Data from summative assessments are also used to evaluate the effectiveness of an instructional method or program.

Raw score:

The raw score on a test is simply the number of items correct or points received on the test.

Reliability:

Reliability is the degree of confidence that both scores and student performance are repeatable over time and across different circumstances. Repeatability of scores means that the same student response will receive the same score(s) no matter who scores the assessment or when it is scored. It is the test score that is or is not reliable, not the test.
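
As a simple, hypothetical illustration of score repeatability across scorers (not an official reliability statistic), the Python sketch below computes the percentage of student responses that two scorers rated identically; higher agreement is one indicator that the scores are reliable.

    def percent_agreement(scorer_a, scorer_b):
        """Percentage of responses given the same score by both scorers."""
        matches = sum(1 for a, b in zip(scorer_a, scorer_b) if a == b)
        return 100 * matches / len(scorer_a)

    # Made-up rubric scores (0-4) that two scorers assigned to the same six responses.
    scorer_a = [3, 2, 4, 1, 3, 2]
    scorer_b = [3, 2, 3, 1, 3, 2]
    print(percent_agreement(scorer_a, scorer_b))  # 83.3 -> fairly consistent scoring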

Rubric:

A rubric is a scoring tool that provides a detailed description of the quality indicators for a work product. The quality indicators are described for multiple levels of performance, ranging from beginning to advanced, and each level is assigned a numerical score. Rubrics are particularly useful tools for evaluating constructed-response assessment items and performance tasks.

Sample size:

Sample size refers to how many items or how much evidence is needed to make a relatively confident evaluation of the level of student achievement in relation to a learning target/objective/standard. Multiple factors are involved in determining how much evidence is sufficient to support a valid conclusion about student learning.

Selected-response assessment method/item type:

Selected-response (SR) items prompt students to select one or more correct responses from a set of choices. Carefully constructed and reviewed selected-response items allow students to demonstrate their use of complex thinking skills, such as developing comparisons or contrasts; identifying cause and effect; identifying patterns or conflicting points of view; or categorizing, summarizing, or interpreting information.

Task Analysis:

Task analysis is the process of breaking a standard down into the building blocks necessary for student mastery of that content standard. It requires the identification of the teachable and measurable sub-skills that lead to proficiency in the skills, knowledge, or concepts of a content standard. The task analysis process provides a road map for sequencing instruction and assessment to fully address the content standard.

Universal design principles:

This term refers to the concept of designing assessments to be accessible to the greatest extent possible to all students, regardless of disability or English proficiency. Rather than retrofitting existing assessments through accommodations or alternative tests so all students can participate, universal design calls for new assessments to be designed and developed from the beginning to allow for the participation of the widest possible range of students. Principles of Universal Design for Learning have been developed by CAST (http://www.cast.org) as a planning tool for use in the design of instruction and assessment.

Validity:

Validity can be defined as the degree to which evidence and theory support the interpretations of test scores. It is the interpretation of the test score and how it is used that is validated, not the test.  Validity can be thought of as the extent to which test scores accurately reflect the relevant knowledge and skills of test takers.