1 Overview

The Dynamic Learning Maps® (DLM®) Alternate Assessment System assesses student achievement in English language arts (ELA), mathematics, and science for students with the most significant cognitive disabilities in grades 3–8 and high school. The purpose of the system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high and actionable academic expectations and by providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations about what students know and are able to do and to support inferences about student achievement in the given subject. Results provide information that can guide instructional decisions and that can be used in state accountability programs.

The DLM system is developed and administered by Accessible Teaching, Learning, and Assessment Systems (ATLAS), a research center within the University of Kansas’s Achievement and Assessment Institute. The DLM system is based on the core belief that all students should have access to challenging, grade-level or grade-band content. Online DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know in ways that traditional paper-and-pencil assessments cannot.

A complete technical manual was created after the first operational administration in 2015–2016. After each annual administration, a technical manual update is provided to summarize updated information. The current technical manual provides updates for the 2022–2023 administration. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Due to differences in the development timeline for science, separate technical manuals were prepared for ELA and mathematics (see Dynamic Learning Maps Consortium, 2023a, 2023b).

1.1 Current DLM Collaborators for Development and Implementation

During the 2022–2023 academic year, DLM assessments were available to students in the District of Columbia and 20 states: Alaska, Arkansas, Colorado, Delaware, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Pennsylvania, Rhode Island, Utah, West Virginia, and Wisconsin. One partner, Colorado, administered assessments only in ELA and mathematics; one partner, the District of Columbia, administered assessments only in science. The DLM Governance Board is composed of two representatives from the education agency of each member state or district. Representatives have expertise in special education and state assessment administration. The DLM Governance Board advises on the administration, maintenance, and enhancement of the DLM system.

In addition to ATLAS and the DLM Governance Board, other key partners include the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and Assessment and Technology Solutions (previously known as Agile Technology Solutions) at the University of Kansas.

The DLM system is also supported by a technical advisory committee (TAC). The DLM TAC members possess decades of expertise in areas including large-scale assessment, accessibility for alternate assessments, diagnostic classification modeling, and assessment validation. The DLM TAC provides advice and guidance on the technical adequacy of the DLM assessments.

1.2 Theory of Action and Interpretive Argument

The theory of action that guided the design of the DLM system for science was similar to the theory of action for the ELA and mathematics assessments, which was formulated in 2011, revised in December 2013, and revised again in 2019. It expresses the belief that high expectations for students with the most significant cognitive disabilities, combined with appropriate educational supports and diagnostic tools for educators, result in improved academic experiences and outcomes for students and educators.

The DLM Theory of Action expresses a commitment to provide students with the most significant cognitive disabilities access to an assessment system that is capable of validly and reliably evaluating their achievement. Ultimately, students will make progress toward higher expectations, educators will make instructional decisions based on data, educators will hold higher expectations of students, and state and district education agencies will use results for monitoring and resource allocation.

Assessment validation for the DLM assessments uses a three-tiered approach consisting of (1) the DLM Theory of Action, which defines the statements in the validity argument that must be in place to achieve the goals of the system; (2) an interpretive argument, which defines the propositions that must be evaluated to support each statement in the DLM Theory of Action; and (3) validity studies that evaluate each proposition in the interpretive argument.

After identifying these overall guiding principles and anticipated outcomes, specific elements of the DLM Theory of Action were articulated to inform assessment design and to highlight the associated validity argument. The DLM Theory of Action includes the assessment’s intended effects (long-term outcomes); statements related to design, delivery, and scoring and reporting; and action mechanisms (i.e., connections between the statements). In Figure 1.1, the chain of reasoning in the DLM Theory of Action is shown broadly by the order of the four sections from left to right: design statements serve as inputs to delivery, which informs scoring and reporting, which collectively lead to the long-term outcomes for various stakeholders. The chain of reasoning is made explicit by the numbered arrows between the statements. Dashed lines represent connections that are present when the optional instructionally embedded assessments are used.

Figure 1.1: Dynamic Learning Maps Theory of Action

Note. The figure shows each statement in the DLM Theory of Action and how the statements are connected to one another through the chain of reasoning.

1.3 Technical Manual Overview

This manual provides evidence collected during the 2022–2023 administration of science assessments.

Chapter 1 provides a brief overview of the DLM system, including collaborators for development and implementation, the DLM Theory of Action and interpretive argument, and a summary of contents of the remaining chapters. While subsequent chapters describe the individual components of the assessment system separately, key topics such as validity are addressed throughout this manual.

Chapter 2 was not updated for 2022–2023. For a full description of the development of the Essential Elements (EEs), including their intended coverage of the Framework (National Research Council, 2012) and the Next Generation Science Standards (NGSS; NGSS Lead States, 2013), see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Chapter 3 describes assessment design and development in 2022–2023, including a description of test development activities and external review of content. The chapter then presents evidence of item quality, including results of field testing, operational item data, and differential item functioning.

Chapter 4 describes assessment delivery, including updated procedures and data collected in 2022–2023. The chapter provides information on administration time, device usage, adaptive routing, and accessibility support selections. The chapter also provides evidence from test administrators, including user experience with the DLM system and student opportunity to learn.

Chapter 5 provides a brief summary of the psychometric model used in scoring DLM assessments. This chapter includes a summary of the 2022–2023 calibrated parameters. For a complete description of the modeling method, see the 2021–2022 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2022).

Chapter 6 was not updated for 2022–2023; no changes were made to the cut points used for determining performance levels on DLM assessments. See the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) for a description of the methods, preparations, procedures, and results of the original standard-setting meeting and the follow-up evaluation of the impact data. For a description of the changes made to the cut points used in scoring DLM assessments for grade 3 and grade 7 during the 2018–2019 administration, see the 2018–2019 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2019).

Chapter 7 reports the 2022–2023 operational results, including student participation data. The chapter details the percentage of students achieving at each performance level; subgroup performance by gender, race, ethnicity, and English-learner status; and the percentage of students who showed mastery at each linkage level. Finally, the chapter describes changes to score reports and data files during the 2022–2023 administration.

Chapter 8 summarizes reliability evidence for the 2022–2023 administration, including a brief overview of the methods used to evaluate assessment reliability and results by linkage level, EE, conceptual area, subject (overall performance), and student groups. For a complete description of the reliability background and methods, see the 2021–2022 Technical Manual Update—Science (Dynamic Learning Maps Consortium, 2022).

Chapter 9 describes updates to the professional development offered across states administering DLM assessments in 2022–2023, including participation rates and evaluation results.

Chapter 10 synthesizes the evidence provided in the previous chapters. It evaluates how the evidence supports the claims in the DLM Theory of Action as well as the long-term outcomes. The chapter ends with a description of future research and ongoing initiatives for continuous improvement.