4 Assessment Delivery

Chapter 4 of the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) describes general test administration and monitoring procedures. This chapter describes updated procedures and data collected in 2022–2023, including a summary of administration time, device use, adaptive routing, accessibility support selections, test administration observations, data forensics reports, and test administrator survey responses regarding user experience.

Overall, intended administration features remained consistent with the 2021–2022 implementation, including the availability of instructionally embedded testlets, spring operational administration of testlets, the use of adaptive delivery during the spring window, and the availability of accessibility supports.

For a complete description of test administration for DLM assessments (including information on the Kite® Suite used to assign and deliver assessments, testlet formats, accessibility features, the First Contact survey used to recommend testlet linkage levels, available administration resources and materials, and monitoring of assessment administration), see the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

4.1 Overview of Key Features of the Science Assessment Model

This section describes DLM test administration for 2022–2023. For a complete description of key administration features, including information on assessment delivery, the Kite® Suite, and linkage level assignment, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017). Additional information about changes in administration can also be found in the Test Administration Manual (Dynamic Learning Maps Consortium, 2023e) and the Educator Portal User Guide (Dynamic Learning Maps Consortium, 2023d).

4.1.1 Assessment Administration Windows

Assessments are administered in the spring assessment window for operational reporting. Optional assessments are available during the instructionally embedded assessment window for educators to administer for formative information. Additional descriptions of how Essential Elements (EEs) and linkage levels are assigned during the spring assessment window can be found in the Adaptive Delivery section later in this chapter.

4.1.1.1 Instructionally Embedded Assessment Window

During the instructionally embedded assessment window, testlets are optionally available for test administrators to assign to their students. When choosing to administer the optional testlets during the instructionally embedded assessment window, educators decide which EEs and linkage levels to assess for each student using the Instruction and Assessment Planner in Educator Portal. The assessment delivery system recommends a linkage level for each EE based on the educator’s responses to the student’s First Contact survey, but educators can choose a different linkage level based on their own professional judgment. The dates for the instructionally embedded assessment window are determined by which assessment model each state participates in for English language arts (ELA) and mathematics (i.e., Instructionally Embedded or Year-End). States that only participate in the science assessments follow the dates for the Year-End model. In 2022–2023, the instructionally embedded assessment window occurred between September 12, 2022, and February 22, 2023, for states that participate in the Year-End model and between September 12, 2022, and December 16, 2022, for states that participate in the Instructionally Embedded model. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the instructionally embedded assessment window ranged from 14 to 23 weeks.

4.1.1.2 Spring Assessment Window

During the spring assessment window, students are assessed on all of the EEs on the assessment blueprint in science. The linkage level for each EE is determined by the system. As with the instructionally embedded assessment window, dates for the spring assessment window are determined by which assessment model is used for ELA and mathematics. In 2022–2023, the spring assessment window occurred between March 13, 2023, and June 9, 2023, for states that participate in the Year-End model and between February 6, 2023, and May 19, 2023, for states that participate in the Instructionally Embedded model. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the spring assessment window ranged from 6 to 15 weeks.

4.2 Evidence From the DLM System

This section describes evidence collected by the DLM system during the 2022–2023 operational administration of the DLM alternate assessment. The categories of evidence include administration time, device use, adaptive routing, administration incidents, and accessibility support selections.

4.2.1 Administration Time

Estimated testlet administration time varies by student and subject. Testlets can be administered separately across multiple testing sessions as long as they are all completed within the testing window.

The published estimate of testing time is around 5–15 minutes per testlet (Dynamic Learning Maps Consortium, 2023e), or 45–135 minutes per student for the spring assessment window (i.e., roughly nine testlets at the per-testlet estimate). Published estimates are slightly longer than the anticipated time students spend interacting with the assessment because they build in time for test administrator setup. The actual testing time per testlet varies depending on each student’s unique characteristics.

Kite Student Portal captured start and end time stamps for every testlet, and the difference between them was calculated for each completed testlet. Table 4.1 summarizes the distribution of test times per testlet; the distribution is consistent with that observed in prior years. Most testlets took around 4 minutes or less to complete. Time per testlet may have been affected by student breaks during the assessment or use of accessibility supports. Testlets with shorter than expected administration times are included in an extract made available to each state education agency, which state agency staff can use to monitor assessment administration and follow up as necessary. Testlets time out after 90 minutes.
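
The per-testlet summary in Table 4.1 can be reproduced from the captured time stamps with a calculation along the lines of the following minimal sketch. This is an illustration only, assuming a flat extract of completed testlets; the file and column names (completed_testlets.csv, start_time, end_time, grade_band) are hypothetical rather than actual Kite field names.

```python
import pandas as pd

# Hypothetical extract of completed testlets; actual Kite field names may differ.
testlets = pd.read_csv("completed_testlets.csv", parse_dates=["start_time", "end_time"])

# Duration in minutes for each completed testlet
testlets["minutes"] = (testlets["end_time"] - testlets["start_time"]).dt.total_seconds() / 60

# Summary statistics by grade band/course, mirroring the columns of Table 4.1
summary = testlets.groupby("grade_band")["minutes"].describe(percentiles=[0.25, 0.5, 0.75])
print(summary[["min", "25%", "50%", "mean", "75%", "max"]].round(1))
```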

Table 4.1: Distribution of Response Times per Testlet in Minutes
Grade Min Median Mean Max 25Q 75Q
Elementary 0.1 2.3 3.1 88.9 1.5 3.6
Middle school 0.1 2.0 2.8 89.8 1.3 3.3
High school 0.1 2.2 3.0 88.6 1.4 3.6
Biology 0.0 2.2 2.8 61.0 1.3 3.4
Note. Min = minimum; Max = maximum; 25Q = lower quartile; 75Q = upper quartile.

4.2.2 Device Use

Testlets may be administered on a variety of devices. Kite Student Portal captured the operating system used for each completed testlet. Although these data do not capture the specific devices used to complete each testlet (e.g., SMART Board, switch system), they provide high-level information about how students access assessment content. For example, we can identify how often an iPad is used relative to a Chromebook or traditional personal computer. Figure 4.1 shows the number of testlets completed on each operating system by subject and linkage level for 2022–2023. Overall, 48% of testlets were completed on a Chromebook, 26% were completed on an iPad, 21% were completed on a personal computer, and 6% were completed on a Mac.

Figure 4.1: Distribution of Devices Used for Completed Testlets

A bar graph showing the number of testlets completed on each device, by subject and linkage level.

Note. PC = personal computer.

4.2.3 Adaptive Delivery

The science assessments are adaptive between testlets. In spring 2023, the same routing rules were applied as in prior years: the linkage level of the next testlet a student received was based on the student’s performance on the most recently administered testlet, with the goal of matching students to the linkage level content best suited to their knowledge and skills. The rules were as follows (a minimal code sketch appears after the list):

  • The system adapted up one linkage level if the student responded correctly to at least 80% of the items measuring the previously tested EE. If the previous testlet was at the highest linkage level (i.e., Target), the student remained at that level.
  • The system adapted down one linkage level if the student responded correctly to less than 35% of the items measuring the previously tested EE. If the previous testlet was at the lowest linkage level (i.e., Initial), the student remained at that level.
  • Testlets remained at the same linkage level if the student responded correctly to at least 35% but less than 80% of the items measuring the previously tested EE.

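For illustration only, the routing rules above can be expressed as a short function. This is a minimal sketch under simplifying assumptions (linkage levels represented as an ordered list, performance as percentage correct); the function name route_next_level is hypothetical, and this is not the DLM system’s implementation.

```python
# Science linkage levels ordered from lowest to highest
LINKAGE_LEVELS = ["Initial", "Precursor", "Target"]

def route_next_level(current_level: str, percent_correct: float) -> str:
    """Return the linkage level of the next testlet based on performance on the
    most recently administered testlet (illustrative sketch of the routing rules)."""
    idx = LINKAGE_LEVELS.index(current_level)
    if percent_correct >= 80:
        idx = min(idx + 1, len(LINKAGE_LEVELS) - 1)  # adapt up one level, capped at Target
    elif percent_correct < 35:
        idx = max(idx - 1, 0)                        # adapt down one level, floored at Initial
    # otherwise (at least 35% but less than 80% correct) stay at the same level
    return LINKAGE_LEVELS[idx]

# Example: a student at Precursor who answers 2 of 5 items correctly (40%) stays at Precursor
print(route_next_level("Precursor", 40.0))  # -> Precursor
```
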
The linkage level of the first testlet assigned to a student was based on First Contact survey responses. See Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) for more details. The correspondence between the First Contact complexity bands and the first assigned linkage levels is shown in Table 4.2.

Table 4.2: Correspondence of Complexity Bands and Linkage Levels
First Contact complexity band Linkage level
Foundational Initial
Band 1 Initial
Band 2 Precursor
Band 3 Target
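
Under the same illustrative assumptions as the sketch above, the correspondence in Table 4.2 amounts to a simple lookup; the dictionary and name below (FIRST_TESTLET_LEVEL) are hypothetical and not the system’s actual code.

```python
# First Contact complexity band -> linkage level of the first spring testlet (Table 4.2)
FIRST_TESTLET_LEVEL = {
    "Foundational": "Initial",
    "Band 1": "Initial",
    "Band 2": "Precursor",
    "Band 3": "Target",
}

print(FIRST_TESTLET_LEVEL["Band 2"])  # -> Precursor
```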

Following the spring 2023 administration, analyses were conducted to determine the percentage of students, within each grade or course and First Contact complexity band, whose assessments adapted up or down (or did not adapt) between the first and second testlets administered. The aggregated results can be seen in Table 4.3.

For the majority of students across all grades who were assigned to the Foundational Complexity Band by the First Contact survey, the system did not adapt testlets to a higher linkage level after the first assigned testlet (ranging from 59% to 70%). A similar pattern was seen for students assigned to Band 3, with the majority of students not adapting down to a lower linkage level after the first assigned testlet (ranging from 52% to 88%). In contrast, students assigned to Band 1 tended to adapt up to a higher linkage level after their first testlet (ranging from 51% to 78%). Consistent patterns were not as apparent for students assigned to Band 2. Overall, the direction in which students assigned to the higher complexity bands moved between the first and second testlets was more variable; this greater variability is consistent with the trend observed in prior years. Several factors may help explain these results, including more variability in student characteristics within this group and content-based differences across grades. For a description of previous findings, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017) and the subsequent technical manual updates (Dynamic Learning Maps Consortium, 2018a, 2018b, 2019, 2021, 2022).

Table 4.3: Adaptation of Linkage Levels Between First and Second Science Testlets (N = 41,822)
Foundational: Adapted up (%), Did not adapt (%)
Band 1: Adapted up (%), Did not adapt (%)
Band 2: Adapted up (%), Did not adapt (%), Adapted down (%)
Band 3: Did not adapt (%), Adapted down (%)
Grade, followed by the nine column values in the order listed above
Grade 3 35.8 64.2 77.6 22.4 38.5 38.5 23.1 * *
Grade 4 40.7 59.3 74.3 25.7 22.2 49.6 28.2 60.3 39.7
Grade 5 41.2 58.8 73.5 26.5 25.1 47.1 27.9 63.7 36.3
Grade 6 32.3 67.7 74.4 25.6 28.8 38.9 32.3 51.6 48.4
Grade 7 31.9 68.1 69.3 30.7 39.8 38.0 22.2 70.6 29.4
Grade 8 35.5 64.5 70.4 29.6 37.7 42.2 20.1 68.1 31.9
Grade 9 33.2 66.8 65.2 34.8 46.3 37.3 16.4 88.1 11.9
Grade 10 30.0 70.0 69.5 30.5 39.8 39.2 21.0 88.0 12.0
Grade 11 38.7 61.3 62.5 37.5 40.5 41.0 18.5 87.2 12.8
Grade 12 * * * * * * * * *
Biology 31.2 68.8 50.9 49.1 19.4 39.2 41.4 61.9 38.1
Note. Foundational and Band 1 correspond to the testlets at the lowest linkage level, so the system could not adapt testlets down a linkage level. Band 3 corresponds to testlets at the highest linkage level in science, so the system could not adapt testlets up a linkage level.
* These data were suppressed because n < 50.

After the second testlet is administered, the system continues to adapt testlets based on the same routing rules. Table 4.4 shows the total number and percentage of testlets that were assigned at each linkage level during the spring assessment window. In science, testlets were fairly evenly distributed across the three linkage levels, with the Initial and Precursor linkage levels being assigned slightly more often.

Table 4.4: Distribution of Linkage Levels Assigned for Assessment
Linkage level n %
Initial 134,785 36.7
Precursor 123,883 33.7
Target 108,614 29.6

4.2.4 Administration Incidents

DLM staff annually evaluate testlet assignment to promote correct assignment of students to testlets. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. One incident occurred during the spring 2023 administration in which the complexity band was calculated incorrectly. Complexity bands are used to assign the linkage level of the first testlet administered in spring. This occurred because responses to the First Contact survey for some students were changed while complexity bands were still being calculated in the system from the original responses. Because complexity band calculation was already in progress, a new calculation was not triggered when the responses were updated. As soon as this miscalculation was identified, the system was updated to allow for additional complexity band calculations to queue behind ongoing calculations, and additional quality control procedures were put in place to ensure that the miscalculation is not repeated. However, prior to the correction, two students completed testlets with an incorrect complexity band, and therefore began their assessment at an unintended linkage level. State education agencies were given the option to revert students to the beginning of the assessment at the intended linkage level or proceed. Both affected students proceeded without a test reset.

As in previous years, an Incident File was delivered to state partners with the General Research File, which provided the list of students who did not have their assessment reset. States were able to use this file during the 2-week review period to make decisions about invalidation of records at the student level based on state-specific accountability policies and practices. For more information on the General Research File and supplemental files, see Chapter 7 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

4.2.5 Accessibility Support Selections

Accessibility supports provided in 2022–2023 were the same as those available in previous years. The DLM Accessibility Manual (Dynamic Learning Maps Consortium, 2023c) distinguishes accessibility supports that are provided in Kite Student Portal via the Personal Needs and Preferences Profile, require additional tools or materials, or are provided by the test administrator outside the system. Table 4.5 shows selection rates for the three categories of accessibility supports. Overall, 41,403 students (>99%) had at least one support selected. The most commonly selected supports in 2022–2023 were human read aloud, spoken audio, and test administrator enters responses for student. For a complete description of the available accessibility supports, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

Table 4.5: Accessibility Supports Selected for Students (N = 41,403)
Support n %
Supports provided in Kite Student Portal
Spoken audio 26,799 64.7
Magnification   6,215 15.0
Color contrast   3,855   9.3
Overlay color   1,683   4.1
Invert color choice   1,136   2.7
Supports requiring additional tools/materials
Individualized manipulatives 19,643 47.4
Calculator 14,973 36.2
Single-switch system   1,789   4.3
Alternate form–visual impairment   1,084   2.6
Two-switch system      544   1.3
Uncontracted braille       46   0.1
Supports provided outside the system
Human read aloud 36,601 88.4
Test administrator enters responses for student 25,689 62.0
Partner-assisted scanning   3,991   9.6
Language translation of text      763   1.8
Sign interpretation of text      605   1.5

4.3 Evidence From Monitoring Assessment Administration

DLM staff monitor assessment administration using various materials and strategies. As in prior years, DLM staff made available an assessment administration observation protocol for use by DLM staff, state education agency staff, and local education agency staff. Project staff also reviewed Service Desk requests and hosted regular check-in calls with state education staff to monitor common issues and concerns during the assessment window. This section provides an overview of the assessment administration observation protocol and its use.

4.3.1 Test Administration Observations

Consistent with previous years, the DLM Consortium used a test administration observation protocol to gather information about how educators in the consortium states deliver testlets to students with the most significant cognitive disabilities. This protocol gave observers, regardless of their role or experience with DLM assessments, a standardized way to describe how DLM testlets were administered. The test administration observation protocol captured data about student actions (e.g., navigation, responding), educator assistance, variations from standard administration, engagement, and barriers to engagement. For a full description of the test administration observation protocol, see Chapter 4 of the 2015–2016 Technical Manual—Science (Dynamic Learning Maps Consortium, 2017).

During 2022–2023, there were 67 assessment administration observations collected in seven states. Table 4.6 shows the number of observations collected by state. Of the 67 total observations, 47 (70%) were of computer-delivered assessments and 20 (30%) were of educator-administered testlets.

Table 4.6: Educator Observations by State (N = 67)
State n %
Arkansas 39 58.2
Iowa   2   3.0
Kansas   4   6.0
Missouri   9 13.4
New York   3   4.5
North Dakota   2   3.0
West Virginia   8 11.9

Observations for computer-delivered testlets are summarized in Table 4.7; behaviors on the test administration observation protocol were identified as supporting, neutral, or nonsupporting. For example, clarifying directions (found in 46.8% of observations) removes student confusion about the task demands as a source of construct-irrelevant variance and supports the student’s meaningful, construct-related engagement with the item. In contrast, using physical prompts (e.g., hand-over-hand guidance) indicates that the test administrator directly influenced the student’s answer choice. Overall, 57% of observed behaviors were classified as supporting, with 0% of observed behaviors reflecting nonsupporting actions.

Table 4.7: Test Administrator Actions During Computer-Delivered Testlets (n = 47)
Action n %
Supporting
Read one or more screens aloud to the student 32 68.1
Navigated one or more screens for the student 23 48.9
Clarified directions or expectations for the student 22 46.8
Repeated question(s) before student responded 15 31.9
Neutral
Used pointing or gestures to direct student attention or engagement 19 40.4
Used verbal prompts to direct the student’s attention or engagement (e.g., “look at this.”) 17 36.2
Entered one or more responses for the student 12 25.5
Used materials or manipulatives during the administration process   9 19.1
Asked the student to clarify or confirm one or more responses   5 10.6
Allowed student to take a break during the testlet   4   8.5
Repeated question(s) after student responded (gave a second trial at the same item)   4   8.5
Nonsupporting
Physically guided the student to a response   0   0.0
Reduced the number of answer choices available to the student   0   0.0
Note. Respondents could select multiple responses to this question.

For DLM assessments, interaction with the system includes interaction with the assessment content as well as physical access to the testing device and platform. The fact that educators navigated one or more screens in 49% of the observations does not necessarily indicate the student was prevented from engaging with the assessment content as independently as possible. Depending on the student, test administrator navigation may either support or minimize students’ independent, physical interaction with the assessment system. While not the same as interfering with students’ interaction with the content of the assessment, navigating for students who are able to do so independently conflicts with the assumption that students are able to interact with the system as intended. The observation protocol did not capture why the test administrator chose to navigate, and the reason was not always obvious.

Observations of student actions taken during computer-delivered testlets are summarized in Table 4.8. Independent response selection was observed in 68% of the cases. Nonindependent response selection may include allowable practices, such as test administrators entering responses for the student. The use of materials outside of Kite Student Portal was seen in 4% of the observations. Verbal prompts for navigation and response selection are strategies within the realm of allowable flexibility during test administration. These strategies, which are commonly used during direct instruction for students with the most significant cognitive disabilities, are used to maximize student engagement with the system and promote the type of student-item interaction needed for a construct-relevant response. However, they also indicate that students were not able to sustain independent interaction with the system throughout the entire testlet.

Table 4.8: Student Actions During Computer-Delivered Testlets (n = 47)
Action n %
Selected answers independently 32 68.1
Selected answers after verbal prompts 17 36.2
Navigated screens independently 15 31.9
Navigated screens after verbal prompts   8 17.0
Navigated screens after test administrator pointed or gestured   5 10.6
Revisited one or more questions after verbal prompt(s)   3   6.4
Independently revisited a question after answering it   2   4.3
Used materials outside of Kite Student Portal to indicate responses to testlet items   2   4.3
Asked the test administrator a question   1   2.1
Skipped one or more items   1   2.1
Note. Respondents could select multiple responses to this question.

Observers noted whether there was difficulty with accessibility supports (including lack of appropriate available supports) during observations of educator-administered testlets. Of the 20 observations of educator-administered testlets, observers noted difficulty in zero cases (0%). For computer-delivered testlets, observers noted students who indicated responses to items using varied response modes such as gesturing (23%) and using manipulatives or materials outside of the Kite system (4%). Of the 67 test administration observations collected, students completed the full testlet in 47 cases (70%). In all instances where the testlet was not completed, no reason was provided by the observer.

Finally, DLM assessment administration expects test administrators to enter student responses with fidelity, including across multiple modes of communication, such as verbal, gesture, and eye gaze. Table 4.9 summarizes students’ response modes for educator-administered testlets. The most frequently observed behavior was gestured to indicate response to test administrator who selected answers.

Table 4.9: Primary Response Mode for Educator-Administered Testlets (n = 20)
Response mode n %
Gestured to indicate response to test administrator who selected answers 11 55.0
Verbally indicated response to test administrator who selected answers   8 40.0
No observable response mode   1   5.0
Eye gaze system indication to test administrator who selected answers   0   0.0
Note. Respondents could select multiple responses to this question.

Observations of computer-delivered testlets when test administrators entered responses on behalf of students provided another opportunity to confirm fidelity of response entry. This support is recorded on the Personal Needs and Preferences Profile and is recommended for a variety of situations (e.g., students who have limited motor skills and cannot interact directly with the testing device even though they can cognitively interact with the onscreen content). Observers recorded whether the response entered by the test administrator matched the student’s response. In 12 of 47 (26%) observations of computer-delivered testlets, the test administrator entered responses on the student’s behalf. In 10 (83%) of those cases, observers indicated that the entered response matched the student’s response, while the remaining two observers either responded that they could not tell if the entered response matched the student’s response, or they left the item blank.

4.4 Evidence From Test Administrators

This section describes evidence collected from the spring 2023 test administrator survey. Test administrators receive one survey per rostered DLM student, which annually collects information about that student’s assessment experience. As in previous years, the survey was distributed to test administrators in Kite Student Portal, where students completed assessments. Instructions indicated the test administrator should complete the survey after administration of the spring assessment; however, users can complete the survey at any time. The survey consisted of three blocks. Blocks 1 and 3 were administered in every survey. Block 1 included questions about the test administrator’s perceptions of the assessments and the student’s interaction with the content, and Block 3 included questions about the test administrator’s background, to be completed once per administrator. Block 2 was spiraled, so test administrators received one randomly assigned section. In these sections, test administrators were asked about one of the following topics per survey: relationship of the assessment to ELA, mathematics, or science instruction.

4.4.1 User Experience With the DLM System

A total of 11,826 test administrators (65%) responded to the survey, reporting on 23,945 students’ experiences. Test administrators are instructed to respond to the survey separately for each of their students. Participating test administrators responded to surveys for between 1 and 28 students, with a median of 1 student. Test administrators reported an average of 11 years of experience teaching science and teaching students with significant cognitive disabilities.

The following sections summarize responses regarding both educator and student experience with the system.

4.4.1.1 Educator Experience

Test administrators were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required test administrators to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Responses are summarized in Table 4.10.

Nearly all test administrators (97%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (93%) agreed or strongly agreed that the Required Test Administrator Training prepared them for their responsibilities as test administrators. Most test administrators agreed or strongly agreed that they had access to curriculum aligned with the content that was measured by the assessments (88%) and that they used the manuals and the Educator Resource page (92%).

Table 4.10: Test Administrator Responses Regarding Test Administration
Statement, followed by n and % for each response category: SD, D, A, SA, and A+SA
I was confident in my ability to deliver DLM testlets. 72 1.0 118 1.7 2,941 41.3 3,996 56.1 6,937 97.4
Required Test Administrator Training prepared me for the responsibilities of a test administrator. 144 2.0 349 4.9 3,349 47.1 3,269 46.0 6,618 93.1
I have access to curriculum aligned with the content measured by DLM assessments. 203 2.9 686 9.6 3,407 47.9 2,814 39.6 6,221 87.5
I used manuals and/or the DLM Educator Resource Page materials. 155 2.2 439 6.2 3,649 51.2 2,880 40.4 6,529 91.6
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.4.1.2 Student Experience

The spring 2023 test administrator survey included three items about how students responded to test items. Test administrators were asked to rate statements from strongly disagree to strongly agree. Results are presented in Table 4.11. The majority of test administrators agreed or strongly agreed that their students responded to items to the best of their knowledge, skills, and understandings; were able to respond regardless of disability, behavior, or health concerns; and had access to all necessary supports to participate.

Table 4.11: Test Administrator Perceptions of Student Experience with Testlets
Statement, followed by n and % for each response category: SD, D, A, SA, and A+SA
Student responded to items to the best of his/her knowledge, skills, and understanding. 822 3.8 1,622 7.5 11,364 52.7 7,773 36.0 19,137 88.7
Student was able to respond regardless of his/her disability, behavior, or health concerns. 1,305 6.0 1,864 8.6 11,060 51.2 7,393 34.2 18,453 85.4
Student had access to all necessary supports to participate. 688 3.2 1,125 5.2 11,372 52.7 8,380 38.9 19,752 91.6
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

Annual survey results show that a small percentage of test administrators disagree that their student was able to respond regardless of disability, behavior, or health concerns; had access to all necessary supports; and was able to effectively use supports. In spring 2020, DLM staff conducted educator focus groups with educators who disagreed with one or more of these survey items to learn about potential accessibility gaps in the DLM system (Kobrin et al., 2022). A total of 18 educators from 11 states participated in six focus groups. The findings revealed that many of the challenges educators described were documented in existing materials (e.g., wanting clarification about allowable practices that are described in the Test Administration Manual, such as substituting materials; desired use of not-allowed practices like hand-over-hand that are used during instruction). DLM staff are using the focus group findings to review existing materials and develop new resources that better communicate information about allowable practices to educators.

4.4.2 Opportunity to Learn

The spring 2023 test administrator survey also included items about students’ opportunity to learn. Table 4.12 reports the opportunity to learn results.

Approximately 55% of responses (n = 11,824) reported that most or all science testlets matched instruction.

Table 4.12: Educator Ratings of Portion of Testlets That Matched Instruction
Subject, followed by n and % for each rating: None, Some (< half), Most (> half), All, and Not applicable
Science 1,817 8.5 6,388 29.8 7,381 34.4 4,443 20.7 1,416 6.6

A subset of test administrators were asked to indicate the approximate number of hours they spent instructing students on each of the DLM science core ideas and in the science and engineering practices. Educators responded using a 6-point scale: 0 hours, 1–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, or more than 20 hours. Table 4.13 and Table 4.14 indicate the amount of instructional time spent on DLM science core ideas and science and engineering practices, respectively. For all science core ideas and science and engineering practices, the most commonly selected response was 1–5 hours.

Table 4.13: Instructional Time Spent on Science Core Ideas
Core Idea, Median (hours), followed by n and % for each number of hours: 0, 1–5, 6–10, 11–15, 16–20, and >20
Physical Science
Matter and its interactions 1–5 1,644 23.7 2,216 32.0 1,273 18.4 763 11.0 558 8.0 478 6.9
Motion and stability: Forces and interactions 1–5 1,850 26.9 2,150 31.3 1,225 17.8 746 10.8 493 7.2 413 6.0
Energy 1–5 1,717 25.1 2,131 31.2 1,251 18.3 774 11.3 532 7.8 425 6.2
Life Science
From molecules to organisms: Structures and processes 1–5 2,088 30.5 1,969 28.7 1,108 16.2 731 10.7 516 7.5 437 6.4
Ecosystems: Interactions, energy, and dynamics 1–5 1,492 21.8 2,054 30.0 1,272 18.6 874 12.8 644 9.4 513 7.5
Heredity: Inheritance and variation of traits 1–5 2,538 37.1 1,906 27.8    996 14.6 617   9.0 426 6.2 362 5.3
Biological evolution: Unity and diversity 1–5 2,416 35.3 1,920 28.0 1,023 14.9 661   9.7 449 6.6 376 5.5
Earth and Space Science
Earth’s place in the universe 1–5 1,617 23.6 2,068 30.2 1,254 18.3 840 12.3 589 8.6 481 7.0
Earth’s systems 1–5 1,610 23.6 2,050 30.0 1,279 18.7 835 12.2 587 8.6 475 6.9
Earth and human activity 1–5 1,496 21.8 2,108 30.7 1,313 19.1 855 12.5 598 8.7 492 7.2

Table 4.14: Instructional Time Spent on Science and Engineering Practices
Science and engineering practice, Median (hours), followed by n and % for each number of hours: 0, 1–5, 6–10, 11–15, 16–20, and >20
Developing and using models 1–5 1,850 26.8 2,474 35.8 1,113 16.1 634   9.2 428 6.2 402   5.8
Planning and carrying out investigations 1–5 1,641 24.0 2,309 33.7 1,225 17.9 747 10.9 502 7.3 422   6.2
Analyzing and interpreting data 1–5 1,481 21.7 2,186 32.0 1,298 19.0 786 11.5 573 8.4 516   7.5
Using mathematics and computational thinking 6–10 1,350 19.7 2,058 30.0 1,194 17.4 831 12.1 611 8.9 809 11.8
Constructing explanations and designing solutions 1–5 2,091 30.6 2,108 30.8 1,093 16.0 710 10.4 478 7.0 360   5.3
Engaging in argument from evidence 1–5 2,464 36.0 2,068 30.2    979 14.3 590   8.6 415 6.1 328   4.8
Obtaining, evaluating, and communicating information 1–5 1,679 24.5 2,130 31.0 1,192 17.4 724 10.6 563 8.2 572   8.3

Another dimension of opportunity to learn is student engagement during instruction. The First Contact survey contains two questions that ask educators to rate student engagement during computer- and educator-directed instruction. Table 4.15 shows the percentage of students who were rated as demonstrating different levels of attention by instruction type. Overall, 87% of students demonstrate fleeting or sustained attention to computer-directed instruction and 86% of students demonstrate fleeting or sustained attention to educator-directed instruction.

Table 4.15: Student Attention Levels During Instruction
Type of instruction, followed by n and % for each attention level: Demonstrates little or no attention, Demonstrates fleeting attention, and Generally sustains attention
Computer-directed (n = 38,921) 4,895 12.6 21,324 54.8 12,702 32.6
Educator-directed (n = 42,405) 5,997 14.1 26,173 61.7 10,235 24.1

4.5 Conclusion

Delivery of the DLM system was designed to align with instructional practice and be responsive to individual student needs. Assessment delivery options allow for necessary flexibility to reflect student needs while also including constraints to maximize comparability and support valid interpretation of results. The dynamic nature of DLM assessment administration is reflected in adaptive routing between testlets. Evidence collected from the DLM system, test administration monitoring, and test administrators indicates that students are able to successfully interact with the system to demonstrate their knowledge, skills, and understandings.