Dynamic Learning Maps
Acknowledgements
1 Overview
1.1 Current DLM Collaborators for Development and Implementation
1.2 Theory of Action and Interpretive Argument
1.3 Technical Manual Overview
2 Essential Element Development
3 Assessment Design and Development
3.1 Assessment Structure
3.2 Testlet and Item Writing
3.2.1 2023 Testlet and Item Writing
3.2.2 External Reviews
3.3 Evidence of Item Quality
3.3.1 Field-Testing
3.3.2 Operational Assessment Items for 2022–2023
3.3.3 Evaluation of Item-Level Bias
3.4 Conclusion
4 Assessment Delivery
4.1 Overview of Key Features of the Science Assessment Model
4.1.1 Assessment Administration Windows
4.2 Evidence From the DLM System
4.2.1 Administration Time
4.2.2 Device Use
4.2.3 Adaptive Delivery
4.2.4 Administration Incidents
4.2.5 Accessibility Support Selections
4.3 Evidence From Monitoring Assessment Administration
4.3.1 Test Administration Observations
4.4 Evidence From Test Administrators
4.4.1 User Experience With the DLM System
4.4.2 Opportunity to Learn
4.5 Conclusion
5 Modeling
5.1 Psychometric Background
5.2 Model Evaluation
5.2.1 Model Fit
5.2.2 Classification Accuracy
5.3 Calibrated Parameters
5.3.1 Probability of Masters Providing Correct Response
5.3.2 Probability of Nonmasters Providing Correct Response
5.3.3 Item Discrimination
5.3.4 Base Rate Probability of Class Membership
5.4 Conclusion
6 Standard Setting
7 Reporting and Results
7.1 Student Participation
7.2 Student Performance
7.2.1 Overall Performance
7.2.2 Subgroup Performance
7.3 Mastery Results
7.3.1 Mastery Status Assignment
7.3.2 Linkage Level Mastery
7.4 Data Files
7.5 Score Reports
7.5.1 Individual Student Score Reports
7.6 Quality-Control Procedures for Data Files and Score Reports
7.7 Conclusion
8 Reliability
8.1 Background Information on Reliability Methods
8.2 Methods of Obtaining Reliability Evidence
8.2.1 Reliability Sampling Procedure
8.3 Reliability Evidence
8.3.1 Linkage Level Reliability Evidence
8.3.2 Essential Element Reliability Evidence
8.3.3 Domain and Topic Reliability Evidence
8.3.4 Subject Reliability Evidence
8.3.5 Performance Level Reliability Evidence
8.4 Conclusion
9 Training and Professional Development
9.1 Updates to Required Test Administrator Training
9.2 Instructional Professional Development
9.2.1 Professional Development Participation and Evaluation
9.3 Conclusion
10 Validity Evidence
10.1 Validity Evidence Summary
10.2 Continuous Improvement
10.2.1 Improvements to the Assessment System
10.2.2 Future Research
11 References
Appendix A Supplemental Information About Assessment Design and Development
A.1 Differential Item Functioning Plots
A.1.1 Uniform Model
A.1.2 Combined Model
2022–2023 Technical Manual Update
Science
December 2023
Copyright © 2023 Accessible Teaching, Learning, and Assessment Systems (ATLAS)