yeatmanlab/roar-tech-manual-public
ROAR: A bridge between the lab and the classroom

Assessments are typically time-consuming and resource-intensive to administer: individually administering assessments to each student in a classroom means a substantial amount of lost instruction time and requires extensive training for teachers to accurately administer and score measures that are used for high-stakes decisions (e.g., access to intervention). Researchers face these same challenges, creating a bottleneck to research at scale. While education technology companies have built products that lower the demands on teachers, many of these products are expensive, grounded in opaque, proprietary technology, and lack a strong research backing. Hence, these products rarely get used in research, creating a disconnect between educational research and practice.

We launched ROAR envisioning a new model: an open-source, open-access assessment platform, grounded in ongoing academic research, and co-developed in collaboration with school-district stakeholders. Rather than a one-way street from the lab to society (often with a commercial intermediary), ROAR’s goal is to cultivate a virtuous cycle between research and practice. We aim to build a suite of completely automated, lightly gamified, online assessments that are grounded in ongoing cognitive neuroscience research and validated against the current “gold standard” of standardized, individually administered assessments. Our approach is to partner with school districts and community-based organizations at each stage of research and development to ensure that our research is grounded in real-world problems and inspired by the deep knowledge of educators who work with children and youth across a diversity of contexts. Through this “Research Practice Partnership” model, we work toward a new assessment methodology that is more valid, precise, efficient, and informative. We aim to design this platform around the diversity of learners in the United States (and abroad). We prioritize transparency at every stage: whenever feasible, materials and technology are made public, and each measure within ROAR is published in open-access, peer-reviewed journals with the goal of building more systemic connections between the lab, classroom, and society.

ROAR Vision and Mission

ROAR emerged out of more than a decade of research in the Brain Development & Education Lab on the neurobiological foundations of literacy. Our goal was to leverage the extensive literature on the cognitive neuroscience of reading development to develop a completely automated, lightly gamified online assessment platform that could replace the resource-intensive and time-consuming conventional approach of individually administering assessments that are scored based on verbal responses. In other words, we endeavored to create a platform that could assess an entire school district in the time typically required to administer an assessment to a single student. We envisioned a new approach to research and development, grounded in the principles of open-science, where each ROAR measure would be grounded in the extensive interdisciplinary literature on reading development, be validated adhering to the highest standards of rigor in each discipline, and be published in open-access journals to support scientific transparency.

Open-source ideology in educational assessment

The last decade has seen a revolution in scientific transparency. The open-science movement began as a grassroots effort to make science more transparent, accessible, and reproducible through the open sharing of code and data alongside publications in open-access journals. The success of the open-science movement can be appreciated in new public data-sharing mandates from many of the major scientific funders in the United States and Europe, as well as in the proliferation of organizations like the Center for Open Science and preprint servers like bioRxiv, all of which make it easier to document, share, and reproduce scientific research. In fields like cognitive neuroscience, it is now standard practice for software and algorithms to be open-source, and many journals even require various open-science practices. In education, however, most widely used assessments are grounded in proprietary products, with many of the technical details guarded by paywalls or made purposefully opaque to maintain a competitive edge in the market. There are, of course, counterexamples like DIBELS, which has always maintained open-access printed materials, and with projects like the Item Response Warehouse, which provides open access to many educational datasets, there is a clear desire among educational researchers for a move toward open science.

We launched ROAR with the mission to bring the open-source ideology to educational assessment. Our lab has a long track record of developing and supporting open-source software for the analysis and sharing of brain imaging data, and for modeling the interplay between brain development and learning. ROAR represents the next phase of this open-science mission: to build tools that meet the needs of educators to assess reading development while, simultaneously, opening the door to research at an unprecedented scale. Not every aspect of ROAR is completely open, but we consciously prioritize open science at every stage of development, including this technical manual, which is written as an open-source Quarto book (the full code base for the book will be made public once we have meticulously checked that no identifying information has been committed to the repository).

Approach to validation

Each ROAR measure is rigorously validated both in an academic research setting (i.e., “in the lab”) and in a typical school setting (i.e., “in the classroom”). We take both approaches to validation to ensure that ROAR meets the highest standards of rigor across applications in research and practice. Lab validation studies recruit research participants through the typical recruitment avenues of the Brain Development & Education Lab and validate new ROAR measures against “gold standard” individually administered diagnostic assessments that are widely accepted by reading and dyslexia researchers. School validation studies are conducted through a Research Practice Partnership model in collaboration with school districts to ensure that ROAR is valid for the desired use cases in the school. Since the question for a school is often “How does ROAR relate to our standard of practice?”, we report both a) validation of ROAR measures against the assessments currently used in standard practice in our collaborating schools and b) validation of ROAR measures against measures administered by the ROAR research team to students in the district. Together, these two approaches to validation have allowed us to extensively examine the accuracy and precision of ROAR relative to a) the constructs it was designed to measure and b) other related measures that are widely used across the United States.

The ROAR Assessment Suite

ROAR consists of a collection of measures, each designed to tap into a critical aspect of reading. Each individual measure can be run independently and returns raw scores, standard scores, and percentiles relative to national norms. Additionally, measures are also grouped into measurement suites that comprehensively evaluate different constructs in reading development, and produce composite scores and risk indices.
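To make the description above concrete, the shape of a per-measure result and a suite-level composite could be sketched as follows. This is a purely illustrative sketch: the field names, the mean-100/SD-15 norming convention, and the mean-based composite rule are assumptions for the example, not ROAR's actual schema or scoring method.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MeasureResult:
    """One measure's output for a student (fields are illustrative, not ROAR's API)."""
    measure: str           # name of the measure
    raw_score: int         # e.g., number of items answered correctly
    standard_score: float  # normed score (assumed convention: mean 100, SD 15)
    percentile: float      # percentile relative to national norms

def composite_standard_score(results: list[MeasureResult]) -> float:
    """Toy composite for a measurement suite: mean of the measures' standard scores."""
    return mean(r.standard_score for r in results)

# A hypothetical two-measure suite for one student
suite = [
    MeasureResult("measure-a", raw_score=42, standard_score=104.0, percentile=61.0),
    MeasureResult("measure-b", raw_score=35, standard_score=96.0, percentile=39.0),
]
print(composite_standard_score(suite))  # 100.0
```

A real composite would more likely be derived from a psychometric model fit across measures; the simple mean here only illustrates how independent measures can feed a suite-level score.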

© Jason D. Yeatman, Stanford University. All rights reserved
