RPPL.Insights.Trailer.mp4
The RPPL Insights (RPPL Visualizer v2.0) platform is a fully modular, ELA Shared Measures-aligned data visualization system designed for secure research environments such as Stronghold. This version introduces a construct-driven architecture, modular chart pipelines, stronger local security via a custom Python server, and support for multiple interactive chart types: radar charts, trends-over-time line charts, milestone threshold charts, and scatterplots. All charts operate on locally stored survey data aligned to the first version of the ELA Shared Measures Toolkit.
Together, the four views give teams a complete understanding of their CBPL and HQIM implementation:
- Radar (Org Snapshot): Where are we today?
- Overall (Trends Over Time): How are we changing?
- Milestone (Progress Towards Goals): Did we accomplish our goals?
- Scatterplot (Item-Level Relationships): How do we compare to others?
- Open the usermap file at `config/usermap` (this is essentially your "permissions list").
- Add a user if you want them to access an org's dataset by adding a new line in this format: `username,org,password`
  Example: `Neithan,org5,v9D2Q`
- Remove a user if you want to revoke access by deleting that user's entire line (including the newline).
- Save the file.
What the usermap does (in plain terms):
It tells the Visualizer: “When this person logs in, which org dataset(s) are they allowed to see?”
Note that the files inside the `orgdata` folder have org prefixes, e.g., `org1_teacher_survey.csv`. The org in a usermap line is the prefix whose data the user is allowed to see: `Neithan,org5,v9D2Q`, for instance, means user Neithan will be shown the org5 data in his charts.
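For instance, a usermap granting two people access to different orgs could look like this (the second line is a made-up example):

```
Neithan,org5,v9D2Q
JSmith,org2,k4R8t
```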
Do / Don’t
- ✅ Do: keep names exactly as used for login.
- ✅ Do: double-check spelling (one typo = no access).
- ❌ Don’t: rename the usermap file unless the code explicitly points to the new name.
There are two important files that define the “rules.” These two files basically decide (1) what the dashboard shows and (2) what your CSV must look like.
The first file, `js/school-system.constructs.js`, does NOT point to your CSV. It only defines the 6 construct boxes and the subconstruct blurbs/labels that appear in the UI.
For example, this entry:

```js
'school-system': {
  headerTitle: 'SCHOOL AND SYSTEM CONDITION',
  groupA: { title: 'HQIM Coherence:' },
  groupB: { title: 'Foundational Structures:' }
}
```

means:
- You will see a box called SCHOOL AND SYSTEM CONDITION
- Inside it you will see Group A = HQIM Coherence and Group B = Foundational Structures
- This file is UI labels only (it does not decide which CSV columns get averaged)
✅ If you are only adding new org data, you usually do not touch this file.
The second file, `js/school-system.data.js`, is the one that actually connects the Visualizer to your CSV data.
It defines, per construct, which survey sets exist, which CSV file each one loads, and which columns (questions) it averages.
Here is a real example from your file:
```js
{
  label: "(A) HQIM Coherence - Teacher Survey",
  fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
  questions: [
    "How well does your school leaders' vision for instruction align with your adopted curriculum?"
  ]
}
```

This means, very literally:
- The Visualizer will look for a file named: `orgdata/<org>_teacher_survey.csv`
- Inside that CSV, it will look for a column named exactly: `How well does your school leaders' vision for instruction align with your adopted curriculum?`
- It will average that column's values (after grouping by month) and plot it for that survey set.
So if your CSV column header is even slightly different (extra spaces, different punctuation, different capitalization), it won’t match, and it will look like “no data.”
In THIS version, the actual filename pattern is not the `org<number>_NameOfDataAlignedWithELAMeasuresV1_csv` convention; the pattern is whatever `fileOf(org)` returns inside `js/school-system.data.js`.
In your current config, it expects files like:
```
orgdata/org5_teacher_survey.csv
orgdata/org5_school_leader_survey.csv
orgdata/org5_admin_pulse_check.csv
orgdata/org5_teacher_pulse_check.csv
orgdata/org5_non_teacher_pl_participant.csv
orgdata/org5_classroom_observation.csv
```
Quick example (for org5)
If your usermap gives someone access to org5, the Visualizer will try to load:
- Teacher Survey: `orgdata/org5_teacher_survey.csv`
- School Leader Survey: `orgdata/org5_school_leader_survey.csv`
- Classroom Observations (if used in a construct): `orgdata/org5_classroom_observation.csv`
✅ So the "right" filenames are literally whatever the code says here:

```js
fileOf: (org) => `orgdata/${org}_teacher_survey.csv`
```

Meaning:
- `${org}` becomes `org5`
- the final filename becomes `orgdata/org5_teacher_survey.csv`
Do / Don’t
- ✅ Do: put the files inside the `orgdata/` folder (because the path includes `orgdata/`).
- ✅ Do: name them exactly like the pattern above (including underscores).
- ❌ Don't: rename files to "prettier" names unless you also update every `fileOf()` that points to them.
For one survey set like:

```js
fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
questions: [
  "Do you have sufficient time to engage in professional learning focused on [curriculum]?"
]
```

Your CSV must have:
- a `date` column (in `DD/MM/YYYY`)
- a column header exactly: `Do you have sufficient time to engage in professional learning focused on [curriculum]?`
- numeric values in that column (e.g., 1–5)
If any of those are missing, the graph will look blank.
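For example, a matching `orgdata/org5_teacher_survey.csv` could start like this (dates and scores are illustrative only):

```
date,Do you have sufficient time to engage in professional learning focused on [curriculum]?
03/09/2025,4
17/10/2025,3
```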
- Open your CSV.
- Make sure column headers match what the Visualizer expects exactly.
Most important column rules for V1:
- If the Visualizer expects a column called `date`, it must be `date` (not `Date`, not `DATE`).
- If items are keyed by exact question text, your column header must match the exact question text.
- If items are keyed by item codes, your column header must match the item code.
If your columns don’t match, the Visualizer won’t “guess.” It will just show blanks.
Files you usually change:
- ✅ `config/usermap` (permissions)
- ✅ your org CSV data files
- ✅ (only if needed) `school-system.data.js`, to point to new filenames/paths
Files you should NOT change unless you’re updating the measurement model:
- ❌ `school-system.constructs.js` (this defines the framework; changing it changes the model)
- ❌ core Visualizer graph code (unless you are doing dev work)
- Log in as a user who should have access (per usermap).
- Check that the org dataset appears.
- Open a basic graph:
  - if nothing shows up, it's usually:
    - wrong filename
    - wrong folder/path
    - wrong column headers
    - user not included in usermap
- Double-click `runserver.bat`
- Leave that window open (don't close it).
- If it shows an error, copy-paste the last lines into chat so we can diagnose.
- Double-click `runclient.bat`
- A browser tab should open automatically (or it will print a local URL like `http://localhost:____`).
- Keep that window open while you use the Visualizer.
- If the client runs but nothing loads, the server might not be running.
- If you get a port error, it usually means something else is already using that port.
That's about everything :) If the usermap, file names, and column names match what the Visualizer expects, it will load.
RPPL Insights v2.0 runs entirely on a local Python server and serves fully static HTML/CSS/JS files. No internet access or external APIs are required. The system is designed to operate inside restricted research environments (including Stronghold) where data files must never be directly accessible through the browser.
- Python 3.8+ (already installed in Stronghold)
- Modern browser (Edge or Chrome recommended)
- Ability to run local `.bat` scripts inside the environment
RPPL-Insights/
├─ assets/ # Background images, explainer videos
│ ├─ (various .mp4/.png files)
│
├─ config/
│ └─ usermap.csv # Maps username → password → org
│
├─ js/ # Main modular JS files (v2.0)
│ ├─ login.js
│ ├─ rpplmasterscripts.js
│ ├─ school-system.constructs.js
│ ├─ school-system.data.js
│ ├─ school-system.index.js
│ ├─ school-system.milestone.js
│ ├─ school-system.org.js
│ ├─ school-system.overall.js
│ ├─ school-system.radar.js
│ ├─ school-system.scatter.js
│ └─ school-system.tutorial.js
│
├─ libraries/ # Chart.js, Luxon, PapaParse, Python server logic
│ ├─ chart.js
│ ├─ chart.umd.js
│ ├─ [email protected]
│ ├─ luxon.min.js
│ ├─ papaparse.min.js
│ ├─ client.py # Client-side Python helper (Stronghold)
│ └─ server.py # Secure Python server (blocks direct CSV access)
│
├─ orgdata/ # Organization CSV files (ELA framework aligned)
│ └─ (org-specific CSV files placed here)
│
├─ styles/
│ ├─ rpplmasterstyles.css # Main layout styles
│ └─ school-system.css # Visualizer interface + dynamic modal styling
│
├─ favicon.ico
│
├─ index.html # Home page (Dimensions → Constructs menu)
└─ visualizer.html # Visualization engine (Radar / Trends / Milestone / Scatter)
When launched, the Python server:
- Serves only the approved HTML/CSS/JS assets
- Blocks all directory access (no folder listings)
- Blocks all file-level access under `orgdata/`
- Allows the JavaScript visualizer to read CSVs internally via `fetch()` without exposing them to the browser
This ensures maximum compatibility with Stronghold’s isolation requirements while keeping the visualization fast and fully local.
RPPL Insights is designed so that framework logic lives in two files:
- `js/school-system.constructs.js` — What the constructs are, how they're named, and how the UI displays them.
- `js/school-system.data.js` — Which data each construct uses (CSV file, label, color, questions).
Understanding these two files gives you full control to adapt the visualizer to any instructional framework.
This file is the content map of the framework. It registers each construct by a stable id (e.g., school-system, professional-learning, instructional-practice, etc.) and defines:
- The construct’s dimension, title, and subtitle
- Subconstruct groups A, B, and optionally C
(with a badge color, badge text, title, and description)
Example structure:
```js
const CONSTRUCTS = {
  'school-system': {
    id: 'school-system',
    dimension: 'System Conditions',
    title: 'School & System Conditions',
    subtitle: 'HQIM implementation is supported by and integrated with existing infrastructure.',
    groupA: {
      badgeText: 'A',
      badgeColor: '#A98FD4',
      title: 'HQIM Coherence',
      description: 'Alignment between vision, curriculum, and other systems.'
    },
    groupB: {
      badgeText: 'B',
      badgeColor: '#4C9AFF',
      title: 'Foundational Structures',
      description: 'Time, processes, and routines that support implementation.'
    }
    // groupC: {...} if needed
  }
};
```

When a user clicks a box in `index.html`, the app calls:

```js
setCurrentConstructAndRefresh('school-system');
```

`visualizer.html` then reads the construct config and automatically updates the headers, subconstruct badges, and radar chart title.
This file defines a single DATA_CONFIGS object that drives exactly which CSVs the visualizer loads and which questions each chart calculates averages for.
```js
const DATA_CONFIGS = {
  'school-system': {
    radar: { surveySets: [...] },
    overall: { surveySets: [...] },
    milestone: { sets: [...] },
    scatter: { SURVEY_SETS: [...] }
  },
  'professional-learning': { ... },
  'instructional-practice': { ... },
  'teacher-beliefs': { ... }
};
```

The four visualizer views are always expressed in one of these schemas:
```js
radar: {
  surveySets: [
    {
      label: "(A) HQIM Coherence - Teacher Survey",
      fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
      questions: [
        "How well does your school leaders' vision for instruction align with your adopted curriculum?"
      ]
    },
    // more...
  ]
}
```

Each `surveySets[]` entry tells the visualizer:
- label → text on the radar axis
- fileOf(org) → which CSV to load
- questions[] → exact column names to average
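To make that concrete, here is a minimal sketch of how one survey set could be loaded and reduced to monthly averages, assuming the bundled PapaParse library and the `fileOf` / `questions` shape above; the actual `school-system.radar.js` code may differ:

```js
// Sketch only: monthly averages for one survey set, keyed by "MM/YYYY".
async function monthlyAverages(surveySet, org) {
  const csvText = await (await fetch(surveySet.fileOf(org))).text();
  const rows = Papa.parse(csvText, { header: true, skipEmptyLines: true }).data;
  const buckets = {}; // "MM/YYYY" -> { total, count }
  for (const row of rows) {
    const [, month, year] = String(row.date).split('/'); // date column is DD/MM/YYYY
    const key = `${month}/${year}`;
    for (const question of surveySet.questions) {
      const value = parseFloat(row[question]);   // exact column-name match required
      if (Number.isNaN(value)) continue;         // mismatched header or blank cell is skipped
      (buckets[key] ??= { total: 0, count: 0 }).total += value;
      buckets[key].count += 1;
    }
  }
  return Object.fromEntries(
    Object.entries(buckets).map(([key, b]) => [key, b.total / b.count])
  );
}
```

The exact-match lookup `row[question]` is also why a column header that differs by even one character simply looks like "no data."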
```js
overall: {
  surveySets: [
    {
      label: '(A) HQIM Coherence - Teacher Survey',
      color: '#A98FD4',
      fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
      questions: [ "…" ]
    },
    // more...
  ]
}
```

This works exactly like radar but adds:
- color → controls line colors in the trend chart
- support for global averages, depending on `GLOBAL_BASELINE` and the "Include my org in global" toggle
```js
milestone: {
  sets: [
    {
      label: '(A) HQIM Coherence - Teacher Survey',
      color: '#A98FD4',
      fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
      questions: [ "…" ]
    },
    // more...
  ]
}
```

The format is the same; it just uses `sets[]` instead of `surveySets[]`.
```js
scatter: {
  SURVEY_SETS: [
    {
      label: "(A) HQIM Coherence — Teacher",
      fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
      questions: [ "…" ]
    },
    // more...
  ]
}
```

Again the same structure, but optimized for org-vs-global monthly pairings and LOESS smoothing.
A: Two files (and they do different jobs):

- `js/school-system.constructs.js` — UI labels only. This controls:
  - the construct box title (headerTitle / headerSubtitle)
  - the A/B/C group titles + blurbs shown on the page

Example (this is why you see the "SCHOOL AND SYSTEM CONDITION" box with A/B labels):

```js
'school-system': {
  headerTitle: 'SCHOOL AND SYSTEM CONDITION',
  headerSubtitle: 'HQIM implementation is supported by and integrated with existing infrastructure',
  groupA: { title: 'HQIM Coherence:' },
  groupB: { title: 'Foundational Structures:' }
}
```

- `js/school-system.data.js` — data wiring. This controls:
  - which CSV file gets loaded per org (`fileOf(org)`)
  - which question columns get averaged (`questions[]`)
  - which survey sets appear in each chart type (radar/overall/milestone/scatter)

Example (this is literally where the Visualizer learns what to plot):

```js
'school-system': {
  overall: {
    surveySets: [
      {
        label: '(A) HQIM Coherence - Teacher Survey',
        color: '#A98FD4',
        fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
        questions: [
          "How well does your school leaders' vision for instruction align with your adopted curriculum?"
        ]
      }
    ]
  }
}
```
✅ Then make sure your homepage button calls the new construct id:
```js
setCurrentConstructAndRefresh('new-construct-id');
```

✅ Important rule:
The new-construct-id must match in all 3 places (sketched below):
- the key you add in `CONFIGS` (constructs.js)
- the key you add in `DATA_CONFIGS` (data.js)
- the string passed to `setCurrentConstructAndRefresh(...)`
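As a hypothetical sketch (placeholder titles only, assuming the `CONFIGS` / `DATA_CONFIGS` objects described above), the three places line up like this:

```js
// 1) constructs.js — UI labels (key = the new construct id)
CONFIGS['new-construct-id'] = {
  headerTitle: 'NEW CONSTRUCT TITLE',
  groupA: { title: 'Group A:' }
};

// 2) data.js — data wiring (same key)
DATA_CONFIGS['new-construct-id'] = {
  radar: { surveySets: [] },
  overall: { surveySets: [] },
  milestone: { sets: [] },
  scatter: { SURVEY_SETS: [] }
};

// 3) homepage button — same string
setCurrentConstructAndRefresh('new-construct-id');
```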
A: Edit only js/school-system.data.js.
Find the construct you want (example: 'school-system') and then add your question in the correct chart block.
Example: adding a question to the overall chart under (A) HQIM Coherence - Teacher Survey:
```js
{
  label: '(A) HQIM Coherence - Teacher Survey',
  color: '#A98FD4',
  fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
  questions: [
    "How well does your school leaders' vision for instruction align with your adopted curriculum?",
    "NEW QUESTION TEXT HERE (must match CSV column header exactly)"
  ]
}
```

Where you can add questions (depends on which chart you want it to appear in):
- `radar.surveySets[x].questions[]`
- `overall.surveySets[x].questions[]`
- `milestone.sets[x].questions[]`
- `scatter.SURVEY_SETS[x].questions[]`

✅ The question text must exactly match the CSV column header (spacing/punctuation/capitalization must match). I removed the commas from the question text just to be sure they don't conflict with the comma-separated nature of CSV files.
A: Change the fileOf(org) function in js/school-system.data.js.
Example (existing pattern):
```js
fileOf: (org) => `orgdata/${org}_teacher_survey.csv`
```

If you want a new file:

```js
fileOf: (org) => `orgdata/${org}_teacher_survey_2025.csv`
```

✅ Then you must actually create those files, per org:
- `orgdata/org1_teacher_survey_2025.csv` through `orgdata/org5_teacher_survey_2025.csv`
- etc.
A: You can often reshape your framework into the Visualizer’s shape:
construct
→ radar: surveySets[]
→ overall: surveySets[]
→ milestone: sets[]
→ scatter: SURVEY_SETS[]
Each entry is always the same idea:
- a label
- a CSV source (`fileOf(org)`)
- a list of questions (`questions[]`) to average
Your CSV needs to look like:
```
date, Question1, Question2, Question3, ...
```

…and your config needs to point to it like your current configs do:

```js
fileOf: (org) => `orgdata/${org}_teacher_survey.csv`,
questions: [
  "Exact Question Column Header 1",
  "Exact Question Column Header 2"
]
```

If your raw data is messy (multiple levels, logs, coded events), do this:
- Create a cleaned CSV per org in `orgdata/`
- Make sure it has:
  - a `date` column (`DD/MM/YYYY`)
  - question columns that match what you put in `questions[]`
  - numeric values in those columns
Then the Visualizer works without touching chart code.
A: Yes, the port is hardcoded in multiple places, so you need to change it consistently.
1) `libraries/server.py`
At the top:

```python
PORT = 8000
```

And later it uses that same value here:

```python
server_address = ("0.0.0.0", PORT)
print(f"[server] v5 running ... http://localhost:{PORT}/ ...")
```

2) `runserver.bat`
This line opens the browser:

```
start msedge http://localhost:8000/index.html
```

3) `runclient.bat`
This line defines the port:

```
set PORT=8000
```

4) `client.bat` (your "fixed host IP" launcher)
This line defines the host port:

```
set HOST_PORT=8000
```

Pick a new port (example: 8011), then do these edits:
1) Edit `libraries/server.py`: change `PORT = 8000` to `PORT = 8011`
2) Edit `runserver.bat`: change `start msedge http://localhost:8000/index.html` to `start msedge http://localhost:8011/index.html`
3) Edit `runclient.bat`: change `set PORT=8000` to `set PORT=8011`
4) Edit `client.bat` (fixed IP launcher): change `set HOST_PORT=8000` to `set HOST_PORT=8011`

- Close any old server windows that were running on `:8000`
- Run `runserver.bat` again
  - You should see a line like: `[server] v5 running ... http://localhost:8011/ ...`
- Run `runclient.bat` (or `client.bat`)
  - It should open the browser at `http://localhost:8011/...` (or `http://192.168.100.27:8011/...`)
- If you changed `runclient.bat` but forgot `server.py`, the browser opens `:8011` but the server is still on `:8000` → you'll get "site can't be reached."
- If you changed `server.py` but forgot `client.bat`, the fixed-IP launcher will still open `:8000` → same problem.
Rule: the port must match in server.py + whichever BAT file you use to open the browser.
RPPL Insights v2.0 includes four fully modular visualization types. Each one answers a different kind of instructional question and pulls data dynamically from the framework definitions in school-system.data.js. Below is an overview of what each chart shows, what users can do with it, and how it supports instructional insight-making.
Org.Snapshot.Demo.mp4
The Org Snapshot Radar Chart gives a one-screen snapshot of how an organization performs across the subconstructs (A/B/C) within a dimension. Each axis corresponds to a survey set defined in surveySets[].
What it shows
- The average score for your organization on each subconstruct
- The global average across all other organizations
- A visual comparison of strengths and areas for improvement
What users can do
- Hover for exact values
- Toggle subconstruct groups (A, B, C)
- Use dynamic radar titles (pulled from constructs.js)
When to use it
- Start-of-session overview
- Quick comparative diagnostic
- Presenting construct summaries in meetings or reports
Trends.Over.Time.Chart.Demo.mp4
This chart shows how scores evolve month-by-month, letting leaders track change, momentum, and implementation stability.
What it shows
- Line for Your Org
- Line for Global Average
- Distinct dash patterns per subconstruct
What users can do
- Switch chart modes using Chart Mode:
  - `lines` (simple timeline)
  - `net` (difference timeline)
- Toggle subconstruct groups A/B/C
- Enable/disable “Include my org in global average”
- Access detailed tooltips including per-question breakdowns
When to use it
- Monitoring implementation trends
- Presenting progress across the school year
- Comparing trajectory across subconstructs
Progress.Over.Time.Demo.mp4
The Progress Towards Goals view answers:
“When did we first meet our target?”
Users define a condition (e.g., >= 3.4) and the chart identifies the first month when that threshold is met for each survey set.
Users can also enter a specific month in the Month input box (in the format "November 2025" or "11/2025"), and the chart will place a marker on that month.
If the entered month is in the future (i.e., not present in the dataset yet), the chart will extend the timeline to include it and place the marker there.
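For illustration only (this is not the actual `school-system.milestone.js` code), finding the first month that satisfies a threshold such as `>= 3.4` amounts to something like:

```js
// Sketch: first month whose average satisfies the milestone condition.
// series is assumed to be [{ month: '11/2025', value: 3.5 }, ...] in chronological order.
function firstMonthMeetingThreshold(series, operator, target) {
  const tests = {
    '<':  (v) => v < target,
    '>':  (v) => v > target,
    '<=': (v) => v <= target,
    '>=': (v) => v >= target,
  };
  const hit = series.find((point) => tests[operator](point.value));
  return hit ? hit.month : null; // null means the threshold was never met in the data
}

// e.g. firstMonthMeetingThreshold(series, '>=', 3.4)
```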
What it shows
- First month each group meets the threshold
- Color coding per subconstruct
- Month filtering and dynamic recalculation
What users can do
- Enter milestone thresholds using `<`, `>`, `<=`, `>=`
- Pick which survey set to analyze
- Use the milestone month slider to refine the view
When to use it
- Accountability metrics
- Reporting district progress goals
- Tracking improvement over time
Item-Lvel.Relationships.Demo.mp4
The scatterplot shows the relationship between two selected measures at the item level.
Each point represents an item (i.e., a question / rubric item).
X = Item score for Measure A
Y = Item score for Measure B
A diagonal identity line marks parity (X = Y).
What it shows
- Which items score higher on Measure A vs Measure B
- Whether the same items tend to be high/low together (alignment)
- Clusters (items behaving similarly) and outliers (items diverging sharply)
- Where strengths and gaps live at the item level, not averaged away
What users can do
- Choose the two measures to compare (e.g., Subconstruct A vs Subconstruct B, or Construct vs Construct)
- Switch which item set is being plotted (ELA surveys vs Classroom Observations, depending on what’s selected/available)
- Hover items to see exact item text / item code
When to use it
- Checking if two subconstructs "move together" at the item level
- Finding items where performance is strong in one measure but weak in the other
- Spotting misalignment (e.g., strong planning items but weak enactment items)
- Prioritizing action: outlier items are often the most actionable targets
Together, the four views give teams a complete understanding of their CBPL and HQIM implementation:
- Radar (Org Snapshot): Where are we today?
- Overall (Trends Over Time): How are we changing?
- Milestone (Progress Towards Goals): Did we accomplish our goals?
- Scatterplot (Item-Level Relationships): How do we compare to others?
RRPLConverter2.0Walkthrough.mp4
The RPPL Data Converter is a local-only, browser-based tool that takes messy, org-specific source files (CSV / XLS / XLSX) and converts them into clean, Visualizer-ready CSVs with:
- Consistent date format (`DD/MM/YYYY`)
- Standardized Likert / numeric values
- Framework-aligned question text headers
- One output file per org that the Visualizer can read directly
It is intentionally non-hard-coded: as long as a file has a date column and question columns with numeric or Likert responses, the converter can reshape it to match whatever question labels you’ve defined in school-system.data.js. This lets different organizations (with different export formats) all end up with the same normalized schema.
RPPL-Insights/
├─ converter/
│ ├─ index.html # Converter UI (can be renamed <preferred name>.html)
│ ├─ converter.css # Layout & theming
│ └─ converter.js # All converter logic (parsing, mapping, queue, export)
- Upload a source file (CSV/XLS/XLSX).
- Select which rows & columns to keep and how each question should be labeled.
- Generate a preview of the converted dataset.
- Download a single CSV or add it to a conversion queue.
- Use Convert All to batch-export multiple org files into folders, ready for `orgdata/`.
The converter does not need to know anything about constructs/groups; it only needs:
- A date column
- A set of question columns
- The final question labels you want those columns to become
The Visualizer then handles which constructs/subconstructs those labels belong to.
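A minimal sketch of that idea (hypothetical helper and variable names, assuming a typical 5-point Likert wording; the real `converter.js` may differ) could look like:

```js
// Sketch: turn raw source rows into Visualizer-ready rows.
// mappings: [{ sourceHeader, frameworkLabel }] chosen in the "Map to Framework" panel.
const LIKERT = { 'strongly disagree': 1, 'disagree': 2, 'neutral': 3, 'agree': 4, 'strongly agree': 5 };

function convertRows(sourceRows, dateColumn, mappings) {
  return sourceRows.map((row) => {
    const out = { date: row[dateColumn] }; // normalized to DD/MM/YYYY in a separate step
    for (const { sourceHeader, frameworkLabel } of mappings) {
      const raw = String(row[sourceHeader]).trim();
      out[frameworkLabel] = LIKERT[raw.toLowerCase()] ?? Number(raw); // Likert text -> 1–5, numbers kept as-is
    }
    return out;
  });
}
```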
The Data Converter UI is organized into four horizontal panels:
- Upload Source File
- Map to Framework
- Preview & Download
- Conversion Queue
The entire workspace scrolls horizontally so each panel has comfortable width and breathing room.
Goal: Load and inspect raw data from CSV/XLS/XLSX.
- `.csv` via PapaParse
- `.xlsx` / `.xls` via SheetJS (`xlsx.full.min.js`)
- Modern upload control
  - A clean "Choose file" picker with inline status (e.g., `myfile.xlsx loaded. 532 rows detected.`)
- Live table preview with horizontal + vertical scroll
  - Displays headers and up to N rows (configurable)
  - Thin custom scrollbars with RPPL accent colors
  - Sticky header row
- Row exclusion control
  - Each row has a checkbox in the first column
  - Uncheck rows to exclude them from the conversion pipeline
  - Excluded rows never appear in the preview or exports
- Editable cells
  - Double-click any cell to edit its value in place
  - Edits are persisted into `sourceRows` and flow through to preview / export
  - Useful for fixing typos, cleaning weird values, or correcting dates
- Column selection (for questions)
  - Click any header to toggle selection for that column
  - Selected columns are highlighted (header + all cells) with a darker accent background
  - Selected question columns drive what appears in the mapping panel
```js
// On file input change:
if (ext === 'csv') {
  Papa.parse(file, { header: true, /* ... */ });
} else if (ext === 'xlsx' || ext === 'xls') {
  const data = await file.arrayBuffer();
  const workbook = XLSX.read(data, { type: 'array' });
  const worksheet = workbook.Sheets[workbook.SheetNames[0]];
  sourceRows = XLSX.utils.sheet_to_json(worksheet, { defval: '' });
}
renderSourceTable();
populateDateSelect();
updateColumnMappingsUI();
```
Goal: Say which org this file belongs to, how the final filename should look, which date column to use, and how to label each question.
- Target Org ID input (e.g., `org1`, `org2`, `districtA`)
- Filename input with live prefix/suffix lock
When you type `org1` in the Org ID box, the output filename becomes:
`org1_<your_file_name>.csv`
- Only the middle part (the editable filename) can be typed.
- The prefix (`org1_`) and suffix (`.csv`) are locked/controlled by the tool.
Example:
- Target Org ID: `org1`
- Output filename display: `org1_teacher_survey_may.csv`
The computed final filename is stored internally and used by:
- Download CSV
- Conversion Queue
- A dropdown of all headers.
- The selected header becomes the `date` column in the output file.
Supported input date formats:
- `MM/DD/YYYY` (mdy)
- `DD/MM/YYYY` (dmy)
The converter normalizes all dates to:
DD/MM/YYYY
Used by the Visualizer.
Example helper:
```js
function normalizeDate(value, inputFormat) {
  // Illustrative body: split on / or -, reorder to day-first, and return DD/MM/YYYY
  const [a, b, c] = String(value).split(/[\/\-]/);
  const [day, month, year] = inputFormat === 'mdy' ? [b, a, c] : [a, b, c];
  return `${day.padStart(2, '0')}/${month.padStart(2, '0')}/${year}`;
}
```

Shows every selected column (except the chosen date column).
For each mapping entry:
Source column: [original header][ Input box: "Framework-aligned question text" ]
You can:
- Keep the original column header, or
- Type the exact question wording used in `school-system.data.js`.
This allows widely different exports (Qualtrics, Google Forms, SIS exports, etc.) to be renamed into a single canonical question set that the Visualizer understands.
Goal: Show exactly what the final converted CSV will look like.
Triggered by:
- Generate Preview
The converter validates:
- A file is loaded
- Org ID is set
- Filename middle is non-empty
- At least one question column is mapped
Then it builds a normalized dataset:
- Dates → always
DD/MM/YYYY - Likert text → numeric (
1–5) - Numeric values preserved
- Output stored in
previewData
The preview table:
- Displays normalized headers + rows
- Read-only
- Uses dashed borders + thin accent scrollbars
Actions:
-
Download Converted CSV
Exports using:
orgId_<filename>.csv -
Add to Conversion Queue
StorespreviewDataunder the selected org for batch export.
Goal: Batch-export multiple converted files for one or many orgs.
Displayed as a tree:
- Each org becomes a parent folder
- Each converted CSV appears as a child item
Features:
- Click to preview any queued file (loads it back into Preview panel)
- Remove an org or individual file (each has a small ✕)
- Convert All:
- Prompts for a destination folder
- Creates subfolders per org
- Writes each queued CSV using its final filename
- Can be run repeatedly (queue is not cleared automatically)
For questions, suggestions, or requests for framework integration, please reach out to the RPPL team or your project lead.
If you encounter issues running the Visualizer inside Stronghold or need help preparing CSVs for your framework, we’re happy to help.
RPPL Insights v2.0 is built for secure, offline environments.
All CSV data is kept locally inside orgdata/, never transmitted,
and fully protected by a custom Python server that blocks direct access.
For any deployment involving real student or teacher data, ensure machines remain within the approved research environment and follow all relevant data governance policies.
This project is licensed for internal use within RPPL and Brown University’s Stronghold environment.