This repository is used to standardize data from the government, hospitals, payors, claims, and electronic health records to the Payless Health Common Data Model, using the data build tool (`dbt`).
You can see examples here:
https://github.com/onefact/data_build_tool_payless.health/tree/main/notebooks
And the data models here:
https://github.com/onefact/data_build_tool_payless.health/tree/main/payless_health/models/
And the data here:
For example, this folder in the Amazon Simple Storage Service (S3) bucket, https://data.payless.health/#cms.gov/, corresponds to the following SQL query used to map the data to a standard schema (a set of column names and data types), using the `duckdb` engine:
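In dbt, such a mapping is written as a plain SQL `select` statement saved as a `.sql` model file (the real model files live in `payless_health/models/cms.gov`). A minimal sketch of what such a model looks like; the column names and input path below are illustrative assumptions, not the actual schema:

```sql
-- Hypothetical dbt model sketch: cast raw cms.gov columns to a
-- standard set of names and types. Columns and the input path are
-- illustrative; see payless_health/models/cms.gov for the real models.
select
    provider_id::varchar    as provider_id,
    provider_name::varchar  as provider_name,
    total_charges::double   as total_charges
from read_csv_auto('data/cms.gov/example_input.csv')
```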
The data build tool `dbt` takes this SQL query and materializes the result as a compressed file in the Apache Parquet format.
This compressed file can then be queried, with the help of ChatGPT, Claude, and other tools, to visualize the data, create analytics and insights, and build machine learning models on top of this national-scale health data.
Here is a full example from the datathinking.org course:
- Install `dbt` using `pip3 install dbt-duckdb` (https://github.com/jwills/dbt-duckdb)
- Clone this repo: `git clone https://github.com/onefact/data_build_tool_payless.health.git`
- Navigate to the `payless_health` directory: `cd payless_health`
- Run a command to build a dataset, such as `dbt run --select cms.gov`, to run the data models in this folder: https://github.com/onefact/data_build_tool_payless.health/blob/main/payless_health/models/cms.gov (you might need to download additional files that are needed as input by the `duckdb` SQL queries)
- Open the materialized parquet file using `duckdb`: run `duckdb` on the command line (or use a Jupyter notebook to query the parquet file).
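Once `dbt run` has materialized a parquet file, it can be queried directly from the `duckdb` command line. A sketch of such a query; the file path and column names are illustrative assumptions that depend on which model you built:

```sql
-- Inside the duckdb CLI. The parquet path and columns are
-- illustrative; substitute the file your dbt run produced.
select provider_name, avg(total_charges) as avg_charge
from read_parquet('payless_health/models/cms.gov/*.parquet')
group by provider_name
order by avg_charge desc
limit 10;
```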
For easier development, we recommend testing your SQL queries first, and using Claude and ChatGPT to help construct initial queries based on examples like the notebooks linked above; see specifically the prompt example for Claude to see how to accelerate this work and make scalable visualizations.
- Use the `environment.yml` file to set up the development environment
- Run `dbt run` to build the parquet file
- Test the parquet file by running `dbt test`. If the test fails, you can run `dbt debug` to see the error message.
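In dbt, a test can be written as a standalone SQL file in the `tests/` directory that selects rows violating an expectation; `dbt test` fails if the query returns any rows. A minimal sketch, assuming a hypothetical model name and column:

```sql
-- tests/assert_no_negative_charges.sql (hypothetical file name):
-- dbt test fails if any row is returned, i.e. if any charge is
-- negative. The model and column names are illustrative.
select *
from {{ ref('hospital_prices') }}
where total_charges < 0
```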