
Project CCHAIN - Deep learning climate downscaling model




📜 Description

The Project CCHAIN dataset contains daily climate variables over 12 Philippine cities, designed for use in health and other applied research. However, the values come from coarse global gridded data and are not resolved at the barangay level (the village level in the Philippines).

Computational methods collectively called climate downscaling address this issue by using simulations or statistical processes to increase the resolution of coarse climate data. One technique, dynamical climate downscaling, uses physics equations that simulate large-scale atmospheric systems to approximate the desired finer resolution. However, this method demands substantial computational resources, in both hardware and runtime. Advances in the past decade have allowed machine learning models to push downscaling forward by detecting and transferring patterns from available high-resolution data (e.g., from weather stations or radar) to correct coarse-resolution data. The end result is likewise higher-resolution climate data, but produced significantly faster and with fewer resources.
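To make the resolution gap concrete, the sketch below regrids a coarse 0.25° field onto a 0.02° grid with plain bilinear interpolation; this is the naive baseline that a learned downscaler aims to beat by adding fine-scale structure learned from high-resolution reference data. The snippet is illustrative only (the toy grids and the use of xarray are assumptions, not code from this repo), and interp requires scipy.

```python
# Naive baseline sketch (illustrative; not code from this repo):
# bilinear regridding of a coarse 0.25-degree field onto a 0.02-degree
# target grid with xarray. Requires scipy for the interpolation backend.
import numpy as np
import xarray as xr

# Toy coarse field: a daily temperature snapshot on a 0.25-degree grid.
lat_coarse = np.arange(10.0, 12.01, 0.25)
lon_coarse = np.arange(122.0, 124.01, 0.25)
coarse = xr.DataArray(
    15 + 10 * np.random.rand(lat_coarse.size, lon_coarse.size),
    dims=("lat", "lon"),
    coords={"lat": lat_coarse, "lon": lon_coarse},
    name="tmean",
)

# Target 0.02-degree grid covering the same area.
lat_fine = np.arange(10.0, 12.0, 0.02)
lon_fine = np.arange(122.0, 124.0, 0.02)

# Bilinear interpolation yields a higher-resolution grid but adds no new
# physical detail; a deep-learning downscaler instead learns fine-scale
# patterns (topography, coastlines) from high-resolution reference data.
naive = coarse.interp(lat=lat_fine, lon=lon_fine, method="linear")
print(naive.shape)  # (100, 100)
```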

Open-source machine learning models that downscale climate data have been developed by leading institutions in developed nations. One such model is dl4ds, a Python module that implements a range of architectures for downscaling gridded data with deep neural networks (Gómez Gonzalez, 2023).


With support from the Lacuna Fund, we were able to create this code, which allows us to improve the resolution of the currently provided temperature and rainfall data and bring it down to the local level. It is our hope that local developers can use, contribute to, and grow this code base to add more capabilities that may be useful to our stakeholders.



⚠️ For data users: Using the provided output

The model yielded minimum temperature, maximum temperature, and rainfall with enhanced resolution, from the reanalysis scale (0.25°) down to the local scale (0.02°). However, given the uncertainties and biases in the magnitude of the downscaled temperature and rainfall, we advise users not to treat the output the way they would treat ground-measured data (e.g., station data), but to focus instead on its bulk statistical characteristics (e.g., distribution, timing, spatial pattern).
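As an example of that kind of bulk check, the sketch below compares rainfall quantiles and the month of peak climatological rainfall between a downscaled output and a coarse reference, rather than matching individual days. File, variable, and coordinate names here are hypothetical and may not match the published outputs.

```python
# Sketch of a "bulk statistics" sanity check; file, variable, and
# coordinate names are hypothetical, not guaranteed to match this repo.
import xarray as xr

ds = xr.open_dataset("city_downscaled.nc")  # hypothetical downscaled output
ref = xr.open_dataset("city_coarse.nc")     # hypothetical coarse reference

# Compare rainfall distributions rather than day-by-day values.
for q in (0.50, 0.90, 0.99):
    d = float(ds["pr"].quantile(q))
    c = float(ref["pr"].quantile(q))
    print(f"q{int(q * 100)}: downscaled={d:.2f} mm, coarse={c:.2f} mm")

# Timing: the month of peak climatological rainfall should broadly agree.
monthly_ds = ds["pr"].mean(("lat", "lon")).groupby("time.month").mean()
monthly_ref = ref["pr"].mean(("lat", "lon")).groupby("time.month").mean()
print("peak month:", int(monthly_ds.idxmax()), "vs", int(monthly_ref.idxmax()))
```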

While we provide the downscaled climate data output for all 12 cities as gridded NetCDF files here, only the variables that passed our quality checks (QC) are included in the extracted data tables available for download on our project site. These are the following:

| City | tmin | tmax | pr |
| --- | --- | --- | --- |
| Palayan | | | |
| Dagupan | | | |
| Davao | | | |
| Cagayan De Oro | | | |
| Iloilo | | | |
| Legazpi | | | |
| Mandaue | | | |
| Muntinlupa | | | |
| Navotas | | | |
| Mandaluyong | | | |
| Tacloban | | | |
| Zamboanga | | | |

You may view a more detailed showcase of the results in these slides. If you are uncertain, consider instead using the climate data from the coarser data sources provided in the climate_atmosphere table.
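If you do work with the gridded NetCDF files, a typical access pattern is to open one city's file with xarray and pull the 0.02° cell nearest a point of interest. This is a sketch only; the file name, variable names, and coordinates below are assumptions and may differ from the published files.

```python
# Sketch: read one city's downscaled output and extract the grid cell
# nearest a location of interest. File, variable, and coordinate names
# are assumptions and may differ in the published NetCDF files.
import xarray as xr

ds = xr.open_dataset("iloilo_downscaled.nc")  # hypothetical file name

# Nearest 0.02-degree cell to an example barangay centroid.
point = ds.sel(lat=10.72, lon=122.56, method="nearest")

# Daily series of a QC-passed variable at that cell.
tmin_series = point["tmin"].to_series()
print(tmin_series.describe())
```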

⚙️ Local Setup for Development

This repo assumes the use of conda for simplicity in installing GDAL.

Requirements

  1. Python 3.10
  2. make
  3. conda

🐍 One-time Set-up

Run this the very first time you are setting up the project on a machine, to set up a local Python environment for this project.

1. Install miniforge for your environment if you don't have it yet.

```bash
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
bash Miniforge3-Linux-x86_64.sh
```

2. Create a local conda env climate-downscaling as named in environment.yml and activate it. This will create a conda env folder in your project directory.

```bash
make env
conda activate climate-downscaling
```

3. Run the one-time set-up make command.

```bash
make setup
```

🐍 Testing

To run automated tests, simply run make test.

📦 Dependencies

Over the course of development, you will likely introduce new library dependencies. This repo uses uv to manage the Python dependencies.

There are two main files involved:

  • pyproject.toml - contains project information and high-level requirements; this is what we edit when adding or removing libraries
  • requirements.txt - contains the exact list of Python libraries (including dependencies of the main libraries) your environment needs to run the repo code; compiled from pyproject.toml

When you add new Python libs, please do the following:

  1. Add the library to the pyproject.toml file in the dependencies list under the [project] section. You may optionally pin the version if you need a particular version of the library (see the illustrative excerpt after this list).

  2. Run make requirements to compile a new version of the requirements.txt file and update your python env.

  3. Commit both the pyproject.toml and requirements.txt files so other devs can get the updated list of project requirements.
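For reference, adding a library in step 1 might look like the excerpt below. This is an illustrative sketch, not the repo's actual pyproject.toml; the library names are examples only.

```toml
# Illustrative pyproject.toml excerpt (not this repo's actual contents).
[project]
name = "climate-downscaling"
dependencies = [
    "xarray",           # example of an existing dependency
    "rioxarray>=0.15",  # hypothetical new library, pinned only if needed
]
```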

Note: When you are updating your Python env to follow library changes from other devs (reflected in an updated requirements.txt file), simply run uv pip sync requirements.txt.
