
Commit c2671ca

rename url's to terraref.org (#241)
* color by cultivar
* replaced terraref.ncsa.illinois.edu urls w/ terraref.org
* replaced bety api v beta w/ v1
1 parent 41bb19e commit c2671ca

18 files changed, +152 -69 lines

sensors/01-meteorological-data.Rmd

Lines changed: 8 additions & 8 deletions

````diff
@@ -48,7 +48,7 @@ The key is to process each type of met data (site, reanalysis, forecast, climate
 ### Using the API to get data
 
 In order to access the data, we need to contruct a URL that links to where the
-data is located on [Clowder](https://terraref.ncsa.illinois.edu/clowder). The data is
+data is located on [Clowder](https://terraref.org/clowder). The data is
 then pulled down using the API, which ["receives requests and sends responses"](https://medium.freecodecamp.org/what-is-an-api-in-english-please-b880a3214a82)
 , for Clowder.
 
@@ -72,7 +72,7 @@ observations taken every second.
 ### Creating the URLs for all data table types
 
 All URLs have the same beginning
-(https://terraref.ncsa.illinois.edu/clowder/api/geostreams),
+(https://terraref.org/clowder/api/geostreams),
 then additional information is added for each type of data table as shown below.
 
 * Station: /sensors/sensor_name=[name]
@@ -85,9 +85,9 @@ For example, below are the URLs for the particular data being used in this
 vignette. These can be pasted into a browser to see how the data is stored as
 text using JSON.
 
-* Station: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
-* Sensor: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors/438/streams
-* Datapoints: https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+* Station: https://terraref.org/clowder/api/geostreams/sensors?sensor_name=UA-MAC+AZMET+Weather+Station
+* Sensor: https://terraref.org/clowder/api/geostreams/sensors/438/streams
+* Datapoints: https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
 
 Possible sensor numbers for a station are found on the page for that station
 under "id:", and then datapoints numbers are found on the sensor page under
@@ -150,7 +150,7 @@ following is typed into the command line, it will download the datapoints data
 that we're interested in as a file which we have chosen to call `spectra.json`.
 
 ```{sh eval=FALSE}
-curl -o spectra.json -X GET https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
+curl -o spectra.json -X GET https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2017-01-02&until=2017-01-31
 ```
 
 #### Using R
@@ -176,7 +176,7 @@ library(ncdf.tools)
 ```
 
 ```{r get-weather-fromJSON}
-weather_all <- fromJSON('https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
+weather_all <- fromJSON('https://terraref.org/clowder/api/geostreams/datapoints?stream_id=46431&since=2018-04-01&until=2018-08-01', flatten = FALSE)
 ```
 
 The `geometries` dataframe is then pulled out from these data, which contains
@@ -210,7 +210,7 @@ Here we will download the files using the Clowder API, but note that if you have
 
 ```{r met-setup2}
 knitr::opts_chunk$set(eval = FALSE)
-api_url <- "https://terraref.ncsa.illinois.edu/clowder/api"
+api_url <- "https://terraref.org/clowder/api"
 output_dir <- file.path(tempdir(), "downloads")
 dir.create(output_dir, showWarnings = FALSE, recursive = TRUE)
 ```
````
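
For a quick sanity check of the renamed endpoint from R, a minimal sketch (not part of the committed file; it assumes stream 46431 and the date range quoted above are still served):

```r
library(jsonlite)

# Datapoints query from the vignette, rebased onto the renamed host; the
# stream id and date range are assumed to still be valid.
datapoints_url <- paste0(
  "https://terraref.org/clowder/api/geostreams/datapoints",
  "?stream_id=46431&since=2017-01-02&until=2017-01-31"
)

weather <- fromJSON(datapoints_url, flatten = FALSE)

# Inspect the top-level structure; the vignette goes on to pull a
# `geometries` data frame out of this object.
str(weather, max.level = 1)
```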

sensors/01.2-pecan-met-utilities.Rmd

Lines changed: 1 addition & 1 deletion

````diff
@@ -30,7 +30,7 @@ source("https://raw.githubusercontent.com/PecanProject/pecan/develop/models/bioc
 writeLines("
 <pecan>
 <clowder>
-<hostname>terraref.ncsa.illinois.edu</hostname>
+<hostname>terraref.org</hostname>
 <user>[email protected]</user>
 <password>ask</password>
 </clowder>
````

sensors/02-sensor-metadata.Rmd

Lines changed: 6 additions & 6 deletions

````diff
@@ -18,7 +18,7 @@ theme_set(theme_bw())
 
 # Introduction
 
-This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.ncsa.illinois.edu/clowder/api/datasets/<id>/metadata.jsonld`.
+This tutorial will demonstrate how to access sensor metadata from within R. All of the sensor metadata is public, and can be queried via the API using the url `https://terraref.org/clowder/api/datasets/<id>/metadata.jsonld`.
 
 For further information about sensor metadata see the [Sensor Data Standards](/sensor-data-standards.md) section.
 
@@ -27,16 +27,16 @@ For further information about sensor metadata see the [Sensor Data Standards](/s
 
 ### Example: RSR curves for PAR, PSII and NDVI
 
-* par: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
-* pri: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
-* ndvi: https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld
+* par: https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld
+* pri: https://terraref.org/clowder/api/datasets/5873a9174f0cad7d8131b09a/metadata.jsonld
+* ndvi: https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld
 
 
 ### PAR sensor metadata
 
 ```{r}
 
-par_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")
+par_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8ce4f0cad7d8131ad86/metadata.jsonld")
 print(par_metadata$content)
 knitr::kable(par_metadata$content$rsr)
 
@@ -57,7 +57,7 @@ ggplot(data = par_rsr, aes(x = wavelength, y = response), alpha = 0.4) +
 
 ```{r}
 
-ndvi_metadata <- jsonlite::fromJSON("https://terraref.ncsa.illinois.edu/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")
+ndvi_metadata <- jsonlite::fromJSON("https://terraref.org/clowder/api/datasets/5873a8f64f0cad7d8131af54/metadata.jsonld")
 knitr::kable(t(ndvi_metadata$content[-21]), col.names = '')
 
 ```
````
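
Since the metadata URL is just a dataset id interpolated into the renamed endpoint, a small illustrative wrapper (hypothetical, not part of the committed file) makes the pattern explicit:

```r
library(jsonlite)

# Hypothetical convenience wrapper: interpolate a dataset id into the
# renamed metadata.jsonld endpoint.
dataset_metadata <- function(dataset_id,
                             base_url = "https://terraref.org/clowder/api/datasets") {
  fromJSON(sprintf("%s/%s/metadata.jsonld", base_url, dataset_id))
}

# PAR dataset id from the list above; `content$rsr` holds the relative
# spectral response table that the tutorial tabulates and plots.
par_metadata <- dataset_metadata("5873a8ce4f0cad7d8131ad86")
str(par_metadata$content$rsr)
```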

sensors/06-list-datasets-by-plot.Rmd

Lines changed: 13 additions & 13 deletions

````diff
@@ -3,7 +3,7 @@
 ## Pre-requisites:
 
 * if you have not already done so, you will need to 1) sign up for the [beta user program](https://terraref.org/beta) and 2)
-sign up and be approved for access to the the [sensor data portal](https://terraref.ncsa.illinois.edu/clowder) in order to get
+sign up and be approved for access to the the [sensor data portal](https://terraref.org/clowder) in order to get
 the API key that will be used in this tutorial.
 
 The terrautils python package has a new `products` module that aids in connecting
@@ -23,7 +23,7 @@ from terrautils.products import get_file_listing, extract_file_paths
 
 The `get_sensor_list` and `get_file_listing` functions both require the *connection*,
 *url*, and *key* parameters. The *connection* can be 'None'. The *url* (called host in the
-code) should be something like `https://terraref.ncsa.illinois.edu/clowder/`.
+code) should be something like `https://terraref.org/clowder/`.
 The *key* is a unique access key for the Clowder API.
 
 ## Getting the sensor list
@@ -36,10 +36,10 @@ sensor list and provides a list of names suitable for use in the
 `get_file_listing` function.
 
 To use this tutorial you will need to sign up for Clowder, have your
-account approved, and then get an API key from the [Clowder web interface](https://terraref.ncsa.illinois.edu/clowder).
+account approved, and then get an API key from the [Clowder web interface](https://terraref.org/clowder).
 
 ```{python eval = FALSE}
-url = 'https://terraref.ncsa.illinois.edu/clowder/'
+url = 'https://terraref.org/clowder/'
 key = 'ENTER YOUR KEY HERE'
 ```
 
@@ -94,13 +94,13 @@ TODO: move this to a separate tutorial page focused on using curl
 The source files behind the data are available for downloading through the API. By executing a series
 of requests against the API it's possible to determine the files of interest and then download them.
 
-Each of the API URL's have the same beginning (https://terraref.ncsa.illinois.edu/clowder/api),
+Each of the API URL's have the same beginning (https://terraref.org/clowder/api),
 followed by the data needed for a specific request. As we step through the process you will be able
 to see how then end of the URL changes depending upon the request.
 
 Below is what the API looks like as a URL. Try pasting it into your browser.
 
-[https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)
+[https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W](https://terraref.org/clowder/api/geostreams/sensors?sensor_name=MAC Field Scanner Season 1 Field Plot 101 W)
 
 This will return data for the requested plot including its id. This id (or identifier) can then be used for
 additional queries against the API.
@@ -127,7 +127,7 @@ we use the variable name of SENSOR_DATA to indicate the name of the plot.
 
 ``` {sh eval=FALSE}
 SENSOR_NAME="MAC Field Scanner Season 1 Field Plot 101 W"
-curl -o plot.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
+curl -o plot.json -X GET "https://terraref.org/clowder/api/geostreams/sensors?sensor_name=${SENSOR_NAME}"
 ```
 
 This creates a file named *plot.json* containing the JSON object returned by the API. The JSON object has an
@@ -141,7 +141,7 @@ the stream id. The names of streams are are formatted as "<Sensor Group> Dataset
 ``` {sh eval=FALSE}
 SENSOR_ID=3355
 STREAM_NAME="Thermal IR GeoTIFFs Datasets (${SENSOR_ID})"
-curl -o stream.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
+curl -o stream.json -X GET "https://terraref.org/clowder/api/geostreams/streams?stream_name=${STREAM_NAME}"
 ```
 
 A file named *stream.json* will be created containing the returned JSON object. This JSON object has an 'id' parameter that
@@ -153,7 +153,7 @@ We now have a stream ID that we can use to list our datasets. The datasets in tu
 
 ``` {sh eval=FALSE}
 STREAM_ID=11586
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}"
 ```
 
 After the call succeeds, a file named *datasets.json* is created containing the returned JSON object. As part of the
@@ -162,7 +162,7 @@ JSON object there are one or more `properties` fields containing *source_dataset
 ```{javascript eval=FALSE}
 properties: {
 dataset_name: "Thermal IR GeoTIFFs - 2016-05-09__12-07-57-990",
-source_dataset: "https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+source_dataset: "https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
 },
 ```
 
@@ -171,7 +171,7 @@ The URL of each **source_dataset** can be used to view the dataset in Clowder.
 The datasets can also be filtered by date. The following filters out datasets that are outside of the range of January 2, 2017 through June 20, 2017.
 
 ``` {sh eval=FALSE}
-curl -o datasets.json -X GET "https://terraref.ncsa.illinois.edu/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
+curl -o datasets.json -X GET "https://terraref.org/clowder/api/geostreams/datapoints?stream_id=${STREAM_ID}&since=2017-01-02&until=2017-06-10"
 ```
 
 ### Getting file paths from dataset
@@ -181,7 +181,7 @@ Now that we know what the dataset URLs are, we can use the URLs to query the API
 Note the the URL has changed from our previous examples now that we're using the URLs returned by the previous call.
 
 ``` {sh eval=FALSE}
-SOURCE_DATASET="https://terraref.ncsa.illinois.edu/clowder/datasets/59fc9e7d4f0c3383c73d2905"
+SOURCE_DATASET="https://terraref.org/clowder/datasets/59fc9e7d4f0c3383c73d2905"
 curl -o files.json -X GET "${SOURCE_DATASET}/files"
 ```
 
@@ -219,7 +219,7 @@ For each file to be retrieved, the unique file ID is needed on the URL.
 ``` {sh eval=FALSE}
 FILE_NAME="ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif"
 FILE_ID=59fc9e844f0c3383c73d2980
-curl -o "${FILE_NAME}" -X GET "https://terraref.ncsa.illinois.edu/clowder/api/files/${FILE_ID}"
+curl -o "${FILE_NAME}" -X GET "https://terraref.org/clowder/api/files/${FILE_ID}"
 ```
 
 This call will cause the server to return the contents of the file identified in the URL. This file is then stored locally in *ir_geotiff_L1_ua-mac_2016-05-09__12-07-57-990.tif*.
````
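
The committed walkthrough uses curl and Python; a rough R equivalent of the plot, stream, and datapoints lookups (a sketch only, assuming the geostreams endpoints and the ids quoted above still resolve) could look like this:

```r
library(jsonlite)

base_api <- "https://terraref.org/clowder/api"

# 1. Look up the plot (a geostreams "sensor") by name; the response includes
#    the plot id the walkthrough refers to.
sensor_name <- URLencode("MAC Field Scanner Season 1 Field Plot 101 W", reserved = TRUE)
plot_info <- fromJSON(paste0(base_api, "/geostreams/sensors?sensor_name=", sensor_name))
str(plot_info)

# 2. Look up the stream by name; its id is the stream id used below
#    (11586 in the walkthrough).
stream_name <- URLencode("Thermal IR GeoTIFFs Datasets (3355)", reserved = TRUE)
stream_info <- fromJSON(paste0(base_api, "/geostreams/streams?stream_name=", stream_name))

# 3. List the datapoints (datasets) on the stream, filtered by date.
datasets <- fromJSON(paste0(base_api, "/geostreams/datapoints?stream_id=11586",
                            "&since=2017-01-02&until=2017-06-10"))
str(datasets$properties)  # dataset_name and source_dataset URLs, as shown above
```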

traits/00-BETYdb-getting-started.Rmd

Lines changed: 1 addition & 1 deletion

````diff
@@ -10,7 +10,7 @@ It contains trait (phenotype) data at the plot or plant level as well as meta da
 
 ### Introduction to BETYdb
 
-The TERRA REF trait database (terraref.ncsa.illinois.edu/bety) uses the BETYdb data schema (structure) and web application.
+The TERRA REF trait database (terraref.org/bety) uses the BETYdb data schema (structure) and web application.
 The BETYdb software is actively used and developed by the [TERRA Reference](http://terraref.org) program as well as by the [PEcAn project](http://pecanproject.org).
 
 For more information about BETYdb, see the following:
````

traits/01-web-access.Rmd

Lines changed: 3 additions & 3 deletions

````diff
@@ -4,10 +4,10 @@
 
 ### Web interface
 
-* Sign up for an account at https://terraref.ncsa.illinois.edu/bety
+* Sign up for an account at https://terraref.org/bety
 * Sign up for the TERRA REF [beta user program](https://docs.google.com/forms/d/e/1FAIpQLScIUJL_OSL9BvBOdlczErds3aOg5Lwz4NIdNQnUiXdsLsYdhw/viewform)
 * Wait for database access to be granted
-* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.ncsa.illinois.edu/bety/users)) in the web interface.
+* Your API key will be sent in the email. It can also be found - and regenerated - by navigating to the Users page (data --> [users](https://terraref.org/bety/users)) in the web interface.
 
 TODO add signup info from handout
 
@@ -18,7 +18,7 @@ On the Welcome page there is a search option for trait and yield data. This tool
 
 ### Download search results as as csv file from the web interface
 
-* Point your browser to https://terraref.ncsa.illinois.edu/bety/
+* Point your browser to https://terraref.org/bety/
 * login
 * enter "NDVI" in the search box
 * on the next page you will see the results of this search
````

traits/02-betydb-api-access.Rmd

Lines changed: 9 additions & 9 deletions

````diff
@@ -45,7 +45,7 @@ An API key is not needed to access public data includes sample datasets and meta
 
 ### Components of a URL query
 
-* Base url: `terraref.ncsa.illinois.edu/bety`
+* Base url: `terraref.org/bety`
 * Path to the api: `/api/v1`
 * API endpoint: `/search` or `traits` or `sites`. For BETYdb, these are the names of database tables.
 * Query parameters: `genus=Sorghum`
@@ -57,23 +57,23 @@ An API key is not needed to access public data includes sample datasets and meta
 
 First, lets construct a query by putting together a URL.
 
-1. start with the database url: `terraref.ncsa.illinois.edu/bety`
+1. start with the database url: `terraref.org/bety`
 * this url brings you to the home page
 2. Add the path to the API, `/api/v1`
-* now we have terraref.ncsa.illinois.edu/bety/api/v1, which points to the API documentation for additional detail on available options
+* now we have terraref.org/bety/api/v1, which points to the API documentation for additional detail on available options
 3. Add the name of the table you want to query. Lets start with `variables`
-* terraref.ncsa.illinois.edu/bety/api/v1/variables
+* terraref.org/bety/api/v1/variables
 4. Add query terms by appending a `?` and combining with `&`. These can be done in any order. For example:
 * `type=trait` where the variable type is 'trait'
 * `name=~height` where the variable name contains 'height'
 5. Assembling all of this, you have a complete query:
-* `terraref.ncsa.illinois.edu/bety/api/v1/variables?type=trait&name=~height`
+* `terraref.org/bety/api/v1/variables?type=trait&name=~height`
 * This will query all trait variables that have 'height' in their name.
 * Does it return the expected values? There should be two.
 
 ## Your Turn
 
-> What will the URL https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum return?
+> What will the URL https://terraref.org/bety/api/v1/species?genus=Sorghum return?
 
 
 > Write a URL that will query the database for sites with "Field Scanner" in the name field. Hint: combine two terms with a `+` as in `Field+Scanner`
@@ -86,14 +86,14 @@ Type the following command into a bash shell (the `-o` option names the output f
 
 ```sh
 curl -o sorghum.json \
-"https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
+"https://terraref.org/bety/api/v1/species?genus=Sorghum"
 ```
 
 If you want to write the query without exposing the key in plain text, you can construct it like this:
 
 ```sh
 curl -o sorghum.json \
-"https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum"
+"https://terraref.org/bety/api/v1/species?genus=Sorghum"
 ```
 
 ## Using the R jsonlite package to access the API with a URL query
@@ -107,7 +107,7 @@ library(jsonlite)
 
 ```{r text-api, warning = FALSE}
 sorghum.json <- readLines(
-paste0("https://terraref.ncsa.illinois.edu/bety/api/v1/species?genus=Sorghum&key=",
+paste0("https://terraref.org/bety/api/v1/species?genus=Sorghum&key=",
 readLines('.betykey')))
 
 ## print(sorghum.json)
````
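
Assembled in R, the URL components listed in this file come together as in the sketch below (it assumes a local `.betykey` file holding a valid key, as the tutorial itself does):

```r
library(jsonlite)

base_url <- "https://terraref.org/bety"  # base url
api_path <- "/api/v1"                    # path to the API
endpoint <- "/species"                   # table to query
query    <- "?genus=Sorghum"             # query parameters

# The key is read from a local .betykey file, as elsewhere in this tutorial.
betykey <- readLines(".betykey", warn = FALSE)

sorghum <- fromJSON(paste0(base_url, api_path, endpoint, query, "&key=", betykey))
str(sorghum, max.level = 2)
```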

traits/03-access-r-traits.Rmd

Lines changed: 2 additions & 2 deletions

````diff
@@ -62,7 +62,7 @@ sorghum_info <- betydb_query(table = 'species',
 genus = "Sorghum",
 api_version = 'v1',
 limit = 'none',
-betyurl = "https://terraref.ncsa.illinois.edu/bety/")
+betyurl = "https://terraref.org/bety/")
 
 ```
 
@@ -71,7 +71,7 @@ sorghum_info <- betydb_query(table = 'species',
 Notice all of the arguments that the `betydb_query` function requires? We can change this by setting the default connection options thus:
 
 ```{r 03-set-up, echo = TRUE}
-options(betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+options(betydb_url = "https://terraref.org/bety/",
 betydb_api_version = 'v1')
 ```
 
````
traits/04-danforth-indoor-phenotyping-facility.Rmd

Lines changed: 2 additions & 2 deletions

````diff
@@ -19,12 +19,12 @@ library(traits)
 
 ## Connect to the TERRA REF Trait database
 
-Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.ncsa.illinois.edu/bety/, and we will need to use our own private key.
+Unlike the first two tutorials, now we will be querying real data from the public TERRA REF database. So we will use a new URL, https://terraref.org/bety/, and we will need to use our own private key.
 
 ```{r terraref-connect-options}
 
 options(betydb_key = readLines('.betykey', warn = FALSE),
-betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+betydb_url = "https://terraref.org/bety/",
 betydb_api_version = 'v1')
 ```
 ### Query data from the Danforth Phenotyping Facility
````
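
The remaining trait vignettes share this connection pattern; a brief sketch against the renamed URL (assuming the `traits` package is installed and a `.betykey` file is present):

```r
library(traits)

# Set the renamed URL (and key) once via options, as these vignettes do,
# so the connection arguments do not have to be repeated in every call.
options(betydb_key = readLines(".betykey", warn = FALSE),
        betydb_url = "https://terraref.org/bety/",
        betydb_api_version = "v1")

# With the options set, the query reduces to the table and search terms.
sorghum_info <- betydb_query(table = "species", genus = "Sorghum", limit = "none")
head(sorghum_info)
```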

traits/05-maricopa-field-scanner.Rmd

Lines changed: 1 addition & 1 deletion

````diff
@@ -13,7 +13,7 @@ library(leaflet)
 
 
 options(betydb_key = readLines('.betykey', warn = FALSE),
-betydb_url = "https://terraref.ncsa.illinois.edu/bety/",
+betydb_url = "https://terraref.org/bety/",
 betydb_api_version = 'v1')
 ```
 
````