This is the multi-user "backend" server. The "Hub" allows users to log in, then launches the single-user Jupyter server for them. Hubs are usually installed and managed by system administrators, not Jupyter users.
A JupyterHub server (kestrel-jhub) is available on Kestrel for use with your HPC data. More on KJHub later in this document.
### **Jupyter/Jupyter Server/Notebook server**
The single-user server/web interface. Use to create, save, or load .ipynb notebooks.
A Notebook is an individual .ipynb file. It contains your Python code and visualizations, and is sharable/downloadable.
### **Jupyter Lab**
A redesigned web interface for your Jupyter Notebook Server - "Notebooks 2.0". Preferred by some, and promoted as the next evolution of Notebooks.
Lab has many new and different extensions, but many extensions are not compatible between Notebook and Lab. Lab is still under development, so it lacks some features of "classic" Notebooks.
### **Kernel**
Kernels define the Python environments used by your notebooks. They are derived from ipykernel, a predecessor project to Jupyter, so you may see Jupyter kernels referred to as "ipykernels". Custom kernels require the "ipykernel" package installed in your Jupyter conda environment.
More on kernels later.
## JupyterHub Service on Kestrel (KJHub)
The NREL HPC team runs a JupyterHub service, Kestrel-JHub (KJHub), which gives HPC users quick access to notebooks and data stored on Kestrel.
KJHub is available from the NREL VPN (onsite or offsite) for internal NREL users.
This service is not directly accessible for external (non-NREL) HPC users. However, it may be reached by using the [HPC VPN](https://www.nrel.gov/hpc/vpn-connection.html), or via a [FastX Remote Desktop](https://nrel.github.io/HPC/Documentation/Viz_Analytics/virtualgl_fastx/) session on the DAV nodes.
The JupyterHub service is accessible via web browser at [https://kestrel-jhub.hpc.nrel.gov](https://kestrel-jhub.hpc.nrel.gov).
### JupyterHub Advantages:
* Fast and easy access to notebooks with no setup.
* Use regular Kestrel credentials to log in.
* Great for simple tasks, including light to moderate data processing, code debugging/testing, and/or visualization using basic scientific and visualization libraries.
### JupyterHub Disadvantages:
* Limited resources: KJHub is a single node with 128 CPU cores and 512GB RAM.
* Managed usage: Up to 8 cores/100GB RAM per user before automatic throttling will greatly slow down processing.
* Competition: Your notebook competes with other users for CPU and RAM on the KJHub node.
* Slow updates: A limited list of basic scientific libraries is available in the default notebook kernel/environment, and versions may lag behind the latest releases.
### Simple Instructions to access JupyterHub:
* Visit [https://kestrel-jhub.hpc.nrel.gov](https://kestrel-jhub.hpc.nrel.gov/) in a web browser and log in using your HPC credentials.
KJHub opens a standard JupyterLab interface by default. Change the URL ending from "/lab" to "/tree" in your web browser to switch to the classic Notebooks interface.
## Using a Compute Node to Run Your Own Jupyter Notebooks
Kestrel supports running your own Jupyter Notebook server on a compute node. This is highly recommended over KJHub for advanced Jupyter use and heavy computational processing.
### Advantages:
* Custom conda environments to load preferred libraries.
* Full node usage: Exclusive access to the resources of the node your job is reserved on, including up to 104 CPU cores and up to 248GB RAM on Kestrel CPU nodes and up to 2TB RAM on Kestrel bigmem nodes. (See the system specifications page for more information on the types of nodes available on Kestrel.)
* No competing with other users for CPU cores and RAM, and no Arbiter2 process throttling.
* Less than a whole node may be requested via the [shared node](https://nrel.github.io/HPC/Documentation/Systems/Kestrel/running/#shared-node-partition) queue, to save AUs.
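
As an illustrative sketch of a shared-node request, the command below combines the `salloc` usage shown later in this document with the shared partition; the partition name, core count, and memory values are assumptions, so check the linked shared-node documentation for current flags and limits:

```shell
# Hypothetical shared-node request: 4 cores and 8GB RAM for 2 hours.
# Replace <account> with your allocation name.
salloc -A <account> -p shared -t 02:00:00 -n 4 --mem=8G
```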
### Disadvantages:
* Must compete with other users for a node via the job queue.
* Costs AUs from your allocation.
## Launching Your Own Jupyter Server on an HPC System
Before you get started, we recommend installing your own Jupyter inside of a conda environment. The default conda/anaconda3 module contains basic Jupyter Notebook packages, but you will likely want your own Python libraries, notebook extensions, and other features. Basic directions are included later in this document.
Internal (NREL) HPC users on the NREL VPN, or external users of the HPC VPN, may use the instructions below.
External (non-NREL) HPC users may follow the same instructions, but please use `kestrel.nrel.gov` in place of `kestrel.hpc.nrel.gov`.
## Using a Compute Node to Run Jupyter Notebooks
Connect to a login node and request an interactive job using the `salloc` command.
The examples below will start a 2-hour job. Change `<account>` to the name of your allocation, and adjust the time accordingly. Since these are interactive jobs, they receive some priority, especially shorter ones, so only book as much time as you will be actively working on the notebook.
### On Kestrel:
Connect to the login node and launch an interactive job:
`[user@laptop:~]$ ssh kestrel.hpc.nrel.gov`
`[user@kl1:~]$ salloc -A <account> -t 02:00:00`
## Starting Jupyter Inside the Job
Once the job starts and you are allocated a compute node, load the appropriate modules, activate your Jupyter environment, and launch the Jupyter server.
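
Sketched out, assuming a conda environment named `myjupyter` (a placeholder) that has JupyterLab installed, the steps above might look like:

```shell
# On the compute node: load conda, activate the environment, start Jupyter.
module load anaconda3
conda activate myjupyter   # placeholder environment name
# Bind to the node's hostname so the server is reachable via an SSH tunnel
jupyter lab --no-browser --ip=$(hostname -s)
```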
Take note of the node name that your job is assigned. (r2i7n35 in this example.)

Also note the url that Jupyter displays when starting up, e.g. `http://127.0.0.1:8888/?token=<alphabet soup>`.

The `<alphabet soup>` is a long string of letters and numbers. This is a unique authorization token for your Jupyter session. You will need it, along with the full URL, for a later step.

### On Your Own Computer:
Next, open an SSH tunnel through a login node to the compute node. Log in when prompted using your regular HPC credentials, and put this terminal to the side or minimize it, but leave it open until you are done working with Jupyter for this session.
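
The tunnel might be opened like this (the node name `r2i7n35` and port `8888` are examples; use the node name and port from your own Jupyter startup output):

```shell
# Forward local port 8888 through the Kestrel login node
# to port 8888 on the compute node running Jupyter
ssh -L 8888:r2i7n35:8888 <username>@kestrel.hpc.nrel.gov
```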
Copy the full url and token from Jupyter startup into your web browser. For example:
## Using a Compute Node - The Easy Way
Scripted assistance with launching a Jupyter session on Kestrel is available.
### Internal NREL Users Only: pyeagle
The [pyeagle](https://github.nrel.gov/MBAP/pyeagle) package is available for internal users to handle launching and monitoring a jupyter server on a compute node. This package is maintained by an NREL HPC user group and was originally written for use with Eagle, but now supports Kestrel.
### Auto-launching on Eagle With an sbatch Script
These scripts are designed for Eagle and may not yet be adapted for Kestrel, but may be downloaded and adapted manually.
Full directions included in the [Jupyter repo](https://github.com/NREL/HPC/tree/master/general/Jupyterhub/jupyter).
That's it!
## Reasons to Not Run Jupyter Directly on a Login Node
Data processing and visualization should be done via either KJHub or a compute node.
Login nodes are highly shared and limited resources. There will be competition for CPU, RAM, and network I/O for storage, and Arbiter2 software will automatically throttle moderate to heavy usage on login nodes, greatly slowing down your processing.
## Custom Conda Environments and Jupyter Kernels
On Kestrel, the module 'anaconda3' is available to run the conda command and manage your environments.
As an alternative, the module 'mamba' is available. Mamba is a conda-compatible environment manager with very similar usage; most conda commands in this documentation may be used with mamba instead, and the two may generally be considered interchangeable.
### Creating a Conda Environment
To add your own packages to conda on Kestrel:
Create an environment and install the base jupyter packages. Then activate the environment and install other libraries that you want to use, e.g. scipy, numpy, and so on.
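
For example, a minimal sketch (the environment name `myjupyter` and the library list are illustrative):

```shell
# Create an environment with JupyterLab and ipykernel, then add libraries
conda create -n myjupyter -c conda-forge jupyterlab ipykernel
conda activate myjupyter
conda install -c conda-forge numpy scipy matplotlib
# Optionally register the environment as a named Jupyter kernel
python -m ipykernel install --user --name myjupyter --display-name "Python (myjupyter)"
```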
You can also run shell commands inside a cell. For example: