
Commit 007dd22

committed: saving work
1 parent ddffacd commit 007dd22

14 files changed, +728 −347 lines changed

.yarnclean

Lines changed: 45 additions & 0 deletions
@@ -0,0 +1,45 @@
+# test directories
+__tests__
+test
+tests
+powered-test
+
+# asset directories
+docs
+doc
+website
+images
+assets
+
+# examples
+example
+examples
+
+# code coverage directories
+coverage
+.nyc_output
+
+# build scripts
+Makefile
+Gulpfile.js
+Gruntfile.js
+
+# configs
+appveyor.yml
+circle.yml
+codeship-services.yml
+codeship-steps.yml
+wercker.yml
+.tern-project
+.gitattributes
+.editorconfig
+.*ignore
+.eslintrc
+.jshintrc
+.flowconfig
+.documentup.json
+.yarn-metadata.json
+.travis.yml
+
+# misc
+*.md
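
The entries above are plain glob patterns that `yarn autoclean` matches against files and directories inside `node_modules`. As a rough illustration of how such patterns match names (using Python's `fnmatch` as a stand-in, not Yarn's actual matcher, and with example file names that are not from this commit):

```python
import fnmatch

# A few of the patterns from the .yarnclean file above
patterns = ["__tests__", "docs", ".*ignore", "*.md"]

def is_cleaned(name, patterns):
    """Return True if a file or directory name matches any clean pattern."""
    return any(fnmatch.fnmatch(name, p) for p in patterns)

print(is_cleaned("README.md", patterns))   # matched by *.md
print(is_cleaned(".npmignore", patterns))  # matched by .*ignore
print(is_cleaned("index.js", patterns))    # no pattern matches
```

In practice this means every `README.md`, `docs/` folder, and dotfile-style config matching these patterns is deleted from installed dependencies to shrink the install footprint.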

docs/sdks/hello/__init__.py

Lines changed: 0 additions & 1 deletion
This file was deleted.

docs/sdks/hello/test.py

Lines changed: 0 additions & 22 deletions
This file was deleted.

docs/sdks/python/api.md

Lines changed: 146 additions & 20 deletions
@@ -2,77 +2,203 @@
 title: API
 ---
 
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This document outlines the core functionality provided by the RunPod API, including how to interact with endpoints, manage templates, create endpoints, and list available GPUs.
+These operations enable users to dynamically manage their computational resources within the RunPod environment.
+
 ## Get Endpoints
 
-To fetch all available endpoints from the API, use the get_endpoints function.
-This function returns a list of endpoint configurations available for use.
+To retrieve all available endpoint configurations within RunPod, use the `get_endpoints()` function. It returns a list of endpoint configurations, so you can see what is available for use in your projects.
 
 ```python
 import runpod
 
+# Fetching all available endpoints
 endpoints = runpod.get_endpoints()
 
+# Displaying the list of endpoints
 print(endpoints)
 ```
 
 ## Create Template
 
-You can create a new template in RunPod by specifying the name and the Docker image to use.
-This is useful for setting up environments with pre-defined configurations.
+Templates in RunPod serve as predefined configurations for setting up environments efficiently. The `create_template()` function creates a new template from a name and a Docker image.
 
 ```python
 import runpod
 
-
 try:
+    # Creating a new template with a specified name and Docker image
+    new_template = runpod.create_template(name="test", image_name="runpod/base:0.1.0")
 
-    new_template = runpod.create_template(
-        name="test",
-        image_name="runpod/base:0.1.0"
-    )
-
+    # Output the created template details
     print(new_template)
 
 except runpod.error.QueryError as err:
+    # Handling potential errors during template creation
     print(err)
     print(err.query)
 ```
 
 ## Create Endpoint
 
-Creating an endpoint involves first creating a template and then setting up the endpoint with the template ID.
-You can specify GPU requirements, the number of workers, and other configurations.
-Your Template name must be unique.
+Creating a new endpoint is straightforward with the `create_endpoint()` function, which requires a `name` and a `template_id`. Additional configuration, such as GPUs and the number of Workers, can also be specified to tailor the endpoint to your needs.
+
+<Tabs>
+<TabItem value="python" label="Python" default>
 
 ```python
 import runpod
 
 try:
-
+    # Creating a template to use with the new endpoint
     new_template = runpod.create_template(
-        name="test",
-        image_name="runpod/base:0.4.4",
-        is_serverless=True
+        name="test", image_name="runpod/base:0.4.4", is_serverless=True
     )
 
+    # Output the created template details
     print(new_template)
 
+    # Creating a new endpoint using the previously created template
     new_endpoint = runpod.create_endpoint(
         name="test",
         template_id=new_template["id"],
         gpu_ids="AMPERE_16",
         workers_min=0,
-        workers_max=1
+        workers_max=1,
     )
 
+    # Output the created endpoint details
     print(new_endpoint)
 
 except runpod.error.QueryError as err:
+    # Handling potential errors during endpoint creation
     print(err)
     print(err.query)
 ```
 
+</TabItem>
+<TabItem value="output" label="Output">
+
+```json
+{
+  "id": "Unique_Id",
+  "name": "YourTemplate",
+  "imageName": "runpod/base:0.4.4",
+  "dockerArgs": "",
+  "containerDiskInGb": 10,
+  "volumeInGb": 0,
+  "volumeMountPath": "/workspace",
+  "ports": null,
+  "env": [],
+  "isServerless": true
+}
+{
+  "id": "Unique_Id",
+  "name": "YourTemplate",
+  "templateId": "Unique_Id",
+  "gpuIds": "AMPERE_16",
+  "networkVolumeId": null,
+  "locations": null,
+  "idleTimeout": 5,
+  "scalerType": "QUEUE_DELAY",
+  "scalerValue": 4,
+  "workersMin": 0,
+  "workersMax": 1
+}
+```
+
+</TabItem>
+</Tabs>
+
+## Get GPUs
+
+To understand the computational resources available, use the `get_gpus()` function, which lists all GPUs that can be allocated to endpoints in RunPod. This enables optimal resource selection based on your computational needs.
+
+<Tabs>
+<TabItem value="python" label="Python" default>
+
+```python
+import runpod
+import json
+
+# Fetching all available GPUs
+gpus = runpod.get_gpus()
+
+# Displaying the GPUs in a formatted manner
+print(json.dumps(gpus, indent=2))
+```
+
+</TabItem>
+<TabItem value="output" label="Output">
+
 ```json
-{'id': 'cx829zvv9e', 'name': 'testing-01', 'imageName': 'runpod/base:0.4.4', 'dockerArgs': '', 'containerDiskInGb': 10, 'volumeInGb': 0, 'volumeMountPath': '/workspace', 'ports': None, 'env': [], 'isServerless': True}
-{'id': '838j9id2xmmwew', 'name': 'test', 'templateId': 'cx829zvv9e', 'gpuIds': 'AMPERE_16', 'networkVolumeId': None, 'locations': None, 'idleTimeout': 5, 'scalerType': 'QUEUE_DELAY', 'scalerValue': 4, 'workersMin': 0, 'workersMax': 1}
+[
+  {
+    "id": "NVIDIA A100 80GB PCIe",
+    "displayName": "A100 80GB",
+    "memoryInGb": 80
+  },
+  {
+    "id": "NVIDIA A100-SXM4-80GB",
+    "displayName": "A100 SXM 80GB",
+    "memoryInGb": 80
+  }
+  // Additional GPUs omitted for brevity
+]
 ```
+
+</TabItem>
+</Tabs>
+
+## Get GPU by Id
+
+Pass a GPU Id to `get_gpu()` to retrieve details about a specific GPU model.
+This is useful for understanding the capabilities and costs associated with various GPU models.
+
+<Tabs>
+<TabItem value="python" label="Python" default>
+
+```python
+import runpod
+import json
+
+gpu = runpod.get_gpu("NVIDIA A100 80GB PCIe")
+
+print(json.dumps(gpu, indent=2))
+```
+
+</TabItem>
+<TabItem value="output" label="Output">
+
+```json
+{
+  "maxGpuCount": 8,
+  "id": "NVIDIA A100 80GB PCIe",
+  "displayName": "A100 80GB",
+  "manufacturer": "Nvidia",
+  "memoryInGb": 80,
+  "cudaCores": 0,
+  "secureCloud": true,
+  "communityCloud": true,
+  "securePrice": 1.89,
+  "communityPrice": 1.59,
+  "oneMonthPrice": null,
+  "threeMonthPrice": null,
+  "oneWeekPrice": null,
+  "communitySpotPrice": 0.89,
+  "secureSpotPrice": null,
+  "lowestPrice": {
+    "minimumBidPrice": 0.89,
+    "uninterruptablePrice": 1.59
+  }
+}
+```
+
+</TabItem>
+</Tabs>
+
+Through these functions, the RunPod API enables efficient and flexible management of computational resources, catering to a wide range of project requirements.
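
The sample `get_gpus()` records shown in the doc's Output tab are easy to filter client-side. A minimal sketch using those two records hard-coded (so it runs without an API key; a live script would instead populate the list via `runpod.get_gpus()`, and the helper name here is illustrative, not part of the SDK):

```python
# Records copied from the sample get_gpus() output in the doc above
gpus = [
    {"id": "NVIDIA A100 80GB PCIe", "displayName": "A100 80GB", "memoryInGb": 80},
    {"id": "NVIDIA A100-SXM4-80GB", "displayName": "A100 SXM 80GB", "memoryInGb": 80},
]

def gpu_ids_with_memory(gpus, min_memory_gb):
    """Return the ids of GPUs with at least min_memory_gb of memory."""
    return [g["id"] for g in gpus if g["memoryInGb"] >= min_memory_gb]

print(gpu_ids_with_memory(gpus, 80))
# → ['NVIDIA A100 80GB PCIe', 'NVIDIA A100-SXM4-80GB']
```

The returned ids are exactly what the doc's `get_gpu()` section expects as its argument, so a selection like this can feed directly into a per-GPU detail lookup.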

0 commit comments
