Commit

Fireworks-ai inference support 🔥 (#2677)
* LLM generated draft

* Update fireworks-ai.md

* add reference to models being added.

* pretty.

* Update fireworks-ai.md

Co-authored-by: Célina <[email protected]>

* better snippets.:

* annoying code snippets

* Update fireworks-ai.md

* Apply suggestions from code review

Co-authored-by: burtenshaw <[email protected]>

* Update fireworks-ai.md

Co-authored-by: Lucain <[email protected]>

* smooth flow on first paragraph

* use library usage flow from original blogpost

* screenshot

* Add thumbnail from the @gary149

* Update fireworks-ai.md

* fix img link

* Update fireworks-ai.md

* list of authors

---------

Co-authored-by: Vaibhavs10 <[email protected]>
Co-authored-by: Célina <[email protected]>
Co-authored-by: Wauplin <[email protected]>
Co-authored-by: Lucain <[email protected]>
Co-authored-by: burtenshaw <[email protected]>
6 people authored Feb 14, 2025
1 parent ff3d728 commit 3c00687
Showing 3 changed files with 139 additions and 1 deletion.
11 changes: 10 additions & 1 deletion _blog.yml
@@ -5530,4 +5530,13 @@
- math-verify
- open-llm-leaderboard
- leaderboard
- evaluation

- local: fireworks-ai
title: "Welcome Fireworks.ai on the Hub"
author: julien-c
thumbnail: /blog/assets/inference-providers/welcome-fireworks-2.jpg
date: Feb 14, 2025
tags:
- announcement
- hub
(binary file: thumbnail image, not displayed)
129 changes: 129 additions & 0 deletions fireworks-ai.md
@@ -0,0 +1,129 @@
---
title: "Welcome Fireworks.ai on the Hub 🎆"
thumbnail: /blog/assets/inference-providers/welcome-fireworks-2.jpg
authors:
- user: teofeliu
guest: true
org: fireworks-ai
- user: shaunak-fireworks
guest: true
org: fireworks-ai
- user: julien-c
---

Following our recent announcement on [Inference Providers on the Hub](https://huggingface.co/blog/inference-providers), we're thrilled to share that **Fireworks.ai** is now a supported Inference Provider on HF Hub!

[Fireworks.ai](https://fireworks.ai) delivers blazing-fast serverless inference directly on model pages—making it easier than ever to deploy and experiment with your favorite models.

Starting now, you can run serverless inference on the following models via Fireworks.ai, among many others:

- [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)
- [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3)
- [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
- [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
- [meta-llama/Llama-3.2-90B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision-Instruct)

and many more; you can find the full list [here](https://huggingface.co/models?inference_provider=fireworks-ai).

Light up your projects with Fireworks.ai today!

<img src="https://huggingface.co/blog/inference-providers/welcome-fireworks-2.jpg" alt="Fireworks.ai supported as Inference Provider on Hugging Face"/>

## How it works

### In the website UI

![Fireworks.ai inference provider UI](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/fireworks.png)

Search for all models supported by Fireworks on HF **[here](https://huggingface.co/models?inference_provider=fireworks-ai)**.
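You can also fetch that list programmatically. A minimal sketch, assuming the public Hub API at `/api/models` mirrors the website's `inference_provider` filter (the `limit` parameter just keeps the response small):

```python
import json
from urllib.request import urlopen

# Query the Hub API for models served by Fireworks.ai, mirroring the
# website filter ?inference_provider=fireworks-ai
url = (
    "https://huggingface.co/api/models"
    "?inference_provider=fireworks-ai&limit=5"
)
with urlopen(url, timeout=30) as resp:
    models = json.load(resp)

# Each entry is a model card summary; print the repo ids.
for model in models:
    print(model["id"])
```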

### From the client SDKs

#### from Python, using huggingface_hub

The following example shows how to use DeepSeek-R1 using Fireworks.ai as your inference provider. You can use a [Hugging Face token](https://huggingface.co/settings/tokens) for automatic routing through Hugging Face, or your own Fireworks.ai API key if you have one.

Install `huggingface_hub` from source:

```bash
pip install git+https://github.com/huggingface/huggingface_hub
```

Use the `huggingface_hub` Python library to call Fireworks.ai endpoints by setting the `provider` parameter.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fireworks-ai",
    api_key="xxxxxxxxxxxxxxxxxxxxxxxx"
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
    max_tokens=500
)

print(completion.choices[0].message)
```

#### from JS using @huggingface/inference

```js
import { HfInference } from "@huggingface/inference";

const client = new HfInference("xxxxxxxxxxxxxxxxxxxxxxxx");

const chatCompletion = await client.chatCompletion({
    model: "deepseek-ai/DeepSeek-R1",
    messages: [
        {
            role: "user",
            content: "How to make extremely spicy Mayonnaise?"
        }
    ],
    provider: "fireworks-ai",
    max_tokens: 500
});

console.log(chatCompletion.choices[0].message);
```

### From HTTP calls

Here's how you can call Llama-3.3-70B-Instruct using Fireworks.ai as the inference provider via cURL.

```bash
curl 'https://router.huggingface.co/fireworks-ai/v1/chat/completions' \
  -H 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxx' \
  -H 'Content-Type: application/json' \
  --data '{
    "model": "Llama-3.3-70B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": "What is the meaning of life if you were a dog?"
        }
    ],
    "max_tokens": 500,
    "stream": false
}'
```

## Billing

For direct requests, i.e. when you use a Fireworks key, you are billed directly on your Fireworks account.

For routed requests, i.e. when you authenticate via the Hub, you'll only pay the standard Fireworks API rates. There's no additional markup from us; we pass the provider costs through directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

Important Note ‼️ PRO users get $2 worth of Inference credits every month. You can use them across providers. 🔥

Subscribe to the [Hugging Face PRO plan](https://hf.co/subscribe/pro) to get access to Inference credits, ZeroGPU, Spaces Dev Mode, 20x higher limits, and more.
