
Commit 29fa164 (parent 5096a85)

Update 2025-10-29-cohere-coreweave-lmcache.md

Fixed double title

File tree

1 file changed: +0 additions, −4 deletions


_posts/2025-10-29-cohere-coreweave-lmcache.md

Lines changed: 0 additions & 4 deletions

@@ -7,10 +7,6 @@ author: Walter Beller-Morales (Cohere), Samuel Shen (Tensormesh), Kishor Aher (CoreWeave)
 image: /assets/img/async.png
 ---
 
-# **Breaking the Memory Barrier: How LMCache and CoreWeave Power Efficient LLM Inference for Cohere**
-
-By Walter Beller-Morales (Cohere), Samuel Shen (Tensormesh), Kishor Aher (CoreWeave)
-
 ### **The challenge: Scaling enterprise AI**
 
 Enterprises today are racing to integrate large language models (LLMs) into their products and workflows, but doing it at scale brings challenges in performance, cost, and accuracy. Organizations need models to be based on their specific data, while making sure that this information remains private. [**Cohere**](https://cohere.com), one of the leading enterprise AI companies, built its North platform to help organizations use their own internal data safely and effectively to power retrieval-augmented generation (RAG). North allows enterprises to ground model outputs in trusted, private knowledge bases, delivering accurate, contextual responses tailored to their business.
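The RAG flow the paragraph describes, retrieving passages from a private knowledge base and grounding the model's prompt in them, can be sketched as follows. This is a toy illustration: the word-overlap scorer, the prompt shape, and the sample knowledge base are all assumptions for demonstration, not Cohere's North implementation.

```python
# Toy RAG sketch: retrieve relevant passages from a private knowledge
# base, then build a prompt grounded in the retrieved context.
# The scorer and prompt template are illustrative stand-ins only.

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that restricts the model to retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical private knowledge base, invented for this sketch.
kb = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our headquarters relocated to Toronto in 2023.",
    "Support tickets are answered within one business day.",
]

query = "How long do I have to request a refund?"
prompt = build_grounded_prompt(query, retrieve("refund request days", kb))
```

In a production system the scorer would be a dense-embedding similarity search over the enterprise's documents, and the assembled prompt would be sent to the LLM; the grounding pattern, retrieve first, then constrain generation to the retrieved context, is the same.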

0 commit comments
