
Commit 8828da7

add highlevel architecture diagram
1 parent fafd40d commit 8828da7

File tree

4 files changed: +7 -2 lines changed
[Two image files added: 86.8 KB and 59.6 KB]

beta/serverless-fleets/tutorials/docling/README.md

Lines changed: 3 additions & 1 deletion

@@ -2,7 +2,9 @@
 
 ![](../../images/docling-picture.png)
 
-This tutorial provides a comprehensive guide on using Docling to convert PDFs into Markdown format using serverless fleets. It leverages cloud object storage for managing both the input PDFs and the resulting Markdown files. The process is streamlined using IBM’s Code Engine to build the Docling container, which is then pushed to a container registry. Users can run a serverless fleet, which autonomously spawns workers to run the Docling container for efficient, scalable conversion tasks.
+This tutorial provides a comprehensive guide on using [Docling](https://docling-project.github.io/docling/) to convert PDFs into Markdown format using serverless fleets. It leverages cloud object storage for managing both the input PDFs and the resulting Markdown files. The process is streamlined using IBM’s Code Engine to build the Docling container, which is then pushed to a container registry. Users can run a serverless fleet, which autonomously spawns workers to run the Docling container for efficient, scalable conversion tasks.
+
+![](../../images/docling-highlevel-architecture.png)
 
 Key steps covered in the Tutorial:
 1. Upload the examples PDFs to COS
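
For context, the conversion step that each fleet worker performs with Docling can be sketched in a few lines of Python. This is a minimal illustration rather than the tutorial's actual container code: it assumes the PDF has already been fetched from COS to a local path (the file name and output directory are placeholders), and it writes the Markdown locally instead of uploading it back to COS.

```python
# Minimal sketch of a per-worker conversion step (illustrative only).
# Assumes the input PDF is already available locally; in the tutorial,
# inputs and outputs live in cloud object storage.
from pathlib import Path

from docling.document_converter import DocumentConverter


def convert_pdf_to_markdown(pdf_path: str, out_dir: str = ".") -> Path:
    converter = DocumentConverter()
    result = converter.convert(pdf_path)              # parse the PDF
    markdown = result.document.export_to_markdown()   # render as Markdown
    out_file = Path(out_dir) / (Path(pdf_path).stem + ".md")
    out_file.write_text(markdown, encoding="utf-8")
    return out_file


if __name__ == "__main__":
    # "example.pdf" is a placeholder input file.
    print(convert_pdf_to_markdown("example.pdf"))
```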

beta/serverless-fleets/tutorials/inferencing/README.md

Lines changed: 4 additions & 1 deletion

@@ -2,9 +2,13 @@
 
 This tutorial provides a comprehensive guide on using Serverless GPUs to perform batch inferencing which illustrates a generally applicable pattern where AI helps to extract information out of a set of unstructed data.
 
+![](../../images/inferencing-highlevel-architecture.png)
+
+
 The concrete example extracts temperature and duration of a set of cookbook recipes (from [recipebook](https://github.com/dpapathanasiou/recipebook)) by using a LLM. Such a cookbook recipe looks like:
 ```
 {
+    "title": "A-1 Chicken Soup",
     "directions": [
         "In a large pot over medium heat, cook chicken pieces in oil until browned on both sides. Stir in onion and cook 2 minutes more. Pour in water and chicken bouillon and bring to a boil. Reduce heat and simmer 45 minutes.",
         "Stir in celery, carrots, garlic, salt and pepper. Simmer until carrots are just tender. Remove chicken pieces and pull the meat from the bone. Stir the noodles into the pot and cook until tender, 10 minutes. Return chicken meat to pot just before serving."
@@ -24,7 +28,6 @@ The concrete example extracts temperature and duration of a set of cookbook reci
     "language": "en-US",
     "source": "allrecipes.com",
     "tags": [],
-    "title": "A-1 Chicken Soup",
     "url": "http://allrecipes.com/recipe/25651/a-1-chicken-soup/"
 }
 ```
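
As a rough illustration of the extraction pattern described above, each worker can prompt an LLM with a recipe's directions and ask for the temperature and duration. The sketch below is not the tutorial's inferencing code: it assumes an OpenAI-compatible chat-completions endpoint, and the endpoint URL, model name, API key variable, and input file name are all placeholders.

```python
# Illustrative sketch of LLM-based extraction from one recipe JSON file.
# Endpoint URL, model name, API key, and file name are placeholders.
import json
import os

import requests


def extract_temperature_and_duration(recipe: dict) -> str:
    prompt = (
        "From the following recipe directions, extract the cooking "
        "temperature and the total duration. Answer as JSON with the "
        'keys "temperature" and "duration".\n\n'
        + "\n".join(recipe["directions"])
    )
    response = requests.post(
        os.environ.get("LLM_ENDPOINT", "http://localhost:8000/v1/chat/completions"),
        headers={"Authorization": f"Bearer {os.environ.get('LLM_API_KEY', '')}"},
        json={
            "model": "placeholder-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    with open("a-1-chicken-soup.json") as f:  # placeholder input file
        print(extract_temperature_and_duration(json.load(f)))
```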
