Commit e60e8ca

Add support for offline planning in case of multiple subgraphs (#1195)
In #947, support for multiple subgraphs was added to the memory planner. However, when memory planning information was provided offline, this did not work: only the planning data for the first subgraph was read. This PR fixes that.

The fix itself is simple and local to the function. It assumes all offline planned tensors are concatenated in subgraph order, e.g.: `[s0t0, s0t1, s1t0, s1t1, s1t2, s2t0, s2t1, s2t2, s2t3]`. The documentation is updated to reflect this.

According to the documentation, the metadata format also had a subgraph ID, but that field was never read. I tried to use it, but that would not work either: it can only hold one subgraph ID, and we can only have one metadata entry with a given name. Since it is unclear how this was originally intended to be used, I added a note to the documentation and the code clarifying that the field is currently unused.

BUG=see PR description
1 parent aaf6a30 commit e60e8ca

File tree

2 files changed

+8
-3
lines changed


tensorflow/lite/micro/docs/memory_management.md

Lines changed: 6 additions & 1 deletion
```diff
@@ -87,13 +87,18 @@ integers of the following format:
 | Offset | Value |
 |-|-|
 | 0 | Offline allocation format version |
-| 1 | Subgraph index to which this allocation applies |
+| 1 | Number of subgraphs |
 | 2 | Number offsets following: n |
 | 3 | Byte offset of tensor #0 or -1 to allocate at runtime |
 | 4 | Byte offset of tensor #1 or -1 to allocate at runtime |
 | ... | ... |
 | 3+(n-1) | Byte offset of tensor #(n-1) or -1 to allocate at runtime |
 
+Note that offsets 0 (the version) and 1 (the number of subgraphs) are currently
+ignored by the micro memory allocator. In case of multiple subgraphs, it assumes
+all tensors for all subgraphs are concatenated: all tensors for the first
+subgraph are first, followed by those of the second subgraph, etc.
+
 The `tflite::GreedyMemoryPlanner` treats the provided offline tensor allocation
 plan as constant fixed offset to the start of the head section and will attempt
 to fit any other tensors (such as scratch tensors added a runtime using the
```

tensorflow/lite/micro/micro_allocation_info.cc

Lines changed: 2 additions & 2 deletions
```diff
@@ -179,6 +179,7 @@ TfLiteStatus AllocationInfoBuilder::InitializeAllocationInfo(
     const int32_t* offline_offsets, SubgraphAllocations* allocations) {
   AllocationInfo* allocation_info = info_.allocation_info;
   // Initialize allocation info for every tensor in every subgraph.
+  int offline_index = 0;
   for (size_t subgraph_idx = 0; subgraph_idx < model_->subgraphs()->size();
        subgraph_idx++) {
     const SubGraph* subgraph = model_->subgraphs()->Get(subgraph_idx);
@@ -203,15 +204,14 @@ TfLiteStatus AllocationInfoBuilder::InitializeAllocationInfo(
           (!subgraph->tensors()->Get(i)->is_variable()) &&
           (current->bytes != 0);
       if (offline_offsets) {
-        current->offline_offset = offline_offsets[i];
+        current->offline_offset = offline_offsets[offline_index++];
 
         // Mark offline planned variable tensors so they can get an offline
         // offset and be handled offline.
         if (subgraph->tensors()->Get(i)->is_variable() &&
             current->offline_offset != kOnlinePlannedBuffer) {
           current->needs_allocating = true;
         }
-
       } else {
         current->offline_offset = kOnlinePlannedBuffer;
       }
```
