
Commit eadc507

mrocklin authored and facebook-github-bot committed
Use torch.save in _StorageBase.__reduce__ (pytorch#9184)
Summary: Previously this used the ``.tolist()`` method, which converted the storage object into a list of Python objects and then sent those to pickle. For storage objects of non-trivial size, this was very slow. Now we reuse the logic of the ``torch.save`` function to efficiently turn the Storage object into bytes, and send those instead. This reduces the semantic information (it's harder to interpret the bytes) but should be orders of magnitude more efficient when serializing data with the pickle protocol or with ``copy``.

For future work it would be nice to develop a mechanism to get a buffer of bytes out of a Storage object, and use that alongside the current ``from_buffer`` method. See pytorch#9168 for context.

Closes pytorch#9184

Differential Revision: D8747794

Pulled By: soumith

fbshipit-source-id: ac598e660c043788ed1ffab3d0303812886edf79
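The ``__reduce__`` pattern the commit adopts can be illustrated without PyTorch. The sketch below uses a hypothetical ``FakeStorage`` class (not part of the commit) standing in for a real Storage object: instead of handing pickle a Python list of elements, ``__reduce__`` writes the payload into an in-memory buffer and returns a module-level loader plus the raw bytes, which is what the real patch does via ``torch.save``/``torch.load``.

```python
import io
import pickle


def _load_from_bytes(b):
    # Counterpart of torch.load in the real patch: rebuild the object
    # from the raw bytes that __reduce__ handed to pickle.
    return FakeStorage(list(io.BytesIO(b).read()))


class FakeStorage:
    """Hypothetical stand-in for a torch Storage object."""

    def __init__(self, data):
        self.data = list(data)

    def __reduce__(self):
        # Serialize the whole payload to bytes in one shot (torch.save
        # does this efficiently for real Storage objects), then return a
        # (callable, args) pair instead of a per-element Python list.
        b = io.BytesIO()
        b.write(bytes(self.data))
        return (_load_from_bytes, (b.getvalue(),))


s = FakeStorage([1, 2, 3])
s2 = pickle.loads(pickle.dumps(s))
print(s2.data)  # [1, 2, 3]
```

Because pickle only needs to record one bytes object and a reference to the loader function, the cost no longer scales with per-element Python object creation.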
1 parent 7b25cbb commit eadc507

File tree

1 file changed

+9
-1
lines changed

torch/storage.py (+9 −1)

@@ -1,3 +1,5 @@
+import io
+
 import torch
 from ._utils import _type, _cuda
 
@@ -28,7 +30,9 @@ def __deepcopy__(self, memo):
         return new_storage
 
     def __reduce__(self):
-        return type(self), (self.tolist(),)
+        b = io.BytesIO()
+        torch.save(self, b)
+        return (_load_from_bytes, (b.getvalue(),))
 
     def __sizeof__(self):
         return super(_StorageBase, self).__sizeof__() + self.element_size() * self.size()
@@ -116,5 +120,9 @@ def _new_shared(cls, size):
         return cls._new_using_fd(size)
 
 
+def _load_from_bytes(b):
+    return torch.load(io.BytesIO(b))
+
+
 _StorageBase.type = _type
 _StorageBase.cuda = _cuda
