hdf_archive key enumeration recursion error #69
For those interested, a hack is to edit the `hdf_archive` class in `_archives.py` as below:

```python
def keys(self):
    # if sys.version_info[0] >= 3:
    #     return KeysView(self) #XXX: show keys not dict
    filename = self.__state__['id']
    f = None  # defined before the try so `finally` can always check it
    try:
        f = hdf.File(filename, 'r')
        if sys.version_info[0] >= 3:
            _keys = [self._loadkey(bytes(key, 'ascii')) for key in f.keys()]
        else:
            _keys = [self._loadkey(key) for key in self._attrs(f).keys()]
    except Exception: #XXX: should only catch appropriate exceptions
        raise OSError("error reading file archive %s" % filename)
    finally:
        if f is not None: f.close()
    return _keys
```

(Note: the original version of this hack set `f = None` inside the `except` block before raising, which left the file open if a key failed to load after `hdf.File` succeeded; initializing `f` before the `try` lets `finally` close it in that case.)
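The commented-out `return KeysView(self)` line is the likely source of the recursion in the issue title: a `KeysView` iterates its backing mapping, so if the mapping's `__iter__` is itself defined in terms of `keys()`, enumeration never terminates. A minimal stdlib sketch of this trap, using a hypothetical `Archive` class (not klepto's actual code):

```python
from collections.abc import KeysView

class Archive:
    """Hypothetical mapping-like class illustrating the recursion."""
    def keys(self):
        # returning a view tied back to self...
        return KeysView(self)
    def __iter__(self):
        # ...recurses, because iteration is defined via keys()
        return iter(self.keys())
    def __len__(self):
        return 0

try:
    list(Archive().keys())
except RecursionError:
    print("RecursionError reproduced")
```

Reading the keys straight from the file, as the hack above does, breaks this cycle by never constructing a view over the archive object itself.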
Sorry this has been ignored for so long... I can confirm this is an issue:
So, no issues when the archive serves as a backend to the cache.
I have an HDF5 file that I generated through hdf_archive; it is a dictionary of dictionaries that includes a few numpy arrays.
I would like to get the keys in the hdf_archive.
In dir_archive I can do:
However, the pickling process is quite slow and does not scale for the resources I have.
HDF5 provides fast random access, so I thought I would try hdf_archive, but I get the following error:
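The desired behavior can be illustrated with a plain in-memory analogue (a sketch; the nested shape below is an assumption based on the description, not the reporter's actual data):

```python
# hypothetical stand-in for the archive's contents: a dictionary of
# dictionaries holding (numpy-like) arrays, as described in the issue
archive = {
    'run1': {'positions': [1.0, 2.0], 'energies': [0.5]},
    'run2': {'positions': [3.0, 4.0], 'energies': [0.7]},
}

# top-level key enumeration -- what hdf_archive.keys() should provide,
# without unpickling any of the stored values
print(sorted(archive.keys()))  # -> ['run1', 'run2']
```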