Introducing Archiver 4.0 (alpha) - a cross-platform, multi-format archive utility and Go library. A powerful and flexible library meets an elegant CLI in this generic replacement for several platform-specific or format-specific archive utilities.
Note: v4 is in alpha. The core library APIs are usable, but if you need the arc command, stick with v3 for now.
- Stream-oriented APIs
- Automatically identify archive and compression formats:
  - By file name
  - By header
- Traverse directories, archive files, and any other file uniformly as io/fs file systems
- Compress and decompress files
- Create and extract archive files
- Walk or traverse into archive files
- Extract only specific files from archives
- Insert (append) into .tar and .zip archives
- Read from password-protected 7-Zip files
- Numerous archive and compression formats supported
- Extensible (add more formats just by registering them)
- Cross-platform, static binary
- Pure Go (no cgo)
- Multithreaded Gzip
- Adjust compression levels
- Automatically add compressed files to zip archives without re-compressing
- Open password-protected RAR archives

Supported compression formats:

- brotli (.br)
- bzip2 (.bz2)
- flate (.zip)
- gzip (.gz)
- lz4 (.lz4)
- lzip (.lz)
- snappy (.sz)
- xz (.xz)
- zlib (.zz)
- zstandard (.zst)

Supported archive formats:

- .zip
- .tar (including any compressed variants like .tar.gz)
- .rar (read-only)
- .7z (read-only)
Tar files can optionally be compressed using any compression format.
Command-line usage is coming soon for v4; see the last v3 docs in the meantime.
$ go get github.com/mholt/archiver/v4
Creating archives can be done entirely without needing a real disk or storage device, since all you need is a list of File structs to pass in. However, creating archives from files on disk is very common, so you can use the FilesFromDisk() function to help you map filenames on disk to their paths in the archive. Then create and customize the format type.

In this example, we add 4 files and a directory (which includes its contents recursively) to a .tar.gz file:
// map files on disk to their paths in the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
	"/path/on/disk/file1.txt": "file1.txt",
	"/path/on/disk/file2.txt": "subfolder/file2.txt",
	"/path/on/disk/file3.txt": "",              // put in root of archive as file3.txt
	"/path/on/disk/file4.txt": "subfolder/",    // put in subfolder as file4.txt
	"/path/on/disk/folder":    "Custom Folder", // contents added recursively
})
if err != nil {
	return err
}

// create the output file we'll write to
out, err := os.Create("example.tar.gz")
if err != nil {
	return err
}
defer out.Close()

// we can use the Archive type to gzip a tarball
// (compression is not required; you could use Tar directly)
format := archiver.Archive{
	Compression: archiver.Gz{},
	Archival:    archiver.Tar{},
	Extraction:  archiver.Tar{},
}

// create the archive
err = format.Archive(context.Background(), out, files)
if err != nil {
	return err
}
The first parameter to FilesFromDisk() is an optional options struct, allowing you to customize how files are added.
Extracting an archive, extracting from an archive, and walking an archive are all the same function.
Simply use your format type (e.g. Zip) to call Extract(). You'll pass in a context (for cancellation), the input stream, and a callback function to handle each file.
// the type that will be used to read the input stream
var format archiver.Zip

err := format.Extract(ctx, input, func(ctx context.Context, f archiver.File) error {
	// do something with the file
	return nil
})
if err != nil {
	return err
}
Have an input stream with unknown contents? No problem, archiver can identify it for you. It will try matching based on filename and/or the header (which peeks at the stream):
format, input, err := archiver.Identify(ctx, "filename.tar.zst", input)
if err != nil {
	return err
}
// you can now type-assert format to whatever you need;
// be sure to use the returned stream to re-read bytes consumed during Identify()

// want to extract something?
if ex, ok := format.(archiver.Extractor); ok {
	// ... proceed to extract
}

// or maybe it's compressed and you want to decompress it?
if decomp, ok := format.(archiver.Decompressor); ok {
	rc, err := decomp.OpenReader(unknownFile)
	if err != nil {
		return err
	}
	defer rc.Close()

	// read from rc to get decompressed data
}
Identify() works by reading an arbitrary number of bytes from the beginning of the stream (just enough to check for file headers). It buffers them and returns a new reader that lets you re-read them anew. If your input stream is an io.Seeker, however, no buffer is created (it uses Seek() instead).
This is my favorite feature.
Let's say you have a file. It could be a real directory on disk, an archive, a compressed archive, or any other regular file (or stream!). You don't really care; you just want to use it uniformly no matter what it is.
Use archiver to simply create a file system:
// filename could be:
//   - a folder ("/home/you/Desktop")
//   - an archive ("example.zip")
//   - a compressed archive ("example.tar.gz")
//   - a regular file ("example.txt")
//   - a compressed regular file ("example.txt.gz")
fsys, err := archiver.FileSystem(ctx, filename, nil)
if err != nil {
	return err
}
This is a fully-featured fs.FS, so you can open files and read directories, no matter what kind of file the input was.
For example, to open a specific file:
f, err := fsys.Open("file")
if err != nil {
	return err
}
defer f.Close()
If you opened a regular file, you can read from it. If it's a compressed file, reads are automatically decompressed.
If you opened a directory, you can list its contents:
if dir, ok := f.(fs.ReadDirFile); ok {
	// 0 gets all entries, but you can pass > 0 to paginate
	entries, err := dir.ReadDir(0)
	if err != nil {
		return err
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
Or get a directory listing this way:
entries, err := fsys.ReadDir("Playlists")
if err != nil {
	return err
}
for _, e := range entries {
	fmt.Println(e.Name())
}
Or maybe you want to walk all or part of the file system, but skip a folder named .git:
err := fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	if path == ".git" {
		return fs.SkipDir
	}
	fmt.Println("Walking:", path, "Dir?", d.IsDir())
	return nil
})
if err != nil {
	return err
}
Important .tar note: Tar files do not efficiently implement file system semantics due to their roots in sequential-access design for tapes. File systems inherently assume random access, but tar files must be read from the beginning to access something at the end. This is especially slow when the archive is compressed. Optimizations have been implemented to amortize ReadDir() calls so that fs.WalkDir() only has to scan the archive once, but they use more memory. Open calls require another scan to find the file. It may be more efficient to use Tar.Extract() directly if file system semantics are not important to you.
It can be used with http.FileServer to browse archives and directories in a browser. However, due to how http.FileServer works, don't use http.FileServer directly with compressed files; instead, wrap it like the following:
fileServer := http.FileServer(http.FS(archiveFS))
http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
	// disable range requests
	writer.Header().Set("Accept-Ranges", "none")
	request.Header.Del("Range")

	// disable content-type sniffing
	ctype := mime.TypeByExtension(filepath.Ext(request.URL.Path))
	writer.Header()["Content-Type"] = nil
	if ctype != "" {
		writer.Header().Set("Content-Type", ctype)
	}

	fileServer.ServeHTTP(writer, request)
})
http.FileServer will try to sniff the Content-Type by default if it can't be inferred from the file name. To do this, the http package reads from the file and then Seeks back to the start, which the library can't currently do. The same goes for Range requests: seeking in archives is not currently supported by archiver due to limitations in its dependencies.
If content-type is desirable, you can register it yourself.
Compression formats let you open writers to compress data:
// wrap underlying writer w
compressor, err := archiver.Zstd{}.OpenWriter(w)
if err != nil {
	return err
}
defer compressor.Close()

// writes to compressor will be compressed
Similarly, compression formats let you open readers to decompress data:
// wrap underlying reader r
decompressor, err := archiver.Brotli{}.OpenReader(r)
if err != nil {
	return err
}
defer decompressor.Close()

// reads from decompressor will be decompressed
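The OpenWriter/OpenReader pattern mirrors the standard library's compress/gzip: wrap a writer to compress, wrap a reader to decompress. As a point of comparison, here is the full round trip using only the stdlib; the gzipRoundTrip helper is written for this example.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gzipRoundTrip compresses b through a wrapped writer, then reads it
// back through a wrapped reader, returning the decompressed bytes.
func gzipRoundTrip(b []byte) ([]byte, error) {
	var buf bytes.Buffer

	// writes to zw are compressed into buf
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(b); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // flush the gzip trailer
		return nil, err
	}

	// reads from zr are decompressed
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}

func main() {
	plain, err := gzipRoundTrip([]byte("hello, archive"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plain)) // hello, archive
}
```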
Tar and Zip archives can be appended to without creating a whole new archive by calling Insert() on a tar or zip stream. However, for tarballs, this requires that the tarball is not compressed (due to complexities with modifying compression dictionaries).
Here is an example that appends a file to a tarball on disk:
tarball, err := os.OpenFile("example.tar", os.O_RDWR, 0644)
if err != nil {
	return err
}
defer tarball.Close()

// prepare a text file for the root of the archive
files, err := archiver.FilesFromDisk(nil, map[string]string{
	"/home/you/lastminute.txt": "",
})
if err != nil {
	return err
}

err = archiver.Tar{}.Insert(context.Background(), tarball, files)
if err != nil {
	return err
}
The code is similar for inserting into a Zip archive, except you'll call Insert() on the Zip type instead.