Encryption severely broken for large databases #21
Comments
@rokclimb15 Thanks a lot for pointing this out. Do you know the exact database size at which this problem occurs?
The problem occurs between 1500 and 1600MB of input. That won't translate exactly to a database size, but anything over 1.5GB of data (minus indexes) is a candidate for this problem. Files that large can be encrypted but not decrypted. The problem with input being clipped seems to happen at 1.9GB, but due to the previous limitation, that issue is irrelevant right now.
I made a temporary adjustment that issues a warning to stderr for databases larger than 1200 MB when encryption is enabled.
The updated version has also been pushed to Homebrew and the webpage, and announced on Twitter.
Thanks for the heads up, @rokclimb15! Much appreciated.
@rokclimb15 it seems that a more recent OpenSSL version, together with the `-stream` option, no longer truncates the output:

```shell
# Old OpenSSL version
$ openssl version
OpenSSL 0.9.8zh 14 Jan 2016
$ openssl \
    smime -encrypt -binary -text -aes256 -in sample.txt \
    -out sample.txt.enc -outform DER /etc/mysqldump-secure.pub.pem

# More recent OpenSSL version
$ /usr/local/Cellar/openssl101/1.0.1t_1/bin/openssl version
OpenSSL 1.0.1t 3 May 2016
$ /usr/local/Cellar/openssl101/1.0.1t_1/bin/openssl \
    smime -encrypt -stream -binary -text -aes256 -in sample.txt \
    -out sample.txt.stream.enc -outform DER /etc/mysqldump-secure.pub.pem

# Resulting sizes with and without -stream
$ ls -laph | grep sample
-rw-r--r--  1 cytopia  1286676289  3.0G Aug 18 23:43 sample.txt
-rw-r--r--  1 cytopia  1286676289  1.9G Aug 18 23:44 sample.txt.enc
-rw-r--r--  1 cytopia  1286676289  3.1G Aug 18 23:50 sample.txt.stream.enc
```

Can you verify this on your end?
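A small self-contained way to check the `-stream` encrypt/decrypt roundtrip is to generate a throwaway keypair and run both directions on a tiny file. The key/file names here are illustrative; the real deployment uses `/etc/mysqldump-secure.pub.pem` and its private counterpart:

```shell
# Throwaway self-signed cert + key, for testing only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=smime-test" \
    -keyout priv.pem -out pub.pem 2>/dev/null
printf 'dump contents\n' > sample.txt

# Encrypt with -stream (avoids the output truncation on large inputs)
openssl smime -encrypt -stream -binary -aes256 \
    -in sample.txt -out sample.enc -outform DER pub.pem

# Decrypt -- on multi-GB files this is the step that later fails
# with BUF_MEM_grow_clean malloc errors
openssl smime -decrypt -binary \
    -in sample.enc -inform DER -inkey priv.pem -out sample.dec

cmp sample.txt sample.dec && echo "roundtrip ok"
```

This only proves the small-file case works; the memory-limited decryption path is what breaks once the ciphertext approaches 1.9GB.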
That does appear to fix the encryption problem. But now try to decrypt it ;) It's unintentional ransomware if you ever lose your data and need to restore a backup. The data size warning should probably remain until streaming decryption or bigger memory buffers are introduced in OpenSSL.
I see, decryption throws this error:

```
Error reading S/MIME message
140735184199760:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:buffer.c:150:
140735184199760:error:0D06B041:asn1 encoding routines:ASN1_D2I_READ_BIO:malloc failure:a_d2i_fp.c:239:
```

Too bad.
Here's what I came up with for workable encryption. The command should be the same or similar for gpg. I used v2 because it uses libgcrypt, which supports AES-NI if new enough. The passphrase file should be chmod 600 for security and contain the passphrase for encryption/decryption. This reads from STDIN and writes to STDOUT by default.
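The exact command was not preserved in this thread, but a symmetric gpg invocation along the lines described (passphrase file, STDIN/STDOUT, AES-256) could look like this. File names are hypothetical, and `--pinentry-mode loopback` assumes GnuPG 2.1+:

```shell
# Hypothetical paths; umask 077 gives the passphrase file 600 permissions
umask 077
printf 'correct horse battery staple\n' > .passphrase
printf 'CREATE TABLE t (id INT);\n' > dump.sql

# Symmetric AES-256 encryption, streaming stdin -> stdout
gpg --batch --yes --pinentry-mode loopback --passphrase-file .passphrase \
    --symmetric --cipher-algo AES256 < dump.sql > dump.sql.gpg

# Decryption streams as well, so file size is not a problem
gpg --batch --yes --pinentry-mode loopback --passphrase-file .passphrase \
    --decrypt < dump.sql.gpg > dump.sql.out 2>/dev/null

cmp dump.sql dump.sql.out && echo "ok"
```

Because both directions stream, this avoids the in-memory buffer limit that breaks `openssl smime` on large files.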
@rokclimb15 Thanks for this.
My two cents is to use symmetric encryption. This is a backup tool, so in general the user will maintain control of the files throughout their lifecycle, and public/private key crypto probably isn't needed for that reason. Additionally, a careless user might not back up their private key; if the whole system were lost, they could still very well remember their encryption passphrase. Most backup products use a passphrase to encrypt and decrypt.
Thank you very much for your work on this script. It is very useful to me. I am waiting for your solution to the large file problem, as 25% of my databases cannot be decrypted because they are very large. Thank you again.
@brownbrady - just add a new encryption option like this. You'll need to install gpg2 and create a passphrase file. This works on very large files.
@rokclimb15: Thank you for your suggestion. By "encryption option", do you mean it is a variable in the mysqldump-secure.conf file? If so, what is the name of the variable?
No, you would have to apply these steps to the unencrypted backup after it has been created.
@rokclimb15: I understand now. I will set ENCRYPT=0, then install gpg2 and create a passphrase file. Then I will need to run this after mysqldump-secure completes.
@brownbrady how large are those databases after compression?
@cytopia: I just checked. Before compression it was 6306.14 MB; after compression it was 954 MB according to the `ls -lh` command. Here was the warning:

Does this mean the 'mydb' database above can be decrypted?
Yes, it can be decrypted, as long as the final filesize does not exceed 1.9GB. It looks like I've implemented the warning on the wrong size: it checks the initial database size instead of the size that will end up on disk (with or without compression/encryption). If you come close to 1.9 GB you can choose a stronger compression algo.
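To illustrate the "stronger compression algo" suggestion: on repetitive SQL text, a higher compression level or a different algorithm such as `xz` typically shrinks the input that gets fed to encryption. The file names are illustrative:

```shell
# Generate a repetitive SQL-like file (compresses very well)
yes 'INSERT INTO t VALUES (1);' | head -n 1000 > dump.sql

# Compare two compressors; -k keeps the original for comparison
gzip -9 -k dump.sql     # produces dump.sql.gz
xz   -9 -k dump.sql     # produces dump.sql.xz, usually smaller on SQL text

ls -l dump.sql dump.sql.gz dump.sql.xz
```

Whichever compressor keeps the resulting file under the 1.9GB ciphertext limit is the safe choice here.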
@cytopia: I checked my databases and they are all under 1.9 GB. I will proceed with the encryption. Thank you for your script.
Hi, will this issue ever be resolved, or is the only solution to encrypt manually?
I'd like to know if it would be possible to use GPG encryption via a configuration option? It's a great little utility, which is unfortunately being hamstrung by OpenSSL :( Keep up the good work :D
I've put a patch through to correct this behavior.
Thank you very much Red-M, I am slowly approaching the dreaded limit and I didn't want to look for an alternative. Hopefully this patch will save us all!
For future visitors: I wrote a program that can decrypt large openssl smime encrypted files, it's at https://github.com/imreFitos/large_smime_decrypt
For anybody interested: I wrote a solution for this which is capable of encrypting and decrypting very large files, and does not need memory-mapped files. I will try to send a PR to this project (assuming the maintainer @cytopia is still here?), but in the meantime please feel free to use (at your own risk) the solution here, which includes two scripts, one to encrypt and another to decrypt.

The solution consists in using a dynamically generated key for the encryption, as suggested in this Stackoverflow answer: each time, a new random symmetric key is generated and used to encrypt the contents, and the key itself is then encrypted with the RSA public key. The encrypted contents together with the encrypted symmetric key are transferred to the recipient, who decrypts the symmetric key with his private key and then uses it to decrypt the message.

So please be aware that you will have two ".enc" files to transfer, but this way you can keep using your existing keys without any changes.
Successfully tested with a 115GB sql dump (compressed size 9GB). At this size it's important to use at least openssl 1.1.
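The hybrid scheme described above can be sketched with plain openssl commands. This is a minimal illustration, not the linked scripts themselves; file names are hypothetical, a throwaway keypair stands in for the existing one, and `-pbkdf2` assumes OpenSSL 1.1.1+ (drop it on older versions):

```shell
# Throwaway RSA keypair; in practice you would reuse your existing keys
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

printf 'very large dump\n' > dump.sql

# 1. Fresh random symmetric key for each backup
openssl rand -hex 32 > key.txt

# 2. Stream-encrypt the payload with the symmetric key (no size limit)
openssl enc -aes-256-cbc -pbkdf2 -pass file:key.txt -in dump.sql -out dump.sql.enc

# 3. Wrap the small symmetric key with the RSA public key
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in key.txt -out key.txt.enc

# Recipient side: unwrap the key, then decrypt the payload
openssl pkeyutl -decrypt -inkey priv.pem -in key.txt.enc -out key.dec.txt
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:key.dec.txt -in dump.sql.enc -out dump.dec
cmp dump.sql dump.dec && echo "ok"
```

Both `dump.sql.enc` and `key.txt.enc` must be transferred, which matches the "two .enc files" caveat above.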
Unfortunately there are a couple of serious problems with openssl smime that render the encryption option useless and dangerous when dumping a database over a few GB.

Once the input exceeds a certain size, openssl smime always produces a 1.9GB file on disk, which indicates that the input is truncated. This happens silently, which is very dangerous for a DB backup. Additionally, it is not possible to decrypt any smime message that large with OpenSSL due to internal limitations.

See https://rt.openssl.org/Ticket/Display.html?id=4651 for the upstream issues (login: guest/guest).

I recommend implementing gpg/gpg2 encryption with a passphrase file. gpg2 supports AES acceleration via AES-NI with a new enough version of libgcrypt. I can work on a patch if desired, but wanted to file this immediately so that users with large DB exports can stop using encryption. It's very unsafe for backups, as they cannot be restored.