
HDDA not supporting uri containing a colon #5513

Open
jtojnar opened this issue Jul 16, 2024 · 8 comments
Labels: documentation, question (User support question)

Comments

@jtojnar commented Jul 16, 2024

I tried to determine a password for F1 from an example in https://www.rfc-editor.org/rfc/rfc3665#page-7:

Authorization: Digest username="bob", realm="atlanta.example.com",
 qop="auth", nonce="1cec4341ae6cbe5a359ea9c8e88df84f", opaque="",
 uri="sips:ss2.biloxi.example.com",
 response="71ba27c64bd01de719686aa4590d5824"

I was stuck on john not liking the legacy variant:

$ john digest.txt --show=invalid
bob:$response$6629fae49393a05397450978507c4ef1$bob$atlanta.example.com$REGISTER$sips:ss2.biloxi.example.com$ea9c8e88df84f1cec4341ae6cbe5a359$
0 valid hashes, 1 invalid hash

(I later realized that the Authorization header itself is actually invalid but that is unrelated to this report.)
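For reference, the legacy (no-qop) response computation from RFC 2617 that this variant corresponds to looks roughly like the following; "secretpass" is just a placeholder candidate, the other values come from the example above, and this is a sketch of the spec's formula rather than john's exact internals (with qop="auth", RFC 2617 would additionally mix nc and cnonce into the response, which the header above does not carry):

$ # HA1 = MD5(username:realm:password), HA2 = MD5(method:digest-uri)
$ # response = MD5(HA1:nonce:HA2) -- the no-qop / RFC 2069 compatibility form
$ HA1=$(printf '%s' 'bob:atlanta.example.com:secretpass' | md5sum | cut -d' ' -f1)
$ HA2=$(printf '%s' 'REGISTER:sips:ss2.biloxi.example.com' | md5sum | cut -d' ' -f1)
$ printf '%s' "$HA1:1cec4341ae6cbe5a359ea9c8e88df84f:$HA2" | md5sum | cut -d' ' -f1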

It looks like john does not like the colon in the uri, since when I delete it, I get:

$ john digest.txt --show=invalid
1 valid hash, 0 invalid hashes
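A quick way to see what the default colon splitting makes of that line, assuming digest.txt contains the line quoted above (the second colon-separated field, i.e. the would-be hash, ends at "sips"):

$ cut -d: -f2 digest.txt
$response$6629fae49393a05397450978507c4ef1$bob$atlanta.example.com$REGISTER$sips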

Using an absolute URI is allowed by the specification:

; https://www.rfc-editor.org/rfc/rfc2617#section-3.2.2
digest-uri       = "uri" "=" digest-uri-value
digest-uri-value = request-uri   ; As specified by HTTP/1.1
; https://www.rfc-editor.org/rfc/rfc2616#section-5.1.2
Request-URI    = "*" | absoluteURI | abs_path | authority
; https://www.rfc-editor.org/rfc/rfc2396#page-11
absoluteURI   = scheme ":" ( hier_part | opaque_part )

Absolute URIs are also commonly used by the Session Initiation Protocol, which does not even allow non-absolute URIs.


I modified the Nix package to point to 367d643:

$ john --list=build-info
Version: 1.9.0-jumbo-1+bleeding-$
Build: linux-gnu 64-bit x86_64 SSE2 AC OMP OPENCL
SIMD: SSE2, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
System-wide exec: /nix/store/i89m6qix9pragyqc5s7fqfg8gcm2412c-john-bleeding/bin
System-wide home: /nix/store/i89m6qix9pragyqc5s7fqfg8gcm2412c-john-bleeding/share/john
Private home: ~/.john
$JOHN is /nix/store/i89m6qix9pragyqc5s7fqfg8gcm2412c-john-bleeding/share/john/
Format interface version: 14
Max. number of reported tunable costs: 4
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
SINGLE_IDX_MAX: 2147483648
SINGLE_BUF_MAX: 4294967295
Effective limit: Number of salts vs. SingleMaxBufferSize
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 13.3.0
GNU libc version: 2.39 (loaded: 2.39)
OpenCL headers version: 1.2
Crypto library: OpenSSL
OpenSSL library version: 0300000e0
OpenSSL 3.0.14 4 Jun 2024
GMP library version: 6.3.0
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
times(2) sysconf(_SC_CLK_TCK) is 100
Using times(2) for timers, resolution 10 ms
HR timer: clock_gettime(), latency 38 ns
Total physical host memory: 7842 MiB
Available physical host memory: 3063 MiB
Terminal locale string: en_GB.UTF-8
Parsed terminal locale: UTF-8
@claudioandre-br (Member) commented:

As a temporary (but valid) solution, you can change the field separator. Edit digest.txt to use the new separator and then run:

$ john --field-separator-char="|" --show=left digest.txt
using field sep char '|' (0x7c)
bob|$response$6629fae49393a05397450978507c4ef1$bob$atlanta.example.com$REGISTER$sips:ss2.biloxi.example.com$ea9c8e88df84f1cec4341ae6cbe5a359
0 password hashes cracked, 1 left
$ john --field-separator-char="|" digest.txt
using field sep char '|' (0x7c)
Using default input encoding: UTF-8
Loaded 1 password hash (hdaa, HTTP Digest access authentication [MD5 256/256 AVX2 8x3])
Warning: no OpenMP support for this hash type, consider --fork=8
Note: Passwords longer than 10 [worst case UTF-8] to 32 [ASCII] rejected
Proceeding with single, rules:Single
Press 'q' or Ctrl-C to abort, 'h' for help, almost any other key for status
Warning: Only 16 candidates buffered for the current salt, minimum 24 needed for performance.
Warning: Only 6 candidates buffered for the current salt, minimum 24 needed for performance.
Almost done: Processing the remaining buffered candidate passwords, if any.
0g 0:00:00:00 DONE 1/3 (2024-07-17 08:46) 0g/s 19620p/s 19620c/s 19620C/s Bob1921..Bob1900
Proceeding with wordlist:/snap/john-the-ripper/current/bin/password.lst
Enabling duplicate candidate password suppressor
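For the "edit digest.txt" step, one possible way to switch the separator (a sketch, assuming the first colon on each line is the username/hash separator; sed without the /g flag replaces only the first match per line, so the colon inside the uri stays intact):

$ sed -i 's/:/|/' digest.txt   # GNU sed in-place edit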

A suitable solution could be $HEX support in john.
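Just to make the idea concrete: producing a hex encoding is trivial, e.g. for the colon-containing uri component; whether a hypothetical $HEX[...] wrapper would cover just such a component or the whole ciphertext field is exactly what such support would need to define (per this thread, john does not accept this here today):

$ printf '%s' 'sips:ss2.biloxi.example.com' | xxd -p
736970733a7373322e62696c6f78692e6578616d706c652e636f6d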

@claudioandre-br (Member) commented:

As a side note:

  • @jtojnar Do you have a new/updated computer? Could you post the result of the command:
cat /proc/cpuinfo | grep -e 'flags*\|model*' | head -3

@solardiz added the question (User support question) label on Jul 17, 2024
@solardiz (Member) commented:

As a temporary (but valid) solution, you can change the field separator.

I'm not sure we'll have any better solution. We do use colon as field separator by default. Maybe we need to document this problem and solution more prominently, but where?

What other solution can we possibly implement? Here are some ideas, out of which I don't like any:

One idea is to somehow know there are only 2 fields, and if so only consider the first colon as the field separator. We could magically do this when there are too few colons for the line to be a full /etc/passwd or PWDUMP-style line. However, such a heuristic would be fragile, as well as even more confusing (than the current behavior) when it fails (perhaps rarely).

Another idea is to have a command-line option to claim the number of fields, allowing for values 1 (bare hash) and 2. This would allow keeping the colons intact, but would it be any easier to learn of this problem/solution and use it than the current field separator option? I doubt it.

We could maybe base the loader's field count expectation on the hash type, but this mixes two abstraction layers into one, so it could also be unexpected and confusing. Also, it's tricky implementation-wise, in many ways.

@solardiz (Member) commented:

A suitable solution could be $HEX support in john.

Ah, of course. But even then, where exactly in the documentation, program messages, etc. would we provide such advice, so that it reaches the right people when they bump into this problem?

Warning: Only 16 candidates buffered for the current salt, minimum 24 needed for performance.
Warning: Only 6 candidates buffered for the current salt, minimum 24 needed for performance.

I wonder why this got printed twice.

@solardiz (Member) commented:

Warning: Only 16 candidates buffered for the current salt, minimum 24 needed for performance.
Warning: Only 6 candidates buffered for the current salt, minimum 24 needed for performance.

I wonder why this got printed twice.

Oh, never mind, we do allow up to 10 of these by default.

@jtojnar (Author) commented Jul 17, 2024

As a temporary (but valid) solution, you can change the field separator

Thanks, this made it recognizable.

A suitable solution could be $HEX support in john.

I guess we could also add a heuristic like the following (a rough shell approximation of the column check is sketched after the list):

  • If a line is invalid && there are more columns after the hash && the next column contains $
    • Point the user to the --field-separator-char option.
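A user-side approximation of that column check, just to make the idea concrete (the real heuristic would also need to know that the line failed validation, which a one-liner obviously cannot tell):

$ awk -F: 'NF > 2 && index($3, "$") { print "line " NR ": hash may contain a colon; consider --field-separator-char" }' digest.txt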

But maybe adding an FAQ answer like the following would be enough:

  • Q: Why doesn't John load my password file? […]
  • A: The salt or some other component of the hash contains a colon, so the line is split into multiple columns and john tries to crack the incomplete first one. Choose a different column separator using the --field-separator-char option.

Do you have a new/updated computer?

I use a ~2014 Haswell laptop. Why?

Could you post the result of the command:

$ cat /proc/cpuinfo | grep -e 'flags*\|model*' | head -3
model		: 69
model name	: Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts vnmi md_clear flush_l1d

@claudioandre-br (Member) commented:

I use a ~2014 Haswell laptop. Why?

I was concerned that the binary might not be offering maximum performance (which is bad for the user).

  • I would expect the Nix build to detect that you have AVX2 (but this has no impact on the problem or the solution).

@jtojnar (Author) commented Jul 18, 2024

I was concerned that the binary might not be offering maximum performance (which is bad for the user).

Oh, good catch. I opened a downstream issue: NixOS/nixpkgs#328226.
