Draft
48 commits
0bbf0cb
update
ping-ke Mar 4, 2026
1275357
fix build
ping-ke Mar 4, 2026
b19a654
fix test
ping-ke Mar 4, 2026
ecc390c
add asyncio update
ping-ke Mar 10, 2026
530e85f
apply db change for rocksdict
ping-ke Mar 10, 2026
c92cee1
fix remaining jsonrpc_async usages and import issues in tools
Mar 13, 2026
607b664
fix implicit relative import in miner_address.py for Python 3
ping-ke Mar 16, 2026
405cdbf
fix jsonrpc test failures: params passing and websocket server shutdown
ping-ke Mar 16, 2026
97348c9
update test docker
ping-ke Mar 16, 2026
af066ad
add timeout for test
ping-ke Mar 17, 2026
9ac0f88
fix bug
ping-ke Mar 17, 2026
adaabde
fix asyncio task leaks causing test timeouts
ping-ke Mar 18, 2026
2c10e1f
remove timeout for test action
ping-ke Mar 18, 2026
39266f6
fix bugs in UPnPService nat.py
ping-ke Mar 19, 2026
1f4d681
add __main__ entry point to nat.py for manual UPnP testing
ping-ke Mar 19, 2026
07a5cf2
change CRLF to LF
ping-ke Mar 20, 2026
b5d6103
resolve comments
ping-ke Mar 26, 2026
b4e190d
convert sync tests to async using IsolatedAsyncioTestCase
ping-ke Mar 26, 2026
12bfa16
remove deprecated WebSocketServerProtocol import from websockets
ping-ke Mar 26, 2026
6378f9f
resolve comments
ping-ke Mar 26, 2026
72cdea8
add bench for hashimoto
ping-ke Mar 26, 2026
52ffd7b
bug fix
ping-ke Mar 26, 2026
78257fd
Revert "convert sync tests to async using IsolatedAsyncioTestCase"
ping-ke Mar 27, 2026
c741343
convert sync tests to async using IsolatedAsyncioTestCase
ping-ke Mar 28, 2026
4b80503
update requirements.txt
ping-ke Mar 29, 2026
41b84ca
remove useless code
ping-ke Mar 30, 2026
33ac87d
remove pyethash
ping-ke Mar 30, 2026
139ff81
rename jsonrpcserver to jsonrpc_server and fix related imports and is…
ping-ke Mar 31, 2026
a1b093d
fix tools compatibility with new JsonRpcClient
ping-ke Mar 31, 2026
a8878a4
resolve comment
ping-ke Apr 2, 2026
456ba11
resolve upgrade/p2p-nat comment
ping-ke Apr 3, 2026
5240b6f
add unit test for jsonrpc
ping-ke Apr 3, 2026
72493e4
add profile code
ping-ke Apr 3, 2026
1c73561
resolve upgrade/ethash comment
ping-ke Apr 3, 2026
3ecb2d2
Merge branch 'master' into upgrade-py3-13
ping-ke Apr 4, 2026
c9a87d9
cleanup build dependencies after pip install to reduce image size
ping-ke Apr 4, 2026
7b0dc58
add unit tests for UPnPService in nat.py
ping-ke Apr 5, 2026
03b27b7
refactor and expand UPnP NAT tests
ping-ke Apr 6, 2026
ab8d8e5
add full lifecycle test and extract test constants
ping-ke Apr 6, 2026
2eb2869
remove call_with_dict_params, unify call() for both param types, log …
ping-ke Apr 6, 2026
0764756
optimize ethash pow verification: struct-based hashing and numpy fnv …
ping-ke Apr 7, 2026
bedf0f9
clean up ethereum/pow: remove unused code, simplify ethpow.py
ping-ke Apr 8, 2026
eb01929
rename ethash_sha3_512/256_np -> ethash_sha3_512/256; refactor bench …
ping-ke Apr 8, 2026
3df9453
bench_before_after: rename mid -> R1 consistently
ping-ke Apr 8, 2026
915a271
add Cython inner loop for calc_dataset_item (R3): ~20x speedup over n…
ping-ke Apr 8, 2026
d9ca5fe
gitignore: add Cython generated .c and .pyd files
ping-ke Apr 8, 2026
7c4c10d
add Cython to requirements, update README install docs, add Cython vs…
ping-ke Apr 8, 2026
9eac57a
add setuptools to requirements, build Cython extension in Dockerfile
ping-ke Apr 8, 2026
8 changes: 4 additions & 4 deletions .github/workflows/build-and-test.yml
@@ -8,7 +8,7 @@ env:
jobs:
test-without-integration:
runs-on: ubuntu-latest
container: quarkchaindocker/pyquarkchain:mainnet1.1.1
container: quarkchaindocker/pyquarkchain:test-py3.13

steps:
- uses: actions/checkout@v4
@@ -22,7 +22,7 @@ jobs:

evm-tests-runner-1:
runs-on: ubuntu-latest
container: quarkchaindocker/pyquarkchain:mainnet1.1.1
container: quarkchaindocker/pyquarkchain:test-py3.13

steps:
- uses: actions/checkout@v4
@@ -35,7 +35,7 @@

evm-tests-runner-2:
runs-on: ubuntu-latest
container: quarkchaindocker/pyquarkchain:mainnet1.1.1
container: quarkchaindocker/pyquarkchain:test-py3.13

steps:
- uses: actions/checkout@v4
@@ -48,7 +48,7 @@

test-integration-and-qkc-specific-state:
runs-on: ubuntu-latest
container: quarkchaindocker/pyquarkchain:mainnet1.1.1
container: quarkchaindocker/pyquarkchain:test-py3.13

steps:
- uses: actions/checkout@v4
2 changes: 1 addition & 1 deletion .github/workflows/nightly-check-db.yml
@@ -10,7 +10,7 @@ env:
jobs:
download-snapshot-and-checkdb:
runs-on: self-hosted
container: quarkchaindocker/pyquarkchain:mainnet1.6.1
container: quarkchaindocker/pyquarkchain:test-py3.13
timeout-minutes: 4320

steps:
4 changes: 4 additions & 0 deletions .gitignore
@@ -8,6 +8,10 @@ __pycache__/

# C extensions
*.so
*.pyd

# Cython generated
ethereum/pow/ethash_cy.c

# qkchash binaries
qkchash/qkchash
5 changes: 4 additions & 1 deletion README.md
@@ -71,9 +71,12 @@ To install the required modules for the project. Under `pyquarkchain` dir where
# you may want to set the following if cryptography complains about header files: (https://github.com/pyca/cryptography/issues/3489)
# export CPPFLAGS=-I/usr/local/opt/openssl/include
# export LDFLAGS=-L/usr/local/opt/openssl/lib
pip install -e .
pip install -r requirements.txt
python setup.py build_ext --inplace
```

The second command builds the optional Cython extension (`ethash_cy`) that speeds up ethash `calc_dataset_item` by ~20x. It requires a C compiler. If the build is skipped, the pure-Python fallback is used automatically.

Once all the modules are installed, try running all the unit tests under `pyquarkchain`

```
125 changes: 77 additions & 48 deletions ethereum/pow/ethash.py
@@ -1,94 +1,123 @@
import copy
import numpy as np
from functools import lru_cache
from typing import Callable, Dict, List

from ethereum.pow.ethash_utils import *
from ethereum.pow.ethash_utils import (
ethash_sha3_512, ethash_sha3_256,
FNV_PRIME, HASH_BYTES, WORD_BYTES, MIX_BYTES,
DATASET_PARENTS, CACHE_ROUNDS, ACCESSES, EPOCH_LENGTH,
)

# uint32 overflow is intentional in FNV arithmetic
np.seterr(over="ignore")

_FNV_PRIME = np.uint32(FNV_PRIME)

# Optional Cython inner loop for calc_dataset_item. Falls back to pure numpy
# when the compiled extension isn't built (e.g. source checkouts without a
# C compiler).
try:
from ethereum.pow.ethash_cy import mix_parents as _cy_mix_parents
except ImportError: # pragma: no cover
_cy_mix_parents = None

cache_seeds = [b"\x00" * 32] # type: List[bytes]


def mkcache(cache_size: int, block_number) -> List[List[int]]:
def _fnv_arr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
return a * _FNV_PRIME ^ b


def mkcache(cache_size: int, block_number) -> np.ndarray:
while len(cache_seeds) <= block_number // EPOCH_LENGTH:
new_seed = serialize_hash(ethash_sha3_256(cache_seeds[-1]))
new_seed = ethash_sha3_256(cache_seeds[-1]).tobytes()
cache_seeds.append(new_seed)

seed = cache_seeds[block_number // EPOCH_LENGTH]
return _get_cache(seed, cache_size // HASH_BYTES)


@lru_cache(10)
def _get_cache(seed, n) -> List[List[int]]:
# Sequentially produce the initial dataset
o = [ethash_sha3_512(seed)]
@lru_cache(2)
def _get_cache(seed: bytes, n: int) -> np.ndarray:
"""Returns cache as uint32 ndarray of shape (n, 16)."""
o = np.empty((n, 16), dtype=np.uint32)
o[0] = ethash_sha3_512(seed)
for i in range(1, n):
o.append(ethash_sha3_512(o[-1]))

# Use a low-round version of randmemohash
o[i] = ethash_sha3_512(o[i - 1])
for _ in range(CACHE_ROUNDS):
for i in range(n):
v = o[i][0] % n
o[i] = ethash_sha3_512(list(map(xor, o[(i - 1 + n) % n], o[v])))

v = int(o[i, 0]) % n
xored = o[(i - 1 + n) % n] ^ o[v]
o[i] = ethash_sha3_512(xored)
return o


def calc_dataset_item(cache: List[List[int]], i: int) -> List[int]:
def calc_dataset_item(cache: np.ndarray, i: int) -> np.ndarray:
n = len(cache)
r = HASH_BYTES // WORD_BYTES
# initialize the mix
mix = copy.copy(cache[i % n]) # type: List[int]
mix[0] ^= i
mix = cache[i % n].copy()
mix[0] ^= i # numpy auto-converts int, no explicit np.uint32() boxing
mix = ethash_sha3_512(mix)
# fnv it with a lot of random cache nodes based on i
for j in range(DATASET_PARENTS):
cache_index = fnv(i ^ j, mix[j % r])
mix = list(map(fnv, mix, cache[cache_index % n]))
if _cy_mix_parents is not None:
# mix is already C-contiguous uint32[16] (it's a fresh ndarray from
# ethash_sha3_512). cache rows are also contiguous uint32[16].
_cy_mix_parents(mix, cache, i)
else:
r = HASH_BYTES // WORD_BYTES # 16
for j in range(DATASET_PARENTS):
cache_index = ((i ^ j) * FNV_PRIME ^ int(mix[j % r])) & 0xFFFFFFFF
mix *= _FNV_PRIME # in-place: no temp array allocation
mix ^= cache[cache_index % n] # in-place: no temp array allocation
return ethash_sha3_512(mix)


def calc_dataset(full_size, cache) -> List[List[int]]:
o = []
for i in range(full_size // HASH_BYTES):
o.append(calc_dataset_item(cache, i))
return o
def calc_dataset(full_size, cache: np.ndarray) -> np.ndarray:
rows = full_size // HASH_BYTES
out = np.empty((rows, 16), dtype=np.uint32)
for i in range(rows):
out[i] = calc_dataset_item(cache, i)
return out


def hashimoto(
header: bytes,
nonce: bytes,
full_size: int,
dataset_lookup: Callable[[int], List[int]],
dataset_lookup: Callable[[int], np.ndarray],
) -> Dict:
n = full_size // HASH_BYTES
w = MIX_BYTES // WORD_BYTES
mixhashes = MIX_BYTES // HASH_BYTES
# combine header+nonce into a 64 byte seed
s = ethash_sha3_512(header + nonce[::-1])
mix = []
for _ in range(MIX_BYTES // HASH_BYTES):
mix.extend(s)
# mix in random dataset nodes

s = ethash_sha3_512(header + nonce[::-1]) # (16,) uint32
mix = np.tile(s, mixhashes) # (32,) uint32
s0 = int(s[0]) # hoist constant, avoid repeated unboxing
newdata = np.empty(w, dtype=np.uint32) # pre-allocate, reused every iteration

for i in range(ACCESSES):
p = fnv(i ^ s[0], mix[i % w]) % (n // mixhashes) * mixhashes
newdata = []
for j in range(mixhashes):
newdata.extend(dataset_lookup(p + j))
mix = list(map(fnv, mix, newdata))
# compress mix
cmix = []
for i in range(0, len(mix), 4):
cmix.append(fnv(fnv(fnv(mix[i], mix[i + 1]), mix[i + 2]), mix[i + 3]))
p = ((i ^ s0) * FNV_PRIME ^ int(mix[i % w])) & 0xFFFFFFFF
p = p % (n // mixhashes) * mixhashes
for j in range(mixhashes): # avoid np.concatenate alloc+copy
newdata[j * 16:(j + 1) * 16] = dataset_lookup(p + j)
mix *= _FNV_PRIME # in-place: no temp array
mix ^= newdata # in-place: no temp array

mix_r = mix.reshape(-1, 4)
cmix = mix_r[:, 0] * _FNV_PRIME ^ mix_r[:, 1]
cmix = cmix * _FNV_PRIME ^ mix_r[:, 2]
cmix = cmix * _FNV_PRIME ^ mix_r[:, 3]

s_cmix = np.concatenate([s, cmix])
return {
b"mix digest": serialize_hash(cmix),
b"result": serialize_hash(ethash_sha3_256(s + cmix)),
b"mix digest": cmix.tobytes(),
b"result": ethash_sha3_256(s_cmix).tobytes(),
}


def hashimoto_light(
full_size: int, cache: List[List[int]], header: bytes, nonce: bytes
full_size: int, cache: np.ndarray, header: bytes, nonce: bytes
) -> Dict:
return hashimoto(header, nonce, full_size, lambda x: calc_dataset_item(cache, x))


def hashimoto_full(dataset: List[List[int]], header: bytes, nonce: bytes) -> Dict:
def hashimoto_full(dataset: np.ndarray, header: bytes, nonce: bytes) -> Dict:
return hashimoto(header, nonce, len(dataset) * HASH_BYTES, lambda x: dataset[x])
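The diff replaces the old `fnv()`-per-word `map` calls with in-place numpy ops (`mix *= _FNV_PRIME; mix ^= …`), relying on uint32 wraparound instead of an explicit 32-bit mask. A minimal sketch, with names local to this example, showing the two forms agree:

```python
import numpy as np

np.seterr(over="ignore")  # overflow is intentional: FNV works modulo 2**32

FNV_PRIME = 0x01000193
_FNV_PRIME = np.uint32(FNV_PRIME)


def fnv(a: int, b: int) -> int:
    # Scalar ethash FNV as in the pre-diff code: multiply, xor, mask to 32 bits.
    return ((a * FNV_PRIME) ^ b) & 0xFFFFFFFF


a = np.array([0xDEADBEEF, 0x00000001, 0xFFFFFFFF], dtype=np.uint32)
b = np.array([0x12345678, 0x00000002, 0x00000003], dtype=np.uint32)

# Vectorised form: uint32 arithmetic wraps modulo 2**32, so no explicit
# & 0xFFFFFFFF mask is needed.
vec = a * _FNV_PRIME ^ b

assert vec.tolist() == [fnv(int(x), int(y)) for x, y in zip(a, b)]
```

The same wraparound argument is what lets `calc_dataset_item` and `hashimoto` above drop every per-word mask.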
60 changes: 60 additions & 0 deletions ethereum/pow/ethash_cy.pyx
@@ -0,0 +1,60 @@
# cython: language_level=3
# cython: boundscheck=False
# cython: wraparound=False
# cython: cdivision=True
# cython: initializedcheck=False
"""
Cython rewrite of the inner loop of ``calc_dataset_item``.

The Python version spends most of its time in the ``DATASET_PARENTS`` (256)
iteration loop, which is pure 32-bit integer arithmetic and an indexed
row XOR into a 16-word mix. This module exposes ``mix_parents`` which takes
the already-hashed mix (uint32[16]) plus the cache (uint32[:, 16]) and
performs the full parent loop in native code, writing back into ``mix``
in place.

The caller (ethash.py) is still responsible for the two keccak-512 calls
that bracket the loop.
"""

import numpy as np
cimport numpy as cnp
cimport cython
from libc.stdint cimport uint32_t, uint64_t

cnp.import_array()

# Ethash constants, mirrored here so we don't touch Python state in the loop.
cdef uint32_t FNV_PRIME = 0x01000193u
cdef Py_ssize_t DATASET_PARENTS = 256
cdef Py_ssize_t R = 16 # HASH_BYTES // WORD_BYTES


@cython.boundscheck(False)
@cython.wraparound(False)
def mix_parents(uint32_t[::1] mix,
const uint32_t[:, ::1] cache,
uint64_t i):
"""In-place parent mixing for one dataset item.

Parameters
----------
mix : uint32[16] (C-contiguous)
The post-sha3_512 seed mix; updated in place.
cache : uint32[n, 16] (C-contiguous)
The ethash cache.
i : uint64
Dataset item index.
"""
cdef Py_ssize_t n = cache.shape[0]
cdef Py_ssize_t j, k
cdef uint32_t cache_index, mix_word
cdef uint32_t i32 = <uint32_t>i

for j in range(DATASET_PARENTS):
mix_word = mix[j % R]
# 32-bit wraparound is implicit in uint32_t arithmetic
cache_index = ((i32 ^ <uint32_t>j) * FNV_PRIME) ^ mix_word
cache_index = cache_index % <uint32_t>n
for k in range(R):
mix[k] = (mix[k] * FNV_PRIME) ^ cache[cache_index, k]
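For sanity-checking a build of `mix_parents`, a pure-numpy loop with the same structure can serve as a reference. This is a sketch: `mix_parents_py`, `mix_parents_ref`, and the toy cache size are hypothetical, and a real ethash cache has far more rows.

```python
import numpy as np

np.seterr(over="ignore")  # uint32 wraparound is the intended FNV behaviour

FNV_PRIME = np.uint32(0x01000193)
DATASET_PARENTS = 256
R = 16  # HASH_BYTES // WORD_BYTES


def mix_parents_py(mix: np.ndarray, cache: np.ndarray, i: int) -> None:
    """Pure-numpy version of the Cython parent loop; mutates ``mix`` in place."""
    n = cache.shape[0]
    i32 = np.uint32(i & 0xFFFFFFFF)
    for j in range(DATASET_PARENTS):
        # Read the mix word *before* this round's update, as the Cython code does.
        cache_index = int((i32 ^ np.uint32(j)) * FNV_PRIME ^ mix[j % R]) % n
        mix *= FNV_PRIME          # FNV multiply, in place
        mix ^= cache[cache_index]  # FNV xor with the selected cache row


def mix_parents_ref(mix, cache, i):
    """Plain-int reference used only to cross-check the numpy loop."""
    words = [int(x) for x in mix]
    n = len(cache)
    for j in range(DATASET_PARENTS):
        idx = (((i ^ j) * 0x01000193 ^ words[j % R]) & 0xFFFFFFFF) % n
        words = [((w * 0x01000193) ^ int(c)) & 0xFFFFFFFF
                 for w, c in zip(words, cache[idx])]
    return words


rng = np.random.default_rng(0)
cache = rng.integers(0, 2**32, size=(8, R), dtype=np.uint32)  # toy-sized cache
mix = rng.integers(0, 2**32, size=R, dtype=np.uint32)
expected = mix_parents_ref(mix, cache, 5)
mix_parents_py(mix, cache, 5)
assert mix.tolist() == expected
```

Running the compiled `mix_parents` against `mix_parents_py` on the same inputs is a quick way to verify the extension before trusting the ~20x speedup path.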