Pseudo-random number generators (PRNGs), from simple linear congruential generators to the Mersenne Twister behind Python's `random`, are recursive: each output is a deterministic function of the previous state. For an LCG:

x_{n+1} = (a · x_n + c) mod m

This creates:
- Hidden correlations: each number depends on the one before
- Periodicity: sequences eventually repeat
- Exploration boundaries: AI can't truly explore
- False reproducibility: same seed, same path

AI deserves better.
```python
import aleam as al

rng = al.Aleam()
x = rng.random()  # True randomness. No recursion. No state.
```

Aleam implements the equation:

Ψ(t) = BLAKE2s( (Φ × Λ(t)) ⊕ τ(t) )
| Symbol | Meaning |
|---|---|
| Φ | Golden-ratio constant (0x9E3779B97F4A7C15) |
| Λ(t) | 64-bit true entropy from the system CSPRNG |
| τ(t) | Nanosecond timestamp |
| ⊕ | XOR mixing |
| BLAKE2s | Cryptographic hash |
Properties:
| Non-recursive | Stateless | Cryptographically Secure | AI-Optimized |
|---|---|---|---|
| Each call is independent | No seeds, no state | Powered by BLAKE2s | Gradient noise, latent sampling |
| Step | Operation | Description |
|---|---|---|
| 1 | Λ(t) = get_entropy_64() | Pull 64 bits of true entropy from the system |
| 2 | Ω = Φ × Λ(t) | Golden-ratio mixing (bijective, maximally equidistributed) |
| 3 | τ = time.time_ns() | Nanosecond timestamp for uniqueness |
| 4 | Σ = Ω ⊕ τ | XOR mixing over 64 bits |
| 5 | Ψ = BLAKE2s(Σ) | Cryptographic hash to a 64-bit output |
| 6 | r = Ψ / 2⁶⁴ | Map to a float in [0, 1) |
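The six steps map almost line-for-line onto Python's standard library. The sketch below is illustrative only: the byte order and the way the hash is truncated to 64 bits are assumptions, not the library's exact choices (the real implementation is in C++).

```python
import hashlib
import secrets
import time

PHI = 0x9E3779B97F4A7C15   # golden-ratio constant from the symbol table
MASK = (1 << 64) - 1       # keep all arithmetic in 64 bits

def psi() -> float:
    """One pass through the six steps above (illustrative sketch)."""
    lam = secrets.randbits(64)                 # step 1: system CSPRNG entropy
    omega = (PHI * lam) & MASK                 # step 2: golden-ratio mixing
    tau = time.time_ns() & MASK                # step 3: nanosecond timestamp
    sigma = omega ^ tau                        # step 4: XOR mixing
    digest = hashlib.blake2s(sigma.to_bytes(8, "little")).digest()
    r = int.from_bytes(digest[:8], "little")   # step 5: hash, keep 64 bits
    return r / 2**64                           # step 6: map to [0, 1)

values = [psi() for _ in range(5)]
```

Because each call pulls fresh entropy, no state survives between calls; two runs of this sketch share nothing.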
| Generator | Speed (M ops/sec) | Randomness Type |
|---|---|---|
| Python random | 5.94 | Pseudo |
| Aleam CPU | 2.05 | True |
| PyTorch CUDA | 2,650.81 | Pseudo |
| Aleam GPU | 14,434.25 | True |
Tested on NVIDIA Tesla T4 (Google Colab) · CuPy 14.0.1 · Aleam 1.0.3

Key insight: Aleam GPU delivers 14.4 billion true random numbers per second, about 2,430x faster than Python's `random` and 5.4x faster than PyTorch CUDA.
After 2.55 million samples, Aleam passed all 10 rigorous tests:
| Test | Result | Status |
|---|---|---|
| Mean | 0.499578 | ✅ |
| Variance | 0.083154 | ✅ |
| Chi-Square (Uniformity) | 21.40 (critical 30.14) | ✅ PASS |
| Max Autocorrelation | 0.0094 | ✅ EXCELLENT |
| π Estimation Error | 0.0105% | ✅ EXCELLENT |
| Shannon Entropy | 0.9999 | ✅ NEAR-PERFECT |
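The π-estimation test is a standard Monte Carlo check: for a uniform generator on [0, 1), the fraction of random points landing inside the unit quarter-circle converges to π/4. A self-contained version, using Python's `random` as a stand-in since both it and an Aleam instance expose `random()`:

```python
import random

def estimate_pi(rng, n: int) -> float:
    """Monte Carlo pi: fraction of points in the unit quarter-circle, times 4."""
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * inside / n

rng = random.Random(42)          # any object with .random() works here
pi_hat = estimate_pi(rng, 200_000)
```

With 200,000 samples the standard error is roughly 0.004, so estimates land near 3.14.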
"True randomness is not a bug; it's a feature."
```bash
pip install aleam
```

```python
import aleam as al

# Create a true random generator
rng = al.Aleam()

# Core randomness
x = rng.random()                       # 0.90324326
u64 = rng.random_uint64()              # 12345678901234567890
y = rng.randint(1, 100)                # 86
z = rng.choice(['AI', 'ML', 'Aleam'])  # 'ML'
u = rng.uniform(5.0, 10.0)             # 7.234
n = rng.gauss(0.0, 1.0)                # -0.432

# Sampling (requires a list, not a range)
population = list(range(10000))
batch = rng.sample(population, 64)     # 64 unique random elements

# Shuffle a list in place
items = [1, 2, 3, 4, 5]
rng.shuffle(items)                     # e.g. [3, 1, 5, 2, 4]

# Random bytes (returns a list of integers)
key = rng.random_bytes(32)             # 32 random bytes as a list
```

| Method | Description | Example |
|---|---|---|
| `random()` | True random float in [0, 1) | `rng.random()` |
| `random_uint64()` | True random 64-bit integer | `rng.random_uint64()` |
| `randint(a, b)` | Random integer in [a, b] | `rng.randint(1, 100)` |
| `choice(seq)` | Random element from a sequence | `rng.choice(['a', 'b', 'c'])` |
| `shuffle(lst)` | Shuffle a list in place | `rng.shuffle(my_list)` |
| `sample(pop, k)` | Sample k unique elements | `rng.sample(list(range(100)), 10)` |
| `random_bytes(n)` | Generate n random bytes (as a list) | `rng.random_bytes(32)` |
All distributions are available as methods on the Aleam instance:
| Distribution | Method | Example |
|---|---|---|
| Uniform | `uniform(low, high)` | `rng.uniform(5, 10)` |
| Normal (Gaussian) | `gauss(mu, sigma)` | `rng.gauss(0, 1)` |
| Exponential | `exponential(rate)` | `rng.exponential(1.0)` |
| Beta | `beta(alpha, beta)` | `rng.beta(2, 5)` |
| Gamma | `gamma(shape, scale)` | `rng.gamma(2, 1)` |
| Poisson | `poisson(lam)` | `rng.poisson(3.5)` |
| Laplace | `laplace(loc, scale)` | `rng.laplace(0, 1)` |
| Logistic | `logistic(loc, scale)` | `rng.logistic(0, 1)` |
| Log-Normal | `lognormal(mu, sigma)` | `rng.lognormal(0, 1)` |
| Weibull | `weibull(shape, scale)` | `rng.weibull(1.5, 1)` |
| Pareto | `pareto(alpha, scale)` | `rng.pareto(2, 1)` |
| Chi-square | `chi_square(df)` | `rng.chi_square(5)` |
| Student's t | `student_t(df)` | `rng.student_t(3)` |
| F-distribution | `f_distribution(df1, df2)` | `rng.f_distribution(5, 10)` |
| Dirichlet | `dirichlet(alpha)` | `rng.dirichlet([1, 2, 3])` |
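Most of these methods follow the naming of Python's `random` module, so a drop-in sanity check is easy to write. A sketch using `random.Random` as a stand-in for `al.Aleam()` (both expose `gauss(mu, sigma)`):

```python
import random
import statistics

rng = random.Random(12345)   # stand-in; swap in al.Aleam() for true randomness

# Draw standard normals and confirm the sample moments are near (0, 1)
samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
mean = statistics.fmean(samples)
stdev = statistics.stdev(samples)
```

With 50,000 draws the sample mean should sit within a few thousandths of 0 and the sample standard deviation near 1.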
| Class | Methods | Use Case |
|---|---|---|
| `AIRandom` | `gradient_noise()`, `latent_vector()`, `dropout_mask()`, `augmentation_params()`, `mini_batch()`, `exploration_noise()` | Training, augmentation, RL exploration |
| `GradientNoise` | `add_noise()`, `reset()`, `current_scale()` | Gradient noise injection with decay |
| `LatentSampler` | `sample()`, `sample_one()`, `interpolate()` | Latent space sampling for VAEs/GANs |
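The `LatentSampler` pattern is easy to picture: draw latent vectors from N(0, I) and blend between them. The stdlib sketch below mirrors the method names in the table but is a re-implementation, not the library's code, and it assumes `interpolate` is linear:

```python
import random

def sample_one(rng, dim: int) -> list:
    """A latent vector with i.i.d. N(0, 1) components."""
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def interpolate(z1: list, z2: list, t: float) -> list:
    """Pointwise linear blend between two latent vectors."""
    return [(1.0 - t) * a + t * b for a, b in zip(z1, z2)]

rng = random.Random(7)
z1 = sample_one(rng, 8)
z2 = sample_one(rng, 8)
mid = interpolate(z1, z2, 0.5)   # halfway point in latent space
```

Interpolating between latent points like this is the usual way to explore what a VAE or GAN decoder has learned between two samples.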
Module-level functions that return NumPy arrays directly:
| Function | Description | Example |
|---|---|---|
| `random_array(shape)` | Uniform random array | `al.random_array((100, 100))` |
| `randn_array(shape, mu, sigma)` | Normal random array | `al.randn_array((1000,), 0, 1)` |
| `randint_array(shape, low, high)` | Integer random array | `al.randint_array((50,), 0, 10)` |
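For intuition, `random_array` amounts to filling a shape with independent uniforms. A stdlib stand-in that returns nested lists instead of a NumPy array (the function name matches the table, but this is an illustration, not the library's implementation):

```python
import random

def random_array(shape: tuple, rng=random) -> list:
    """Nested lists of uniform [0, 1) floats matching the requested shape."""
    if len(shape) == 1:
        return [rng.random() for _ in range(shape[0])]
    return [random_array(shape[1:], rng) for _ in range(shape[0])]

arr = random_array((3, 4))   # 3 rows of 4 uniforms each
```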
Aleam provides true randomness to ML frameworks by supplying them with true random seeds.

**PyTorch**

```python
import torch
import aleam as al

# Get a true random seed from Aleam
rng = al.Aleam()
seed = rng.random_uint64()

# Seed PyTorch
torch.manual_seed(seed)

# Generate tensors on the GPU
tensor = torch.randn(100, 100, device='cuda')
```

**TensorFlow**

```python
import tensorflow as tf
import aleam as al

# Get a true random seed from Aleam
rng = al.Aleam()
seed = rng.random_uint64()

# Seed TensorFlow
tf.random.set_seed(seed)

# Generate tensors
tensor = tf.random.normal((100, 100))
```

**JAX**

```python
import jax
import aleam as al

# Get a true random seed from Aleam
rng = al.Aleam()
seed = rng.random_uint64()

# Create a JAX key
key = jax.random.key(seed)

# Generate tensors
tensor = jax.random.normal(key, (100, 100))
```

**CuPy**

```python
import cupy as cp
import aleam as al

# Get a true random seed from Aleam
rng = al.Aleam()
seed = rng.random_uint64()

# Seed CuPy
cp.random.seed(seed)

# Generate 100 million random numbers on the GPU
arr = cp.random.randn(10000, 10000)  # 14.4B ops/sec!
```

Install from PyPI:

```bash
pip install aleam
```

Optional framework extras:

```bash
# PyTorch
pip install aleam[torch]

# TensorFlow
pip install aleam[tensorflow]

# CuPy (for GPU acceleration)
pip install aleam[cupy]

# All frameworks
pip install aleam[all]
```

Or install from source:

```bash
git clone https://github.com/fardinsabid/aleam.git
cd aleam
pip install .
```

Project layout:

```
aleam/
│
├── .github/
│   └── workflows/
│       ├── tests.yml
│       ├── publish.yml
│       ├── security.yml
│       └── docs.yml
│
├── aleam/
│   ├── __init__.py
│   └── py.typed
│
├── src/
│   └── aleam/
│       ├── bindings/
│       │   ├── module.cpp
│       │   └── exports.h
│       ├── core/
│       │   ├── aleam_core.h
│       │   ├── aleam_core.cpp
│       │   ├── constants.h
│       │   └── utils.h
│       ├── entropy/
│       │   ├── entropy.h
│       │   ├── entropy_linux.h
│       │   ├── entropy_windows.h
│       │   └── entropy_darwin.h
│       ├── hash/
│       │   ├── blake2s.h
│       │   └── blake2s_config.h
│       ├── distributions/
│       │   ├── distributions.h
│       │   ├── distributions.cpp
│       │   ├── normal.h
│       │   ├── exponential.h
│       │   ├── beta.h
│       │   ├── gamma.h
│       │   ├── poisson.h
│       │   ├── laplace.h
│       │   ├── logistic.h
│       │   ├── lognormal.h
│       │   ├── weibull.h
│       │   ├── pareto.h
│       │   ├── chi_square.h
│       │   ├── student_t.h
│       │   ├── f_distribution.h
│       │   └── dirichlet.h
│       ├── arrays/
│       │   ├── arrays.h
│       │   ├── arrays.cpp
│       │   └── array_utils.h
│       ├── ai/
│       │   ├── ai.h
│       │   ├── ai.cpp
│       │   ├── gradient_noise.h
│       │   ├── latent_sampler.h
│       │   └── augmentation.h
│       ├── integrations/
│       │   ├── integrations.h
│       │   ├── integrations.cpp
│       │   ├── torch_integration.h
│       │   ├── torch_integration.cpp
│       │   ├── tensorflow_integration.h
│       │   ├── tensorflow_integration.cpp
│       │   ├── jax_integration.h
│       │   ├── jax_integration.cpp
│       │   ├── cupy_integration.h
│       │   ├── cupy_integration.cpp
│       │   ├── pandas_integration.h
│       │   ├── pandas_integration.cpp
│       │   ├── polars_integration.h
│       │   ├── polars_integration.cpp
│       │   ├── xarray_integration.h
│       │   ├── xarray_integration.cpp
│       │   ├── pymc_integration.h
│       │   ├── pymc_integration.cpp
│       │   ├── dask_integration.h
│       │   └── dask_integration.cpp
│       └── cuda/
│           ├── cuda_kernels.h
│           ├── cuda_kernels.cu
│           ├── cuda_uniform.cu
│           ├── cuda_normal.cu
│           └── cuda_utils.h
│
├── include/
│   └── aleam/
│       └── aleam.h
│
├── tests/
│   ├── test_core.py
│   ├── test_ai.py
│   └── test_statistical.py
│
├── benchmarks/
│   └── benchmark_core.py
│
├── assets/
│   └── images/
│       ├── benchmarks/
│       │   └── cpu_vs_gpu.png
│       └── diagrams/
│           └── algorithm.png
│
├── examples/
│   ├── basic_usage.py
│   ├── ai_ml_features.py
│   ├── array_operations.py
│   ├── distributions.py
│   ├── monte_carlo_pi.py
│   ├── reinforcement_learning.py
│   ├── cuda_integration.py
│   ├── pytorch_integration.py
│   └── tensorflow_integration.py
│
├── docs/
│   ├── ALEAM_RESEARCH_PAPER.md
│   ├── CHANGELOG.md
│   ├── index.md
│   ├── INSTALLATION.md
│   └── ROADMAP.md
│
├── setup.py
├── pyproject.toml
├── MANIFEST.in
├── requirements.txt
├── requirements-dev.txt
├── LICENSE
├── README.md
├── SECURITY.md
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
└── .gitignore
```
**Q: Why is the CPU generator slower than Python's `random`?**
A: True randomness is slower than pseudo-randomness; that's expected. You're trading speed for genuine entropy. On GPU, Aleam achieves 14.4B ops/sec, far exceeding CPU pseudo-random speeds.

**Q: Can I seed Aleam for reproducibility?**
A: No. Aleam is stateless by design. Use Python's `random` module if you need reproducibility.

**Q: Is every output backed by real entropy?**
A: Yes. Each call consumes 64 bits of true entropy and passes through BLAKE2s.

**Q: Can I generate true random numbers on the GPU?**
A: Yes! Use CuPy with true random seeds from Aleam:

```python
import cupy as cp
import aleam as al

seed = al.Aleam().random_uint64()
cp.random.seed(seed)
arr = cp.random.randn(10000, 10000)  # 14.4B ops/sec
```

**Q: Why does `sample()` require a list instead of a range?**
A: The C++ bindings accept Python lists directly. Use `list(range(10000))` instead of `range(10000)`.

**Q: What does `random_bytes()` return?**
A: It returns a Python list of integers (0-255), not a `bytes` object.
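Since `random_bytes()` yields a list of ints, converting it to a real `bytes` object is a single call. The literal list below stands in for an actual `rng.random_bytes(4)` result:

```python
# Hypothetical output of rng.random_bytes(4): a list of ints in 0-255
byte_list = [137, 4, 255, 0]

key = bytes(byte_list)   # proper bytes object for APIs that require one
hex_key = key.hex()      # '8904ff00'
```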
- ✅ Use for AI research, exploration, and creative projects
- ✅ Use for scientific simulations requiring true randomness
- ✅ Use for cryptographic applications
- ❌ Do not use for security-critical systems without additional entropy sources
MIT License; see LICENSE for details.
| Resource | Link |
|---|---|
| PyPI | pypi.org/project/aleam |
| Issues | GitHub Issues |
| Documentation | GitHub Docs |
| Research Paper | ALEAM_RESEARCH_PAPER.md |