Commit 99fcbca

Update README.md

1 parent d7ff730 commit 99fcbca

1 file changed: README.md (95 additions, 1 deletion)

The single deleted line is the previous one-line README, `# ParticleSwarmOptimization`; the new contents of README.md follow.

# MLJParticleSwarmOptimization

Particle swarm optimization for hyperparameter tuning in [MLJ](https://github.com/alan-turing-institute/MLJ.jl).

[![Build Status](https://github.com/JuliaAI/MLJParticleSwarmOptimization.jl/workflows/CI/badge.svg?branch=master)](https://github.com/JuliaAI/MLJParticleSwarmOptimization.jl/actions)
[![codecov](https://codecov.io/gh/JuliaAI/MLJParticleSwarmOptimization.jl/branch/master/graph/badge.svg?token=W71AMGZ4IW)](https://codecov.io/gh/JuliaAI/MLJParticleSwarmOptimization.jl)

[MLJParticleSwarmOptimization](https://github.com/JuliaAI/MLJParticleSwarmOptimization.jl/) offers a suite of particle swarm algorithms, extending [MLJTuning](https://github.com/JuliaAI/MLJTuning.jl)'s existing collection of tuning strategies. Currently supported variants and planned releases include:

- [x] `ParticleSwarm`: the original algorithm as conceived by Kennedy and Eberhart [1] (a minimal sketch of the core update rule follows this list)
- [x] `AdaptiveParticleSwarm`: Zhan et al.'s variant with adaptive control of swarm coefficients [2]
- [ ] `OMOPSO`: Sierra and Coello's multi-objective particle swarm variant [3]
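
For readers new to particle swarm optimization, here is a minimal, self-contained sketch of the canonical velocity and position update from Kennedy and Eberhart [1]. It is purely illustrative and is not this package's implementation; the function name, the coefficients `w`, `c1`, `c2`, and the toy sphere objective are arbitrary choices for the example.

```julia
# Minimal PSO sketch (illustrative only; not MLJParticleSwarmOptimization's code).
using Random

function pso_sketch(f, dim; n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5,
                    rng=Random.default_rng())
    x = rand(rng, dim, n_particles) .* 2 .- 1        # positions in [-1, 1]
    v = zeros(dim, n_particles)                      # velocities
    pbest = copy(x)                                  # personal best positions
    pbest_f = [f(x[:, i]) for i in 1:n_particles]    # personal best scores
    gbest = pbest[:, argmin(pbest_f)]                # swarm's best position
    gbest_f = minimum(pbest_f)
    for _ in 1:iters, i in 1:n_particles
        r1, r2 = rand(rng, dim), rand(rng, dim)
        # velocity update: inertia + cognitive pull + social pull
        v[:, i] .= w .* v[:, i] .+
                   c1 .* r1 .* (pbest[:, i] .- x[:, i]) .+
                   c2 .* r2 .* (gbest .- x[:, i])
        x[:, i] .+= v[:, i]                          # position update
        fi = f(x[:, i])
        if fi < pbest_f[i]                           # update personal best
            pbest[:, i] .= x[:, i]
            pbest_f[i] = fi
            if fi < gbest_f                          # update swarm best
                gbest, gbest_f = copy(x[:, i]), fi
            end
        end
    end
    return gbest, gbest_f
end

pso_sketch(x -> sum(abs2, x), 3)   # minimise a toy sphere function
```

Each particle is pulled toward its own best position (the cognitive term) and toward the swarm's best position (the social term), with `w` damping its previous velocity.
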
## Installation

This package is registered and can be installed via the Julia REPL:

```julia
julia> ]add MLJParticleSwarmOptimization
```

## Discrete Hyperparameter Handling

Most particle swarm algorithms are designed for problems over continuous domains. To support [MLJ](https://github.com/alan-turing-institute/MLJ.jl)'s integer `NumericRange` and `NominalRange` hyperparameters, we encode discrete hyperparameters with an internal continuous representation, as proposed by Strasser et al. [4]. See the tuning strategies' documentation and the referenced paper for more details.
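
As a rough illustration of the idea (the names and the decoding rule below are hypothetical, not the package's internal representation), a nominal hyperparameter can be carried by a particle as one continuous weight per candidate value, with a concrete value recovered by sampling from the normalised weights:

```julia
# Toy sketch of a continuous encoding for a nominal hyperparameter, loosely in the
# spirit of Strasser et al. [4]. Illustrative only; not the package's internals.
using Random

rng = Random.MersenneTwister(0)
candidates = [:gini, :entropy, :log_loss]       # hypothetical NominalRange values
weights = rand(rng, length(candidates))         # the particle's continuous coordinates

probs = weights ./ sum(weights)                 # normalise weights to probabilities
r = rand(rng)
chosen = candidates[findfirst(>=(r), cumsum(probs))]   # decode to a concrete value
```

The swarm then updates the weights exactly as it would any continuous coordinate.
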
## Examples

The example below builds a tree ensemble on synthetic regression data and defines two hyperparameter ranges to tune, `atom.n_subfeatures` and `bagging_fraction`:

```julia
julia> using MLJ, MLJDecisionTreeInterface, MLJParticleSwarmOptimization, Plots, StableRNGs

julia> rng = StableRNG(1234);

julia> X = MLJ.table(rand(rng, 100, 10));         # 100 observations, 10 features

julia> y = 2X.x1 - X.x2 + 0.05*rand(rng, 100);    # noisy linear target

julia> Tree = @load DecisionTreeRegressor pkg=DecisionTree verbosity=0;

julia> tree = Tree();

julia> forest = EnsembleModel(atom=tree);         # bagged ensemble of decision trees

julia> r1 = range(forest, :(atom.n_subfeatures), lower=1, upper=9);

julia> r2 = range(forest, :bagging_fraction, lower=0.4, upper=1.0);
```

### `ParticleSwarm`

```julia
julia> self_tuning_forest = TunedModel(
           model=forest,
           tuning=ParticleSwarm(rng=StableRNG(0)),
           resampling=CV(nfolds=6, rng=StableRNG(1)),
           range=[r1, r2],
           measure=rms,
           n=15
       );

julia> mach = machine(self_tuning_forest, X, y);

julia> fit!(mach, verbosity=0);

julia> plot(mach)
```

![basic](https://github.com/JuliaAI/MLJParticleSwarmOptimization.jl/blob/assets/basic.svg)
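
As with any `TunedModel`, the outcome of the search can then be inspected with MLJ's usual accessors (a brief aside; the printed output depends on the data and RNG):

```julia
julia> fitted_params(mach).best_model    # ensemble with the best hyperparameters found

julia> report(mach)                      # full tuning history and plotting data
```
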
### `AdaptiveParticleSwarm`

```julia
julia> self_tuning_forest = TunedModel(
           model=forest,
           tuning=AdaptiveParticleSwarm(rng=StableRNG(0)),
           resampling=CV(nfolds=6, rng=StableRNG(1)),
           range=[r1, r2],
           measure=rms,
           n=15
       );

julia> mach = machine(self_tuning_forest, X, y);

julia> fit!(mach, verbosity=0);

julia> plot(mach)
```

![adaptive](https://github.com/JuliaAI/MLJParticleSwarmOptimization.jl/blob/assets/adaptive.svg)

## References

[1] [Kennedy, J., & Eberhart, R. (1995, November). Particle swarm optimization. In Proceedings of ICNN'95-International Conference on Neural Networks (Vol. 4, pp. 1942-1948). IEEE.](https://ieeexplore.ieee.org/abstract/document/488968/)

[2] [Zhan, Z. H., Zhang, J., Li, Y., & Chung, H. S. H. (2009). Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(6), 1362-1381.](https://ieeexplore.ieee.org/abstract/document/4812104/)

[3] [Sierra, M. R., & Coello, C. A. C. (2005, March). Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In International Conference on Evolutionary Multi-Criterion Optimization (pp. 505-519). Springer, Berlin, Heidelberg.](https://link.springer.com/chapter/10.1007/978-3-540-31880-4_35)

[4] [Strasser, S., Goodman, R., Sheppard, J., & Butcher, S. (2016, July). A new discrete particle swarm optimization algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference 2016 (pp. 53-60).](https://dl.acm.org/doi/abs/10.1145/2908812.2908935)
