This project implements a Feed-Forward Neural Network (FFNN) using Erlang/OTP's actor model. Each component of the neural network (neurons, sensors, actuators, and cortex) is implemented as a separate process, allowing for concurrent execution and message passing.
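As a minimal illustration of this pattern (the module and message names below are hypothetical, not code from this project), every component boils down to a spawned process that services messages in a receive loop:

```erlang
%% Hypothetical sketch of the actor pattern used by every component.
-module(actor_sketch).
-export([start/0]).

start() ->
    spawn(fun loop/0).

loop() ->
    receive
        {From, ping} ->
            From ! {self(), pong},  % reply purely via message passing
            loop();
        terminate ->
            ok                      % graceful shutdown
    end.
```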
The system consists of several key modules:
- `exoself.erl` - Orchestrates the network creation and lifecycle
- `constructor.erl` - Generates the network structure
- `sensor.erl` - Handles input generation
- `neuron.erl` - Implements neuron behavior
- `actuator.erl` - Manages output processing
- `cortex.erl` - Orchestrates the network operation
- `records.hrl` - Defines data structures
```mermaid
graph TD
    A[ExoSelf] -->|Creates| B[Cortex]
    B -->|Manages| C[Sensors]
    B -->|Manages| D[Neurons]
    B -->|Manages| E[Actuators]
    B -->|Manages| F[Monitor]
```
```mermaid
sequenceDiagram
    participant E as ExoSelf
    participant C as Cortex
    participant N as Neuron
    participant M as Monitor
    E->>C: {ExoSelf, start}
    C->>M: spawn(monitor)
    Note over M: Monitor Started
    C->>N: {sync}
    N->>M: {neuron_update, Id, Activation}
    Note over M: Display Update
    C->>M: {terminate}
    Note over M: Monitor Stopped
```
```erlang
-record(sensor, {id, cortex_id, name, vector_length, fanout_ids}).
-record(actuator, {id, cortex_id, name, vector_length, fanin_ids}).
-record(neuron, {id, cortex_id, activation_function, input_ids, output_ids}).
-record(cortex, {id, sensor_ids, actuator_ids, neuron_ids}).
```
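With `records.hrl` included, a hand-written genotype for a one-sensor, one-neuron, one-actuator network might look like the following (illustrative only; real genotypes are produced by `constructor:construct_genotype/4`, and the actual id format may differ):

```erlang
%% Hypothetical genotype: ids and field values are made up for illustration.
[#cortex{id = cortex, sensor_ids = [s1], actuator_ids = [a1], neuron_ids = [n1]},
 #sensor{id = s1, cortex_id = cortex, name = rng, vector_length = 2, fanout_ids = [n1]},
 #neuron{id = n1, cortex_id = cortex, activation_function = tanh,
         input_ids = [s1], output_ids = [a1]},
 #actuator{id = a1, cortex_id = cortex, name = pts, vector_length = 1, fanin_ids = [n1]}].
```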
The dot product is used in neurons to compute the weighted sum of inputs:
```erlang
dot([I | Input], [W | Weights], Acc) ->
    dot(Input, Weights, I * W + Acc);
dot([], [], Acc) ->
    Acc.
```
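For example, `dot([1, 2, 3], [4, 5, 6], 0)` evaluates to `1*4 + 2*5 + 3*6 = 32` (the third argument seeds the accumulator).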
This operation:
- Multiplies each input by its corresponding weight
- Sums all products
- Determines neuron activation strength
The network uses hyperbolic tangent (tanh) as its activation function:
```erlang
tanh(Val) ->
    math:tanh(Val).
```
Key properties:
- Bounds output between -1 and 1
- Non-linear transformation
- Smooth gradient
- Zero-centered output
Benefits for neural networks:
- Prevents numerical overflow
- Allows for negative outputs
- Strong gradients near zero
- Smooth activation curves
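The bounding is easy to verify in the Erlang shell; large-magnitude inputs saturate to exactly ±1.0 in double precision:

```erlang
1> math:tanh(0).
0.0
2> math:tanh(100).
1.0
3> math:tanh(-100).
-1.0
```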
The ExoSelf is responsible for:
- Reading network configuration (genotype; see the sketch after this list)
- Spawning all neural processes
- Establishing connections
- Managing network lifecycle
- Saving updated weights
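A sketch of the reading step, assuming the genotype file holds plain dot-terminated Erlang terms (the storage format is an assumption, as is the convention that the cortex record comes first):

```erlang
%% Assumed format: one record term per line, each ending in a dot.
{ok, Genotype} = file:consult("ffnn.erl"),
[CortexRecord | _Rest] = Genotype.
```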
The Cortex orchestrates:
- Network synchronization
- Information flow
- Process termination
- Weight updates
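A minimal sketch of such a synchronization loop (illustrative; the message names and termination logic in `cortex.erl` may differ):

```erlang
%% Drive the sensors for N sense-think-act cycles, then shut down.
loop(SensorPids, 0) ->
    [Pid ! {self(), terminate} || Pid <- SensorPids],
    ok;
loop(SensorPids, StepsLeft) ->
    [Pid ! {self(), sync} || Pid <- SensorPids],
    receive
        {_ActuatorPid, sync} ->  % the actuator reports the cycle finished
            loop(SensorPids, StepsLeft - 1)
    end.
```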
Neurons handle:
- Input processing
- Weight application
- Activation function
- Output distribution
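Putting the pieces together, a neuron's forward pass can be sketched like this (a hypothetical shape reusing `dot/3` from above; the real `neuron.erl` differs in detail):

```erlang
%% Apply weights to the collected inputs, squash with tanh,
%% then distribute the result to every fan-out process.
forward(Inputs, Weights, FanoutPids) ->
    Output = math:tanh(dot(Inputs, Weights, 0)),
    [Pid ! {self(), forward, [Output]} || Pid <- FanoutPids],
    Output.
```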
- Ensure Erlang/OTP 26 or later is installed
- Compile all modules:
```erlang
c(exoself).
c(constructor).
c(sensor).
c(neuron).
c(actuator).
c(cortex).
```
```erlang
constructor:construct_genotype("ffnn.erl", rng, pts, [1,3]).
```
Parameters:
- `"ffnn.erl"` - Output file name
- `rng` - Sensor type (random number generator)
- `pts` - Actuator type (prints to screen)
- `[1,3]` - Hidden layer configuration (1 neuron in the first hidden layer, 3 in the second)
```erlang
exoself:map("ffnn.erl").
```
1. Initialization:
   - ExoSelf reads genotype
   - Spawns all processes
   - Establishes connections
2. Operation:
   - Sensor generates input
   - Neurons process data
   - Actuator presents output
   - Cortex synchronizes steps
3. Learning:
   - Weights are updated
   - Network state is saved
   - Genotype is modified
- Each neural component is a separate Erlang process
- Communication via message passing
- Supervised by Cortex process
- Coordinated by ExoSelf
- Network configuration stored in files
- Weight updates saved automatically
- Restartable from saved state
- Process monitoring (see the sketch below)
- Graceful termination
- State preservation
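Process monitoring in Erlang looks like this (a generic sketch, not the project's exact supervision code; `NeuronPid` stands for any watched process):

```erlang
%% Watch a component process; a 'DOWN' message arrives if it dies.
Ref = erlang:monitor(process, NeuronPid),
receive
    {'DOWN', Ref, process, NeuronPid, Reason} ->
        io:format("neuron ~p exited: ~p~n", [NeuronPid, Reason])
end.
```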
For learning more about Erlang/OTP and neural networks:
- Learn You Some Erlang - Excellent Erlang tutorial
- Erlang Documentation - Official documentation
- Making reliable distributed systems in the presence of software errors - Joe Armstrong's thesis on Erlang
```erlang
Eshell V14.1.1
1> c(constructor).
{ok,constructor}
2> constructor:construct_genotype("ffnn.erl",rng,pts,[1,3]).
ok
3> exoself:map("ffnn.erl").
<0.123.0>
```
This will:
- Generate a neural network configuration
- Save it to `ffnn.erl`
- Create and start a network with:
  - Random number generator input
  - Two hidden layers (1 and 3 neurons)
  - Print-to-screen output
- Uses Erlang's actor model for concurrent processing
- Each neuron runs as a separate process
- Communication happens via message passing
- Uses hyperbolic tangent (tanh) as activation function
- Supports dynamic network topology
- Persistent state through file storage
- Coordinated by ExoSelf process