Commit

Remove unused files (#1349)
* Remove unused files

Signed-off-by: Chaurasiya, Payal <[email protected]>

* Revert adaptive aggregation removal

Signed-off-by: Chaurasiya, Payal <[email protected]>

* Update overriding_agg_fn.rst

Signed-off-by: Chaurasiya, Payal <[email protected]>

---------

Signed-off-by: Chaurasiya, Payal <[email protected]>
payalcha authored Feb 7, 2025
1 parent dec7cc5 commit 945877f
Showing 21 changed files with 4 additions and 1,511 deletions.
115 changes: 0 additions & 115 deletions docs/developer_guide/advanced_topics/overriding_agg_fn.rst
@@ -26,121 +26,6 @@ Choose from the following predefined aggregation functions:
- ``openfl.interface.aggregation_functions.YogiAdaptiveAggregation``


.. _adaptive_aggregation_functions:

Adaptive Aggregation Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. note::
    To create adaptive aggregation functions,
    the user must specify parameters for the aggregation optimizer
    (``NumPyAdagrad``, ``NumPyAdam``, or ``NumPyYogi``) that will aggregate
    the global model. These parameters are passed via **keywords**.

    The user must also pass one of the following arguments: ``params``,
    the model parameters (a dictionary of named model parameters
    in the form of NumPy arrays), or ``model_interface``,
    an instance of the `ModelInterface <https://github.com/intel/openfl/blob/develop/openfl/interface/interactive_api/experiment.py>`_ class.
    If both ``params`` and ``model_interface`` are passed,
    the optimizer parameters are initialized via
    ``params`` and the ``model_interface`` argument is ignored.

See the `AdagradAdaptiveAggregation
<https://github.com/intel/openfl/blob/develop/openfl/interface/aggregation_functions/adagrad_adaptive_aggregation.py>`_
definition for details.

See the original `Adaptive federated optimization <https://arxiv.org/pdf/2003.00295.pdf>`_ paper for background.

``AdagradAdaptiveAggregation`` usage example:

.. code-block:: python

    from openfl.interface.interactive_api.experiment import TaskInterface, ModelInterface
    from openfl.interface.aggregation_functions import AdagradAdaptiveAggregation

    TI = TaskInterface()
    MI = ModelInterface(model=model,
                        optimizer=optimizer,
                        framework_plugin=framework_adapter)
    ...

    # Creating aggregation function
    agg_fn = AdagradAdaptiveAggregation(model_interface=MI,
                                        learning_rate=0.4)

    # Define training task
    @TI.register_fl_task(model='model', data_loader='train_loader', \
                         device='device', optimizer='optimizer')
    @TI.set_aggregation_function(agg_fn)
    def train(...):
        ...
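
If you prefer to initialize the optimizer from raw parameters instead of a ``ModelInterface``, pass ``params`` (a minimal sketch; the parameter names and shapes below are purely illustrative):

.. code-block:: python

    import numpy as np

    from openfl.interface.aggregation_functions import AdagradAdaptiveAggregation

    # Illustrative named model parameters as NumPy arrays
    params = {
        'fc.weight': np.zeros((64, 32), dtype=np.float32),
        'fc.bias': np.zeros(64, dtype=np.float32),
    }

    # When ``params`` is given, ``model_interface`` is ignored
    agg_fn = AdagradAdaptiveAggregation(params=params,
                                        learning_rate=0.4)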

You can define your own NumPy-based optimizer,
which will be used for global model aggregation:

.. code-block:: python

    from typing import Any, Dict, Optional

    import numpy as np

    from openfl.utilities.optimizers.numpy.base_optimizer import Optimizer


    class MyOpt(Optimizer):
        """My optimizer implementation."""

        def __init__(
            self,
            *,
            params: Optional[Dict[str, np.ndarray]] = None,
            model_interface=None,
            learning_rate: float = 0.001,
            param1: Any = None,
            param2: Any = None
        ) -> None:
            """Initialize.

            Args:
                params: Parameters to be stored for optimization.
                model_interface: Model interface instance to provide parameters.
                learning_rate: Tuning parameter that determines
                    the step size at each iteration.
                param1: My own defined parameter.
                param2: My own defined parameter.
            """
            super().__init__()
            pass  # Your code here!

        def step(self, gradients: Dict[str, np.ndarray]) -> None:
            """Perform a single step for parameter update.

            Implement your own optimizer weights update rule.

            Args:
                gradients: Partial derivatives with respect to optimized parameters.
            """
            pass  # Your code here!

    ...

    from openfl.interface.aggregation_functions import WeightedAverage
    from openfl.interface.aggregation_functions.core import AdaptiveAggregation

    # Creating your implemented optimizer instance based on numpy:
    my_own_optimizer = MyOpt(model_interface=MI, learning_rate=0.01)

    # Creating aggregation function
    agg_fn = AdaptiveAggregation(optimizer=my_own_optimizer,
                                 agg_func=WeightedAverage())
    # WeightedAverage() is used for aggregating parameters
    # that are not inside the given optimizer.

    # Define training task
    @TI.register_fl_task(model='model', data_loader='train_loader', \
                         device='device', optimizer='optimizer')
    @TI.set_aggregation_function(agg_fn)
    def train(...):
        ...

.. note::
    If you are unsure how to write your own NumPy-based optimizer, see the `NumPyAdagrad <https://github.com/intel/openfl/blob/develop/openfl/utilities/optimizers/numpy/adagrad_optimizer.py>`_ and
    `AdaptiveAggregation <https://github.com/intel/openfl/blob/develop/openfl/interface/aggregation_functions/core/adaptive_aggregation.py>`_ definitions for details.
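
As a concrete illustration, a plain gradient-descent rule could fill in the ``step`` template above. This is a minimal sketch: it assumes the optimizer keeps its state in ``self.params`` and ``self.learning_rate``, which are illustrative names rather than a guaranteed base-class contract.

.. code-block:: python

    def step(self, gradients: Dict[str, np.ndarray]) -> None:
        """Vanilla gradient descent: p <- p - lr * g."""
        for name, grad in gradients.items():
            # Only update parameters tracked by this optimizer
            if name in self.params:
                self.params[name] = self.params[name] - self.learning_rate * grad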

Custom Aggregation Functions
----------------------------

54 changes: 1 addition & 53 deletions docs/developer_guide/structure/plugins.rst
@@ -46,56 +46,4 @@ implement the :code:`serialization_setup` method to prepare the model object for

.. code-block:: python

    def serialization_setup():
.. _serializer_plugin:

Experiment Serializer
######################

The Serializer plugin is used on the frontend Python API to serialize the Experiment components and then on Envoys to deserialize them.
Currently, the default serializer plugin is based on pickling. It is a **required** plugin.

The serializer plugin must implement the :code:`serialize` method that creates a Python object representation on disk.

.. code-block:: python

    @staticmethod
    def serialize(object_, filename: str) -> None:

The plugin must also implement the :code:`restore_object` method that will load a previously serialized object from disk.

.. code-block:: python

    @staticmethod
    def restore_object(filename: str):
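
Put together, a minimal pickle-based serializer satisfying this interface might look like the following (an illustrative sketch, not the shipped plugin):

.. code-block:: python

    import pickle


    class PickleSerializer:
        """Illustrative serializer plugin based on pickling."""

        @staticmethod
        def serialize(object_, filename: str) -> None:
            """Write a pickled representation of the object to disk."""
            with open(filename, 'wb') as f:
                pickle.dump(object_, f)

        @staticmethod
        def restore_object(filename: str):
            """Load a previously serialized object from disk."""
            with open(filename, 'rb') as f:
                return pickle.load(f)
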
.. _device_monitor_plugin:

CUDA Device Monitor
######################

The CUDA Device Monitor plugin is an **optional** plugin for Envoys that can gather status information about GPU devices.
This information may be used by Envoys and included in a healthcheck message that is sent to the Director.
Therefore, you can query this Envoy Registry information from the Director to determine the status of CUDA devices.

The CUDA Device Monitor plugin must implement the following interface:

.. code-block:: python

    class CUDADeviceMonitor:

        def get_driver_version(self) -> str:
            ...

        def get_device_memory_total(self, index: int) -> int:
            ...

        def get_device_memory_utilized(self, index: int) -> int:
            ...

        def get_device_utilization(self, index: int) -> str:
            """A general method that returns a string that may be shown to the frontend user."""
            ...
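
For illustration only, such a monitor could be implemented on top of the third-party ``pynvml`` bindings (an assumed dependency; error handling and ``nvmlShutdown()`` are omitted for brevity):

.. code-block:: python

    import pynvml


    class NVMLDeviceMonitor:
        """Illustrative CUDA device monitor backed by pynvml."""

        def __init__(self) -> None:
            pynvml.nvmlInit()

        def get_driver_version(self) -> str:
            version = pynvml.nvmlSystemGetDriverVersion()
            # Older pynvml releases return bytes rather than str
            return version.decode() if isinstance(version, bytes) else version

        def get_device_memory_total(self, index: int) -> int:
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            return pynvml.nvmlDeviceGetMemoryInfo(handle).total

        def get_device_memory_utilized(self, index: int) -> int:
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            return pynvml.nvmlDeviceGetMemoryInfo(handle).used

        def get_device_utilization(self, index: int) -> str:
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            utilization = pynvml.nvmlDeviceGetUtilizationRates(handle)
            return f"{utilization.gpu}%"
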
17 changes: 2 additions & 15 deletions docs/developer_guide/utilities/splitters_data.rst
@@ -13,21 +13,8 @@ OpenFL allows you to specify custom data splits **for simulation runs on a singl
You may apply data splitters differently depending on the OpenFL workflow that you follow.


OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data (Deprecated)
=========================================================================================================

The predefined OpenFL data splitter functions are as follows:

- ``openfl.utilities.data_splitters.EqualNumPyDataSplitter`` (default)
- ``openfl.utilities.data_splitters.RandomNumPyDataSplitter``
- ``openfl.utilities.data_splitters.LogNormalNumPyDataSplitter``, which assumes the ``data`` argument is an ``np.ndarray`` of integers (labels)
- ``openfl.utilities.data_splitters.DirichletNumPyDataSplitter``, which assumes the ``data`` argument is an ``np.ndarray`` of integers (labels)

Alternatively, you can create an `implementation <https://github.com/intel/openfl/blob/develop/openfl/utilities/data_splitters/numpy.py>`_ of :class:`openfl.utilities.data_splitters.NumPyDataSplitter` and pass it to the :code:`FederatedDataSet` constructor as either the ``train_splitter`` or ``valid_splitter`` keyword argument.


OPTION 2: Use Dataset Splitters in your Shard Descriptor
========================================================
Use Dataset Splitters in your Shard Descriptor
===================================================

Apply one of the previously mentioned splitting functions to your data to perform a simulation, as sketched below.
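
A rough sketch of splitting inside a shard descriptor, assuming the ``split(data, num_collaborators)`` signature from the linked implementation (the ``rank`` variable below is illustrative):

.. code-block:: python

    import numpy as np

    from openfl.utilities.data_splitters import RandomNumPyDataSplitter

    labels = np.random.randint(0, 10, size=1000)  # illustrative labels
    splitter = RandomNumPyDataSplitter()

    # One list of sample indices per collaborator
    shards = splitter.split(labels, num_collaborators=4)

    rank = 0  # this envoy's position, e.g. parsed from its config
    my_indices = shards[rank]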

6 changes: 0 additions & 6 deletions openfl/federated/__init__.py
@@ -12,20 +12,6 @@
from openfl.federated.task import TaskRunner # NOQA

if util.find_spec("keras") is not None:
from openfl.federated.data import FederatedDataSet # NOQA
from openfl.federated.data import KerasDataLoader
from openfl.federated.task import FederatedModel # NOQA
from openfl.federated.task import KerasTaskRunner
if util.find_spec("torch") is not None:
os.environ["SETUPTOOLS_USE_DISTUTILS"] = "stdlib"
from openfl.federated.data import FederatedDataSet # NOQA
from openfl.federated.data import PyTorchDataLoader
from openfl.federated.task import FederatedModel # NOQA
from openfl.federated.task import PyTorchTaskRunner
if util.find_spec("xgboost") is not None:
from openfl.federated.data import FederatedDataSet # NOQA
from openfl.federated.data import XGBoostDataLoader
from openfl.federated.task import FederatedModel # NOQA
from openfl.federated.task import XGBoostTaskRunner

__all__ = [
3 changes: 0 additions & 3 deletions openfl/federated/data/__init__.py
@@ -9,13 +9,10 @@
from openfl.federated.data.loader import DataLoader # NOQA

if util.find_spec("keras") is not None:
from openfl.federated.data.federated_data import FederatedDataSet # NOQA
from openfl.federated.data.loader_keras import KerasDataLoader # NOQA

if util.find_spec("torch") is not None:
from openfl.federated.data.federated_data import FederatedDataSet # NOQA
from openfl.federated.data.loader_pt import PyTorchDataLoader # NOQA

if util.find_spec("xgboost") is not None:
from openfl.federated.data.federated_data import FederatedDataSet # NOQA
from openfl.federated.data.loader_xgb import XGBoostDataLoader # NOQA
115 changes: 0 additions & 115 deletions openfl/federated/data/federated_data.py

This file was deleted.

7 changes: 1 addition & 6 deletions openfl/federated/plan/plan.py
@@ -320,12 +320,7 @@ def get_assigner(self):
"""Get the plan task assigner."""
aggregation_functions_by_task = None
assigner_function = None
try:
aggregation_functions_by_task = self.restore_object("aggregation_function_obj.pkl")
assigner_function = self.restore_object("task_assigner_obj.pkl")
except Exception as exc:
self.logger.error(f"Failed to load aggregation and assigner functions: {exc}")
self.logger.info("Using Task Runner API workflow")

if assigner_function:
self.assigner_ = Assigner(
assigner_function=assigner_function,
3 changes: 0 additions & 3 deletions openfl/federated/task/__init__.py
@@ -9,11 +9,8 @@
from openfl.federated.task.runner import TaskRunner # NOQA

if util.find_spec("keras") is not None:
from openfl.federated.task.fl_model import FederatedModel # NOQA
from openfl.federated.task.runner_keras import KerasTaskRunner # NOQA
if util.find_spec("torch") is not None:
from openfl.federated.task.fl_model import FederatedModel # NOQA
from openfl.federated.task.runner_pt import PyTorchTaskRunner # NOQA
if util.find_spec("xgboost") is not None:
from openfl.federated.task.fl_model import FederatedModel # NOQA
from openfl.federated.task.runner_xgb import XGBoostTaskRunner # NOQA
