Release v0.16.0
Signed-off-by: The Sionna Team <[email protected]>

Co-authored-by: Jakob Hoydis <[email protected]>
Co-authored-by: Fayçal Ait-Aoudia <[email protected]>
Co-authored-by: Sebastian Cammerer <[email protected]>
Co-authored-by: Guillermo Marcus <[email protected]>
Co-authored-by: Merlin Nimier-David <[email protected]>
Co-authored-by: Jérome Eertmans <[email protected]>
Co-authored-by: Felix Klement <[email protected]>
Co-authored-by: Neal Becker <[email protected]>
8 people committed Nov 28, 2023
1 parent 1c619ac commit fd8b13a
Showing 44 changed files with 3,154 additions and 1,409 deletions.
2 changes: 1 addition & 1 deletion DOCKERFILE
@@ -1,4 +1,4 @@
-FROM tensorflow/tensorflow:2.11.0-gpu-jupyter
+FROM tensorflow/tensorflow:2.13.0-gpu-jupyter
EXPOSE 8888
COPY . /tmp/
WORKDIR /tmp/
8 changes: 4 additions & 4 deletions README.md
@@ -15,7 +15,7 @@ In order to run the tutorial notebooks on your machine, you also need [JupyterLa
You can alternatively test them on [Google Colab](https://colab.research.google.com/).
Although not necessary, we recommend running Sionna in a [Docker container](https://www.docker.com).

-Sionna requires [TensorFlow 2.10 or newer](https://www.tensorflow.org/install) and Python 3.6-3.9. We recommend Ubuntu 20.04. Earlier versions of TensorFlow may still work but are not recommended because of known, unpatched CVEs.
+Sionna requires [TensorFlow 2.10-2.13](https://www.tensorflow.org/install) and Python 3.8-3.11. We recommend Ubuntu 22.04. Earlier versions of TensorFlow may still work but are not recommended because of known, unpatched CVEs.
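As a quick sanity check before installing, the supported interpreter range stated above can be verified in plain Python. This is an editorial sketch, not part of the repository; the default bounds simply mirror the 3.8-3.11 range from the README:

```python
import sys

def python_version_supported(lo=(3, 8), hi=(3, 11)):
    """Return True if the running interpreter's (major, minor) lies in [lo, hi]."""
    return lo <= sys.version_info[:2] <= hi
```

For example, on a Python 3.10 interpreter this returns `True`.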

To run the ray tracer on CPU, [LLVM](https://llvm.org) is required by DrJit. Please check the [installation instructions for the LLVM backend](https://drjit.readthedocs.io/en/latest/firststeps-py.html#llvm-backend).

@@ -38,7 +38,7 @@ On macOS, you need to install [tensorflow-macos](https://github.com/apple/tensor
```
>>> import sionna
>>> print(sionna.__version__)
-0.15.1
+0.16.0
```

3.) Once Sionna is installed, you can run the [Sionna "Hello, World!" example](https://nvlabs.github.io/sionna/examples/Hello_World.html), have a look at the [quick start guide](https://nvlabs.github.io/sionna/quickstart.html), or at the [tutorials](https://nvlabs.github.io/sionna/tutorials.html).
@@ -49,7 +49,7 @@ For a local installation, the [JupyterLab Desktop](https://github.com/jupyterlab

### Docker-based installation

-1.) Make sure that you have [Docker](<https://docs.docker.com/engine/install/ubuntu/>) installed on your system. On Ubuntu 20.04, you can run for example
+1.) Make sure that you have [Docker](<https://docs.docker.com/engine/install/ubuntu/>) installed on your system. On Ubuntu 22.04, you can run for example

```
sudo apt install docker.io
@@ -97,7 +97,7 @@ We recommend to do this within a [virtual environment](https://docs.python.org/3
```
>>> import sionna
>>> print(sionna.__version__)
-0.15.1
+0.16.0
```

## License and Citation
2 changes: 1 addition & 1 deletion doc/source/api/rt.rst
@@ -10,7 +10,7 @@ It has methods for the computation of propagation :class:`~sionna.rt.Paths` (:me
Sionna has several integrated `Example Scenes`_ that you can use for your own experiments. In this `video <https://youtu.be/7xHLDxUaQ7c>`_, we explain how you can create your own scenes using `OpenStreetMap <https://www.openstreetmap.org>`_ and `Blender <https://www.blender.org>`_.
You can preview a scene within a Jupyter notebook (:meth:`~sionna.rt.Scene.preview`) or render it to a file from the viewpoint of a camera (:meth:`~sionna.rt.Scene.render` or :meth:`~sionna.rt.Scene.render_to_file`).

-Propagation :class:`~sionna.rt.Paths` can be transformed into time-varying channel impulse responses (CIRs) via :meth:`~sionna.rt.Paths.cir`. The CIRs can then be used for link-level simulations in Sionna via the functions :meth:`~sionna.channel.cir_to_time_channel` or :meth:`~sionna.channel.cir_to_ofdm_channel`. Alternatively, you can create a dataset of CIRs that can be used like a channel model with the help of :class:`~sionna.channel.CIRDataset`.
+Propagation :class:`~sionna.rt.Paths` can be transformed into time-varying channel impulse responses (CIRs) via :meth:`~sionna.rt.Paths.cir`. The CIRs can then be used for link-level simulations in Sionna via the functions :meth:`~sionna.channel.cir_to_time_channel` or :meth:`~sionna.channel.cir_to_ofdm_channel`. Alternatively, you can create a dataset of CIRs that can be used by a channel model with the help of :class:`~sionna.channel.CIRDataset`.
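For intuition, the step from a CIR (complex path gains at continuous delays) to discrete-time channel taps can be sketched with plain band-limited sinc interpolation. This is a simplified, self-contained illustration of the idea, not Sionna's actual `cir_to_time_channel` implementation:

```python
import math

def cir_to_taps(gains, delays, bandwidth, num_taps):
    """Project a CIR (complex path gains a_m at delays tau_m, in seconds)
    onto a sampling grid of rate `bandwidth` via sinc interpolation:
    h[b] = sum_m a_m * sinc(b - tau_m * bandwidth)."""
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    return [
        sum(a * sinc(b - tau * bandwidth) for a, tau in zip(gains, delays))
        for b in range(num_taps)
    ]
```

A single path whose delay equals exactly one sampling period lands entirely in tap 1; delays off the grid smear across neighboring taps.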

The paper `Sionna RT: Differentiable Ray Tracing for Radio Propagation Modeling <https://nvlabs.github.io/sionna/made_with_sionna.html#sionna-rt-differentiable-ray-tracing-for-radio-propagation-modeling>`_ shows how differentiable ray tracing can be used for various optimization tasks. The related `notebooks <https://nvlabs.github.io/sionna/made_with_sionna.html#sionna-rt-differentiable-ray-tracing-for-radio-propagation-modeling>`_ can be a good starting point for your own experiments.

14 changes: 1 addition & 13 deletions doc/source/api/rt_coverage_map.rst.txt
@@ -22,16 +22,4 @@ CoverageMap
.. autoclass:: sionna.rt.CoverageMap
:members:
:inherited-members:
-:exclude-members: as_tensor, sample_positions, show, to_world
-
-as_tensor
----------
-.. autofunction:: sionna.rt.CoverageMap.as_tensor
-
-sample_positions
-----------------
-.. autofunction:: sionna.rt.CoverageMap.sample_positions
-
-show
-----
-.. autofunction:: sionna.rt.CoverageMap.show
+:exclude-members: to_world
2 changes: 1 addition & 1 deletion doc/source/api/rt_paths.rst.txt
@@ -19,4 +19,4 @@ Paths
.. autoclass:: sionna.rt.Paths
:members:
:inherited-members:
-:exclude-members: mask, vertices, normals, objects, clone, merge, finalize, set_los_path_type
+:exclude-members: vertices, normals, objects, clone, merge, finalize, set_los_path_type, targets_sources_mask, sources, targets, pad_or_crop
5 changes: 3 additions & 2 deletions doc/source/api/rt_radio_device.rst.txt
@@ -26,8 +26,9 @@ equipped with a :math:`4 \times 2` :class:`~sionna.rt.PlanarArray` with cross-po
The position :math:`(x,y,z)` and orientation :math:`(\alpha, \beta, \gamma)` of a radio device
can be freely configured. The latter is specified through three angles corresponding to a 3D
rotation as defined in :eq:`rotation`.
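Assuming the common yaw-pitch-roll (Z-Y-X) convention for :math:`(\alpha, \beta, \gamma)` — :eq:`rotation` in the documentation is the authoritative definition — the corresponding rotation matrix could be assembled like this (an editorial sketch, not Sionna code):

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """3D rotation from yaw alpha (about z), pitch beta (about y),
    and roll gamma (about x): R = R_z(alpha) @ R_y(beta) @ R_x(gamma)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
        [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
        [-sb,     cb * sg,                cb * cg],
    ]
```

For example, a pure yaw of :math:`\pi/2` maps the x-axis onto the y-axis.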

-The position and orientation are implemented as TensorFlow variables and can be made trainable.
+Both can be assigned to TensorFlow variables or tensors. In the latter case,
+the tensor can be the output of a callable, such as a Keras layer implementing a neural network.
+In the former case, it can be set to a trainable variable.

Radio devices need to be explicitly added to the scene using the scene's method :meth:`~sionna.rt.Scene.add`
and can be removed from it using :meth:`~sionna.rt.Scene.remove`:
17 changes: 14 additions & 3 deletions doc/source/api/rt_radio_material.rst.txt
@@ -110,8 +110,7 @@ Custom radio materials can be implemented using the
parameters related to diffuse scattering, such as the scattering coefficient :math:`S`,
cross-polarization discrimination coefficient :math:`K_x`, and scattering pattern :math:`f_\text{s}(\hat{\mathbf{k}}_\text{i}, \hat{\mathbf{k}}_\text{s})`.
Note that only non-magnetic materials with :math:`\mu_r=1` are currently allowed.
-The following code snippet shows how to create a custom radio material. Note that the creation
-of a :class:`~sionna.rt.RadioMaterial` requires that a scene is loaded.
+The following code snippet shows how to create a custom radio material.

.. code-block:: Python
@@ -152,11 +151,23 @@ or the material instance:
obj = scene.get("my_object") # obj is a SceneObject
obj.radio_material = custom_material # "my_object" is made of "my_material"
+The material parameters can be assigned to TensorFlow variables or tensors, such as
+the output of a Keras layer defining a neural network. This allows one to make materials
+trainable:
+
+.. code-block:: Python
+
+    mat = RadioMaterial("my_mat",
+                        relative_permittivity= tf.Variable(2.1, dtype=tf.float32))
+    mat.conductivity = tf.Variable(0.0, dtype=tf.float32)
RadioMaterial
-------------
.. autoclass:: sionna.rt.RadioMaterial
:members:
-:exclude-members: core_material, frequency_update, decrease_use, increase_use, scene, is_placeholder, discard_object_using, add_object_using
+:exclude-members: core_material, frequency_update, decrease_use, increase_use, scene, is_placeholder, discard_object_using, add_object_using, assign

ScatteringPattern
-----------------
22 changes: 16 additions & 6 deletions doc/source/api/rt_scene.rst.txt
@@ -71,7 +71,7 @@ to instantiate a transmitter and receiver.
scene.add(rx)
# TX points towards RX
-tx.look_at(rx)
+tx.look_at(rx)
print(scene.transmitters)
print(scene.receivers)
@@ -95,7 +95,7 @@ You can visualize the paths within a scene by one of the following commands:
.. code-block:: Python
scene.preview(paths=paths) # Open preview showing paths
-scene.render(camera="preview", paths=paths) # Render scene with paths from previev camera
+scene.render(camera="preview", paths=paths) # Render scene with paths from preview camera
scene.render_to_file(camera="preview",
filename="scene.png",
paths=paths) # Render scene with paths to file
@@ -104,13 +104,14 @@ You can visualize the paths within a scene by one of the following commands:
:align: center

Note that the calls to the render functions in the code above use the "preview" camera which is configured through
-:meth:`~sionna.rt.Scene.preview`. You can use any other :class:`~sionna.rt.Camera` that you create here as well.
+:meth:`~sionna.rt.Scene.preview`. You can use any other :class:`~sionna.rt.Camera` that you create here as well.

The function :meth:`~sionna.rt.Scene.coverage_map` computes a :class:`~sionna.rt.CoverageMap` for every transmitter in a scene:

.. code-block:: Python
-cm = scene.coverage_map()
+cm = scene.coverage_map(cm_cell_size=[1.,1.], # Configure size of each cell
+                        num_samples=1e7) # Number of rays to trace
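The `cm_cell_size` argument quantizes the map into rectangular cells. As a purely illustrative sketch (a hypothetical helper, not a Sionna API), mapping a 2D position to its cell index for a map of a given extent could look like:

```python
import math

def position_to_cell(pos_xy, center_xy, size_xy, cell_size_xy):
    """Map a 2D position to (row, col) indices of a rectangular coverage map
    of extent size_xy centered at center_xy, with cells of cell_size_xy."""
    col = math.floor((pos_xy[0] - center_xy[0] + size_xy[0] / 2) / cell_size_xy[0])
    row = math.floor((pos_xy[1] - center_xy[1] + size_xy[1] / 2) / cell_size_xy[1])
    return row, col
```

With 1 m x 1 m cells on a 4 m x 4 m map centered at the origin, the origin itself falls into cell (2, 2).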
Coverage maps can be visualized in the same way as propagation paths:

@@ -125,16 +126,25 @@ Coverage maps can be visualized in the same way as propagation paths:
.. figure:: ../figures/coverage_map_visualization.png
:align: center


Scene
-----
.. autoclass:: sionna.rt.Scene
:members:
-:exclude-members: _ready_for_paths_computation, _load_cameras, _load_scene_objects, _is_name_used, register_radio_device, unregister_radio_device, register_radio_material, register_scene_object, compute_paths, render, render_to_file, preview, mi_scene, preview_widget, coverage_map
+:exclude-members: _check_scene, _load_cameras, _load_scene_objects, _is_name_used, register_radio_device, unregister_radio_device, register_radio_material, register_scene_object, compute_paths, trace_paths, compute_fields, render, render_to_file, preview, mi_scene, preview_widget, coverage_map

compute_paths
-------------
.. autofunction:: sionna.rt.Scene.compute_paths

+trace_paths
+-------------
+.. autofunction:: sionna.rt.Scene.trace_paths
+
+compute_fields
+--------------
+.. autofunction:: sionna.rt.Scene.compute_fields

coverage_map
-------------
.. autofunction:: sionna.rt.Scene.coverage_map
@@ -199,7 +209,7 @@ simple_reflector
-----------------
.. autodata:: sionna.rt.scene.simple_reflector
:annotation:
-(`Blender file <https://drive.google.com/file/d/1YBxKeDGan7zxM2eTa659tUYmHq_Rc1HA/view?usp=share_link>`__)
+(`Blender file <https://drive.google.com/file/d/1iYPD11zAAMj0gNUKv_nv6QdLhOJcPpIa/view?usp=share_link>`__)

double_reflector
-----------------
2 changes: 1 addition & 1 deletion doc/source/api/rt_scene_object.rst.txt
@@ -29,7 +29,7 @@ of ``my_object`` as follows:
my_object.radio_material = "itu_wood"
-Most scene objects names have postfixs of the form "-material_name". These are used during loading of a scene
+Most scene objects names have postfixes of the form "-material_name". These are used during loading of a scene
to assign a :class:`~sionna.rt.RadioMaterial` to each of them. This `tutorial video <https://youtu.be/7xHLDxUaQ7c>`_
explains how you can assign radio materials to objects when you create your own scenes.
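The naming convention can be illustrated with a small, hypothetical helper; the split logic and the fallback material below are editorial assumptions, not Sionna behavior:

```python
def material_from_name(object_name, default="itu_concrete"):
    """Extract the radio-material part of a scene-object name of the
    form '<object>-<material_name>'; fall back to a default otherwise."""
    if "-" in object_name:
        return object_name.rsplit("-", 1)[1]
    return default
```

For example, an object named `"wall-itu_wood"` would be assigned the `itu_wood` material.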

1 change: 1 addition & 0 deletions doc/source/figures/compute_paths.svg
Binary file modified doc/source/figures/coverage_map_visualization.png
10 changes: 5 additions & 5 deletions doc/source/installation.rst
@@ -7,8 +7,8 @@ You can alternatively test them on `Google Colab <https://colab.research.google.
Although not necessary, we recommend running Sionna in a `Docker container <https://www.docker.com>`_.

.. note::
-Sionna requires `TensorFlow 2.10 or newer <https://www.tensorflow.org/install>`_ and Python 3.6-3.9.
-We recommend Ubuntu 20.04.
+Sionna requires `TensorFlow 2.10-2.13 <https://www.tensorflow.org/install>`_ and Python 3.8-3.11.
+We recommend Ubuntu 22.04.
Earlier versions of TensorFlow may still work but are not recommended because of known, unpatched CVEs.

To run the ray tracer on CPU, `LLVM <https://llvm.org>`_ is required by DrJit. Please check the `installation instructions for the LLVM backend <https://drjit.readthedocs.io/en/latest/firststeps-py.html#llvm-backend>`_.
@@ -39,7 +39,7 @@ e.g., using `conda <https://docs.conda.io>`_. On macOS, you need to install `ten
>>> import sionna
>>> print(sionna.__version__)
-0.15.1
+0.16.0
3.) Once Sionna is installed, you can run the `Sionna "Hello, World!" example <https://nvlabs.github.io/sionna/examples/Hello_World.html>`_, have a look at the `quick start guide <https://nvlabs.github.io/sionna/quickstart.html>`_, or at the `tutorials <https://nvlabs.github.io/sionna/tutorials.html>`_.

@@ -49,7 +49,7 @@ For a local installation, the `JupyterLab Desktop <https://github.com/jupyterlab
Docker-based Installation
-------------------------

-1.) Make sure that you have Docker `installed <https://docs.docker.com/engine/install/ubuntu/>`_ on your system. On Ubuntu 20.04, you can run for example
+1.) Make sure that you have Docker `installed <https://docs.docker.com/engine/install/ubuntu/>`_ on your system. On Ubuntu 22.04, you can run for example

.. code-block:: bash
@@ -111,4 +111,4 @@ e.g., using `conda <https://docs.conda.io>`_.
>>> import sionna
>>> print(sionna.__version__)
-0.15.1
+0.16.0
