# Software available

Currently there are two Jupyter kernels:

• Python
• Octave

## Python packages

To see the full list of packages and their version numbers, open a terminal and enter `conda list`. The usual scientific Python packages, including numpy, scipy, and matplotlib, are available. We also include some additional packages relevant to working with Swarm data. These include:

### VirES software

`viresclient`

This is a programmatic interface to the VirES server, providing flexible access to Swarm data and model evaluations. Custom data sets can be saved as CDF or CSV files, or loaded directly as a pandas DataFrame or an xarray Dataset.
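As a rough illustration, a typical request looks like the following. This is a hedged sketch: the collection `SW_OPER_MAGA_LR_1B`, the measurements, and the `CHAOS-Core` model are examples of what the server offers, not an exhaustive reference; consult the viresclient documentation for the full set of options. The import is guarded so the snippet is harmless outside the VRE, where viresclient may not be installed.

```python
# A minimal sketch of a viresclient request; it only actually fetches data
# where viresclient is installed and configured (e.g. on the VRE itself).
try:
    from viresclient import SwarmRequest
except ImportError:
    SwarmRequest = None  # viresclient not available outside the VRE

def fetch_swarm_day(start="2019-01-01T00:00", end="2019-01-02T00:00"):
    """Fetch one day of Swarm Alpha magnetic data as an xarray Dataset."""
    request = SwarmRequest()
    request.set_collection("SW_OPER_MAGA_LR_1B")
    request.set_products(measurements=["F", "B_NEC"], models=["CHAOS-Core"])
    data = request.get_between(start, end)
    return data.as_xarray()
```

Calling `fetch_swarm_day()` on the VRE returns an xarray Dataset containing the requested measurements alongside the model evaluations.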

### Swarm community software

• pyamps
• swarmpyfac
• ibp (Ionospheric bubble probability): https://gitext.gfz-potsdam.de/rother/ibp-model

…tbc

## Installing other packages

Installing packages with `conda install ...` or `pip install ...` does make them available to the default kernel, but they will disappear the next time your container shuts down. When you have been inactive for some period of time (roughly a few hours), the container within which your environment is running shuts down. When you log in again, the environment is created anew from the image that everybody shares, so your modifications will no longer be present. This means that installing packages (and extensions) in this way is only sensible for briefly trying them out.
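To see why such installs vanish while files in your home directory survive, compare the two locations. This is a small stdlib sketch; the exact prefix path depends on the deployment, but the pattern (environment prefix inside the shared image, home directory on persistent storage) is the point:

```python
import sys
from pathlib import Path

# Packages installed with pip/conda land under the environment prefix,
# which lives inside the ephemeral container image; only the home
# directory persists across container restarts.
print("ephemeral install location:", sys.prefix)
print("persistent storage:", Path.home())
```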

If you need access to a different collection of packages (e.g. you need a certain unsupported package, or different versions of what is available), the way to achieve this is to create an additional kernel, which you can then use in place of the default kernel to execute scripts or notebooks. The procedure is not covered in full here, but in short:

```
conda create --prefix <path-to-env> <packages>
```


will create a new conda “environment” (see the existing ones with `conda env list`). For example:

```
conda create --prefix ~/envs/my_env ipykernel numpy
```


will create an environment stored within `~/envs/` and called ‘my_env’, with the packages ipykernel (which is required for kernel registration) and numpy. This kernel then needs to be registered with Jupyter:

```
~/envs/my_env/bin/python -m ipykernel install --user --name my_env --display-name "my_env"
```


The kernel should now be available to use from within notebooks, and you can also activate the environment in a terminal with `conda activate ~/envs/my_env`. Since the environment is stored in the home directory, it will not be lost when the container shuts down. You can see which kernels are installed with `jupyter kernelspec list`.
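As a quick way to check the persistence from Python, you can list the registered user kernels directly. This is a stdlib sketch that assumes a Linux container, where `ipykernel install --user` writes kernelspecs to `~/.local/share/jupyter/kernels` (the per-user Jupyter data directory, which lives in the persistent home directory):

```python
from pathlib import Path

# Kernels registered with `ipykernel install --user` are written to the
# per-user Jupyter data directory; the listing is empty if none exist yet.
kernel_dir = Path.home() / ".local" / "share" / "jupyter" / "kernels"
kernels = sorted(p.name for p in kernel_dir.iterdir()) if kernel_dir.exists() else []
print("user-installed kernels:", kernels)
```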

## Extensions

tbc

https://github.com/mauhai/awesome-jupyterlab