tags: quickstart, vision, fds
dataset: MNIST
framework: lightning

Federated Learning with PyTorch Lightning and Flower (Quickstart Example)

This introductory Flower example uses PyTorch Lightning. Deep knowledge of PyTorch Lightning is not required to run it, but some familiarity will help you adapt Flower to your own use case. Running the example itself is straightforward. It uses Flower Datasets to download, partition, and preprocess the MNIST dataset. The model being federated is a lightweight AutoEncoder, as presented in the Lightning in 15 minutes tutorial.
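To illustrate the data pipeline, here is a minimal sketch of loading one client's MNIST partition with Flower Datasets. The partition count (10) and partition id (0) are illustrative assumptions; the example's actual pipeline lives in pytorchlightning_example/task.py.

# Sketch only: load one client's MNIST partition with Flower Datasets.
# The partition count (10) and partition id (0) are assumptions for illustration.
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import IidPartitioner

fds = FederatedDataset(
    dataset="mnist",
    partitioners={"train": IidPartitioner(num_partitions=10)},
)
partition = fds.load_partition(partition_id=0)         # one client's shard
partition = partition.train_test_split(test_size=0.2)  # local train/validation split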

Project Setup

Start by cloning the example project. We have prepared a single-line command that you can copy into your shell; it will check out the example for you:

git clone --depth=1 https://github.com/adap/flower.git _tmp \
        && mv _tmp/examples/quickstart-pytorch-lightning . \
        && rm -rf _tmp && cd quickstart-pytorch-lightning

This will create a new directory called quickstart-pytorch-lightning containing the following files:

quickstart-pytorch-lightning
├── pytorchlightning_example
│   ├── __init__.py
│   ├── client_app.py   # Defines your ClientApp
│   ├── server_app.py   # Defines your ServerApp
│   └── task.py         # Defines your model, training and data loading
├── pyproject.toml      # Project metadata like dependencies and configs
└── README.md
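For orientation, the sketch below shows roughly how client_app.py is structured. The FlowerClient placeholder and its return values are illustrative assumptions, not the example's actual training code, which wraps the Lightning AutoEncoder and its Trainer.

# Sketch only: the rough shape of client_app.py. The FlowerClient body is a
# placeholder; the real example trains and tests the Lightning AutoEncoder here.
from flwr.client import ClientApp, NumPyClient
from flwr.common import Context

class FlowerClient(NumPyClient):
    def fit(self, parameters, config):
        # Real code: load the local partition, run the Lightning Trainer,
        # and return the updated weights plus the number of training examples.
        return parameters, 0, {}

    def evaluate(self, parameters, config):
        # Real code: run trainer.test(...) and report the loss.
        return 0.0, 0, {}

def client_fn(context: Context):
    # Build the client for the data partition assigned to this node.
    return FlowerClient().to_client()

app = ClientApp(client_fn=client_fn)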

Install dependencies and project

Install the dependencies defined in pyproject.toml as well as the pytorchlightning_example package.

pip install -e .

Run the Example

You can run your Flower project in both simulation and deployment mode without making changes to the code. If you are new to Flower, we recommend starting with simulation mode, as it requires fewer components to be launched manually. By default, flwr run uses the Simulation Engine.

Run with the Simulation Engine

Note

Check the Simulation Engine documentation to learn more about Flower simulations and how to optimize them.

flwr run .

You can also override some of the settings for your ClientApp and ServerApp defined in pyproject.toml. For example:

flwr run . --run-config "num-server-rounds=5 max-epochs=2"
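These overrides reach the app through the run context. As a minimal sketch (assuming the num-server-rounds key from this example's pyproject.toml), server_app.py can read the value like this; the FedAvg defaults are assumptions, and the example's actual strategy settings may differ.

# Sketch only: reading a run-config value in the ServerApp.
from flwr.common import Context
from flwr.server import ServerApp, ServerAppComponents, ServerConfig
from flwr.server.strategy import FedAvg

def server_fn(context: Context):
    # Overridden by --run-config, otherwise the default from pyproject.toml.
    num_rounds = context.run_config["num-server-rounds"]
    return ServerAppComponents(strategy=FedAvg(), config=ServerConfig(num_rounds=num_rounds))

app = ServerApp(server_fn=server_fn)

The max-epochs value would be read analogously, presumably on the client side, via context.run_config["max-epochs"].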

Run with the Deployment Engine

Follow this how-to guide to run the same app from this example with Flower's Deployment Engine. After that, you might be interested in setting up secure TLS-enabled communications and SuperNode authentication in your federation.

If you are already familiar with how the Deployment Engine works, you may want to learn how to run it using Docker. Check out the Flower with Docker documentation.