Running Stable Diffusion Locally & in Cloud with Diffusers & dstack

Andrey Cheptsov

Posted on February 13, 2023

By now, it is likely that everyone has heard of Stable Diffusion, a model capable of producing photo-realistic images from text. Thanks to the Diffusers library by HuggingFace, using this model is straightforward.

However, organizing your project and dependencies to run it independently of the environment, whether locally or in the cloud, can still be a challenge.

In this article, I'll show you how to solve this problem using diffusers and dstack. Using a simple example, we will create a script that generates images from prompts with a pretrained model, and we will see how effortless it is to run that script both locally and in the cloud. This setup speeds up local development and debugging while letting you switch to the cloud when additional resources are required.

The diffusers Python library provides an easy way to access a variety of pre-trained diffusion models published on Hugging Face, allowing you to perform inference tasks with ease.
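To give you a sense of how little code this takes, here is a minimal sketch of running inference with diffusers (the prompt and output filename are just examples; we'll use the same model throughout the tutorial):

from diffusers import StableDiffusionPipeline

# Download (or load from the local cache) a pretrained pipeline from the Hugging Face Hub
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Turn a text prompt into an image and save it to disk
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")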

The dstack tool lets you set up your ML workflows and their dependencies in code and run them either locally or in a cloud account you've set up. It takes care of creating and destroying cloud resources as needed.

Let's get started.

Requirements

Here is the list of Python libraries that we will utilize:



diffusers
transformers
accelerate
scipy
ftfy
safetensors



Note: We use the safetensors library for storing tensors instead of pickle, as recommended by Hugging Face, for better safety and speed.

To ensure our scripts can run smoothly across all environments, let's include them in the stable_diffusion/requirements.txt file.

You can also install these libraries locally:



pip install -r stable_diffusion/requirements.txt



Let's install the dstack CLI too, since we'll use it locally:



pip install dstack --upgrade



Downloading the pretrained model

In this tutorial, we will use the runwayml/stable-diffusion-v1-5 model, which is pretrained by Runway. You can explore the background story of this model on their blog. However, there is a range of other models to choose from.

Let's create the following stable_diffusion/stable_diffusion.py file:



import shutil

from diffusers import StableDiffusionPipeline


def main():
    # Download the pretrained pipeline; return_cached_folder also returns the path to the cache
    _, cache_folder = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                                              return_cached_folder=True)
    # Copy the cached files into a regular local folder that dstack can save as an artifact
    shutil.copytree(cache_folder, "./models/runwayml/stable-diffusion-v1-5", dirs_exist_ok=True)


if __name__ == '__main__':
    main()



Note: By default, diffusers downloads the model to its own cache folder built using symlinks. Since dstack doesn't support symlinks in artifacts, we're copying the model files to the local folder.
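Alternatively (this is just an option, not what the workflow below uses), a sufficiently recent version of the huggingface_hub library can download the files directly into a regular folder, bypassing the symlinked cache. A minimal sketch, assuming your huggingface_hub version supports the local_dir argument:

from huggingface_hub import snapshot_download

# Download the model files straight into a plain folder (no symlinked cache);
# assumes a huggingface_hub version that supports the local_dir argument
snapshot_download("runwayml/stable-diffusion-v1-5",
                  local_dir="./models/runwayml/stable-diffusion-v1-5")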

To run a script via dstack, it must be defined as a workflow via a YAML file under .dstack/workflows.

The .dstack/workflows/stable_diffusion.yaml file:



workflows:
  - name: stable-diffusion
    provider: bash
    commands:
      - pip install -r stable_diffusion/requirements.txt
      - python stable_diffusion/stable_diffusion.py
    artifacts:
      - path: ./models
    resources:
      memory: 16GB



Now, the workflow can be run anywhere via the dstack CLI.

Note: Before you run a workflow via dstack, make sure your project has a remote Git branch (git remote -v is not empty), and invoke the dstack init command, which ensures that dstack can access the repository.

Here's how to run a dstack workflow locally:



dstack run stable-diffusion



When you run it, dstack runs the script and saves the models folder as an artifact. After that, you can reuse the artifact in other workflows.

Attaching an interactive IDE

Sometimes, before you can run a workflow, you may want to run code interactively, e.g. via an IDE or a notebook.

Look at the following example:



workflows:
  - name: code-stable
    provider: code
    deps:
      - workflow: stable-diffusion
    setup:
      - pip install -r stable_diffusion/requirements.txt
    resources:
      memory: 16GB



As you can see, the code-stable workflow refers to the stable-diffusion workflow as a dependency. Go ahead and run it.



dstack run code-stable



Once you do, you'll see a URL in the output:



RUN           WORKFLOW    SUBMITTED STATUS     TAG  BACKENDS
giant-sheep-0 code-stable now       Submitted       local

Provisioning… It may take up to a minute. ✓

To interrupt, press Ctrl+C.

Web UI available at http://127.0.0.1:49959/?folder=%2Fworkflow&tkn=1f6b1fcc1ac0424cb95eb74ae37ddbf7



This opens VS Code, attached to your workflow, with everything set up: the code, the pre-trained model, and the Python environment.
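For example, from a terminal or notebook inside the attached IDE, you could quickly confirm that the pre-trained model is in place. Here's a minimal sketch (the prompt and filename are just placeholders; the loading code is the same one we'll use in the next section):

from diffusers import StableDiffusionPipeline

# The stable-diffusion workflow saved the model to ./models, which is available in the working directory
pipe = StableDiffusionPipeline.from_pretrained("./models/runwayml/stable-diffusion-v1-5",
                                               local_files_only=True)

# Generate a quick test image to confirm everything works
image = pipe("a sanity-check prompt").images[0]
image.save("test.png")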

Generating images from given prompts

Let's write a script that generates images using a pre-trained model and given prompts.

Here's an example of the stable_diffusion/prompt_stable.py file:



import argparse
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-P", "--prompt", action='append', required=True)
    args = parser.parse_args()

    # Load the model from the local folder produced by the stable-diffusion workflow
    pipe = StableDiffusionPipeline.from_pretrained("./models/runwayml/stable-diffusion-v1-5", local_files_only=True)
    # Use a GPU if one is available
    if torch.cuda.is_available():
        pipe.to("cuda")
    images = pipe(args.prompt).images

    # Save the generated images to the local output folder
    output = Path("./output")
    output.mkdir(parents=True, exist_ok=True)
    for i in range(len(images)):
        images[i].save(output / f"{i}.png")



The script loads the model from the local ./models/runwayml/stable-diffusion-v1-5 folder, generates images based on the given prompts, and saves the resulting images to the local output folder.

To be able to run it via dstack, let's define it in .dstack/workflows/stable_diffusion.yaml:



workflows:
  - name: prompt-stable
    provider: bash
    deps:
      - workflow: stable-diffusion
    commands:
      - pip install -r stable_diffusion/requirements.txt
      - python stable_diffusion/prompt_stable.py ${{ run.args }}
    artifacts:
      - path: ./output
    resources:
      memory: 16GB



When you run this workflow, dstack will mount the output artifacts from the stable-diffusion workflow to the working directory. So, the model that was previously downloaded will be in the local ./models/runwayml/stable-diffusion-v1-5 folder.

Note: The dstack run command allows you to pass arguments to the workflow via ${{ run.args }}.

Let's run the workflow locally:



dstack run prompt-stable -P "cats in hats"



Note: The output artifacts of local runs are stored under ~/.dstack/artifacts.
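The exact directory layout under ~/.dstack/artifacts depends on your repository and run name, so here is a rough sketch that simply searches for the generated images:

from pathlib import Path

# Look for the PNGs produced by the prompt-stable workflow;
# the intermediate folders depend on the repository and run name
artifacts = Path.home() / ".dstack" / "artifacts"
for png in sorted(artifacts.rglob("*.png")):
    print(png)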

An example of the prompt-stable workflow output.

Configuring AWS as a remote

By default, workflows in dstack run locally. However, you have the option to configure a remote to run your workflows. 

For instance, you can set up your AWS account as a remote to run workflows.

To configure a remote, run the following command:



dstack config



This command prompts you to select an AWS profile for credentials, an AWS region for workflow execution, and an S3 bucket to store remote artifacts.



AWS profile: default
AWS region: eu-west-1
S3 bucket: dstack-142421590066-eu-west-1
EC2 subnet: none



Note: Currently, dstack only supports AWS as a remote backend. The addition of support for GCP and Azure is expected in one of the upcoming releases.

Running remotely

Once a remote is configured, you can use the --remote flag with the dstack run command to run workflows remotely.

Let's first run the stable-diffusion workflow:



dstack run stable-diffusion --remote



Note: When you run a workflow remotely, dstack automatically creates the required resources in the configured cloud, saves the artifacts, and releases the resources once the workflow is finished.

When running a workflow remotely, you can configure the resources it requires: either via the resources property in YAML or via the dstack run command's arguments, such as --gpu, --gpu-name, etc.

Let's run the prompt-stable workflow remotely and tell it to use a GPU:



dstack run prompt-stable --remote --gpu 1 -P "cats in hats"



Note: By default, dstack picks the cheapest available machine that matches the resource requirements. For example, in AWS, if you request one GPU, it will use a p2.xlarge instance with an NVIDIA Tesla K80 GPU.

Conclusion

If you found this tutorial interesting, you're invited to delve deeper into the topic by exploring the official documentation for diffusers and dstack.

The source code for this tutorial can be found on GitHub.

In one of the next blog posts, we will go beyond generating images and look into fine-tuning a Stable Diffusion model.
