Stable Diffusion Benchmark?


  • Stable Diffusion Benchmark?

    So, I'm playing around with different iterations of Stable Diffusion today. The backends seem to be:

    - OpenGL via compute shaders for OpenGL 4.0 or higher
    - Vulkan via MLIR ("SHARK")
    - oneAPI / oneDNN / OpenVINO (Intel Xe APUs/GPUs, Movidius Myriad Neural Compute Stick, MKL-DNN for CPUs)
    - tensorflow/pytorch for CUDA
    - tensorflow/pytorch for TensorRT
    - tensorflow/pytorch for ROCm (a short sketch of the CUDA/ROCm path follows this list)
    - There are also "SYCL" and "HIP" backends, but I'm not sure how those differ.
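
    For the CUDA/ROCm rows above, a minimal sketch of what the PyTorch path looks like, assuming the Hugging Face diffusers package and whichever torch build matches the GPU (the model ID and settings here are just examples):

    ```python
    # Minimal sketch, assuming `diffusers` + `torch` are installed.
    # The same script covers the CUDA and ROCm paths, since PyTorch-ROCm also
    # exposes the GPU as "cuda"; only the installed torch build differs.
    import torch
    from diffusers import StableDiffusionPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    # Example checkpoint (SD 1.5); any checkpoint the backend supports works.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
    ).to(device)

    image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
    image.save("out.png")
    ```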

    There are a lot of variations and iterations, and it's hard to keep track. But so far, 1.5 is the stable release from the Stable Diffusion 1.x branch and 2.1 is the stable release from the 2.x branch.

    Any hope of seeing these benchmarked?

    Or would they have to create a standardized benchmark that could be run?
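
    To be concrete about "standardized": fixed prompt, seed, resolution, and step count, with plain wall-clock timing, so numbers are comparable across backends. Again just a sketch of the PyTorch/diffusers path; every value below is a placeholder:

    ```python
    # Sketch of one standardized run: fixed prompt, seed, resolution and step
    # count so timings are comparable across backends. Values are placeholders.
    import time
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    gen = torch.Generator("cuda").manual_seed(42)
    pipe("warm-up", num_inference_steps=5, generator=gen)  # exclude one-time setup cost

    gen.manual_seed(42)  # reseed so the timed run is deterministic
    start = time.perf_counter()
    pipe("an astronaut riding a horse", num_inference_steps=50,
         height=512, width=512, generator=gen)
    elapsed = time.perf_counter() - start
    print(f"50 steps @ 512x512: {elapsed:.2f} s ({50 / elapsed:.2f} it/s)")
    ```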

  • #2
    Which one works best on the XTX?
    How does the XTX (Navi 31) do against the RTX 4090/4080 in AMD-optimized frameworks?


    • #3
      Originally posted by RyzenBatch
      Which one works best on the XTX?
      How does the XTX (Navi 31) do against the RTX 4090/4080 in AMD-optimized frameworks?
      This is what I'd like to see.

      EasyDiffusion is NVIDIA-focused.
      Nod.ai SHARK is Vulkan-focused (especially on AMD).
      On Arch Linux there is a "Stable Diffusion Intel" package that uses OpenVINO and targets Arc/Xe GPUs and Intel CPUs, but both the package and its upstream repo are out of date.

      So the challenge, I guess, is to:

      1) Fork each repo and lock package versions.
      1a) Ideally, keep package versions in sync across repos.
      2) Automate testing of various versions of the SD models.
      3) Automate testing of various algorithms, focusing on those with cross-platform compatibility (a rough sketch follows below).

      It's a pretty large undertaking, so if there are folks here willing to work together on a custom test suite to start with, that would be great. Otherwise it's just another idea out in the void for someone to eventually pick up, which is fine too.
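
      For 2) and 3), the automation could start as a simple sweep over checkpoints and over schedulers that exist on every backend. Just a sketch of the PyTorch/diffusers path; the model and scheduler lists are placeholders, not a proposed test suite:

      ```python
      # Sketch for steps 2-3: sweep SD checkpoints and cross-platform schedulers,
      # timing each combination. The lists below are placeholders, not a fixed suite.
      import time
      import torch
      from diffusers import StableDiffusionPipeline, DDIMScheduler, EulerDiscreteScheduler

      MODELS = ["runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-2-1"]
      SCHEDULERS = {"ddim": DDIMScheduler, "euler": EulerDiscreteScheduler}

      for model_id in MODELS:
          pipe = StableDiffusionPipeline.from_pretrained(
              model_id, torch_dtype=torch.float16
          ).to("cuda")
          for name, sched_cls in SCHEDULERS.items():
              # Swap the scheduler in place, reusing the pipeline's existing config.
              pipe.scheduler = sched_cls.from_config(pipe.scheduler.config)
              gen = torch.Generator("cuda").manual_seed(0)
              t0 = time.perf_counter()
              pipe("benchmark prompt", num_inference_steps=50, generator=gen)
              print(f"{model_id} / {name}: {time.perf_counter() - t0:.2f} s")
      ```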
