Instructions for using superdiff/superdiff-sd-v1-4 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use superdiff/superdiff-sd-v1-4 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "superdiff/superdiff-sd-v1-4",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
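The snippet above hardcodes `"cuda"`, with a comment suggesting `"mps"` for Apple devices. A small device-selection helper (a sketch, not part of the model card — the variable name is ours) makes the choice automatic:

```python
import torch

# Pick the best available device: CUDA GPU, Apple Silicon (mps), else CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed as `device_map=device` in the `from_pretrained` call above.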
Update model_index.json

model_index.json — CHANGED (+12 −9)

Per the `@@ -1,12 +1,15 @@` hunk header, the commit keeps the first three lines of the file, removes the nine previous entries (old lines 4–12, whose content is not recoverable from the page), and adds the twelve component and default-parameter entries below. The updated file:

```json
{
  "_class_name": "SuperDiffPipeline",
  "_diffusers_version": "0.31.0",
  "batch_size": null,
  "device": "cuda",
  "guidance_scale": null,
  "lift": null,
  "num_inference_steps": null,
  "scheduler": "EulerDiscreteScheduler",
  "seed": null,
  "text_encoder": "CLIPTextModel",
  "tokenizer": "CLIPTokenizer",
  "unet": "UNet2DConditionModel",
  "vae": "AutoencoderKL"
}
```
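In Diffusers, `model_index.json` is the pipeline's top-level config: underscore-prefixed keys are metadata, and the remaining keys name the pipeline's components or scalar defaults. As a quick sanity check, the updated index parses with plain `json` (the file contents are inlined here to keep the example self-contained; normally you would read the file from the downloaded repo snapshot):

```python
import json

# Contents of the updated model_index.json, inlined for a
# self-contained example.
MODEL_INDEX = """
{
  "_class_name": "SuperDiffPipeline",
  "_diffusers_version": "0.31.0",
  "batch_size": null,
  "device": "cuda",
  "guidance_scale": null,
  "lift": null,
  "num_inference_steps": null,
  "scheduler": "EulerDiscreteScheduler",
  "seed": null,
  "text_encoder": "CLIPTextModel",
  "tokenizer": "CLIPTokenizer",
  "unet": "UNet2DConditionModel",
  "vae": "AutoencoderKL"
}
"""

index = json.loads(MODEL_INDEX)

# Keys starting with "_" are metadata; the rest name the pipeline's
# components (or scalar defaults such as seed / guidance_scale).
components = {k: v for k, v in index.items() if not k.startswith("_")}

print(index["_class_name"])     # SuperDiffPipeline
print(components["scheduler"])  # EulerDiscreteScheduler
```

This mirrors the commit: the custom `SuperDiffPipeline` class now declares the standard Stable Diffusion v1.4 components (`unet`, `vae`, `text_encoder`, `tokenizer`, `scheduler`) plus nullable runtime defaults.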