FAQs

Learn about Sogni with our most frequently asked questions
What are the minimum system requirements?
macOS Ventura 13.3: Mac with M1 chip, or Intel Core i9 with at least 16GB of memory.
(macOS Sonoma 14.0, Apple silicon, and at least 16GB of RAM are recommended)

iOS 17: iPhone 13 Pro, or iPad with Apple silicon.

Note: Older devices may be unable to compile the Core ML models or complete the image generation process due to memory limitations.
Does the purchase of Sogni on one platform (iOS or macOS) include the download of both versions?
Yes. If you purchased Sogni on your Mac, you can install it for free on your iOS devices, and vice versa. The App Store may prompt you to approve a second purchase, but it will confirm that the download is free as soon as you accept.
Are there any extra costs to generate images in Sogni?
No, image generation is unlimited and free after downloading Sogni, with no waiting queues or generation credits required.
Is Sogni completely private?
Yes, Sogni runs locally on your device, and it doesn't even require an internet connection. Your prompts, images, and settings are never shared, and they are stored exclusively on your iOS device or Mac.
Which Stable Diffusion models are currently available in Sogni?
🅂 Sogni.XL 𝛂2

🅂 Sogni Artist v1

🅂 Sogni Photo v1

SDXL v1.0

SDXL v1.0 with Refiner

Playground v2

Stable Diffusion v2.1, v2.0, v1.5, v1.4

Stable Diffusion small v0

Anything XL

AlbedoBase XL v1.2

A-Zovya RPG Artist Tools v3 & v4

AnimeLineart LoRA + AnythingV5Ink

Architecture Ra-render v3

Blue Pencil XL v0.8

ChilloutMix

Crystal Clear XL

ZavyChromaXL v4

CyberRealistic Classic v3.1

CyberRealistic v3.2

DiPix Cartoon v1

DreamShaper XL 𝛂.2

DreamShaper XL v1.0

DreamShaper v8.0

DynaVisionXL v0.5.5.7

EdgeOfRealism v1, v2

EpiCRealism pure Evolution v3, v5

FaeTastic XL v2

FairyTaleAI LoRA + ReVAnimated

GhostMix v2

InsanelyXL v1

Juggernaut XL v6 + RunDiffusion

KidsIllustration LoRA + Lyriel

kLabyrinth LoRA + SD v1.5

LCM - 🅂 Sogni v1

LCM - Blue Pencil XL v1

LCM - Comic Craft

LCM - CyberRealistic v3.3, v4.2

LCM - DreamCrafter v1

LCM - DreamShaper v7, v8

LCM - MagicMix + Realistic Vision

LCM - SoteMix

LCM - SD v1.5

SDXL-Lightning - JuggernautXL 9 + RD Photo2

SDXL-Turbo - 🅂 Sogni.XLT 𝛂1

SDXL-Turbo v1

SDXL-Turbo - DreamShaper v2.1

SDXL-Turbo - Pixel Art

SDXL-Turbo - PLUS-RED TEAM

SDXL-Turbo - Realities Edge XL

SDXL-Turbo - Tertium v1

SDXL-Turbo - TurboVisionXL 𝛂 & v4.3.1

SDXL-Turbo - Unstable Diffusers ☛ YamerMIX v10

SDXL-Turbo - Yamer's Perfect Design v5.SpecialGift

LEOSAM's HelloWorld XL v5 GPT4V

LoveXL v2

Mo Di Diffusion v1

NightVision XL v0.7.7

Omnium XL v1.1

Openjourney v4.0

ProtoVision XL - High Fidelity 3D

Realistic Vision v5.1

Segmind SSD-1B

SSD-1B Zach's Emoji v1.0

Starlight XL Animated v3

Sweet Mix v2.1, v2.2

ThinkDiffusion XL

Unstable Diffusers XL v9

Yamer's Realistic XL5 + RunDiffusion

YesMix v2, v4
Can I import custom Stable Diffusion models?
Yes. You can import any Stable Diffusion model that has already been converted to Core ML, whether downloaded from the internet or trained by you, by going to File > Import S.D. Core ML model.
How to Convert Stable Diffusion Model SafeTensors or CKPT Files to Core ML
Sogni uses Apple's Core ML to run Stable Diffusion models efficiently. To obtain the required bundle of compiled .mlmodelc and related files, the original Stable Diffusion model (CKPT or SafeTensors) must first be converted to the Diffusers format. The Diffusers format is then converted to .mlpackage files, from which the final compiled bundle is created.
Open Terminal and run the following commands:
Requirements
1. Install Homebrew and follow the "Next steps" instructions in Terminal
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
2. Install Wget
brew install wget
3. Install Xcode
4. Select Xcode as the active Command Line Tools provider.
sudo xcode-select -s /Applications/Xcode.app
5. Install Miniconda
6. Clone Apple's ml-stable-diffusion repository to your Mac:
git clone https://github.com/apple/ml-stable-diffusion.git
7. Start a new Python environment and install other requirements:
conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
cd ml-stable-diffusion
pip install -e .
pip install omegaconf
pip install safetensors
8. Create a folder called "models" on your Desktop. Place the model's .safetensors or .ckpt file and this Python script inside the new folder.
9. Run the following commands to ensure you have the correct versions for these items:
pip install torch==2.1.0
pip install scikit-learn==1.1.2
pip install diffusers==0.23.1
pip install huggingface-hub==0.19.4
pip install transformers==4.34.1
10. Your system should now be ready to convert Stable Diffusion models!
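Before moving on, an optional sanity check can confirm that the tools used in the steps above are on your PATH. This is a sketch, not part of the official instructions:

```shell
# Optional sanity check (a sketch): report whether each tool used in this
# guide is available on PATH before starting a conversion.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

for tool in brew wget git conda xcrun; do
  check_tool "$tool"
done
```

If any line prints "missing", revisit the corresponding requirement step before continuing.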
Conversion: SafeTensors / CKPT → Diffusers
1. Activate the Conda environment.
conda activate coreml_stable_diffusion
2. Navigate to the "models" folder on your Desktop.
cd ~/Desktop/models
3. Run the conversion script. Replace <MODEL-NAME> with your model's file name.
SafeTensors:
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.safetensors --from_safetensors --device cpu --extract_ema --dump_path <MODEL-NAME>_diffusers
CKPT:
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.ckpt --device cpu --extract_ema --dump_path <MODEL-NAME>_diffusers
For SDXL models, add the following flag to the conversion command you are using:
--pipeline_class_name StableDiffusionXLPipeline
(This process takes about 1 minute.)
Conversion: Diffusers → Core ML
Run the conversion script. Replace <MODEL-NAME> with your model's file name.
1. ORIGINAL attention-implementation (compatible with CPU and GPU only)
python -m python_coreml_stable_diffusion.torch2coreml --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original && python -m python_coreml_stable_diffusion.torch2coreml --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original
2. SPLIT_EINSUM attention-implementation (compatible with all processing units)
python -m python_coreml_stable_diffusion.torch2coreml --compute-unit ALL --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation SPLIT_EINSUM -o split-einsum/<MODEL-NAME>_split-einsum && python -m python_coreml_stable_diffusion.torch2coreml --compute-unit ALL --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation SPLIT_EINSUM -o split-einsum/<MODEL-NAME>_split-einsum
For SDXL models:
1. ORIGINAL attention-implementation (compatible with CPU and GPU only)
python -m python_coreml_stable_diffusion.torch2coreml --xl-version --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original
Weight Compression:
--quantize-nbits 8
or
--quantize-nbits 6
Example:
python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit
Custom Size:
Modifying the output image size of the model is only possible with the ORIGINAL implementation. To do so, use the following flags:
--latent-w <SIZE> --latent-h <SIZE>
Note: The size must be a multiple of 64, and you specify it divided by 8 (e.g., for 768x768 use --latent-w 96 --latent-h 96).

Example:
python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --latent-w 96 --latent-h 96 --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit_768x768 && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --latent-w 96 --latent-h 96 --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit_768x768
(This process can take 30+ minutes depending on model size and quantization settings.)
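The size arithmetic from the Custom Size note above is just the divide-by-8 rule, which you can check directly in Terminal:

```shell
# Compute the --latent-w / --latent-h value for a desired output size in
# pixels, per the rule above: size must be a multiple of 64, latent = size / 8.
latent_for() {
  if [ $(( $1 % 64 )) -ne 0 ]; then
    echo "error: $1 is not a multiple of 64" >&2
    return 1
  fi
  echo $(( $1 / 8 ))
}

latent_for 768    # -> 96  (for a 768x768 model)
latent_for 1024   # -> 128 (for a 1024x1024 model)
```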
Import CoreML model into Sogni
1. The compiled model folder will be created with the name "Resources" inside the "models" folder on your Desktop, under the original/<MODEL-NAME> or split-einsum/<MODEL-NAME> folder. Give the "Resources" folder a more descriptive name.

2. You can discard the .mlpackage files from the original/<MODEL-NAME> or split-einsum/<MODEL-NAME> folder.

3. You can also discard the <MODEL-NAME>_diffusers folder if you are not going to perform other conversions from this model.

4. Open Sogni, and go to File > Import Stable Diffusion (CoreML) model. Select the folder with the compiled resources to import the model.

5. Select your new model from the Stable Diffusion models list in the "Advanced" tab of the Controls-Bar.
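Steps 1-3 above are plain file operations. Here is a minimal Terminal sketch using a hypothetical model named "MyModel" converted with the ORIGINAL implementation; a throwaway folder stands in for ~/Desktop/models so the commands are safe to try end to end:

```shell
# Sketch of the post-conversion cleanup (hypothetical model name "MyModel").
# A temporary folder stands in for ~/Desktop/models; in practice, cd there instead.
models=$(mktemp -d)
mkdir -p "$models/original/MyModel_original/Resources" "$models/MyModel_diffusers"
touch "$models/original/MyModel_original/Unet.mlpackage"

# 1. Give the compiled "Resources" bundle a more descriptive name
mv "$models/original/MyModel_original/Resources" "$models/original/MyModel_original/MyModel_CoreML"

# 2. Discard the intermediate .mlpackage files
rm -rf "$models/original/MyModel_original/"*.mlpackage

# 3. Discard the Diffusers folder if no further conversions are planned
rm -rf "$models/MyModel_diffusers"

ls "$models/original/MyModel_original"
```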
Troubleshooting
Miniconda:

This package is incompatible with this version of macOS: after the "Software License Agreement" step, click "Change Install Location..." and select "Install for me only".
Terminal:

xcrun: error: unable to find utility "coremlcompiler", not a developer tool or in PATH: open Xcode, go to "Settings..." → "Locations", click the "Command Line Tools" drop-down menu, and reselect the Command Line Tools version.

ModuleNotFoundError: No module named 'xyz': with the conda coreml_stable_diffusion environment active, run:
pip install xyz

zsh: killed python: your Mac has run out of memory. Close other memory-hungry applications and retry.
What's the difference between the original models and their 8-bit / 6-bit / 4.5-bit quantized counterparts?
The 6-bit and 4.5-bit model versions are roughly three times lighter than the originals, saving significant memory. They can run on devices with limited RAM, including the iPhone 12-13, iPads with the A15 chip, and older Macs that may struggle with the original models. They require macOS 14 or iOS 17.
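As a rough illustration, weight size scales with bits per weight. The parameter count below is an assumption for illustration only, not Sogni's actual figure:

```shell
# Back-of-envelope weight sizes at different bit widths, assuming 16-bit
# original weights and ~2.6B parameters (an illustrative figure).
params=2600000000

echo "original (16-bit): $(( params * 16 / 8 / 1000000 )) MB"   # 5200 MB
echo "8-bit:             $(( params * 8  / 8 / 1000000 )) MB"   # 2600 MB
echo "6-bit:             $(( params * 6  / 8 / 1000000 )) MB"   # 1950 MB
echo "4.5-bit (9/2):     $(( params * 9  / 16 / 1000000 )) MB"  # 1462 MB
```

Under these assumptions, a 6-bit model is about 2.7x smaller than the 16-bit original and a 4.5-bit model about 3.6x smaller, which matches the "roughly three times lighter" figure above.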
I am not getting the image described in my prompts. Any tips?
  • Select a different Stable Diffusion Model. Sogni offers a wide selection of models, each trained differently and therefore with different capabilities in photorealism, art, illustration, etc. The SD model selection list is located inside the "Advanced" tab at the bottom of the Controls-Bar. The help section for the SD model selection list includes a short description of each model.
  • Check your prompt structure. Each setting in the app has a question mark button that provides Help content. The help section for the Prompt has good tips on how to construct effective prompts.
  • Adjust the number of Steps and Guidance Scale.
  • Use a Guide Image or ControlNet to give the model a reference.
  • Review the Safety filter setting in the Advanced tab.
How can I generate more realistic faces?
  • Add keywords related to the face you want in the image. Just mentioning the style of eyes or type of facial expression can already help. Also, adding keywords like "perfect face" or "highly detailed face" usually makes the model put in more effort.
  • Use a lower Guidance Scale setting when looking for photorealistic results. A lower Guidance setting around 5-6 can help maintain softer edges and achieve more realistic lighting, film, and atmospheric effects.
  • For achieving photorealistic faces, consider experimenting with the Film and Photography style presets to fine-tune the visual nuances.
  • Turn OFF automatic scaling in the Advanced tab. The upscaler can sometimes lead to excessive smoothing, particularly on faces and skin areas, resulting in an unnatural appearance.
What can I do to prevent image degradation when generating animations using a Guide Image?
  • Use a different seed for the animation than the one used to generate the Guide Image.
  • Consider increasing the number of steps and adjusting the Guidance Scale value.
  • Modify the Scheduler and Time Spacing settings.
Which ControlNet models are currently available in Sogni?
Preprocessors:
  • Vision - Face Capture
  • Vision - Pose Capture
  • Vision - Face & Pose Capture
  • Sketch
  • Depth Map (Midas)
  • Segmentation (IS-Net & U2Net)
  • Subject Mask
  • Background Mask
  • Invert
CN Models:
  • Canny
  • Depth
  • InPaint
  • Instruct Pixel2Pixel
  • LineArt
  • LineArt Anime
  • M-LSD
  • Normal Bae
  • OpenPose
  • Scribble
  • Segmentation
  • Shuffle
What size are the images and videos generated by Sogni?
Unless you turn OFF upscaling, Sogni will produce sharp images and videos at 2048 x 2048 pixels!
What does the "Safe Content Filter" do?
The filter is intended to detect images that may contain content disturbing to the viewer, such as graphic violence, nudity, sexual or vulgar material, and NSFW content in general. If such an image is detected, it will not be displayed, and you will receive an error message instead. You must be at least 18 years old to turn OFF the filter.
How can I save my generated images to disk?
On iOS, tap the share button located under your generated image and choose "Save Image" to save it to your Photos library, or "Save to Files" to save it in the Files app.

On macOS, there are various ways to save your images:
a. Click "Save..." from the File menu.
b. Click the save button located on the top right corner of the main window.
c. Drag and drop the generated image from the main window to the desired location.
d. Drag and drop any image from the gallery to the desired location.
The app quits or returns an error message before completing the model installation or image generation process. Is there anything I can do to solve this?
First, make sure your device meets the minimum system requirements.

Second, quit other memory-hungry apps before launching Sogni. On iOS, quitting apps like WhatsApp, Instagram, and other social media or video applications that might be running in the background is a good start.

Third, avoid switching between apps while the generation is in process.

Also, keeping your device cool will help, especially older iOS devices like iPhone 12-13.
My device is heating up after generating a few images. What should I do?
Sogni places a heavy demand on your device's CPU and GPU because it runs the AI model locally, without depending on any external resources: everything happens on your device. The generation process requires trillions of computations in a very short time, which can cause your device to get hotter than usual.

Update to iOS 17 and the latest Sogni version, and switch to one of the available LCM models, which require just 1-4 steps to generate an image. Give your device a break until the heat alert disappears.

On macOS, if your Mac has an M1/M2 chip, you can also switch between the Processing options in the "Advanced" settings group, located on the bottom left corner. For example, using GPU-only processing may lead to more heating than if you select "GPU and Neural Engine." Note: These options are not available on Intel Macs.

Need help? Ask in our Discord!

For any questions regarding the use of Sogni, tips, suggestions, bugs, sharing art, discussing AI and life, or simply hanging out, please feel free to join our Discord today!