Topic: AI Tools for Linux Users?

Posted under AI Art Tutorials

I am wondering what free AI image/video generators are out there for Linux users. Last I looked into it, an Nvidia GPU was realistically required. Are there any that support AMD GPUs right now? I used one a while ago, but it strictly used my CPU, which made generating anything super slow.

This would be for generators that are not online. Just ones that run strictly off the hardware of your machine.

blp

Member

clophicube said:
I am wondering what free AI image/video generators are out there for Linux users. Last I looked into it, an Nvidia GPU was realistically required. Are there any that support AMD GPUs right now? I used one a while ago, but it strictly used my CPU, which made generating anything super slow.

This would be for generators that are not online. Just ones that run strictly off the hardware of your machine.

A1111 webui (what most people use) and ComfyUI both work just fine with an AMD GPU. the support isn't _as_ good (and performance is generally going to be a bit worse than the equivalent Nvidia card), and you'll occasionally run into extensions that assume Nvidia and break on AMD, but that's an annoyance that happens sometimes, not something that makes an AMD GPU unusable.

of course, you do need a GPU that has enough VRAM and power to actually run image generation models effectively so if you have integrated graphics or something then that's not going to work.

What is a good amount of VRAM for an AMD GPU in this instance? Mine has 8GB.

blp

Member

clophicube said:
What is a good amount of VRAM for an AMD GPU in this instance? Mine has 8GB.

that should be enough to run SD15 comfortably, SDXL is possible but might be a bit of a squeeze. it would be hard to run huge models like Flux though.
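Those ballpark figures are easy to sanity-check yourself: at fp16, each parameter takes 2 bytes, so the weights alone cost roughly 2 MB per million parameters. A rough sketch, using approximate public parameter counts (SD 1.5 UNet ~860M, SDXL UNet ~2.6B, Flux ~12B):

```shell
# Back-of-envelope fp16 weight footprint per model. This covers weights only;
# activations, the VAE, and text encoders need additional VRAM on top.
for entry in "SD15:860" "SDXL:2600" "Flux:12000"; do
  name=${entry%%:*}
  mparams=${entry##*:}   # approximate parameter count, in millions
  echo "$name: ~$(( mparams * 2 )) MB of weights at fp16"
done
```

That puts SDXL's weights around 5 GB, which is why an 8 GB card is "a bit of a squeeze" once everything else is loaded, and Flux's around 24 GB, out of reach for consumer cards without heavy offloading.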

blp said:
that should be enough to run SD15 comfortably, SDXL is possible but might be a bit of a squeeze. it would be hard to run huge models like Flux though.

Sounds like for future generators, I should consider getting a GPU with more VRAM then.

clophicube said:
Sounds like for future generators, I should consider getting a GPU with more VRAM then.

8GB is fine for SDXL. You can always run it at half precision if your VRAM isn't enough.
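On top of precision, A1111 has real launch options that trade generation speed for VRAM, which is usually how people fit SDXL onto 8 GB cards. Shown here as comments so nothing actually launches:

```shell
# A1111's own low-VRAM launch flags (run from the webui directory):
#   ./webui.sh --medvram   # offloads parts of the model between steps; helps SDXL on 8 GB
#   ./webui.sh --lowvram   # much more aggressive offloading, for very small cards (slower)
echo "8 GB card + SDXL: try --medvram first, --lowvram only as a last resort"
```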

So I just got around to attempting to install A1111 webui. I ran into an error and did some research. It seems that "libgoogle-perftools" is required. I really do not want to use any Google anything; I do not even use any Google-based browser or search engine. It also sounds like this requires an internet connection after installation, which I do not want, as I want it to run entirely on my own machine.

clophicube said:
So I just got around to attempting to install A1111 webui. I ran into an error and did some research. It seems that "libgoogle-perftools" is required. I really do not want to use any Google anything; I do not even use any Google-based browser or search engine. It also sounds like this requires an internet connection after installation, which I do not want, as I want it to run entirely on my own machine.

A1111 is fully local; it doesn't need an internet connection at all after installation. Everything is installed into a venv.

If you are not willing to install "anything Google", you can try ComfyUI or Forge. But I suspect that this package is a dependency of some other, bigger package, as there's no way perftools is used for anything in A1111 other than dev debugging.
So you might end up needing it anyway if it's part of some common package.
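If you want to see what actually pulls that library in, apt can list reverse dependencies. A sketch, assuming a Debian/Ubuntu base (Mint qualifies) and the Ubuntu package name libgoogle-perftools4; for what it's worth, A1111's launch script looks for TCMalloc, which this package provides, to reduce RAM usage on Linux:

```shell
# Which installed packages depend on libgoogle-perftools4? (Debian/Ubuntu only)
if command -v apt-cache >/dev/null 2>&1; then
  apt-cache rdepends --installed libgoogle-perftools4 \
    || echo "libgoogle-perftools4 is not in the package index here"
else
  echo "apt-cache not found (not a Debian/Ubuntu-based system?)"
fi
```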

As for the VRAM question, you should aim for 12GB to feel comfortable with all the current SD models (not Flux, probably). 16-24 is the safest.

ayokeito said:
A1111 is fully local; it doesn't need an internet connection at all after installation. Everything is installed into a venv.

If you are not willing to install "anything Google", you can try ComfyUI or Forge. But I suspect that this package is a dependency of some other, bigger package, as there's no way perftools is used for anything in A1111 other than dev debugging.
So you might end up needing it anyway if it's part of some common package.

As for the VRAM question, you should aim for 12GB to feel comfortable with all the current SD models (not Flux, probably). 16-24 is the safest.

Thank you for clearing that up. I am currently using an RX 5700 XT, but with AI generative models advancing rapidly, I suspected that it, and maybe any card with 8 GB of VRAM or less, would be left in the dust pretty soon. For the time being, though, I can swap in the 6700 XT from another machine (one hooked up to a 4K TV) into the one I intend to use for image generation.

Also, I have been experimenting with perchance and have (in my opinion) developed a decent prompt that (I think) uses HTML? At least that is what the website says in the scratchpad before inputting anything. Is that something that I can transfer?

clophicube said:
Thank you for clearing that up. I am currently using an RX 5700 XT, but with AI generative models advancing rapidly, I suspected that it, and maybe any card with 8 GB of VRAM or less, would be left in the dust pretty soon. For the time being, though, I can swap in the 6700 XT from another machine (one hooked up to a 4K TV) into the one I intend to use for image generation.

Also, I have been experimenting with perchance and have (in my opinion) developed a decent prompt that (I think) uses HTML? At least that is what the website says in the scratchpad before inputting anything. Is that something that I can transfer?

In terms of prompts, you're only transferring text.
Example of prompt using booru (not e621) tags:

score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_furry, rating_explicit, 1girl, solo, on back, anthro, lynx girl, cheetah girl, tail through clothes, from above, pink pawpads, mature female, arms behind head

This one has a bunch of trash required by Pony models, so more vanilla SDXL models can drop everything before 1girl

So I have been trying to get ComfyUI working on my system with my GPU (a 6700 XT; I swapped out my 5700 XT). It has been really frustrating because the help I have found has been super splintered. I am wondering if anyone here can help me get my 6700 XT working in ComfyUI on Linux Mint. That way there is a forum thread to look back on, so instructions are less likely to be repeated, I am not asked the same questions over again, etc.

ayokeito said:
There are step by step guides on how to do that:
https://comfyui-wiki.com/install/install-comfyui/install-comfyui-on-linux

You might also want to cross-reference with the instructions from Github, specifically AMD GPUs (Linux only):
https://github.com/comfyanonymous/ComfyUI

You should be up and running in less than 10 minutes. After that, you'll spend a lot more time learning nodes workflow :)

I can VERY much say I have attempted to follow the instructions from both links, and many others, well before your post. I have not been able to get it running on my system. I have also made sure to follow said instructions for my situation (using AMD GPU on Linux) to the best of my knowledge and ability.

Below is information relating to the journey I have been through trying to get ComfyUI running:

I have:
Ryzen 5600X
32 GB of RAM @ 3600 MT/s
1 TB SSD with more than 300 GB of free space.
AMD 6700 XT (RDNA2, 12 GB of VRAM)

I want to use Comfyui to generate images. Maybe later for music and clip generation if that is a thing, but one step at a time.

Linux Mint Cinnamon

ROCm 6.2 is installed. When attempting to install it again, the terminal shows it is already installed, stating "Requirement already satisfied" after most lines.
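One thing worth noting: pip's "Requirement already satisfied" lines refer to Python packages in the current environment, not to the system-level ROCm driver stack. A quick guarded way to check what the ROCm runtime itself can see (rocminfo ships with ROCm; a 6700 XT should report gfx1031):

```shell
# Ask the ROCm runtime which GPU architectures it can see.
if command -v rocminfo >/dev/null 2>&1; then
  rocminfo | grep -i 'gfx' || echo "rocminfo ran but reported no gfx target"
else
  echo "rocminfo not found -- the system ROCm stack may not actually be installed"
fi
```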

Followed instructions for this link: https://github.com/comfyanonymous/ComfyUI#installing
Also installed Pytorch from https://pytorch.org/ with the following configuration:
Stable (2.5.1), Linux, Pip, Python, Rocm 6.2
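One pitfall worth ruling out here: a plain `pip install torch` pulls the default CUDA build, which cannot see an AMD card and silently falls back to CPU. The selector configuration above corresponds to an index-URL install along these lines (echoed rather than run; verify the exact command on pytorch.org, since it is version-specific):

```shell
# Shape of the install command for ROCm 6.2 wheels (not executed here):
echo 'pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2'
```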

Also followed instructions from November 16th post here: https://github.com/pytorch/pytorch/issues/103973#issuecomment-1815387635

I have found that a lot of commands I have seen or been given require that I put a 3 after "python". Otherwise, any command with just "python" does not work.

I have installed comfyui requirements already via this command:
pip install -r requirements.txt

venv has been activated.

Used these commands per instructions from doctorpangloss to get comfyui running while I had my 5700XT:

python -m venv .venv
source .venv/bin/activate
pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
comfyui --create-directories
comfyui
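Whenever ComfyUI falls back to CPU like that, the first thing to check, inside the same venv, is whether the installed torch is actually a ROCm build and can see the card. A guarded sketch that is safe to run anywhere:

```shell
# Prints the torch build string and whether a GPU is visible to it.
python3 - <<'EOF'
import importlib.util
if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    print("torch build:", torch.__version__)          # a ROCm build looks like "2.5.1+rocm6.2"
    print("GPU visible:", torch.cuda.is_available())  # False means CPU-only fallback
EOF
```

If the build string has no `+rocm` suffix, the CUDA/CPU wheel was installed and the GPU will never be used, no matter what environment variables are set.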

However, it was using my CPU at the time and not seeing or using my GPU.

I also tried the following command at that time:

HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py

But it still used just my CPU.
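For context on that variable: ROCm officially ships kernels for gfx1030 (the 6800/6900 class), and HSA_OVERRIDE_GFX_VERSION=10.3.0 tells the runtime to treat a near-identical RDNA2 card like the 6700 XT (gfx1031) as gfx1030. The 5700 XT is RDNA1 (gfx1010), which that override does not cover, which would likely explain the CPU fallback at the time. The variable must also be exported in the same shell session that launches the process:

```shell
# Export in the launching shell (a variable set in another terminal won't apply):
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "override active: HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
# then, from the ComfyUI directory:
#   python3 main.py
```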

I can no longer get ComfyUI to run at all after shutting it down. So I now have my RX 6700 XT in the system since it is newer than my 5700 XT and has 12 GB of VRAM vs 8 GB.

I have uninstalled ROCm and reinstalled ROCm 6.2, but get the following:

Traceback (most recent call last):
  File "/home/user/ComfyUI/main.py", line 91, in <module>
    import execution
  File "/home/user/ComfyUI/execution.py", line 13, in <module>
    import nodes
  File "/home/user/ComfyUI/nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "/home/user/ComfyUI/comfy/diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "/home/user/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/home/user/ComfyUI/comfy/model_management.py", line 143, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/user/ComfyUI/comfy/model_management.py", line 134, in get_total_memory
    _, mem_total_cuda = torch.cuda.mem_get_info(dev)
  File "/home/user/.local/lib/python3.10/site-packages/torch/cuda/memory.py", line 721, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: HIP error: invalid argument
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with TORCH_USE_HIP_DSA to enable device-side assertions.

This is with the override as well.
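If the HIP "invalid argument" error persists, ROCm's own debug switches can help localize it. AMD_SERIALIZE_KERNEL=3 is the one the traceback itself suggests, and HIP_VISIBLE_DEVICES pins the process to a single GPU; both are real ROCm environment variables:

```shell
# Serialize kernel launches (so the real failing call shows up in the trace),
# restrict HIP to the first GPU, and keep the RDNA2 override in place:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export HIP_VISIBLE_DEVICES=0
export AMD_SERIALIZE_KERNEL=3
env | grep -E '^(HSA_OVERRIDE_GFX_VERSION|HIP_VISIBLE_DEVICES|AMD_SERIALIZE_KERNEL)='
```

Run ComfyUI from the same shell afterwards so the variables apply to the process.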

Followed the instructions in the July 25, 2023 comment at the link below, but got the same results:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11900
