Gptel works better for me than ellama. Ollama is fine for small models, but llama.cpp is the stronger backend: it can split a model between CPU and GPU, so you can run much bigger quantized models, including large MoEs, on a 16 GB GPU.
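As a minimal sketch of that CPU/GPU split, here is a llama.cpp server launch with partial offload. The model path and layer count are placeholders I made up, not from the post; the idea is to raise `--n-gpu-layers` until the offloaded layers just fit in the 16 GB of VRAM while the remaining layers run on CPU.

```shell
# Hypothetical quantized MoE GGUF; path and -ngl value are placeholders.
# --n-gpu-layers controls how many transformer layers go to VRAM; the
# rest stay on CPU, which is what lets a large model run on a 16 GB GPU.
llama-server \
  -m ~/models/big-moe-q4_k_m.gguf \
  --n-gpu-layers 28 \
  --ctx-size 8192 \
  --threads 8 \
  --port 8080
```

Gptel can then talk to this server through its OpenAI-compatible endpoint on localhost:8080.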
I spent a few minutes messing with image-dired today, trying to get my training images and their caption files to sync and scroll together, but didn't get very far before falling back to just tiling windows... such a noob.