Comparing LLM results on the same prompt across 3 models - Oooooollama

In my journey to learn how to leverage LLMs for the benefit of my wife's Etsy shop, I decided to run some models locally via Ollama. The advantage of Ollama is that you can run open-source models like Gemma3, Mistral, or even some of the Phi series locally (models which in a few months will most probably be obsolete 😖). Moreover, you can choose to run the full model or a smaller flavor of it (in terms of parameter count). This post covers testing the same prompt across 3 different models as a preliminary POC for the final project; a minimal sketch of how that comparison can be driven follows the hardware notes below.

AI generated image

Hardware, the bane of local LLM serving

We have all heard or read about the hardware requirements of running a language model of any size locally: it needs GPUs and solid supporting hardware overall. Unfortunately I have none of that at home. My private development playground is old (because I like recycling older machines, but that's another story).

Hardware setup:
CPU: Intel i7, 4th gen
GPU: (1) Nvidia GTX 980
Storage: 250GB SSD
RAM...
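Here is a minimal sketch of how that side-by-side comparison could be driven against a local Ollama server through its REST endpoint at http://localhost:11434/api/generate. The model tags and the prompt below are placeholder assumptions, not the exact ones used for the POC; substitute whatever `ollama list` reports on your machine.

```python
# Sketch: send the same prompt to three locally served Ollama models and print each reply.
# Assumes the Ollama server is running on its default port (11434) and that the
# model tags listed below have already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODELS = ["gemma3", "mistral", "phi3"]  # placeholder tags - use the ones you actually pulled
PROMPT = "Write a short, warm product description for a handmade ceramic mug."  # placeholder prompt

for model in MODELS:
    payload = json.dumps({"model": model, "prompt": PROMPT, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    print(f"=== {model} ===")
    print(body["response"].strip())
```

With `"stream": False` the server returns a single JSON object per request, which keeps the comparison script trivial; streaming would need line-by-line handling instead.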