Instructions here: https://github.com/ghobs91/Self-GPT

If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).

  • Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
  • Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
  • Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you’re not locked into one provider (see the quick example below).
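
For example, once Ollama is installed, pulling and switching between models takes a couple of commands. A minimal sketch (the model names are just the examples from this thread):

```sh
# Download two models of different sizes
ollama pull llama3.2
ollama pull qwen2.5:14b

# Chat with either one from the terminal; Open WebUI also lists
# every model Ollama has pulled, so you can switch in the UI too
ollama run llama3.2
```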
  • Tobberone@lemm.ee · 1 month ago

    Thanks! I actually read up on the concept of the context window, and from there how to create a Modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
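
    (For reference, a Modelfile that raises the context window can be as small as this; the base model and the num_ctx value here are just illustrative:)

    ```sh
    # Create a Modelfile that raises the context window, then build
    # and run a tagged variant of the base model
    printf 'FROM llama3.2\nPARAMETER num_ctx 8192\n' > Modelfile

    ollama create llama3.2-bigctx -f Modelfile
    ollama run llama3.2-bigctx
    ```

    Once created, the new tag shows up alongside the other models in Open WebUI’s model picker.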

    • Zos_Kia@lemmynsfw.com · 1 month ago

      If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
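
      (One way to test that yourself: Ollama’s HTTP API accepts a per-request context size, so you can compare the same model with different num_ctx values without rebuilding anything. The model name and value below are just examples:)

      ```sh
      # Ask the same question with a larger context window and compare
      curl http://localhost:11434/api/generate -d '{
        "model": "llama3.2",
        "prompt": "Summarize this thread in one paragraph.",
        "options": { "num_ctx": 8192 }
      }'
      ```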