that runs 100% offline on your computer.
Goddamn, that’s wonderful!
Does this differ from Ollama + Open WebUI in any way?
Depends. Are either of those companies bootstrapping a for-profit startup and trying to dupe people into contributing free labor prior to their inevitable rug pull/switcheroo?
Do explain how you dupe people into contributing free labor and do a switcheroo with an open source project. All the app does is just provide a nice UI for running models.
Ok I tried it out and as of now Jan has a better UI/UX imo (easier to install and use), but Open WebUI seems to have more features like document/image processing.
“100% Open Source“
[links to two proprietary services]
Why are so many projects like this?
I imagine it’s because a lot of people don’t have the hardware that can run models locally. I do wish they didn’t bake those in though.
Other offline tools I’ve found:
GPT4All
RWKV-Runner
I’ve been using Jan for a while now. It’s great!
Would you say it’s noob-friendly?
Yes
Very. You just need a good enough internet connection and hardware to download and run models, which range from about 4 to 41 GB. Note that interrupted in-app downloads must start over; alternatively, find the model's source, use wget, and download it to the correct folder yourself.
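Since in-app downloads can't resume, a sketch of the wget route mentioned above — `wget -c` continues a partial file instead of restarting. The URL and destination folder here are hypothetical placeholders, not Jan's actual paths:

```shell
# Resume an interrupted model download with wget.
# The URL and destination directory are placeholders; point them at the
# real model source and your app's model folder.
resume_model_download() {
  url="$1"
  dest="$2"
  mkdir -p "$dest"
  # -c (--continue) picks up from where a partial download left off
  wget -c "$url" -P "$dest"
}

# Example (placeholder URL):
# resume_model_download "https://example.com/some-model.gguf" "$HOME/jan/models/some-model"
```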
Is there a model you prefer? I’ve been throwing the exact same question to different models and they seem to all give a very similar answer.
Also, how is it getting certain information if it’s all offline? For example, I asked it to recommend some bike products, and gave very specific brands and models.
Trinity stood out the most to me; it seems to have less unnecessary fluff.
Train it online. Use it offline.
That’s crazy impressive, though. I’ve been playing with it more, and it’s very specific about certain things. I guess you can hold a lot of data in the GB of space these models use.
Agree, no small feat. Two caveats tho:
- These models prioritize plausibility above factual correctness, so verification is often needed.
- Data from after the creation of the training material is absent, of course.
These models prioritize plausibility above factual correctness, so verification is often needed.
100%. I was telling my wife that anyone who knows a subject can easily point out the inaccuracies in the output from any of these models.
But if you don’t know about a subject, the AI gives you an answer that seems like it could be right. Scary to see where this technology takes us, especially when the majority easily digests information without verifying any of it.
I use Stealth or Starling, usually.
Is it better than GPT4All? Do they provide their own model(s), or do we have to download them from other sources?
They provide a hub of models. In my case it was better than GPT4All because it didn’t crash, and I also think it has a nicer user interface.
I’m in the process of installing https://github.com/imartinez/privateGPT will check this one out afterwards.
The biggest difference seems to be that you can let privateGPT analyze your own files. I didn’t see that functionality in Jan.
One difference is that Jan is incredibly easy to install: just download the AppImage, make it executable, and start it.
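For anyone new to AppImages, the steps above look like this. The filename is a hypothetical example; substitute whatever you actually downloaded from Jan's releases page:

```shell
# Hypothetical filename — use the AppImage you downloaded from the
# project's releases page.
APP="Jan-linux-x86_64.AppImage"

if [ -f "$APP" ]; then
  chmod +x "$APP"   # mark the file executable
  "./$APP"          # launch the app
fi
```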
@jeena And absolutely nothing can go wrong by downloading random files from the internet based on contemporary hype, making them executable and starting them…
As opposed to cloning a random repository and running
make
or something?

@xigoi Is that something you do?
How else would you install something that doesn’t happen to be in your favorite package manager?
@xigoi Are you actually trying to get malware into your computer? Don’t install **random** shiny new things without maximum skepticism. Period. Just let some other fools “test” the minefield for you. Or do a proper inspection. Executing foreign code just because it had “GPT” in the name… and acting like there was no other option… yuck!
Anyone interested in a local llm should check out Llamafile from Mozilla.
This looks very cool, especially the part about being able to use it on consumer-grade laptops. Will try it out when I get a chance.
What are the hardware requirements?
Depends on the size of the model you want to run. Generally, having a decent GPU and at least 16 gigs of RAM is helpful.