Dr. Nathan Parker posted February 3:
This is neat. I'm trying it out. You can run an AI chat app locally on your device so your chat history isn't shared with a server: https://www.pcworld.com/article/2217387/heres-the-simplest-way-to-run-a-private-ai-chatbot-on-your-pc.html
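For anyone who would rather script this than click through a GUI: the app in that article (GPT4All, per the reply below) also ships Python bindings. A minimal sketch, assuming `pip install gpt4all`; the model filename is just an example, and the library downloads it on first use:

```python
from gpt4all import GPT4All

# Example model name; the gpt4all library fetches it on first use (several GB).
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# chat_session() keeps multi-turn context, all of it on your own machine.
with model.chat_session():
    print(model.generate("Why keep chat history off a remote server?", max_tokens=200))
```

Everything here runs locally, which is the whole point: no prompt or response leaves your device.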
jlm posted February 9:
I'm using LM Studio, which is similar. They use the same .gguf model files, and both may be using llama.cpp under the hood to run them. I haven't tried GPT4All, but one reviewer found it less intuitive. Note that GPT4All is not running OpenAI's GPT-4; it runs smaller models that are less capable but often good enough. Running a state-of-the-art model is beyond most home computers: it would take something on the order of 64 GB of RAM just to load the model, plus lots of GPU power to run it at a reasonable speed. But in this fast-changing field, hardware requirements are improving rapidly.
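If both apps really are llama.cpp front ends, you can load the same .gguf files directly from Python with the llama-cpp-python bindings. A rough sketch, assuming `pip install llama-cpp-python` and a quantized model already on disk (the path below is a placeholder):

```python
from llama_cpp import Llama

# Placeholder path: point this at any .gguf file LM Studio or GPT4All has
# already downloaded. Quantized variants (e.g. Q4) fit in far less RAM than
# the ~64 GB a full-precision state-of-the-art model would need.
llm = Llama(model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: What is a quantized model? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```

This is also a handy way to see why the apps feel interchangeable: the heavy lifting is the same library either way.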
Dr. Nathan Parker (author) posted February 9:
True. Thanks for the info! I may check out LM Studio.
Brian K. Mitchell posted March 30:
I'm late to this topic, but I wanted to mention that you can also run AI models from the macOS Terminal with Ollama: https://ollama.com
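In the Terminal that's typically just `ollama pull <model>` followed by `ollama run <model>`. Ollama also serves a local HTTP API on port 11434, so you can script it too. A minimal sketch using only the Python standard library; the model name is an example and must already be pulled:

```python
import json
import urllib.request

# Assumes the Ollama server is running locally (it listens on port 11434 by
# default) and the example model was fetched first, e.g. `ollama pull llama3.2`.
payload = json.dumps({
    "model": "llama3.2",   # example model name; use whatever you've pulled
    "prompt": "In one sentence, why run an LLM locally?",
    "stream": False,       # return a single JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Like the other tools, everything stays on localhost; nothing is sent to an outside server.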
Dr. Nathan Parker (author) posted March 30:
Nice!