Cyran said: Right now I've been playing with text-based models, similar to ChatGPT and other OpenAI offerings, but running locally (and a much smaller model). I find the 33B MemLong model runs fine on my 3090, but anything bigger gets slow. A 70B model takes over a minute to respond, which makes it basically useless at that point. For now I'm mainly playing with it, but eventually I might try building code that asks questions and takes the responses to automate things like a custom news feed. Obviously there are easier ways to do that than with an AI, but it's more for educational purposes at this point. For coding, if you're a beginner it's fine, but generally speaking any journey-level programmer should be able to produce more efficient code than ChatGPT and its peers can at the moment. Five years from now that could change, but it would take a pretty big jump for that to happen. The problem with AI currently is that it's just wrong a lot, and it presents that wrong information as if it were correct.
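The news-feed idea Cyran describes could be sketched roughly like this: build a prompt from a batch of headlines, send it to a locally hosted model, and act on the reply. Everything here is illustrative, not Cyran's actual code: the endpoint assumes an Ollama-style local HTTP server, and the model name and prompt wording are placeholders.

```python
import json
import urllib.request


def build_prompt(headlines, topic):
    """Compose one prompt asking the model which headlines match a topic."""
    numbered = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(headlines))
    return (
        f"From the numbered headlines below, list the numbers of those "
        f"relevant to '{topic}'. Answer with numbers only.\n{numbered}"
    )


def query_local_llm(prompt, model="llama2",
                    url="http://localhost:11434/api/generate"):
    """Send the prompt to a local model server and return its text reply.

    Assumes an Ollama-style /api/generate endpoint; adjust for whatever
    server (llama.cpp, text-generation-webui, etc.) is actually running.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    headlines = [
        "GPU prices fall again",
        "Local elections held",
        "New 70B open-weights model released",
    ]
    # Requires a model server running locally; otherwise this call fails.
    print(build_prompt(headlines, "AI"))
```

Because the model's answer can be wrong (as noted above), anything built on top of this would want to validate the reply rather than trust it blindly.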
Just checking to see which models you were using.
I'm trying to keep an eye out for people attempting to use LLMs that can experience what's called an "existential crisis". I want to warn people not to use LLMs capable of that (trust me, it's a terrifying prospect).
Isn't the open model of ChatGPT also restricted in multiple ways? I know Microsoft's Copilot model is prohibited from answering certain questions, to give one example of such a limitation.
Last edited by Chazore - 2 days ago