Hi @<1628565287957696512:profile|AloofBat92>
Yeah, the name is confusing, we should probably change that. The idea is that it's a low-code / high-code way to train your own LLM and deploy it. It's not really a 1:1 ChatGPT comparison, more like GenAI for enterprises. Does that make sense?
Thank you Martin for your prompt response. Meaning we can deploy any GPT models like Llama 2, Falcon, ChatGPT, right? I wanted to make sure the platform supports all the LLM models. Please advise
That is correct. Unfortunately, though, this is not part of the open source offering, which means that with the open source version it might be a bit more hands-on to deploy an LLM model
Totally understandable, but the ChatGPT API is not open source either, and I wanted to understand a few things.
How do we support the claim that there is data leakage when using the ChatGPT model, and how are we resolving it here with a different architecture?
Even if the architecture is different, it is still a ChatGPT interface, correct? Please advise
"it is still a ChatGPT interface, correct?"
Actually, no. And we will change the wording on the website so it is more intuitive to understand.
The idea is that you actually train your own model (not ChatGPT/OpenAI) and use that model internally, which means everything is done inside your organisation, from data through training to deployment. Does that make sense?
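To make the "everything stays inside your organisation" part concrete, here is a minimal sketch of running an open model locally. It uses Hugging Face transformers directly and assumes Llama 2 weights you already have access to; it is not our platform's API, just an illustration that the prompt and the model never leave your own infrastructure:
```python
# Minimal sketch (not the platform's API): load an open LLM locally with
# Hugging Face transformers, so prompts and data stay on your own machines.
# The model name is an assumption; any open model you can host works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Summarise our internal onboarding policy in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The same idea extends to fine-tuning: because the weights are hosted internally, the training data never has to be sent to an external API.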