I used it and was not impressed… I found WizardLM to be far superior.
Also, I agree with @wagesj45 up there about training other models… but how would they detect that you’re training other models with it? I think one of the best things you can do with a large model is to train a small specialist model.
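To make that last point concrete, here is a minimal sketch of the "train a small specialist from a big model" idea: generate synthetic answers with a large teacher model and fine-tune a small student on them. The model names, prompts, and hyperparameters below are placeholders of my own choosing, not anyone's actual recipe:

```python
# Sketch: distill a large "teacher" into a small specialist by generating
# synthetic training data and fine-tuning the student on it.
# TEACHER/STUDENT names and prompts are placeholders, not a real recipe.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, pipeline)

TEACHER = "meta-llama/Llama-2-13b-chat-hf"  # placeholder: any large instruct model
STUDENT = "gpt2"                            # placeholder: any small base model

# 1. Have the large model answer prompts for a narrow specialist task.
teacher = pipeline("text-generation", model=TEACHER,
                   device_map="auto", torch_dtype=torch.float16)
prompts = ["Explain what a SQL JOIN does.",
           "Write a regex that matches an email address."]
synthetic = [teacher(p, max_new_tokens=128)[0]["generated_text"] for p in prompts]

# 2. Fine-tune the small model on the teacher's outputs.
tok = AutoTokenizer.from_pretrained(STUDENT)
tok.pad_token = tok.eos_token
student = AutoModelForCausalLM.from_pretrained(STUDENT)

def encode(batch):
    enc = tok(batch["text"], truncation=True,
              padding="max_length", max_length=256)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: predict the same tokens
    return enc

ds = Dataset.from_dict({"text": synthetic}).map(encode, batched=True)

trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="student-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
)
trainer.train()
```

In practice you would want thousands of teacher-generated examples and a student bigger than GPT-2, but the shape of the pipeline is the same.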
Well, WizardLM is an instruction-evolution method, not a particular underlying foundation model such as LLaMA 1 or LLaMA 2. And the WizardLM team themselves have clearly hopped onto the LLaMA 2 bandwagon, which has been out since yesterday: https://huggingface.co/WizardLM/WizardLM-13B-V1.2