sosoai/Orion-14B-Chat-RAG-safetensors - Hugging Face: Orion-14B-Chat-RAG is a chat model fine-tuned on a custom retrieval-augmented generation (RAG) dataset, achieving superior performance in RAG tasks.
Orion-14B: Open-source Multilingual Large Language Models - arXiv.org: In the AlignBench evaluation, Orion-14B-Chat excels notably in Chinese understanding, writing, role-playing, and professional tasks. The results demonstrate competitive performance across diverse conversational contexts.
orion-chat — Xinference: Abilities: chat. Description: Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. Specifications: Model Spec 1 (pytorch, 14 billion) — Model Format: pytorch; Model Size (in billions): 14; Quantizations: none; Engines: vLLM, Transformers; Model ID: OrionStarAI/Orion-14B-Chat.
Orion 14B Chat · Models · Dataloop: Orion-14B-Chat is a highly efficient AI model designed for fast and accurate text generation. It is trained on a diverse dataset of 2.5 trillion tokens and excels at handling long texts, with support for up to 320k tokens.
Orion-14B: Open-source Multilingual Large Language Models: In this study, we introduce Orion-14B, a collection of multilingual large language models with 14 billion parameters. We utilize a data scheduling approach to train a foundational model on a diverse corpus of 2.5 trillion tokens.
CurtisJeon/OrionStarAI-Orion-14B-Chat-4bit - Hugging Face: This model is a 4-bit quantized version of OrionStarAI/Orion-14B-Chat. Its model card is the automatically generated 🤗 Transformers template for models pushed to the Hub.
OrionStarAI - GitHub: Orion-14B is a family of models that includes a 14B foundation LLM and a series of derived models: a chat model, a long-context model, a quantized model, a RAG fine-tuned model, and an agent fine-tuned model …