Polyglot: Large Language Models of Well-balanced Competence in Multi-languages

1. Introduction

Why another multilingual model?

Various multilingual models such as mBERT, BLOOM, and XGLM have already been released, so one might ask, "Why do we need to make multilingual models again?" Before answering that question, we would like to ask another: "Why do people around the world keep making monolingual models in their own languages even though many multilingual models already exist?" We would point to dissatisfaction with the non-English performance of current multilingual models as one of the most significant reasons. So we want to make multilingual models with higher non-English performance. This is why we need to make multilingual models again, and why we name them 'Polyglot'.

2. Projects

1) Polyglot-Ko

When we started our research, we already had 1.2TB of Korean data collected by TUNiB. Before collecting a large amount of multilingual data, we decided to try Korean modeling with the dataset we already had. This Korean model can be used for performance comparisons with multilingual models, and the model itself is helpful to many Korean companies and researchers.

| Size  | Training Status | Model Card | Model Checkpoints |
|-------|-----------------|------------|-------------------|
| 1.3B  | Finished        | Available  | Available         |
| 3.8B  | Finished        | Available  | Available         |
| 5.8B  | Finished        | Available  | Available         |
| 12.8B | Finished        | Available  | Available         |

💡 We are collaborating with the KoAlpaca team, which is creating a series of Korean instruction fine-tuned models. As a result, we were able to release the KoAlpaca-Polyglot models. Please refer to here for more details.
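For reference, the released Polyglot-Ko checkpoints can be loaded with the standard Hugging Face `transformers` API. Below is a minimal sketch assuming the 1.3B variant is hosted under the `EleutherAI/polyglot-ko-1.3b` model ID; the larger sizes follow the same pattern.

```python
# Minimal sketch: load a Polyglot-Ko checkpoint and sample a continuation.
# The model ID "EleutherAI/polyglot-ko-1.3b" is assumed; swap in the
# 3.8B, 5.8B, or 12.8B variant as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/polyglot-ko-1.3b"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires the `accelerate` package
)

prompt = "한국어 언어 모델은"  # "Korean language models are ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```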

2) Japanese StableLM

We worked with StabilityAI Japan to create open-source Japanese language models, mainly contributing to the dataset collection part of the project.

| Size        | Training Status | Model Card | Model Checkpoints |
|-------------|-----------------|------------|-------------------|
| 7B-base     | Finished        | Available  | Available         |
| 7B-instruct | Finished        | Available  | Available         |
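These checkpoints ship custom modeling code, so loading them through `transformers` requires `trust_remote_code=True`. The sketch below assumes the base checkpoint is hosted under `stabilityai/japanese-stablelm-base-alpha-7b` and, per its model card at release time, pairs it with the NovelAI tokenizer; consult the model card for the authoritative recipe.

```python
# Minimal sketch: load Japanese StableLM Base Alpha 7B and sample a continuation.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# The model card pairs this checkpoint with the NovelAI tokenizer
# (an assumption here; check the card for the current recipe).
tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1", additional_special_tokens=["▁▁"]
)

# Custom modeling code in the repo, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-alpha-7b",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # roughly halves memory use
    device_map="auto",          # requires the `accelerate` package
)

prompt = "AI で科学研究を加速するには、"  # "To accelerate scientific research with AI, ..."
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```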

3. Limitations and Biases

Polyglot has been trained to optimize next-token prediction. Language models such as this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot will not always return the most factual or accurate response but rather the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism censor sensitive content, as in the sketch below.
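As one illustration of such a filtering mechanism (not part of the Polyglot release, and with hypothetical placeholder terms), the sketch below drops generations that match a keyword blocklist before they reach users. A production system would pair a trained safety classifier with human review rather than a simple keyword check.

```python
# Minimal post-generation content filter: an illustrative sketch only.
BLOCKLIST = {"badword1", "badword2"}  # hypothetical placeholder terms

def is_acceptable(text: str) -> bool:
    """Return False if the text contains any blocklisted term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def moderate(generations: list[str]) -> list[str]:
    """Keep only generations that pass the keyword check."""
    return [g for g in generations if is_acceptable(g)]
```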

4. Citation and Related Information

BibTeX entry

If you find our work useful, please consider citing:

@misc{ko2023technical,
      title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models}, 
      author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and Jiwung Hyun and Sungho Park},
      year={2023},
      eprint={2306.02254},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{JapaneseStableLMBaseAlpha7B, 
      url={https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b}, 
      title={Japanese StableLM Base Alpha 7B}, 
      author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}
@misc{JapaneseStableLMInstructAlpha7B, 
      url={https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b}, 
      title={Japanese StableLM Instruct Alpha 7B}, 
      author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}

Licensing

All our models are licensed under the terms of the Apache License 2.0.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

However, as mentioned above, the model has the potential to generate unpredictable text. Therefore, we are not responsible for any damages resulting from use of the model.

Acknowledgement

This project was made possible thanks to computing resources from Stability.ai and to TUNiB, which provided a large-scale Korean dataset.
