
Groq Support #739

Open
robertritz opened this issue Apr 21, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@robertritz

Groq has tremendous inference speeds (~280 tokens per second for Llama 3 70B and ~877 tokens per second for Llama 3 8B). It would be amazing to get support for this in Jupyter AI.
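
For reference, here is a minimal sketch of what a Groq provider could look like, assuming Jupyter AI's documented custom-provider pattern (`BaseProvider` from `jupyter_ai_magics`) and the `ChatGroq` class from the `langchain_groq` package. The exact model IDs and field values are assumptions, not a final implementation:

```python
# Hypothetical sketch of a Groq provider for Jupyter AI.
# Assumes the langchain-groq package and Jupyter AI's custom-provider API.
from jupyter_ai_magics import BaseProvider
from jupyter_ai_magics.providers import EnvAuthStrategy
from langchain_groq import ChatGroq


class GroqProvider(BaseProvider, ChatGroq):
    id = "groq"
    name = "Groq"
    # ChatGroq accepts the model via its `model_name` field.
    model_id_key = "model_name"
    # Example model IDs only; check the Groq console for the current list.
    models = [
        "llama3-70b-8192",
        "llama3-8b-8192",
        "mixtral-8x7b-32768",
    ]
    # Read the API key from the GROQ_API_KEY environment variable.
    auth_strategy = EnvAuthStrategy(name="GROQ_API_KEY")
```

If I understand the docs correctly, a provider like this could be exposed to Jupyter AI through the `jupyter_ai.model_providers` entry point group, the same way the existing providers are registered.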

@robertritz robertritz added the enhancement New feature or request label Apr 21, 2024

welcome bot commented Apr 21, 2024

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@nick-youngblut

Also, Groq has a generous free pricing tier.
