remote write 2.0 - benchmarking #13995
Comments
Hello @cstyan, my name is Olorunfemi Daramola and I'm a software engineer. I'm interested in this project; is it still open?
There's a bit of work ongoing already, but nothing's finished yet. This is also open as a project for the upcoming LFX mentorship session.
Since it's on LFX, could you send the link so I can apply as a mentee?
This is the general LFX website: https://lfx.linuxfoundation.org/tools/mentorship/ IIRC applications for the summer session aren't open yet.
I wanted the exact link to apply for this project, but since you said applications aren't open yet, would you do me a favor and update this thread with the link when it is?
Hi @cstyan, my name is Avigyan Sinha. I'm quite interested in this project for LFX; could you recommend some resources to get started with it?
Proposal
We need a more formal and repeatable way of benchmarking changes within remote write. It makes sense to include this as a (non-blocking) task for the remote write 2.0 tracking issue.
We can extend the avalanche project and build a `/dev/null`-esque sink that accepts remote write metrics, introduces latency, etc. These could be used within prombench to provide a way of benchmarking changes to remote write in a realistic environment: k8s, multiple pods, etc.