Significant performance regression while using AWS S3 file storage #19412
Comments
@Willyfrog Do you know which team could take a look at this? (It doesn't have to be SET; I'm just looking for guidance on who would have the best knowledge here. Also, I'm wondering whether this is related to our ongoing internal performance investigations, or if this is a different issue.)
@Vovcharaa Can you share the specs of what you are using, and any special config you might have?
@Willyfrog
docker-compose:
docker-compose.yml
Database: AWS RDS PostgreSQL 12.7
@amyblais Sorry, I forgot to answer your question. I'd say Cloud, as they are probably more used to working with AWS; a second option would be SRE or Server Platform.
@agnivade @streamer45 Let me know if you have thoughts on this report.
@Vovcharaa - I see that you posted some logs in your root post. Could you post the complete logs that you see from a 6.3.1 server?
@isacikgoz - Since you are on SET, would you be able to take a look?
Any updates on this issue?
@agnivade @isacikgoz Would you like me to create a ticket for SET?
@amyblais Makes sense. I couldn't find enough time to prioritize this one on my rotation.
@Vovcharaa Were you using S3 file storage prior to the upgrade?
@mkraft Yes. Currently there is around 100 GB of data, and everything works fine with version 6.0.4.
@Vovcharaa Would you please be able to share your startup logs like you did before, but with System Console > Environment > File Store > Enable Amazon S3 Debugging set to true?
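For reference, a sketch of what I believe is the equivalent `config.json` fragment for that System Console toggle (assuming the standard Mattermost key name `AmazonS3Trace` under `FileSettings`):

```json
{
  "FileSettings": {
    "AmazonS3Trace": true
  }
}
```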
@mkraft S3 bucket name masked.
@Vovcharaa You said that downloading, uploading, and server restart are slow, but also that the server is completely unusable. Are reading and writing posts, loading channels, changing teams, etc. slow too, or is it just file uploading, downloading, and restart? Are plugins enabled? I added a note to the Jira ticket that my tests comparing those two release versions showed no notable change in upload and download of files using S3 (downloading was actually slightly faster on the newer version).
@mkraft Plugins are enabled, but I tried deleting all of them or disabling them completely and got the same result. Perhaps the problem is related to the AWS region? We use eu-north-1 for both the S3 bucket and the EC2 instance (t3.small).
@Vovcharaa
@mkraft Yes. I set up a test server where this bug can be seen in action. Would it be useful for you to take a look at it yourself? I could provide you test account credentials through some private channel.
@Vovcharaa Yes please. I'm available either on our community instance or by email.
@mkraft Sent in a PM on the community instance.
@Vovcharaa I tried switching to a new test S3 bucket created under our Mattermost corporate AWS account, and the performance issues were immediately resolved, as far as I could tell. We recommend experimenting with increased resources.
@mkraft Did you use AWS keys directly for S3? I assume the problem is with the EC2 instance metadata URL; something changed with the upgrade of the minio-go dependency.
@Vovcharaa Yes, I used explicit keys. We did upgrade the minio dependency from v7.0.11 to v7.0.14.
The instance is definitely not under-resourced. The issue is a change in how the client communicates with S3 when using an EC2 instance role, introduced by the minio dependency upgrade.
This issue was mitigated in minio-go v7.0.24 by PR minio/minio-go#1626.
That sounds great, @Vovcharaa! Thank you for digging into this. Would you be open to sending a PR to upgrade the minio dependency?
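For anyone preparing such a PR, the change should amount to a one-line bump in the server's `go.mod` (module path taken from the minio-go repository), followed by `go mod tidy`. A sketch of the fragment, assuming a standard Go modules layout:

```
require (
    github.com/minio/minio-go/v7 v7.0.24 // was v7.0.14
)
```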
Summary
File download, file upload, and server startup times increased significantly after updating the Mattermost server from 6.0.4 to 6.1+.
Steps to reproduce
How can we reproduce the issue (what version are you using?)
Currently using 6.0.4 with no issues.
Any attempt to upgrade to 6.1+ (tried 6.1.0 and 6.3.1) leaves the server in a completely unusable state.
The same behavior occurs with a fresh install of version 6.3.1 with S3 file storage configured.
Deployed on Ubuntu 20.04 using the Docker image https://hub.docker.com/r/mattermost/mattermost-team-edition .
Database: AWS RDS Postgres 12.7
Expected behavior
No performance regression while using AWS S3 file storage.
Observed behavior (that appears unintentional)
Data below for version 6.3.1:
Server startup time = 8 min (server startup after downgrading to 6.0.4 = ~10 seconds)
Downloading a file = 1-2 min just for the download to start.
Uploading a file = 2-3 min stuck in "processing" status.
Test Connection button in File Storage settings = 2 min before returning a success status.