
Performance is much slower than a Java implementation when running in z/OS #579

Open
drimmeer opened this issue Mar 15, 2024 · 5 comments

drimmeer commented Mar 15, 2024

I tried receiving a file from Linux to USS on z/OS (running in a Linux emulator called ZD&T), and compared the performance of Go and Java.

Running the compiled Go code with this sftp package, it took 42 minutes and 30 seconds to receive a 940 MB file.
But running a Java program (using JSch to do SFTP) on the JVM, it took only 8 minutes 14 seconds to receive the same 940 MB file.

(More testing showed that Go is much slower than Java, roughly 5 to 10 times, when running on z/OS or sending files to z/OS;
but Go matches or beats Java when running on Linux and sending/receiving files to/from Linux, or when receiving files from z/OS.)

How come compiled code is even slower than interpreted code? And so slow? What could be the reason for this performance issue?

puellanivis (Collaborator) commented Mar 16, 2024

Are you using UseConcurrentReads and UseConcurrentWrites?
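
Roughly, something like this when constructing the client (an untested sketch; the ssh connection setup is whatever you already have, and the function name is just a placeholder):

```go
package main

import (
	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// newSFTPClient enables concurrent reads and writes, which are off by default.
func newSFTPClient(conn *ssh.Client) (*sftp.Client, error) {
	return sftp.NewClient(conn,
		sftp.UseConcurrentReads(true),
		sftp.UseConcurrentWrites(true),
	)
}
```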

> How come compiled code is even slower than interpreted code?

Look, I enjoy jokes at Java’s expense, but Java is compiled code, not interpreted. The bytecode hasn’t been executed through a pure interpreter instead of a just-in-time compiler since the 90s.

But since the primary delay in transferring data across a network or the internet is usually the delay in transferring packets, I also wouldn’t be surprised if an actually interpreted language could still beat out our non-concurrent code, if it were issuing concurrent requests.

> And so slow?

An earlier design made concurrent reads and writes more dangerous, so we turned them off by default. I think we’ve worked out that issue now, but we haven’t yet switched the default back on, because there are some really weird, esoteric servers out there that have been known to unexpectedly delete people’s files if you look at a file wrong. So, out of an abundance of caution, and because it’s all documented right there in the documentation… 🤷‍♀️

Out-of-band: before firing off a similar hot take at any other project, I would strongly recommend checking everything a lot more thoroughly. You may be working from insufficient information and/or invalid assumptions, and then your snark won’t come across as all that funny.

drimmeer (Author) commented Mar 18, 2024

Thank you Cassondra for the explanation and advice.

First, I apologize for the harsh words. I didn't mean them.
I was surprised and disappointed to see Go so much slower than Java in my testing, because the reason I tried Go was the hope of replacing an SFTP tool written in Java and running on z/OS, which was too slow due to JVM performance, with a much faster solution.
Now your comment "But since the primary delay in transferring data across a network or the internet is usually the delay in transferring packets" has given me second thoughts.

But anyway, I'd like to give you more info about what I've tried.
I hadn't tried concurrent reads or writes before. After seeing your comment I tried them, but unfortunately I still didn't see much difference.
The same file transfer still took 36 minutes with Go, while it took 8 minutes with Java.

I am not sure whether I made a mistake or misused the sftp package. Could you please take a look?
Please find attached the code I used.
sftp.go.txt

puellanivis (Collaborator)

Hm. I can’t see any reason why your Go code should be unnecessarily slow. 🤔 You might try increasing MaxConcurrentRequestsPerFile as well?
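
For example, a rough sketch (not your attached code; the helper name and paths are placeholders): the copy goes through io.Copy, which calls File.WriteTo because *sftp.File implements io.WriterTo, and WriteTo is the path that actually issues concurrent read requests when UseConcurrentReads is enabled.

```go
package main

import (
	"io"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// downloadFile is a hypothetical helper: it opens the remote file and copies
// it into a local file so that io.Copy goes through File.WriteTo, where the
// concurrent read requests are issued.
func downloadFile(conn *ssh.Client, remotePath, localPath string) error {
	client, err := sftp.NewClient(conn,
		sftp.UseConcurrentReads(true),
		sftp.MaxConcurrentRequestsPerFile(128), // try values above the default
	)
	if err != nil {
		return err
	}
	defer client.Close()

	src, err := client.Open(remotePath)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(localPath)
	if err != nil {
		return err
	}
	defer dst.Close()

	_, err = io.Copy(dst, src)
	return err
}
```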

drimmeer (Author)

Hi Cassondra,

I tried changing MaxConcurrentRequestsPerFile to three values (I assume that the default is 60):

- 100: it took 34 minutes
- 200: it took 55 minutes
- 30: it took 60 minutes

It seems there is no hope of making it much faster with this config.

Do you have any idea what might be special about z/OS that causes this problem?

puellanivis (Collaborator)

I hadn’t even heard of z/OS until you mentioned it. So, I don’t really have any insight to help you here. As the platform is not “out-of-the-box” supported, it could just be poor optimizations? 🤷‍♀️
