Support upload from named pipes #748
@@ -219,16 +219,22 @@ export class UploadHttpClient {
     httpClientIndex: number,
     parameters: UploadFileParameters
   ): Promise<UploadFileResult> {
-    const totalFileSize: number = (await stat(parameters.file)).size
+    const fileStat: fs.Stats = await stat(parameters.file)
+    const totalFileSize = fileStat.size
+    // on Windows with mkfifo from MSYS2 stats.isFIFO returns false, so we check if running on Windows node and
+    // if the file has size of 0 to compensate
+    const isFIFO =
+      fileStat.isFIFO() || (process.platform === 'win32' && totalFileSize === 0)
Changes look good to me! Would like to get one more pair of eyes on this before merging. @yacaovsnc could you quickly glance over this PR? The only slight concern I have is the

I think this is 👍🏼. The logic here will make sure any file with size 0 uses the file-on-disk approach, which should be safer compared to the in-memory buffer.
     let offset = 0
     let isUploadSuccessful = true
     let failedChunkSizes = 0
     let uploadFileSize = 0
     let isGzip = true

-    // the file that is being uploaded is less than 64k in size, to increase throughput and to minimize disk I/O
+    // the file that is being uploaded is less than 64k in size to increase throughput and to minimize disk I/O
     // for creating a new GZip file, an in-memory buffer is used for compression
-    if (totalFileSize < 65536) {
+    // with named pipes the file size is reported as zero in that case don't read the file in memory
+    if (!isFIFO && totalFileSize < 65536) {
       const buffer = await createGZipFileInBuffer(parameters.file)

       //An open stream is needed in the event of a failure and we need to retry. If a NodeJS.ReadableStream is directly passed in,
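The FIFO check in the hunk above can be sketched as a standalone helper. This is a hypothetical extraction (the PR keeps the logic inline in `uploadFileAsync`); only Node's built-in `fs` module is assumed:

```typescript
import { promises as fs, Stats } from 'fs'

// Returns true when `file` should be treated as a named pipe.
// On Windows, a pipe created by MSYS2's mkfifo reports isFIFO() === false,
// so a zero-byte file on win32 is treated as a pipe as well.
async function isNamedPipe(file: string): Promise<boolean> {
  const fileStat: Stats = await fs.stat(file)
  return fileStat.isFIFO() || (process.platform === 'win32' && fileStat.size === 0)
}
```

Note the trade-off the reviewers discuss: on Windows this heuristic also classifies genuinely empty regular files as pipes, which only forces them down the slower file-on-disk path rather than breaking the upload.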
@@ -287,7 +293,8 @@ export class UploadHttpClient {
       let uploadFilePath = tempFile.path

       // compression did not help with size reduction, use the original file for upload and delete the temp GZip file
-      if (totalFileSize < uploadFileSize) {
+      // for named pipes totalFileSize is zero, this assumes compression did help
+      if (!isFIFO && totalFileSize < uploadFileSize) {
         uploadFileSize = totalFileSize
         uploadFilePath = parameters.file
         isGzip = false
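The changed fallback condition can be isolated into a small pure function to make the named-pipe special case explicit. The function name `chooseUploadSource` is hypothetical and not part of the PR; it is a sketch of the decision encoded by the `if` above:

```typescript
interface UploadSource {
  useGzip: boolean
  uploadFileSize: number
}

// Decide whether to upload the gzipped temp file or the original file.
// For named pipes totalFileSize is reported as 0, so the "original file is
// smaller" comparison would always win; it must be skipped because the pipe
// has already been consumed and cannot be re-read for upload.
function chooseUploadSource(
  isFIFO: boolean,
  totalFileSize: number,
  gzipFileSize: number
): UploadSource {
  if (!isFIFO && totalFileSize < gzipFileSize) {
    return { useGzip: false, uploadFileSize: totalFileSize }
  }
  return { useGzip: true, uploadFileSize: gzipFileSize }
}
```

For a regular file where gzip inflated the size, this picks the original file; for a FIFO it always keeps the gzipped temp file, matching the reviewer's note that compression is assumed to have helped.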
Pretty cool trick, didn't know about this 👍