
Failed to upload artifact #270

Closed
AurevoirXavier opened this issue Nov 23, 2021 · 34 comments
Labels
bug Something isn't working

Comments

@AurevoirXavier

What happened?

With the provided path, there will be 9 files uploaded
Total file count: 9 ---- Processed file #8 (88.8%)
Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/RsdR9hD0CJVSpVEzXgkUfNMF6Hs6R4uSsdednoJdwMVkSDQUba/_apis/resources/Containers/25801886?itemPath=darwinia-artifact%2Fdarwinia
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "bf452d3e-d749-4ecc-95ac-c5690fa61b23",
  "activityid": "0b60cfad-b027-4f7e-aacd-4cf90a915e8a",
  "x-tfs-session": "0b60cfad-b027-4f7e-aacd-4cf90a915e8a",
  "x-vss-e2eid": "0b60cfad-b027-4f7e-aacd-4cf90a915e8a",
  "x-vss-senderdeploymentid": "193695a0-0dcd-ade4-f810-b10ad24a9829",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: DB637C6A867E4D66B9A9908ED317BA11 Ref B: DM2EDGE0806 Ref C: 2021-11-23T04:47:59Z",
  "date": "Tue, 23 Nov 2021 04:47:59 GMT"
}
###### End Diagnostic HTTP information ######
Warning: Aborting upload for /home/runner/work/darwinia/darwinia/shared/darwinia due to failure
Error: aborting artifact upload
Total size of all the files uploaded is 30886255 bytes
Finished uploading artifact darwinia-artifact. Reported size is 30886255 bytes. There were 1 items that failed to upload
Error: An error was encountered when uploading darwinia-artifact. There were 1 items that failed to upload.

https://github.com/darwinia-network/darwinia/runs/4295287117?check_suite_focus=true

What did you expect to happen?

Upload successful.

How can we reproduce it?

Not sure.

Anything else we need to know?

No response

What version of the action are you using?

v2

What are your runner environments?

linux

Are you on GitHub Enterprise Server? If so, what version?

No response

AurevoirXavier added the bug label Nov 23, 2021
@RaenonX

RaenonX commented Dec 8, 2021

Bumping this up - I'm facing the exact same issue.

@hogsy

hogsy commented Dec 8, 2021

Facing the same issue here as well. It was working on Nov 12th, but not now.

@nealkruis

We are experiencing the same issue. Re-running the same workflow that was successful yesterday afternoon now gives this error.

Here is the console output for our workflow:

With the provided path, there will be 239 files uploaded
Starting artifact upload
For more detailed logs during the artifact upload process, enable step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging
Artifact name is valid!
Container for artifact "Documentation" successfully created. Starting upload of file(s)
Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/LBGUejEleqC7RxTr3UyWYxJku0Fdd5SJ5usIIswpygD84F3Orn/_apis/resources/Containers/20552360?itemPath=Documentation%5C.nojekyll
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "d30a5c4e-f411-4df8-95b9-61fc28a0da67",
  "activityid": "fa956f57-fb28-4092-9ad8-b45e13bb9cac",
  "x-tfs-session": "fa956f57-fb28-4092-9ad8-b45e13bb9cac",
  "x-vss-e2eid": "fa956f57-fb28-4092-9ad8-b45e13bb9cac",
  "x-vss-senderdeploymentid": "a07ab14e-025a-39c3-8d53-788cd7ce207f",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 63D453FBCDF349A291DB51C9FE5FA48D Ref B: BN3EDGE0910 Ref C: 2021-12-08T15:53:59Z",
  "date": "Wed, 08 Dec 2021 15:53:58 GMT"
}
###### End Diagnostic HTTP information ######
Warning: Aborting upload for D:\a\cse\cse\doc\output\.nojekyll due to failure
Error: aborting artifact upload
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (9.0%) bytes 0:8388607
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (18.0%) bytes 8388608:16777215
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (27.0%) bytes 16777216:25165823
Total file count: 239 ---- Processed file #95 (39.7%)
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (36.0%) bytes 25165824:33554431
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (45.1%) bytes 33554432:41943039
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (54.1%) bytes 41943040:50331647
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (63.1%) bytes 50331648:58720255
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (72.1%) bytes 58720256:67108863
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (81.2%) bytes 67108864:75497471
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (90.2%) bytes 75497472:83886079
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (99.2%) bytes 83886080:92274687
Uploaded D:\a\cse\cse\doc\output\.git\objects\pack\pack-6d69e7822794a9552a2be8d89c726e952f225588.pack (100.0%) bytes 92274688:92966490
Total size of all the files uploaded is 93424960 bytes
File upload process has finished. Finalizing the artifact upload
Upload finished. There were 144 items that failed to upload

The raw size of all the files that were specified for upload is 93864848 bytes
The size of all the files that were uploaded is 93424960 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage

Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 

Error: An error was encountered when uploading Documentation. There were 144 items that failed to upload.

@mbeckh

mbeckh commented Dec 8, 2021

Same here. The job was working on Dec 4th, 2021, but now fails for no obvious reason. There were no changes to the job itself.

@nealkruis

I believe this is a problem introduced in the 2.3.0 release (the timing checks out for us). We pegged our action to 2.2.4 and everything is working again.

The issue in 2.3.0 could be related to artifacts containing local git repository files as indicated in #281.
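
For reference, pinning the upload step to the older release looks like this (the artifact name and path below are just placeholders):

      - name: Upload artifact
        uses: actions/upload-artifact@v2.2.4   # pinned to the release reported as still working
        with:
          name: my-artifact      # placeholder name
          path: build/output/    # placeholder path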

@mbeckh

mbeckh commented Dec 8, 2021

I believe this is a problem introduced in the 2.3.0 release

Confirmed. When the action is pinned to @v2.2.4, the upload works again. The most recent v2.3 release should either be fixed or reverted.

@ertodd-coke

Had the exact same issue. Worked yesterday. Broken today. Updating to explicitly use 2.2.4 worked perfectly.

@mbeckh

mbeckh commented Dec 8, 2021

The bug could be related to the number of files. A job using @v2 works when uploading only a single file; jobs uploading multiple files fail. This would also match the description in #281. The failure described there might not be related to .gitkeep being a repository file, but simply to .gitkeep being the second file.
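
A minimal two-file check along those lines (file names and artifact name are made up) would be something like:

      - name: Create two small files
        run: |
          echo one > first.txt
          echo two > second.txt
      - name: Upload both files
        uses: actions/upload-artifact@v2   # floating v2 tag, i.e. 2.3.0 at the time of this report
        with:
          name: multi-file-test
          path: |
            first.txt
            second.txt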

@ferpasri

ferpasri commented Dec 9, 2021

  • Run actions/upload-artifact@v2
  • windows-latest

The file that fails, "2021-12-09_09h46m37s_webdriver_replay_sequence_1.log", is an empty 0 KB log.

7/12/2021

With the provided path, there will be 10 files uploaded
Total size of all the files uploaded is 88178 bytes
Finished uploading artifact runTestWebdriverCorrectReplay-artifact. Reported size is 88178 bytes. There were 0 items that failed to upload
Artifact runTestWebdriverCorrectReplay-artifact has been successfully uploaded!

9/12/2021

With the provided path, there will be 10 files uploaded
Starting artifact upload
For more detailed logs during the artifact upload process, enable step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging
Artifact name is valid!
Container for artifact "runTestWebdriverCorrectReplay-artifact" successfully created. Starting upload of file(s)
Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/hSNpHTyyoSrNWdCUbHrGJ2g9yzo1ps3nsvaO3xhmDSTzw7VY15/_apis/resources/Containers/20589002?itemPath=runTestWebdriverCorrectReplay-artifact%5Clogs%5C2021-12-09_09h46m37s_webdriver_replay_sequence_1.log
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "c56f7c8d-066b-41b2-b2ea-478c2582da7e",
  "activityid": "ea4d0bb8-afaa-42ad-af27-51a1746d7fb0",
  "x-tfs-session": "ea4d0bb8-afaa-42ad-af27-51a1746d7fb0",
  "x-vss-e2eid": "ea4d0bb8-afaa-42ad-af27-51a1746d7fb0",
  "x-vss-senderdeploymentid": "a07ab14e-025a-39c3-8d53-788cd7ce207f",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 6E9DC44168964CB585F4AADAE8C64629 Ref B: BN3EDGE0808 Ref C: 2021-12-09T09:46:57Z",
  "date": "Thu, 09 Dec 2021 09:46:57 GMT"
}
###### End Diagnostic HTTP information ######
Warning: Aborting upload for D:\a\TESTAR_dev\TESTAR_dev\testar\target\install\testar\bin\webdriver_replay\logs\2021-12-09_09h46m37s_webdriver_replay_sequence_1.log due to failure
Error: aborting artifact upload
Total size of all the files uploaded is 898 bytes
File upload process has finished. Finalizing the artifact upload
Upload finished. There were 9 items that failed to upload

The raw size of all the files that were specified for upload is 6083 bytes
The size of all the files that were uploaded is 898 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage

Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 

Error: An error was encountered when uploading runTestWebdriverCorrectReplay-artifact. There were 9 items that failed to upload.

@CAMOBAP

CAMOBAP commented Dec 9, 2021

The same is observed in my repo: https://github.com/metanorma/packed-mn/runs/4461130168?check_suite_focus=true

Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/fck8thqdZQOwD2B7yWZqMBJYVFI6isQAXgAscuBYIS1J0YuqKD/_apis/resources/Containers/15989562?itemPath=site%5Cogc%5Cdocuments%5C14-065r2%5Cdocument.pdf
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "76798e9d-00af-42f1-8451-3247b6267f4d",
  "activityid": "480a2cb9-814e-4df6-83a4-2046009e6399",
  "x-tfs-session": "480a2cb9-814e-4df6-83a4-2046009e6399",
  "x-vss-e2eid": "480a2cb9-814e-4df6-83a4-2046009e6399",
  "x-vss-senderdeploymentid": "193695a0-0dcd-ade4-f810-b10ad24a9829",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 50379D9BDB4B4FA8854350F008F69714 Ref B: BN3EDGE1005 Ref C: 2021-12-08T18:33:02Z",
  "date": "Wed, 08 Dec 2021 18:33:02 GMT"
}
###### End Diagnostic HTTP information ######
Warning: Aborting upload for D:\a\packed-mn\packed-mn\site\ogc\documents\14-065r2\document.pdf due to failure
Error: aborting artifact upload
Total size of all the files uploaded is 45017313 bytes
File upload process has finished. Finalizing the artifact upload
Upload finished. There were 90 items that failed to upload

The raw size of all the files that were specified for upload is 63250658 bytes
The size of all the files that were uploaded is 45017313 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage

@kwokcb

kwokcb commented Dec 9, 2021

We're getting the same issue (https://github.com/autodesk-forks/MaterialX/runs/4462734963?check_suite_focus=true).

This appears to be the common error that comes back from the link given in the log:

(https://pipelines.actions.githubusercontent.com/hSNpHTyyoSrNWdCUbHrGJ2g9yzo1ps3nsvaO3xhmDSTzw7VY15/_apis/resources/Containers/20589002?itemPath=runTestWebdriverCorrectReplay-artifact%5Clogs%5C2021-12-09_09h46m37s_webdriver_replay_sequence_1.log)

{"$id":"1","innerException":null,"message":"The user 'System:PublicAccess;aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' is not authorized to access this resource.","typeName":"Microsoft.TeamFoundation.Framework.Server.UnauthorizedRequestException, Microsoft.TeamFoundation.Framework.Server","typeKey":"UnauthorizedRequestException","errorCode":0,"eventId":3000}


@olebhartvigsen

olebhartvigsen commented Dec 10, 2021

Same error observed here, but the environment is windows-latest.

Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/GzWTsAc8Tfkk7NAzxC0CrvDFhIPAaUpAFRbZzar8Syjs8JLKSw/_apis/resources/Containers/3142291?itemPath=ASP-app%5CApp_Plugins%5CExercise.css
##### Begin Diagnostic HTTP information #####
Status Code: 400
Status Message: Bad Request
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "transfer-encoding": "chunked",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "dbcd0f50-a18e-4b5a-9184-06101c692ab9",
  "activityid": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-tfs-session": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-vss-e2eid": "32445705-393f-4869-91c5-b1a66e43f437",
  "x-vss-senderdeploymentid": "6be79bb9-f6f7-24a7-0b27-e718f2ab4200",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 7B60BF0CB4314C6497D704CAE0154C8F Ref B: PAOEDGE0616 Ref C: 2021-12-10T09:11:05Z",
  "date": "Fri, 10 Dec 2021 09:11:05 GMT"
}
###### End Diagnostic HTTP information ######

Also confirming: with @v2.2.4 the upload works again.

@obecker

obecker commented Dec 10, 2021

I'm running my builds on linux, macos, and windows:

    strategy:
      matrix:
        os: [ ubuntu-latest, macos-latest, windows-latest ]
    runs-on: ${{ matrix.os }}
    ...

and the above upload error happens only for windows.
https://github.com/obecker/decycle/actions/runs/1560598795

jgiannuzzi added a commit to G-Research/ParquetSharp that referenced this issue Dec 10, 2021
@astafan8

Seeing a similar issue with the .nojekyll file on our repo, https://github.com/QCoDeS/Qcodes/runs/4485121681?check_suite_focus=true - only on the Windows build, and only recently.

@CAMOBAP

CAMOBAP commented Dec 10, 2021

My assumption is that it can happen for 0-byte files.
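
That hypothesis could be checked with a single zero-byte file, for example (step and artifact names are made up):

      - name: Create an empty file
        run: touch empty.log     # creates a 0-byte file
      - name: Upload the empty file
        uses: actions/upload-artifact@v2   # the version showing the failures
        with:
          name: zero-byte-test
          path: empty.log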

@ronaldtse

We've just encountered the issue mentioned by @azqa in this particular instance when using actions/download-artifact:
https://github.com/metanorma/metanorma-docker/runs/5655561654?check_suite_focus=true

The exact output is:

Error: An error occurred while attempting to decompress the response stream
A 200 response code has been received while attempting to download an artifact

The corresponding actions/upload-artifact action was successful:
https://github.com/metanorma/metanorma-docker/runs/5655460223?check_suite_focus=true

It baffles me that this particular download fails when our other similar jobs have passed. I wonder if this is due to the length of the artifact name; that was the only difference amongst our jobs.

@maietta

maietta commented Mar 25, 2022

The same error is now appearing for v2.3.1, v2.4.0 and v3.
{"$id":"1","innerException":null,"message":"The user 'System:PublicAccess;aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' is not authorized to access this resource.","typeName":"Microsoft.TeamFoundation.Framework.Server.UnauthorizedRequestException, Microsoft.TeamFoundation.Framework.Server","typeKey":"UnauthorizedRequestException","errorCode":0,"eventId":3000}

Same here. Running v3.

@diogormendes

diogormendes commented Apr 1, 2022

Downgraded from v3 to v2; while I'm seeing fewer failures, I still see them on v2 (actions/download-artifact), the same as #270 (comment).

https://github.com/metabase/metabase/runs/5789943574?check_suite_focus=true

@patsevanton

any news?

jayofdoom added a commit to jayofdoom/armada that referenced this issue Jun 7, 2022
severinson added a commit to armadaproject/armada that referenced this issue Jun 8, 2022
sushraju added a commit to muxinc/clickhouse-backup that referenced this issue Jun 27, 2022
@ThatXliner

ThatXliner commented Jul 17, 2022

Second edit: probably because I'm trying to upload an artifact of the same name twice.

https://github.com/ThatXliner/skootils/runs/7374236761?check_suite_focus=true

Does downgrading help? Or is this just flaky?

Edit:

The logs mention Status Code: 503 Service Unavailable
and

An error has been caught http-client index 0, retrying the upload
Error: Client has already been disposed.
    at HttpClient.<anonymous> (/home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5947:23)
    at Generator.next (<anonymous>)
    at /home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5718:71
    at new Promise (<anonymous>)
    at module.exports.425.__awaiter (/home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5714:12)
    at HttpClient.request (/home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5945:16)
    at HttpClient.<anonymous> (/home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5898:25)
    at Generator.next (<anonymous>)
    at /home/runner/work/_actions/actions/upload-artifact/v3/dist/index.js:5718:71
    at new Promise (<anonymous>)
Exponential backoff for retry #1. Waiting for 4825 milliseconds before continuing the upload at offset 16777216
Finished backoff for retry #1, continuing with upload
Finished backoff for retry #1, continuing with upload
A 503 status code has been received, will attempt to retry the upload
Exponential backoff for retry #2. Waiting for 10705 milliseconds before continuing the upload at offset 41943040

@Slach

Slach commented Jul 17, 2022

It looks completely flaky ;( and the GitHub team does nothing about it ;(

@Slach

Slach commented Jul 26, 2022

It's so annoying ;(

##### Begin Diagnostic HTTP information #####
Status Code: 500
Status Message: Internal Server Error
Header Information: {
  "cache-control": "no-store,no-cache",
  "pragma": "no-cache",
  "content-length": "328",
  "content-type": "application/json; charset=utf-8",
  "strict-transport-security": "max-age=2592000",
  "x-tfs-processid": "ce80f462-6227-4099-b13a-2691ba4d46f8",
  "activityid": "c7b49267-dec5-43a4-8bb8-38bc21ede[54](https://github.com/Altinity/clickhouse-backup/runs/7523341325?check_suite_focus=true#step:7:55)5",
  "x-tfs-session": "c7b49267-dec5-43a4-8bb8-38bc21ede545",
  "x-vss-e2eid": "c7b49267-dec5-43a4-8bb8-38bc21ede545",
  "x-vss-senderdeploymentid": "ac98198d-4d34-0364-420d-bafa6e51dce2",
  "x-frame-options": "SAMEORIGIN",
  "x-cache": "CONFIG_NOCACHE",
  "x-msedge-ref": "Ref A: 2A38F24912B94C5491751DB2F61CE290 Ref B: BN3EDGE0[60](https://github.com/Altinity/clickhouse-backup/runs/7523341325?check_suite_focus=true#step:7:61)8 Ref C: 2022-07-26T15:23:07Z",
  "date": "Tue, 26 Jul 2022 15:23:07 GMT"
}
###### End Diagnostic HTTP information ######
Retry limit has been reached for chunk at offset 0 to https://pipelines.actions.githubusercontent.com/0W0Oo4V5fr4j69RPnubz9dnfrpiwnQjnC7Fgl2vDHsTCLsvfZS/_apis/resources/Containers/12267186?itemPath=testflows-logs-and-reports%2Fclickhouse_backup%2F_instances%2Fclickhouse1%2Flogs%2Flog.log

@curena-contrast

Just to add another voice, our organization has also been experiencing this issue. It happens intermittently, and also appears to happen more frequently when it must upload multiple chunks.

@DanielHoffmann

DanielHoffmann commented Oct 12, 2022

I am also running into this problem on v3. As @curena-contrast said, the artifact size seems to affect how often it happens. I have a matrix pipeline building several bundles, some <1 MB and some >200 MB. The ones under 1 MB don't fail and the ones over 200 MB always fail.

I have seen some reports from people having a very similar error:
tus/tus-js-client#176

This seems related to the code not properly handling a 408 HTTP response (Request Timeout) from the servers and retrying the chunk upload.

Switching to v2 seems to fix it, or at the very least the error triggers less often - it can be hard to tell.

gambol99 added a commit to appvia/terranetes-controller that referenced this issue Nov 15, 2022
gambol99 added a commit to appvia/terranetes-controller that referenced this issue Nov 15, 2022
@e3b0c442

Checking in here, also experiencing this with v3 today.

@NAJ8ry

NAJ8ry commented Nov 17, 2022

I had this problem and the solution for me was to introduce a period of waiting.

      - name: Setup the database within Docker
        run: |
          echo 'Starting the db'
          docker-compose -f ./docker_files/docker-compose_dbOnly.yml -p mynameapp up -d
      - name: Sleep for 15 seconds
        run: sleep 15s
        shell: bash
      - name: Unit tests - pytest
        env:
          TEST_POSTGRES: "postgresql://username:password@localhost:5432/test_temp"
          AUTO_TEST: true;
        run: pytest
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: .

The code was correct and worked prior to the introduction of Docker.
My guess is that it is linked to a file being locked during the compose step and not released quickly.
This seems really odd, but it now works.

@radiomix

radiomix commented Nov 25, 2022

Getting the same error sporadically

Thu, 24 Nov 2022 14:08:18 GMT Uploaded /home/runner/GHA-artifacts/file-name-changed (68.1%) bytes 83886080:92274687
Thu, 24 Nov 2022 14:08:22 GMT Error: Unexpected response. Unable to upload chunk to https://pipelines.actions.githubusercontent.com/ewM82OcgF3U6Derh4jaIiF3VtxJxCC8Y9TMFFTJrXzQC6ahs3q/_apis/resources/Containers/16863834?itemPath=GHA-artifacts%2file-name-changed
Thu, 24 Nov 2022 14:08:22 GMT ##### Begin Diagnostic HTTP information #####
Thu, 24 Nov 2022 14:08:22 GMT Status Code: 400
Thu, 24 Nov 2022 14:08:22 GMT Status Message: Bad Request
Thu, 24 Nov 2022 14:08:22 GMT Header Information: {
Thu, 24 Nov 2022 14:08:22 GMT   "cache-control": "no-store,no-cache",
Thu, 24 Nov 2022 14:08:22 GMT   "pragma": "no-cache",
Thu, 24 Nov 2022 14:08:22 GMT   "transfer-encoding": "chunked",
Thu, 24 Nov 2022 14:08:22 GMT   "content-type": "application/json; charset=utf-8",
Thu, 24 Nov 2022 14:08:22 GMT   "strict-transport-security": "max-age=2592000",
Thu, 24 Nov 2022 14:08:22 GMT   "x-tfs-processid": "34cff34f-643b-4b96-89fe-20f88eda1531",
Thu, 24 Nov 2022 14:08:22 GMT   "activityid": "85fc3eca-310f-4852-a3b9-826a06044018",
Thu, 24 Nov 2022 14:08:22 GMT   "x-tfs-session": "85fc3eca-310f-4852-a3b9-826a06044018",
Thu, 24 Nov 2022 14:08:22 GMT   "x-vss-e2eid": "85fc3eca-310f-4852-a3b9-826a06044018",
Thu, 24 Nov 2022 14:08:22 GMT   "x-vss-senderdeploymentid": "6074703f-c195-632a-bd61-21691fc86fa5",
Thu, 24 Nov 2022 14:08:22 GMT   "x-frame-options": "SAMEORIGIN",
Thu, 24 Nov 2022 14:08:22 GMT   "x-cache": "CONFIG_NOCACHE",
Thu, 24 Nov 2022 14:08:22 GMT   "x-msedge-ref": "Ref A: 043F2106A4CC442283158A3B192D8D38 Ref B: SN1EDGE1411 Ref C: 2022-11-24T14:08:22Z",
Thu, 24 Nov 2022 14:08:22 GMT   "date": "Thu, 24 Nov 2022 14:08:21 GMT"
Thu, 24 Nov 2022 14:08:22 GMT }

with commit 83fd05a in step Setup job:

Thu, 24 Nov 2022 13:17:47 GMT Download action repository 'actions/upload-artifact@v3' (SHA:83fd05a356d7e2593de66fc9913b3002723633cb)

@prein

prein commented Apr 5, 2023

Can this be caused by parallel (competing?) attempts from jobs running under matrix strategy?

@Slach

Slach commented Jun 21, 2023

@prein

Can this be caused by parallel (competing?) attempts from jobs running under matrix strategy?
Yes, I have parallel matrix execution, but I try to upload to a different name for each job:

      - name: Upload testflows logs
        uses: actions/upload-artifact@v3
        with:
          name: testflows-logs-and-reports-${{ matrix.clickhouse }}-${{ github.run_id }}
          path: |
            test/testflows/*.log
            test/testflows/*.log.txt
            test/testflows/clickhouse_backup/_instances/**/*.log
            test/testflows/*.html
          retention-days: 7

and it doesn't work ;(

@konradpabjan
Collaborator

upload-artifact v4 was released today! I recommend switching over.

https://github.blog/changelog/2023-12-14-github-actions-artifacts-v4-is-now-generally-available/

v4 is a complete rewrite of the artifact actions with a new backend. v1-v3 uploads would sometimes hit 100% (or close to it) and then just stop and fail for mysterious reasons. There would also occasionally be transient errors like the 500s and 400s that you see. v4 is all around more reliable and simpler, and a host of the issues described in this thread should no longer happen.

If there are any similar issues with v4, please open new issues.
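
For most workflows the switch is just a version bump on the upload step (artifact name and path below are placeholders); note that in v4 each upload within a run needs its own artifact name:

      - name: Upload artifact
        uses: actions/upload-artifact@v4   # new backend, replaces v1-v3
        with:
          name: my-artifact-${{ matrix.os }}   # placeholder; must be unique per upload in v4
          path: build/output/                  # placeholder path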
