
fix: gcs endpoint should be independently configured with credential #837

Merged — 3 commits, Feb 20, 2024

Conversation

sanadhis
Contributor

Hi, I would like to propose decoupling the GCS endpoint configuration from the credentials.

Use case: we're connecting to GCS via Private Service Connect:
https://cloud.google.com/vpc/docs/private-service-connect?authuser=1&_ga=2.220919888.-993337742.1685727395#supported-apis

The access control should stay the same regardless of where the bucket is accessed from.

Let me know what you think. I appreciate any feedback!
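A minimal sketch of what the decoupled configuration could look like in clickhouse-backup's YAML config. The `endpoint` value is a hypothetical Private Service Connect address, and the exact key names are assumptions modeled on the existing `gcs` section, not taken from this PR's diff:

```yaml
general:
  remote_storage: gcs
gcs:
  bucket: my-backups                                    # assumed bucket name
  path: backup
  credentials_file: /etc/clickhouse-backup/gcs-sa.json  # auth is configured as before
  # The endpoint is set independently of the credentials, e.g. a
  # Private Service Connect address (hypothetical value):
  endpoint: https://storage-psc.example.internal/storage/v1/
```

The point of the change is that overriding the endpoint no longer forces a different credential path: the same service account works whether the bucket is reached via the public API or a private endpoint.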

@coveralls

coveralls commented Feb 17, 2024

Pull Request Test Coverage Report for Build 7961810108

Details

  • 0 of 4 (0.0%) changed or added relevant lines in 1 file are covered.
  • 243 unchanged lines in 10 files lost coverage.
  • Overall coverage decreased (-2.0%) to 64.536%

Changes missing coverage (covered / changed or added lines, %):
  pkg/storage/gcs.go: 0 / 4 (0.0%)

Files with coverage reduction (new missed lines, resulting %):
  pkg/backup/delete.go: 1 (66.36%)
  cmd/clickhouse-backup/main.go: 2 (72.22%)
  pkg/backup/restore.go: 3 (68.62%)
  pkg/storage/object_disk/object_disk.go: 5 (70.95%)
  pkg/status/status.go: 9 (70.2%)
  pkg/config/config.go: 10 (70.96%)
  pkg/server/server.go: 18 (52.76%)
  pkg/storage/s3.go: 25 (43.6%)
  pkg/storage/general.go: 26 (58.23%)
  pkg/storage/gcs.go: 144 (0.0%)

Totals:
  Change from base Build 7932619113: -2.0%
  Covered Lines: 7803
  Relevant Lines: 12091

💛 - Coveralls

@Slach
Collaborator

Slach commented Feb 19, 2024

Would you like to improve your PR by adding an integration test against https://github.com/fsouza/fake-gcs-server?

@Slach Slach self-requested a review February 19, 2024 09:06
@Slach
Collaborator

Slach commented Feb 19, 2024

Looks good,

did you try

GCS_TESTS=1 RUN_TESTS=TestIntegrationGCSWithCustomEndpoint ./test/integration/run.sh

??

@sanadhis sanadhis force-pushed the fix-gcs-custom-endpoint branch 2 times, most recently from 862abee to 901f61b Compare February 19, 2024 15:21
@sanadhis
Contributor Author

Looks good,

did you try

GCS_TESTS=1 RUN_TESTS=TestIntegrationGCSWithCustomEndpoint ./test/integration/run.sh

??

Hi @Slach, thank you for the pointers.
I have revised the proposed changes and the tests. From the logs I can see that it successfully runs create_remote, download, delete, restore, etc. However, the test did not eventually succeed; I get the same result from other tests, such as TestIntegrationS3.

2024/02/19 16:10:35.760087  info docker exec clickhouse-backup clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete remote partition_backup_2361390625533319336
2024/02/19 16:10:37.018948  info 2024/02/19 15:10:35.938842  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:35.948011  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:35.948236  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:35.951193  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:35.952374  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:35.955176  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:35.971154  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/19 15:10:35.972604  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/19 15:10:35.974939  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/19 15:10:35.983842  info done                      backup=partition_backup_2361390625533319336 duration=45ms location=remote logger=RemoveBackupRemote operation=delete
2024/02/19 15:10:35.983965  info clickhouse connection closed logger=clickhouse

2024/02/19 16:10:37.019041  info docker exec clickhouse-backup clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete local partition_backup_2361390625533319336
2024/02/19 16:10:38.339274  info 2024/02/19 15:10:37.262973  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:37.272927  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:37.272986  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/19 15:10:37.275127  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/19 15:10:37.277708  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/19 15:10:37.281947  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:37.283787  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:37.284765  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:37.286458  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:37.296090  info done                      backup=partition_backup_2361390625533319336 duration=33ms location=local logger=RemoveBackupLocal operation=delete
2024/02/19 15:10:37.296215  info clickhouse connection closed logger=clickhouse

2024/02/19 16:10:38.351172  info testBackupSpecifiedPartitions finish
2024/02/19 16:10:38.351260  info Clean before start       
2024/02/19 16:10:38.351299  info docker exec clickhouse-backup bash -xce clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete remote TestIntegrationGCS_full_2348673835862204199
2024/02/19 16:10:38.590346  info + clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete remote TestIntegrationGCS_full_2348673835862204199
2024/02/19 15:10:38.553902  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:38.564154  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:38.564369  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:38.566992  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:38.568298  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:38.570582  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:38.581352  info clickhouse connection closed logger=clickhouse
2024/02/19 15:10:38.581393 error 'TestIntegrationGCS_full_2348673835862204199' is not found on remote storage

2024/02/19 16:10:38.590376  info docker exec clickhouse-backup bash -xce clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete local TestIntegrationGCS_full_2348673835862204199
2024/02/19 16:10:38.763957  info + clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete local TestIntegrationGCS_full_2348673835862204199
2024/02/19 15:10:38.735593  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:38.744409  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:38.744456  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/19 15:10:38.746689  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/19 15:10:38.748777  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/19 15:10:38.751448  info clickhouse connection closed logger=clickhouse
2024/02/19 15:10:38.751483 error 'TestIntegrationGCS_full_2348673835862204199' is not found on local storage

2024/02/19 16:10:38.764012  info docker exec clickhouse-backup bash -xce clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete remote TestIntegrationGCS_increment_6730060813300271040
2024/02/19 16:10:38.955467  info + clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete remote TestIntegrationGCS_increment_6730060813300271040
2024/02/19 15:10:38.916576  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:38.926260  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:38.926452  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:38.928781  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:38.929862  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/19 15:10:38.931602  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/19 15:10:38.943932  info clickhouse connection closed logger=clickhouse
2024/02/19 15:10:38.943977 error 'TestIntegrationGCS_increment_6730060813300271040' is not found on remote storage

2024/02/19 16:10:38.955537  info docker exec clickhouse-backup bash -xce clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete local TestIntegrationGCS_increment_6730060813300271040
2024/02/19 16:10:39.145398  info + clickhouse-backup -c /etc/clickhouse-backup/config-gcs.yml delete local TestIntegrationGCS_increment_6730060813300271040
2024/02/19 15:10:39.114499  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/19 15:10:39.124084  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/19 15:10:39.124148  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/19 15:10:39.126165  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/19 15:10:39.128344  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/19 15:10:39.130871  info clickhouse connection closed logger=clickhouse
2024/02/19 15:10:39.130903 error 'TestIntegrationGCS_increment_6730060813300271040' is not found on local storage

2024/02/19 16:10:39.145507  info docker exec clickhouse ls -1 /var/lib/clickhouse/backup/*TestIntegrationGCS*
2024/02/19 16:10:39.226175  info Drop all databases       
2024/02/19 16:10:39.235170  info docker exec minio mc ls local/clickhouse/disk_s3
2024/02/19 16:10:39.343120  info                          
2024/02/19 16:10:39.343174  info Generate test data GCS with _TestIntegrationGCS suffix
    integration_test.go:2322: 
                Error Trace:    ~/clickhouse-backup/test/integration/integration_test.go:2322
                                                        ~/clickhouse-backup/test/integration/integration_test.go:1973
                                                        ~/clickhouse-backup/test/integration/integration_test.go:1720
                Error:          Received unexpected error:
                                code: 336, message: There was an error on [127.0.0.1:9000]: Code: 336. DB::Exception: Ordinary database engine is deprecated (see also allow_deprecated_database_ordinary setting). (UNKNOWN_DATABASE_ENGINE) (version 23.8.9.54 (official build))
                Test:           TestIntegrationGCS
--- FAIL: TestIntegrationGCS (26.30s)
FAIL
FAIL    command-line-arguments  26.688s
FAIL

Review comments (outdated, resolved) on: test/integration/config-gcs.yml (3), test/integration/run.sh
@Slach
Collaborator

Slach commented Feb 20, 2024

TestIntegrationGCS
code: 336, message: There was an error on [127.0.0.1:9000]: Code: 336. DB::Exception: Ordinary database engine is deprecated (see also allow_deprecated_database_ordinary setting). (UNKNOWN_DATABASE_ENGINE) (version 23.8.9.54 (official build))
Test: TestIntegrationGCS
--- FAIL: TestIntegrationGCS (26.30s)

I don't understand why you are trying to fix TestIntegrationGCS
instead of just running

GCS_TESTS=1 RUN_TESTS=TestIntegrationGCSWithCustomEndpoint ./test/integration/run.sh ?

@Slach
Collaborator

Slach commented Feb 20, 2024

sorry, I see you have a separate test for GCS with a custom endpoint with a proper config
did you use git push --force?

@Slach
Collaborator

Slach commented Feb 20, 2024

code: 336, message: There was an error on [127.0.0.1:9000]: Code: 336. DB::Exception: Ordinary database engine is deprecated (see also allow_deprecated_database_ordinary setting). (UNKNOWN_DATABASE_ENGINE) (version 23.8.9.54 (official build))

it looks weird — it seems /docker-entrypoint.d/dynamic_settings.sh didn't run after container start

look at docker-compose_advanced.yml for details

@sanadhis
Contributor Author

sanadhis commented Feb 20, 2024

Hi @Slach thanks again for your guidance.

sorry I saw you have different test for GCS with custom endpoint with properly config
did you use git push --force?

yes, and I reverted it regardless. I don't want to pollute the git history.

it looks weird, looks like /docker-entrypoint.d/dynamic_settings.sh didn't run after container start
look docker-compose_advanced.yml for details

The issue is that dynamic_settings.sh is missing the execute permission, so I modified the permission with chmod +x in my latest commit.
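A quick way to see why the missing execute bit matters: entrypoint runners typically check `test -x` on each script before invoking it, so a non-executable script is silently skipped. This is a self-contained sketch using a temporary file, not the repo's actual script:

```shell
#!/bin/sh
# Create a stand-in script without the execute bit.
script=$(mktemp)
printf '#!/bin/sh\necho ran\n' > "$script"
chmod -x "$script"

# An entrypoint-style runner checks the execute bit before running:
if [ -x "$script" ]; then echo "would run"; else echo "skipped"; fi   # → skipped

# The fix: add the execute bit, as done in the commit.
chmod +x "$script"
if [ -x "$script" ]; then echo "would run"; else echo "skipped"; fi   # → would run

rm -f "$script"
```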

Progress

So I made progress on this, but to be honest I don't think fake-gcs-server supports S3 interoperability yet; see fsouza/fake-gcs-server#1330.
The issue I have to solve to make the test pass is that ClickHouse fails to connect to fake-gcs-server:

2024.02.20 10:45:31.359605 [ 91 ] {} <Error> Application: DB::Exception: Message: , bucket XXX, key disk_gcs/cluster/0/jcn/qvlyiibmadrbmeeomyonbwbykkirl, object size 4: While checking access for disk disk_gcs

I am not a ClickHouse expert, but I am sure ClickHouse is using the S3 SDK here, given the <type>s3</type>.
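For context, ClickHouse declares object-storage disks in storage_configuration XML; a sketch of what a GCS-over-S3 disk like the one in the error might look like (endpoint, bucket, and credentials are placeholders, not the test's real values):

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <disk_gcs>
                <!-- ClickHouse talks to the bucket through its S3 driver -->
                <type>s3</type>
                <!-- placeholder endpoint and bucket -->
                <endpoint>http://gcs:8080/some-bucket/disk_gcs/</endpoint>
                <access_key_id>PLACEHOLDER_HMAC_KEY</access_key_id>
                <secret_access_key>PLACEHOLDER_HMAC_SECRET</secret_access_key>
            </disk_gcs>
        </disks>
    </storage_configuration>
</clickhouse>
```

This is why fake-gcs-server's missing S3 interoperability breaks the disk check: the disk speaks the S3 protocol, while the emulator only implements the GCS JSON API.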

If I disable QA_GCS_OVER_S3_BUCKET=XXX in the .env and run GCS_TESTS=1 RUN_TESTS=TestIntegrationGCSWithCustomEndpoint ./test/integration/run.sh, I advance to this stage:

...
2024/02/20 11:00:43.846247  info TestIntegrationGCSWithCustomEndpoint_increment_8299163498514678998/metadata/_issue331%2E_atomic__TestIntegrationGCSWithCustomEndpoint/_issue331%2E_atomic__TestIntegrationGCSWithCustomEndpoint.json already processed logger=resumable
2024/02/20 11:00:43.846290  info done                      backup=TestIntegrationGCSWithCustomEndpoint_increment_8299163498514678998 duration=8ms operation=upload size=227.85KiB table=_issue331._atomic__TestIntegrationGCSWithCustomEndpoint._issue331._atomic__TestIntegrationGCSWithCustomEndpoint
2024/02/20 11:00:43.857944  info done                      backup=TestIntegrationGCSWithCustomEndpoint_increment_8299163498514678998 duration=103ms operation=upload size=403.38KiB
2024/02/20 11:00:43.858131  info clickhouse connection closed logger=clickhouse

2024/02/20 12:00:44.894192  info docker exec clickhouse-backup bash -ce ls -lha /var/lib/clickhouse/backup | grep TestIntegrationGCSWithCustomEndpoint
2024/02/20 12:00:44.992739  info Delete backup            
2024/02/20 12:00:44.992796  info docker exec clickhouse-backup clickhouse-backup -c /etc/clickhouse-backup/config-gcs-custom-endpoint.yml delete local TestIntegrationGCSWithCustomEndpoint_full_8695081332883177324
2024/02/20 12:00:46.223450  info 2024/02/20 11:00:45.141930  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/20 11:00:45.152826  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/20 11:00:45.152889  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/20 11:00:45.155353  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/20 11:00:45.157869  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/20 11:00:45.164562  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/20 11:00:45.166885  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/20 11:00:45.168029  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/20 11:00:45.169876  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/20 11:00:45.208029  info done                      backup=TestIntegrationGCSWithCustomEndpoint_full_8695081332883177324 duration=66ms location=local logger=RemoveBackupLocal operation=delete
2024/02/20 11:00:45.208195  info clickhouse connection closed logger=clickhouse

2024/02/20 12:00:46.223524  info docker exec clickhouse-backup clickhouse-backup -c /etc/clickhouse-backup/config-gcs-custom-endpoint.yml delete local TestIntegrationGCSWithCustomEndpoint_increment_8299163498514678998
2024/02/20 12:00:47.517036  info 2024/02/20 11:00:46.415699  info clickhouse connection prepared: tcp://clickhouse:9440 run ping logger=clickhouse
2024/02/20 11:00:46.425779  info clickhouse connection open: tcp://clickhouse:9440 logger=clickhouse
2024/02/20 11:00:46.425835  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/02/20 11:00:46.428025  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/02/20 11:00:46.430491  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/02/20 11:00:46.434657  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/20 11:00:46.436443  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/20 11:00:46.437572  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/02/20 11:00:46.439370  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/02/20 11:00:46.474979  info done                      backup=TestIntegrationGCSWithCustomEndpoint_increment_8299163498514678998 duration=60ms location=local logger=RemoveBackupLocal operation=delete
2024/02/20 11:00:46.475143  info clickhouse connection closed logger=clickhouse

2024/02/20 12:00:47.517147  info docker exec clickhouse-backup bash -ce ls -lha /var/lib/clickhouse/backup | grep TestIntegrationGCSWithCustomEndpoint
2024/02/20 12:00:47.650765  info Drop all databases       
2024/02/20 12:00:47.849617  info Download                 
2024/02/20 12:00:47.849693  info docker exec clickhouse sed -i s/<disk_gcs>/<disk_gcs_rebalanced>/g; s/<\/disk_gcs>/<\/disk_gcs_rebalanced>/g; s/<disk>disk_gcs<\/disk>/<disk>disk_gcs_rebalanced<\/disk>/g /etc/clickhouse-server/config.d/storage_configuration_gcs.xml
2024/02/20 12:00:47.933577  info sed: can't read /etc/clickhouse-server/config.d/storage_configuration_gcs.xml: No such file or directory

    integration_test.go:2156: 
                Error Trace:    ~/clickhouse-backup/test/integration/integration_test.go:2156
                                                        ~/clickhouse-backup/test/integration/integration_test.go:2017
                                                       ~/clickhouse-backup/test/integration/integration_test.go:1729
                Error:          Received unexpected error:
                                exit status 2
                Test:           TestIntegrationGCSWithCustomEndpoint

Now I am going in circles: if I enable QA_GCS_OVER_S3_BUCKET, the ClickHouse container fails during boot.
Any suggestions?

@sanadhis sanadhis requested a review from Slach February 20, 2024 11:06
@sanadhis
Contributor Author

The failing tests seem unrelated to my changes

Review comments (outdated, resolved) on: test/integration/dynamic_settings.sh, test/integration/integration_test.go (2)
@Slach
Collaborator

Slach commented Feb 20, 2024

<disk_gcs_rebalanced> is used for GCS in some corner cases; let's use GCS_EMULATOR as a workaround

@sanadhis
Contributor Author

<disk_gcs_rebalanced> is used for GCS in some corner cases; let's use GCS_EMULATOR as a workaround

Thanks @Slach , the integration test has passed now 😄

2024/02/20 14:26:38.413323  info docker exec clickhouse ls -1 /var/lib/clickhouse/backup/*TestIntegrationGCSWithCustomEndpoint*
2024/02/20 14:26:38.543924  info Drop all databases       
2024/02/20 14:26:38.698641  info docker exec gcs sh -c ls -lh /data/altinity-qa-test/
--- PASS: TestIntegrationGCSWithCustomEndpoint (84.78s)
PASS
ok      command-line-arguments  85.163s

@sanadhis sanadhis requested a review from Slach February 20, 2024 13:29
Collaborator

@Slach Slach left a comment

will try to fix failed tests later

@Slach Slach merged commit 66d0c2b into Altinity:master Feb 20, 2024
7 of 18 checks passed
@sanadhis sanadhis deleted the fix-gcs-custom-endpoint branch February 20, 2024 14:09
@sanadhis
Contributor Author

@Slach is there a release planned soon?

@Slach
Collaborator

Slach commented Feb 21, 2024

check https://github.com/Altinity/clickhouse-backup/releases/tag/v2.4.32

@sanadhis
Contributor Author

check https://github.com/Altinity/clickhouse-backup/releases/tag/v2.4.32

Superb, thanks a lot 👍
