
cdk-assets: Remove asset from staging bucket on failed deployment #14474

Open · 2 tasks
bgshacklett opened this issue Apr 30, 2021 · 9 comments
Labels
  • @aws-cdk/assets: Related to the @aws-cdk/assets package
  • effort/small: Small work item – less than a day of effort
  • feature-request: A feature should be added or improved.
  • p2

Comments

@bgshacklett

In #12536, it was noted that part of the problem is that a corrupted zip file may be uploaded to the staging bucket. Once this happens, CDK will no longer attempt to upload the asset, because it detects that an asset with the corresponding hash already resides in the bucket. After reaching this state, the affected asset (or assets) must be removed from the staging bucket manually before a successful deployment can occur. When the deployment of a given asset fails, the asset should be removed from the staging bucket so that this "broken" state is never reached.
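
For reference, a minimal sketch of the manual cleanup that is currently required (the bucket name and key below are placeholders; the real values are the bootstrap staging bucket and the affected asset's hash):

import boto3

s3 = boto3.client('s3')
# Placeholder bucket and key; substitute the staging bucket created by
# `cdk bootstrap` and the key of the corrupted asset.
s3.delete_object(
    Bucket='cdk-staging-bucket-123456789012-us-east-1',
    Key='assets/0123456789abcdef0123456789abcdef.zip',
)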

Use Case

This change would help ensure that CDK does not attempt to use a corrupt pre-existing asset from the staging bucket during deployment.

Alternatives

Provide a CLI flag to ensure that assets are overwritten in the staging bucket on every deployment.

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@bgshacklett added the feature-request and needs-triage labels on Apr 30, 2021
@github-actions bot added the @aws-cdk/assets label on Apr 30, 2021
@eladb
Contributor

eladb commented May 2, 2021

Reassigning to @rix0rrr

@eladb assigned rix0rrr and unassigned eladb on May 2, 2021
@eladb added the p1 and effort/small labels on May 2, 2021
@dariagrudzien

We seem to be experiencing the same issue.

@ryparker removed the needs-triage label on Jun 1, 2021
@github-actions

This issue has not received any attention in 1 year. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

@github-actions bot added the closing-soon label on Jun 17, 2022
@bgshacklett
Author

Please do not auto-close this issue.

@github-actions bot removed the closing-soon label on Jun 17, 2022
@sannies

sannies commented Feb 21, 2023

When you interrupt a deploy with Ctrl-C, you might also end up with corrupted asset directories (*) for 3rd-party layers. These asset directories will then be zipped, uploaded, and cached. It is very hard to recover from that state.

(*) In my case the 3rd-party layer is created by 'pip' in a Docker container.

@rix0rrr
Contributor

rix0rrr commented Feb 21, 2023

Good find! If we can, we should try to switch to multipart uploads. Those are atomic by default, and the object only appears once the upload completes.

It depends on whether we already have the correct S3 permissions on the asset role, though...
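
A rough sketch of that approach with plain boto3, for illustration only (the bucket and key names are placeholders, and this isn't the actual cdk-assets code path): the transfer manager switches to multipart above the configured threshold, and S3 only materializes the object once CompleteMultipartUpload succeeds, so an interrupted upload never leaves a partial object behind.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')

# Force multipart for anything larger than 8 MiB; the parts of an
# interrupted upload are never assembled into a visible object.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024)

s3.upload_file(
    'asset.zip',                       # local zip produced for the asset
    'cdk-staging-bucket-placeholder',  # placeholder staging bucket name
    'assets/0123456789abcdef.zip',     # placeholder asset key
    Config=config,
)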

@rix0rrr
Contributor

rix0rrr commented Feb 21, 2023

Multipart shouldn't need any additional permissions, so we should be good to deploy that.

It does need an additional lifecycle rule on the bucket to remove old incomplete multipart uploads, though.
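
A sketch of what such a rule could look like in CDK Python (using `stack` as a placeholder scope; the real change would live in the bootstrap template):

from aws_cdk import Duration
from aws_cdk.aws_s3 import Bucket, LifecycleRule

Bucket(
    stack, 'StagingBucket',  # placeholder scope and id for illustration
    lifecycle_rules=[
        # Clean up the parts of multipart uploads that were never completed.
        LifecycleRule(abort_incomplete_multipart_upload_after=Duration.days(1)),
    ],
)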

@sannies

sannies commented Feb 21, 2023

I don't think we are talking about exactly the same issue here. In my case, I hit Ctrl-C while the pip install (*) is running. The asset directory (asset.0aff....cd54) has been created and some, but not all, of the 3rd-party libraries have been installed into it when Ctrl-C interrupts the installation. The directory is then present, but its contents are corrupt.
The next cdk synth will not rebuild this specific asset: it already exists, so there is no reason to do so. The directory will then be zipped and uploaded. At that point the CDK asset bucket is 'poisoned', and you can only recover by forcing a change to the asset, e.g. by changing requirements.txt. A force flag would allow recovery without actually performing a dummy change.

(*)

from aws_cdk import BundlingOptions
from aws_cdk.aws_lambda import AssetCode, LayerVersion, Runtime

LayerVersion(
    stack, '3rdpLayer',
    code=AssetCode(
        "lambdas",
        bundling=BundlingOptions(
            image=Runtime.PYTHON_3_9.bundling_image,
            command=[
                'bash', '-c',
                'pip install -r requirements.txt -t /asset-output/python',
            ])))

@rix0rrr
Contributor

rix0rrr commented Feb 22, 2023

Oh I see, this isn't about the upload but about the build. I misunderstood.

We've fixed this for zipping (by building to a tempfile), but apparently not for bundling. That'll be the solution for bundling as well then.
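
For illustration, a minimal sketch of that build-to-a-temporary-location idea applied to bundling (build_layer is a hypothetical stand-in for the pip step, not the actual CDK code):

import os
import shutil
import tempfile

def bundle_atomically(build_layer, asset_dir: str) -> None:
    # Build into a sibling temp directory so a Ctrl-C never leaves a
    # half-populated asset directory behind.
    tmp_dir = tempfile.mkdtemp(
        prefix='bundling-', dir=os.path.dirname(os.path.abspath(asset_dir)))
    try:
        build_layer(tmp_dir)  # e.g. pip install -r requirements.txt -t tmp_dir
    except BaseException:
        shutil.rmtree(tmp_dir, ignore_errors=True)  # discard partial output
        raise
    os.rename(tmp_dir, asset_dir)  # atomic on the same filesystem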

@comcalvi added the p2 label and removed the p1 label on Feb 13, 2024