
Custom domain and query string auth #165

Open
exp0nge opened this issue Jul 3, 2016 · 15 comments

@exp0nge

exp0nge commented Jul 3, 2016

I see that the query string auth feature does not work when a custom domain is provided. At least that's what it seems like from the source. Is this intentional? Is there a workaround to allow both features to work?

@jonathan-golorry

Technically the comments in the settings imply this behavior ("just use this domain plus the path"), but I agree this is an issue that needs to be addressed.

@exp0nge
Author

exp0nge commented Jul 17, 2016

Are folks just not using auth with custom domains then?

@jonathan-golorry

Probably. I'm running two different S3 locations for static and media files, and Django settings are rarely well documented, so I'm not really sure which settings are being used for what. I think it might be possible to adjust MEDIA_URL or STATIC_URL without setting AWS_S3_CUSTOM_DOMAIN and have it work. I had to set STATIC_URL for django.contrib.staticfiles, but I went with the default domain for other media instead of messing with MEDIA_URL.
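A sketch of what that settings split might look like (the domain and bucket names below are made-up examples, not values from this thread, and this is just one way to read the comment above, not a confirmed configuration):

```python
# settings.py (sketch: serve static files from a custom domain while media
# falls back to the default S3 domain, where querystring auth still works)

AWS_STORAGE_BUCKET_NAME = "my-bucket"          # example value

# django.contrib.staticfiles uses STATIC_URL directly, so a custom domain
# here does not interfere with signed media URLs.
STATIC_URL = "https://static.example.com/static/"

# AWS_S3_CUSTOM_DOMAIN is deliberately NOT set, and MEDIA_URL is left alone,
# so S3Boto3Storage builds media URLs against the default S3 domain and can
# still append the auth query string.
```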

@tchaumeny

Hi!
I just came across this problem too. It isn't possible to use a CUSTOM_DOMAIN for private media stored on S3 (the auth parameters just won't be added to the query string). Is this because of some restriction in AWS / Boto?

@vchrisb
Contributor

vchrisb commented Jan 24, 2017

If an AWS_S3_CUSTOM_DOMAIN is specified, presigned URLs are not generated:
https://github.com/jschneier/django-storages/blob/master/storages/backends/s3boto3.py#L567-L569

@tchaumeny

@vchrisb Exactly. The boto backend you are pointing to doesn't make it possible to use signed URLs with a custom domain, although it is possible (see for instance http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html#private-content-creating-signed-url-canned-policy-procedure, paragraph 2, which has custom-domain examples)
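For reference, a canned-policy signed URL on a custom domain just appends three query parameters to the object URL. This stdlib-only sketch shows the shape; the domain, key-pair ID, and signature below are placeholders (in real use, the signature is the CloudFront-safe base64 RSA signature of the canned policy document, not a literal string):

```python
from urllib.parse import urlencode


def cloudfront_canned_url(base_url: str, expires: int,
                          signature: str, key_pair_id: str) -> str:
    """Build the query-string shape of a CloudFront canned-policy signed URL.

    `signature` must really be the RSA signature of the canned policy,
    base64-encoded with CloudFront's safe alphabet; here it is opaque.
    """
    params = urlencode({
        "Expires": expires,          # Unix timestamp when the URL stops working
        "Signature": signature,      # placeholder; normally signed with the key pair
        "Key-Pair-Id": key_pair_id,  # CloudFront key pair ID
    })
    return f"{base_url}?{params}"


url = cloudfront_canned_url(
    "https://media.example.com/private/report.pdf",  # custom domain, not *.cloudfront.net
    expires=1767225600,
    signature="PLACEHOLDER-SIG",
    key_pair_id="APKAEXAMPLE",
)
```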

@0xjame5

0xjame5 commented Feb 27, 2019

Here's my hacky solution:

# Note that the media storage below does not get a custom domain.
# This is because media files need to be signed in order to be viewed, and
# django-storages only supports signing files not served from a custom domain.
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage


class StaticStorage(S3Boto3Storage):
    default_acl = None  # don't want any ACL applied ever
    location = settings.AWS_STATIC_LOCATION


class PublicMediaStorage(S3Boto3Storage):
    location = settings.AWS_PUBLIC_MEDIA_LOCATION
    file_overwrite = False  # important for this use case
    default_acl = None  # don't want any ACL applied ever
    custom_domain = ""  # disable the custom domain so URLs can be signed

@terencehonles
Contributor

Depending on what people are looking for (this is an old issue), the fix in #587 supports RSA signing as required in the document @tchaumeny linked to.

@cpoetter

I think #839 will close this.

@AllenEllis

AllenEllis commented Jul 5, 2020

I'm having a problem related to this: I'm trying to load Amazon S3 assets over IPv6. Per their documentation, it's a simple matter of making the request to a different URL:
s3.dualstack.aws-region.amazonaws.com/bucketname

I was hoping to achieve this by simply tweaking this line in settings.py:

AWS_S3_CUSTOM_DOMAIN = '%s.s3.dualstack.us-east-1.amazonaws.com' % AWS_STORAGE_BUCKET_NAME

But unfortunately it ignores the custom domain for private files. The fix in #587 (documented in #900) is specific to CloudFront, which I'm not using.

Edit:
My very clunky workaround for this one was just to override the url() method from the S3Boto3Storage class. I added this line right before the end:

url = url.replace("s3.amazonaws.com", "s3.dualstack.us-east-1.amazonaws.com")

Code in context

Includes following steps from this blog post.

from django.conf import settings
from django.utils.encoding import filepath_to_uri
from storages.backends.s3boto3 import S3Boto3Storage


class PrivateMediaStorage(S3Boto3Storage):
    location = settings.AWS_PRIVATE_MEDIA_LOCATION
    default_acl = 'private'
    file_overwrite = False
    custom_domain = False

    def url(self, name, parameters=None, expire=None):
        """Override `url` to find & replace the dual-stack endpoint in the URL."""
        # Preserve the trailing slash after normalizing the path.
        name = self._normalize_name(self._clean_name(name))
        if self.custom_domain:
            return "{}//{}/{}".format(self.url_protocol,
                                      self.custom_domain, filepath_to_uri(name))
        if expire is None:
            expire = self.querystring_expire

        params = parameters.copy() if parameters else {}
        params['Bucket'] = self.bucket.name
        params['Key'] = self._encode_name(name)
        url = self.bucket.meta.client.generate_presigned_url('get_object', Params=params,
                                                             ExpiresIn=expire)

        # Swap in the dual-stack endpoint so the URL resolves over IPv6.
        url = url.replace("s3.amazonaws.com", "s3.dualstack.us-east-1.amazonaws.com")

        if self.querystring_auth:
            return url
        return self._strip_signing_parameters(url)

@terencehonles
Contributor

So that's definitely an interesting use case, and your solution of subclassing, overriding, and replacing seems like it should work fine in the short term. It does seem like @cpoetter's #839 would be close to what you need, but it would need to be adjusted to fix merge conflicts.

This also points out that there are not enough configuration parameters: the change made in #885, although needed, probably should have introduced an additional parameter (or maybe they hit this same problem and worked around it in a different way?). The states I see are:

  1. No auth: self.querystring_auth is False and self.cloudfront_signer is None (with #885 the last part isn't needed)
  2. Query string auth: self.querystring_auth is True (with #885 you also need self.cloudfront_signer is None, but it might be better to have a separate self.cloudfront_signing setting instead of reusing self.querystring_auth)
  3. CloudFront auth: self.querystring_auth is True and self.cloudfront_signer is not None (this is because of #885)

Unfortunately, as written now, CloudFront auth is only accessible on the custom-domain code path (since you'll be specifying a domain that makes sense), but query string auth is not enabled on that path. Re-enabling it will require making it easy to understand why the code uses the different states above, since the other code path only has the two states of query string auth or not.
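The three states above can be written out as a tiny helper to make the logic explicit (purely illustrative; `auth_mode` is a hypothetical name, not a django-storages API):

```python
def auth_mode(querystring_auth: bool, cloudfront_signer) -> str:
    """Map the two settings to the three auth states described above."""
    if querystring_auth and cloudfront_signer is not None:
        return "cloudfront"    # state 3: signer present, auth enabled
    if querystring_auth:
        return "querystring"   # state 2: auth enabled, no signer
    return "none"              # state 1: no auth at all


# The three states, exercised:
assert auth_mode(False, None) == "none"
assert auth_mode(True, None) == "querystring"
assert auth_mode(True, object()) == "cloudfront"
```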

@moritz89

@AllenEllis Thanks for the code snippet. I built on it to make it a bit more forward-compatible:

from storages.backends.s3boto3 import S3Boto3Storage
from django.conf import settings


class PrivateMediaStorage(S3Boto3Storage):
    """Extend S3 with signed URLs for custom domains."""
    custom_domain = False

    def url(self, name, parameters=None, expire=None, http_method=None):
        """Replace internal domain with custom domain for signed URLs."""
        url = super().url(name, parameters, expire, http_method)
        custom_url = url.replace(
            settings.AWS_S3_ENDPOINT_URL,
            f"{settings.AWS_S3_URL_PROTOCOL}//{settings.AWS_S3_CUSTOM_DOMAIN}",
        )
        return custom_url

@sudarshaana

sudarshaana commented May 6, 2022

Thanks @moritz89 for the code.
In my case I had to replace the AWS_S3_ENDPOINT_URL with just the protocol:

    custom_domain = False

    def url(self, name, parameters=None, expire=None, http_method=None):
        url = super().url(name, parameters, expire, http_method)
        # "https://" is a plain string (no placeholders, so no f-string needed).
        custom_url = url.replace(
            settings.AWS_S3_ENDPOINT_URL,
            "https://",
        )
        return custom_url

@dopry

dopry commented Oct 26, 2023

I've got a slightly improved version that I ended up using as a mixin.

from storages.backends.s3boto3 import S3Boto3Storage


class CustomDomainFixedUpS3Boto3Storage(S3Boto3Storage):
    # Work around url() + custom domains not working for pre-signed URLs.
    # See: https://github.com/jschneier/django-storages/issues/165#issuecomment-810166563
    # Adapted to preserve the inputs we would expect to use if this were fixed upstream.
    x_custom_domain = None
    x_url_protocol = "https:"

    def url(self, name, parameters=None, expire=None, http_method=None):
        """Replace the internal domain with the custom domain for signed URLs."""
        url = super().url(name, parameters, expire, http_method)
        if self.x_custom_domain:
            return url.replace(
                self.endpoint_url,
                f"{self.x_url_protocol}//{self.x_custom_domain}",
            )
        return url

@amoralesc

I was going crazy trying to understand why the solutions provided by #165 (comment) and #165 (comment) weren't working for me. It turns out that with AWS_S3_SIGNATURE_VERSION set to s3v4, the S3 endpoint is included in the signature as well, so simply replacing the endpoint in the generated URL produces a malformed request. More specifically, this error:

<Error>
    <Code>SignatureDoesNotMatch</Code>
    <Message>
        The request signature we calculated does not match the signature you provided. Check your key and signing method.
    </Message>
    <Key>path/to/file</Key>
    <BucketName>bucket</BucketName>
    <Resource>/bucket/path/to/file</Resource>
    <RequestId>1234567890</RequestId>
    <HostId>abcdefghi123456789</HostId>
</Error>

Since the current Boto3 version (1.34.31 as of this comment) defaults signature_version to Signature Version 4, you must explicitly set the AWS_S3_SIGNATURE_VERSION variable to s3 (legacy Signature Version 2).

My use case was running a Django app and a MinIO service in a Docker Compose project, and having them share a network. I could reference the MinIO service from the app container like http://minio:9000, and was setting the AWS_S3_ENDPOINT_URL variable to this value. However, this was also generating pre-signed URLs that I couldn't access from my host machine, even after port-forwarding MinIO, since I could only access it as localhost:9000.

Something to take into consideration is that the signature versions are not backwards compatible, so be careful about URL endpoints if making this change on a legacy project. Signature Version 2 has been marked as legacy by AWS, and some regions no longer support it. Since I only needed to solve this for a local project, using the legacy version didn't feel wrong!
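Under the assumptions in this comment, the settings side boils down to two values (the MinIO endpoint is the Docker Compose example above; since SigV2 is deprecated by AWS, this is only reasonable for local development):

```python
# settings.py sketch for a local Django + MinIO Docker Compose setup.

# How the Django container reaches MinIO inside the compose network.
AWS_S3_ENDPOINT_URL = "http://minio:9000"

# Legacy SigV2 does not include the host in the signature, so the
# endpoint-replacing url() overrides from earlier in this thread keep
# producing valid URLs after the host swap. Deprecated by AWS; local dev only.
AWS_S3_SIGNATURE_VERSION = "s3"
```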
