Allow the storage to work in a Docker environment #124

Open
c0dearm opened this issue Jul 12, 2021 · 4 comments
c0dearm commented Jul 12, 2021

Hi @etianen,

A pleasure to talk to you. I used django-reversion in the past, and I have now found django-s3-storage very useful for my new project. I think you are a great developer, and I am happy to submit an issue here.

I usually set up my projects for local development using docker-compose, keeping them as close to production as possible. I will be using MinIO (S3-compatible), so the setup basically looks like this (simplified):

minio:
  image: minio/minio
  volumes:
    - minio_data:/data
  environment:
    - MINIO_ROOT_USER=minio
    - MINIO_ROOT_PASSWORD=minio123
  command: server /data
  ports:
    - "9000:9000"

django:
  build: .
  environment:
    - STORAGE_USER=minio
    - STORAGE_PASSWORD=minio123
    - STORAGE_INTERNAL_URL=http://minio:9000
    - STORAGE_EXTERNAL_URL=http://localhost:9000
  depends_on:
    - minio

Note that in a setup like this, the Docker services communicate through their own private network with its own DNS resolution, which is not accessible from the host. So Django can access MinIO at http://minio:9000, but from the host it has to be accessed through http://localhost:9000.

I thought, well, this is fine. I can just set AWS_S3_ENDPOINT_URL to http://minio:9000 and AWS_S3_PUBLIC_URL to http://localhost:9000. The problem here is that setting AWS_S3_PUBLIC_URL completely skips the generation of pre-signed URLs, so right now it is not possible to use private buckets in docker-compose setups.
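
For reference, the relevant settings wired up to the compose file above look roughly like this in my settings.py (the bucket name is just a placeholder):

import os

DEFAULT_FILE_STORAGE = "django_s3_storage.storage.S3Storage"
AWS_S3_BUCKET_NAME = "my-bucket"  # placeholder
AWS_ACCESS_KEY_ID = os.environ["STORAGE_USER"]
AWS_SECRET_ACCESS_KEY = os.environ["STORAGE_PASSWORD"]
AWS_S3_ENDPOINT_URL = os.environ["STORAGE_INTERNAL_URL"]  # http://minio:9000, only resolvable inside the Docker network
AWS_S3_PUBLIC_URL = os.environ["STORAGE_EXTERNAL_URL"]  # http://localhost:9000, reachable from the host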

I wonder if this should be expected behavior. If not, I can submit a PR to fix it. Your call!

For now, I am working around this limitation with a custom storage that injects this mixin into any of the storage backends provided by this project. It is ugly, but it does the job. Note that I use my own AWS_S3_PUBLIC_BASE_URL setting instead of AWS_S3_PUBLIC_URL:

from urllib.parse import urlparse

from django.conf import settings


class PublicUrlMixin:
    # The scheme/host the browser should use, e.g. http://localhost:9000.
    base_url = urlparse(settings.AWS_S3_PUBLIC_BASE_URL)

    def url(self, *args, **kwargs):
        # Generate the pre-signed URL against the internal endpoint, then
        # rewrite the scheme and host to the externally reachable ones.
        return (
            urlparse(super().url(*args, **kwargs))
            ._replace(scheme=self.base_url.scheme, netloc=self.base_url.netloc)
            .geturl()
        )
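
It is used by applying the mixin to one of this project's backends, for example (the class name is just illustrative):

from django_s3_storage.storage import S3Storage


class PublicS3Storage(PublicUrlMixin, S3Storage):
    # Pre-signs URLs against AWS_S3_ENDPOINT_URL, then rewrites them
    # to AWS_S3_PUBLIC_BASE_URL so the browser can reach them.
    pass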

etianen (Owner) commented Jul 19, 2021

This is expected behaviour: the AWS_S3_PUBLIC_URL setting is there to serve content through a CDN, and since CDNs are pretty public, skipping the signing is what you want.

However, I can see your use case, which seems valid. Unfortunately, I can't change this behaviour without breaking all current library users, since AWS_S3_BUCKET_AUTH defaults to True. Making AWS_S3_PUBLIC_URL respect AWS_S3_BUCKET_AUTH would switch all existing users over to bucket auth.

This is complicated by #114

I've just got back from holiday, so I can't really devote any time right now to figuring out an optimal solution, as I have a million work emails. Suggestions welcome. Or I'll take a look myself in a few weeks. Meanwhile, your solution seems perfectly fine.

brammittendorff commented

We have the same problem over here; it would be really nice to have a fix for this.

ckoppelman commented

You could either add a new setting like AWS_S3_PUBLIC_URL_BUCKET_AUTH,

Or turn AWS_S3_BUCKET_AUTH into an enumeration with three options:

from enum import IntEnum, auto


class BucketAuthType(IntEnum):
    # IntEnum so that NONE (value 0) stays falsy in existing truthiness checks.
    NONE = 0
    PRIVATE_URLS = auto()
    PRIVATE_AND_PUBLIC_URLS = auto()

Then you can test the condition -- if it's False or NONE, it will act the same (if AWS_S3_BUCKET_AUTH: is falsy for both, since NONE has value 0 in an IntEnum). You can set the new default to PRIVATE_URLS. To handle now-legacy users who have True set, you can just test for that case at load time:

if isinstance(AWS_S3_BUCKET_AUTH, bool) and AWS_S3_BUCKET_AUTH:
    AWS_S3_BUCKET_AUTH = BucketAuthType.PRIVATE_URLS
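
Then wherever the library generates a URL, the check could look something like this (just a sketch of the intended semantics, not actual library code):

def use_bucket_auth(auth, serving_via_public_url):
    # auth is the (possibly migrated) AWS_S3_BUCKET_AUTH value.
    if not auth:  # False, or BucketAuthType.NONE (falsy because IntEnum)
        return False
    if serving_via_public_url:
        return auth == BucketAuthType.PRIVATE_AND_PUBLIC_URLS
    return True  # both PRIVATE_* modes sign direct bucket URLs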

etianen (Owner) commented Mar 26, 2024

That's a good solution, but importing an enum into a Django settings file is a bit weird.

Maybe make the type of AWS_S3_BUCKET_AUTH be bool | Literal["private"] | Literal["private-public"]?
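
Roughly this, as a sketch (the normalize_bucket_auth name is hypothetical):

from typing import Literal, Union

BucketAuth = Union[bool, Literal["private", "private-public"]]


def normalize_bucket_auth(value: BucketAuth) -> Literal["none", "private", "private-public"]:
    # Map the legacy booleans onto the new string modes.
    if value is True:
        return "private"  # preserves current behaviour: sign direct URLs only
    if value is False:
        return "none"
    return value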
