This repository has been archived by the owner on Feb 16, 2023. It is now read-only.

Secure contexts #23

Open
annevk opened this issue Apr 12, 2021 · 50 comments

annevk commented Apr 12, 2021

Any reason this isn't restricted to secure contexts? I think getRandomValues() isn't restricted only because of web compatibility.

bakkot commented Apr 12, 2021

Is there any reason to restrict it to those contexts? getRandomValues is specifically the crypto-secure alternative to Math.random, so its use implies that you're doing the sort of sensitive thing which you ought not do outside a secure context, but UUIDs are used widely in a variety of contexts.

annevk commented Apr 12, 2021

Generally, there's a desire to expose new APIs in secure contexts only: https://blog.mozilla.org/security/2018/01/15/secure-contexts-everywhere/.

I think for cryptographic primitives there were other arguments as well, such as not letting insecure contexts interfere with cryptographic code.

bakkot commented Apr 12, 2021

From that page:

Additionally, to avoid fracturing ecosystems that extend beyond the web, core language features and builtin libraries of JavaScript and WebAssembly will likely not be restricted to secure contexts.

I guess the argument is that this is somehow not a builtin library? Certainly it feels like one to me.

broofa commented Apr 12, 2021

Does the fact this is already available in Node.js make it part of "an ecosystem that extends beyond the web"?

domenic commented Apr 15, 2021

Personally I'm pretty torn on this.

I always like guarding more behind secure contexts. New sites should be using secure contexts, and only new sites should be using crypto.randomUUID() (since it's a new API).

But, as expressed in whatwg/urlpattern#29 (comment) and whatwg/urlpattern#29 (comment), secure contexts guards are tricky for APIs which are potentially library-facing (as opposed to application-facing). They have a viral effect, where any library that uses them either has to (a) update all its documentation to say "this library is only usable in secure contexts"; or (b) bundle a polyfill for non-secure contexts.

Unfortunately we haven't seen any indication of libraries taking the (a) route in the past. So in terms of the priority of constituencies, secure context guards for library-facing features like this are more likely to cause user harm (by adding bytes to the bundle) than user gain (by nudging more sites toward secure contexts). We could try to change that, so that libraries start requiring secure contexts, but I think that would take a more coordinated effort that goes against some of the principles Mozilla has articulated in the past, e.g. it would involve restricting new CSS features to secure contexts, or restricting new JavaScript and WebAssembly features to secure contexts.

On the other hand, maybe this API is special. If you're really intending to generate unique IDs, you have no guarantee that the API fulfills this contract on an insecure context. (Because a MITM could insert code that makes it always return a pre-determined non-unique ID.) Note that this point seems to apply about equally to crypto.randomUUID() and crypto.getRandomValues().

So I'm not sure where that leaves us. I guess I lean slightly more toward allowing this in insecure contexts, for symmetry with crypto.getRandomValues() and Math.random(). But I'd also be interested in investigating future paths where even library-facing APIs like those, or new CSS features like subgrid, get moved into secure contexts as part of the overall ecosystem nudge project.

bakkot commented Apr 15, 2021

If you're really intending to generate unique IDs, you have no guarantee that the API fulfills this contract on an insecure context.

You don't have that guarantee anyway, since there could be a man-in-the-browser or other such attacker.

domenic commented Apr 15, 2021

That's not generally the threat model the web platform operates under when we're discussing secure context restrictions.

bakkot commented Apr 15, 2021

It makes sense to consider man-in-the-browser as meaningfully different from HTTP tampering for stuff like payments or geolocation, where the concern is confidentiality. But if the concern is "can the server trust that this client-side generated data has [some property]", the answer is no, it can't. It's a very different concern. We should not be encouraging people to rely on HTTPS providing that guarantee.

domenic commented Apr 15, 2021

Agreed. I was not suggesting that secure contexts have anything to do with whether or not a server needs to perform validation on client-supplied data.

bakkot commented Apr 15, 2021

I'm not sure what you meant by "If you're really intending to generate unique IDs, you have no guarantee that the API fulfills this contract on an insecure context", then, if not "given HTTPS, the server can trust IDs provided by this API to be unique".

bcoe commented Apr 16, 2021

@annevk @bakkot @domenic I don't have a strong opinion on this topic; as several folks have stated, for modern websites using randomUUID, I'd expect that they're running in a secure context already... However,

One environment that comes to mind where randomUUID will immediately be very useful is platforms like Electron:

  • which will be able to take advantage of the feature faster, because they ship a known version of a browser.
  • which would be able to share UUIDs between the local browser and local Node.js contexts securely.

I'm concerned that restricting to secure contexts could make it harder to develop on these platforms?

domenic commented Apr 16, 2021

I suspect Electron apps can configure themselves to be treated as a secure context, if they aren't already. They're not governed by web specs (they just reuse a browser engine to implement a non-web platform) so it's really entirely up to them what they expose and how.

cynthia commented Apr 20, 2021

We discussed this during our call today. Given the risks (as noted above: #23 (comment)) of frameworks/libraries possibly going down path (b), we partially agreed (this was a breakout call, so not everyone was present) that this is probably one of the special cases where it makes sense for the API to be usable in an insecure context.

In the long run, we'd prefer to make fewer exceptions.

And to be on record: As noted above, Electron apps have the liberty to interpret "secure" as they see fit, so how it works there does not concern us.

cynthia commented Apr 21, 2021

Discussed this during our plenary, and we have consensus that this is a special case that should also be available in insecure contexts.

broofa commented May 12, 2021

[Continuing the thread from #24 here.]

@bakkot and I are both confused as to how this conversation led to #24 (restricting to secure contexts). Specifically...

It sounded like the group was okay with making an exception here (as relayed by @cynthia):

we have consensus that this is a special case that should also be available in insecure contexts.

And while @annevk did push back, he seems to acknowledge that the general case of crypto being secure-only already made exceptions for randomness-related APIs:

Limited to randomness it would be more reasonable as that's already exposed

(E.g. crypto.getRandomValues(), which strikes me as being conceptually very similar to randomUUID(), is available in insecure contexts. That these two methods would have different security profiles seems inconsistent.)

There's also @domenic's observation about the impact this is likely to have on libraries (having to expose the secure-context constraint and/or provide shim code). But he also points out that there's some value in bringing the spec in line with the current implementation(s).

As a reviewer of #24, I'm getting mixed messages. While my preference would be to lift the secure-context requirement, I'm fine proceeding either way. I'd just like to see some sort of consensus so the last comment here doesn't directly contradict the PR I'm being asked to approve. 😆

domenic commented May 12, 2021

For reviewing #24, I think the relevant question is "should the spec reflect implementations". (The answer is yes.)

Then this issue is about "what should implementations and the spec do in the future". I.e. this issue becomes a change request for the spec/implementations.

cynthia commented May 13, 2021

Is it reflecting multiple implementations, though? If it's only one implementation, then I think it should be malleable unless a lot of content already depends on it. (E.g., we have already failed with a lot of WebKit specials, although some have unshipped.)

domenic commented May 13, 2021

There are zero implementations of crypto.randomUUID() that work in non-secure contexts. (And no implementations interested in doing so currently.)

There is one implementation of crypto.randomUUID() that works only in secure contexts. (And another implementation which has expressed some interest, although not yet an official position.)

cynthia commented May 13, 2021

Are there potential security risks associated with this being exposed on insecure contexts? The bit we are afraid of is people falling back to suboptimal randomizers in insecure contexts due to this not being available.

While in the general case there is no excuse for shipping a new application insecure, if this is used by libraries or frameworks it would have to be polyfilled since there isn't any guarantee of a secure context.

I acknowledge that making this available in insecure contexts goes against our general guidelines, but the associated risks of making this secure-only are definitely there.

bcoe commented Aug 20, 2021

The implementations so far have opted to expose randomUUID in a secure context.

Closing out this issue; we can revisit in the future if there are compelling cases for reversing this decision 👍

@bcoe bcoe closed this as completed Aug 20, 2021
ljharb commented Aug 20, 2021

@bcoe meaning, only in a secure context? if so that's very unfortunate

bcoe commented Sep 7, 2021

@ljharb crypto.randomUUID is currently only available in a SecureContext. Our reasoning is that we can see how websites are using randomUUID, and revisit the decision in the future with real information.

Can you provide some concrete reasons as to why this decision concerns you?

bakkot commented Sep 7, 2021

I can't speak for @ljharb, but I have two major concerns:

First, the issues raised above: restricting this to secure contexts means it is essentially unavailable for use in libraries, which as a practical matter generally are not able or willing to restrict themselves to secure contexts. Waiting to see how websites are using randomUUID will give you no data whatsoever about this, because the concern is precisely that it won't be used by much of the code which otherwise would make use of it.

Second, as a process question - this was discussed here and in w3ctag and as far as I can tell the conclusion was that it should be available in insecure contexts. For the ultimate outcome to be that Chrome decided to ship only in secure contexts anyway, with no further discussion and despite the spec at the time, and then the spec to be changed to match Chrome - it makes participating in these conversations seem quite futile.

domenic commented Sep 7, 2021

Your best path here is trying to convince @annevk, since he (representing Mozilla) pushed the secure context restriction, and Chrome followed since we wanted the spec to have multi-implementer agreement.

However, I'll note that @annevk is probably bound by Mozilla policy, which prohibits shipping new features in insecure contexts unless other browsers already ship the feature insecurely, or requiring secure contexts causes undue implementation complexity.

Mozilla hasn't really applied this policy very consistently; in particular for many CSS features, and I believe some JS features, they have been the first to ship, but have not restricted themselves to secure contexts. But from my understanding that's their current position.

bakkot commented Sep 7, 2021

@annevk, since he (representing Mozilla) pushed the secure context restriction

Wait, where did this happen?

domenic commented Sep 7, 2021

The OP of this very thread.

bcoe commented Nov 25, 2021

I'd like to push on this thread a little bit, now that we've had crypto.randomUUID in the wild for a few months, by bringing up a few interactions I've seen.

I noticed a conversation where crypto.randomUUID would be a good fit, but the user shies away from it due to the secure-context caveat:

Hmm well I'd want this to be usable while deving like serving over http localhost. And ya I don't need it to be cryptographically secure. Just random. So those last two options seem good -- @JayCooperBell.

In the thread, a user is pointed towards crypto.randomUUID but is hesitant to use it for two reasons.

  1. lack of support in Safari (this will be solved soon 🎉).
  2. concern that the requirement for a secure context will make it difficult to use crypto.randomUUID in a development environment. It's explained that 127.0.0.1 and localhost have access to secure-context APIs, but I still understand the hesitancy. It's not uncommon for a development shop to deploy staging in an HTTP environment.

The end result is that the user opts to use Math.random().toString(), which they feel is sufficient for their testing use case ...

It's not uncommon for testing code to find its way into production code, and I'd rather people reach for the cryptographically secure option available to them.
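The branch a developer in this position ends up writing is short but telling (an illustrative sketch; the Math.random() fallback mirrors the workaround from the quoted thread):

```javascript
// In an insecure context, crypto is still present but randomUUID is not,
// so code meant to also run over plain HTTP has to feature-test:
const id =
  typeof crypto !== 'undefined' && typeof crypto.randomUUID === 'function'
    ? crypto.randomUUID()
    : Math.random().toString(); // the quoted user's testing workaround
```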


In another thread, I saw someone proposing a DIY approach for generating IDs in JavaScript:

Fast and easy way to create unique IDs using JavaScript. - @SimonHoiberg

The approach they suggest has a couple of potential problems which crypto.randomUUID would address:

  1. they bake the date into the ID, which has the potential to expose personal information (e.g., you might be able to infer the user's timezone).
  2. it uses Math.random(), which historically has not been a good entropy source on some platforms.
  3. I'd guesstimate the approach only has ~64 bits of entropy, vs. the 122 bits of UUID v4.
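A hypothetical reconstruction of that kind of DIY generator (not the exact code from the linked thread) makes the entropy problem visible:

```javascript
// Timestamp + Math.random(): readable, but the date component is
// guessable (and leaks timing information), and the random component is
// bounded by the ~52 bits of a double's mantissa, well short of
// UUID v4's 122 random bits.
function diyId() {
  return Date.now().toString(36) + Math.random().toString(36).slice(2);
}
```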

Someone responds to this thread, pointing the user towards crypto.randomUUID(), but again they have to include the caveat of secure contexts.


I think these two examples demonstrate a couple of ways in which the requirement for secure contexts hurts adoption of crypto.randomUUID (and demonstrate why its adoption is good for web users):

  1. folks trying to use crypto.randomUUID in a development environment may shy away from the feature if they're ever serving staging, testing environments, etc., over HTTP.
  2. pointing users towards crypto.randomUUID, in situations where it's clearly the right choice (more secure, better for privacy), requires that a confusing caveat be included in the advice.

@marcoscaceres

I also feel that it would be a bit unfortunate to not allow this in insecure contexts (for the reasons given above), particularly given its general utility in all contexts. crypto.randomUUID is not really on the level of a "new platform feature" or what we've been calling "powerful features", so - unless there is some user privacy pitfall I'm missing - I'd be supportive of this being universally exposed.

annevk commented Nov 26, 2021

I'm not entirely persuaded. For 1, I agree that the development environment concern is real, but it's not unique to crypto.randomUUID() and is something we need to address to foster adoption of features limited to secure contexts. I don't think the solution should be giving up on secure contexts for the features where that is maybe possible. That seems shortsighted as we do plan to move away from insecure contexts long term and more importantly does not address the fundamental problem.

As for 2, I suspect that once 1 is solved that will be less of a problem.

broofa commented Nov 27, 2021

the development environment concern is real... and is something we need to address to foster adoption of features limited to secure contexts

@annevk : What is the plan for addressing these concerns? And what is the timeline for that plan?

Until such time as you can provide us with that information, I feel it's appropriate to drop the secure-context requirement. Continuing to impose this restriction needlessly hinders adoption of this API, and pushes users toward more ad-hoc (read, "less secure, more complex") solutions.

bcoe commented Nov 28, 2021

the development environment concern is real... and is something we need to address to foster adoption of features limited to secure contexts

What is the plan for addressing these concerns? And what is the timeline for that plan?

Would love to pitch in and help with this problem. Perhaps the answer is a well-supported polyfill for secure contexts that we could point developers towards for their dev environments. Then, when the topic of secure contexts comes up, there's a consistent answer for people.

ljharb commented Nov 28, 2021

That would just mean folks would ship the polyfill forever to ensure insecure contexts worked too.

ctavan commented Dec 2, 2021

For awareness: the uuid npm module will soon implement the fallback for insecure contexts: uuidjs/uuid#602 (comment)

However, at least we're only falling back to crypto.getRandomValues() and not to Math.random().

@runspired

Recently emberjs/data#8097 removed its polyfill for uuid generation because who needs another polyfill, right? I feel every app I've worked on has at least 4 uuid implementations in its codebase.

This obviously resulted in the library breaking for users who either dev or deploy in insecure contexts. We will have to revert; previously we were using getRandomBytes. uuid is a foundational web primitive; locking it behind secure contexts will only mean that every library continues to require or ship its own implementation.

bakkot commented Aug 4, 2022

@martinthomson I'd like to revisit this decision in light of the above comment, but @annevk is no longer at Mozilla. Can you speak for Mozilla on issues like this? If not, do you know who can?

The short summary is, there's a new crypto.randomUUID API which @annevk felt should only be exposed in secure contexts as a nudge towards getting people off of insecure contexts. (As far as I am aware that was the only concern, nothing to do with security per se.) Safari and Chrome followed suit to reduce interop risk. It has turned out that this means libraries and such can't use this API and have to keep shipping polyfills, making it much less useful, so I would like to revisit that decision. Since @annevk, speaking for Mozilla, was the only person who pushed for this restriction in the first place, I'm hoping to engage with someone who can speak for Mozilla now.

@martinthomson

Leaving aside who speaks for Mozilla, Anne's request seems entirely reasonable in this case. As he said, this is a new feature which is technically trivial to restrict.

Adoption incentive arguments seem weak: this isn't a high-impact API, so it won't incentivise HTTPS adoption any more than a restriction will disincentivise adoption of this. Also, speaking personally, UUID is a little silly¹, so maybe a little disincentive is a good thing.

(I haven't read the entire thread here, so feel free to resurface arguments you think I missed.)

Footnotes

  1. Try a 15 byte random value, base64[url] encoded some time.

bakkot commented Aug 4, 2022

The argument is basically this: if a new API is gated behind HTTPS, but is possible to implement in userland, then many consumers will choose not to adopt the API (because it will require making their libraries and components HTTPS-only). If it's worth adding an API at all, it's presumably because we want people to be able to use it instead of shipping a userland implementation forever. So gating it on HTTPS is counterproductive.

If we don't want people to use it, we shouldn't have added it in the first place. But assuming we do think it's worth having, there's no benefit to gating it on HTTPS except to drive people towards HTTPS, and that has to be weighed against the cost of people instead choosing to ship a userland implementation. I think the cost clearly outweighs the benefit here, given that in fact people are instead choosing to ship a userland implementation. Do you disagree?

@martinthomson

What I'm not getting from this discussion is details about the conditions under which an insecure context might access this. There was mention of Electron, but as Domenic pointed out, whether something is "secure" or not is up to the app to decide. Same for Node.js and friends, where code can simply decide that the context is "secure".

Is this because a framework that includes this might be run on an unsecured page, but it is still expected to work? The only concerns I can see there are based on the presence of a fallback: (a) that might be insecure or (b) might add to code size. For (a), an insecure implementation was shipped without integrity protection, so maybe this is no net loss. For (b), maybe this means more sophisticated tree-shaking is needed so that you don't ship the fallback to production (which is presumably properly secured). But of course, not all browsers ship the new API (yet), so I'd be surprised if that fallback can really be removed in any reasonable time frame.

If the concern is that this new API won't be used at all, that doesn't seem a real risk based on how package maintainers are dealing with this issue.

bakkot commented Aug 8, 2022

Is this because a framework that includes this might be run on an unsecured page, but it is still expected to work?

Yes, this is the concern. And it's not just theoretical, as you can see above. (Or a library, or any other code designed to be reused.)

The only concerns I can see there are based on the presence of a fallback: (a) that might be insecure or (b) might add to code size.

Code size (and the need for yet another dependency) is the main concern, yes. And really, if you're going to go to the effort of shipping a fallback, you're probably just going to only use the fallback. From the point of view of the library author, since you're paying the cost in code size of including the fallback either way there's not really any reason to bother feature-testing and making your logic conditional.

(There is, of course, also the hassle involved in using something which seems to be available in testing and only later discovering that you need to back it out. That's not a good experience for developers, especially when there is no good answer we can give them for why it was necessary to cause them this hassle.)

For (b), maybe this means more sophisticated tree-shaking is needed so that you don't ship the fallback to production (which is presumably properly secured).

In practice very few production applications are built with this level of tree shaking, in my experience. So the fact that it's theoretically possible isn't very compelling, I would think.

But of course, not all browsers ship the new API (yet), so I'd be surprised if that fallback can really be removed in any reasonable time frame.

This API has been shipping in all evergreen browsers for several months at this point. There's lots of libraries which only support browsers in which this API is available. Had it been available in insecure contexts at least some frameworks would already have been shipping it without a fallback.

But since it is not available on insecure contexts, anyone who is making code to be reused isn't going to adopt this API until everyone drops support for HTTP, which I think you'll agree is going to be at least a few more years yet.


So there is a real, if small, cost - more complexity, dependencies, and code size for libraries and frameworks, instead of being able to just use the platform. Avoiding those costs was the entire point of adding this API. And, as far as I can tell, there's no benefit to limiting this to secure contexts, except a nudge towards getting people off of HTTP, which doesn't seem to be very compelling. Is there some other benefit I'm missing?

runspired commented Aug 8, 2022

What I'm not getting from this discussion is details about the conditions under which an insecure context might access this. There was mention of Electron, but as Domenic pointed out, whether something is "secure" or not is up to the app to decide. Same for Node.js and friends, where code can simply decide that the context is "secure".

This comment seems to me to presuppose that you would only want a uuid generated in a secure context. There's plenty of insecure contexts in which folks ship apps in which you might want a unique string.

Data libraries, and apps that want to support offline use, client-side caches, side-loading/side-posting of data, serializability of client-generated data, transactional saves of related entities, or client-side create behaviors, often necessitate the generation of UUIDs on the client. These needs are common enough that in most apps I've worked on I've observed multiple UUID polyfills included in the build.

This led to me advocating with @dherman a number of years back for more core libraries for exactly things like this, UUID being the example I had to give, especially because optimized random-byte generation is something better left to browsers.

As a framework library author it's a non-trivial decision to force https on end users, though I'm extremely sympathetic to the viewpoint that more things that force https the better.

Fwiw, while I would hope that functionality that doesn't need to be gated by secure contexts, and that seems like a core-library feature, wouldn't be gated in this way, we can likely escape in the context of the library I maintain by making it possible for the consuming app to choose to include the polyfill, defaulting to not using one. Not all libraries have this option, but it is something we can do thanks to the Ember community's strong conventions around shared tooling.

cynthia commented Aug 8, 2022

(Non-TAG position, as I haven't discussed this with the group)

I am sympathetic to the situation, and see more risk in continuing the secure-contexts enforcement than in making this available in insecure contexts. Given that this can be polyfilled easily (and likely through a worse implementation), I don't see a compelling argument that this would motivate developers to migrate to HTTPS.

Official TAG position on this is in a comment above.

broofa commented Aug 9, 2022

@annevk What's the best way to move this conversation forward in a way you / Mozilla would seriously consider making an exception for this API? Does one of us need to make a formal request to the dev.platform mailing list per your original blog post?

@hunterloftis

The misalignment between getRandomValues and randomUUID is surprising. Both generate random values, often used for IDs, faster and with greater entropy than a userspace library can. One is available in secure and insecure contexts; one isn't.

What this seems to inevitably lead to is yet more divergence between client code and server code, with hacks needed to try to present a usable facade to end-users across both environments.

jez9999 commented Nov 15, 2022

Yeah, I think that's preferable. Remember, the long term goal is to get rid of insecure contexts completely. Libraries will have to start accounting for this as well.

What does that mean? Completely ban HTTP under all circumstances? If so, I think that's frankly an absurd, zealot-like position. HTTPS is not always required; plenty of data is not sensitive, so why on earth should browser makers force everything to be HTTPS 100% of the time?

As for this only being available in secure contexts, well that makes it useless to me. I'm developing a web app locally and there are times when I simply need to use HTTP to test it. So I'll have to use an NPM module to generate a UUID instead.

I doubt this will have any effect, but my vote would be for this absurd "everything has to be secure 100% of the time" attitude to be dropped. Generating a UUID can be done for all sorts of reasons, plenty of them not remotely security-sensitive. I simply want to generate a random unique ID for anonymous web clients to create a temporary account for themselves on my server. It doesn't matter that the user can open the console and edit it.

threepointone added a commit to partykit/partykit that referenced this issue Jan 29, 2023
This uses a weaker implementation to generate connection IDs if crypto.randomUUID() isn't available in the browser (re: WICG/uuid#23) Fixes #53