Replies: 24 comments
-
There is 1000% interest. It would probably turn out to be a separate repository / OpenPGP.js extension of sorts. As a matter of fact, our company (FlowCrypt) has put up a $5,000 bounty in the past to explore connecting the browser to a PKCS#11 compatible device directly, without a local agent. We were told that this necessarily involves support from hw vendors (such as Nitrokey). So I'm very glad to see this. Is there any chance at all that your solution may turn out to be universal, spec-wise, as long as other hw key vendors want to follow/join Nitrokey in the effort?
-
I should also mention https://github.com/FlowCrypt/webusb-openpgp which is a very empty, and very sad, repository at the moment.
-
Absolutely. We intend to specify the interface in an RFC-like style to foster its wider adoption. Our implementation will be open source anyway. We are happy for all types of support. Given that we won't enable PKCS#11 compatible devices, do you think our approach would still be eligible for your bounty? :-) Is there anybody experienced with OpenPGP.js who could help or work on integrating it?
-
I think so. If you can produce a token that a user can stick into the laptop & decrypt / sign OpenPGP messages directly from the browser without installing other software, and if the process of doing so and the code that does it are open source, I think that counts. I didn't foresee a hw vendor claiming it, but if you guys are spending the effort to get it done, why not. Are you planning to have an external security review done? One option would be to spend the bounty on a security review. Or if you want to claim it directly and use it as you see fit, that's cool, too.
I can help with integrating it. For example, if you can produce a JS function that I can call to decrypt a session key, assuming RSA (not sure if the function signature for other algorithms is different), it would look something like:

```javascript
export const decryptPgpSessionKey = async (encryptedData) => {
  // ?
  // return ?
};
```

In most situations, we know the key fingerprint the data is encrypted for (but not always). If your token can store/use more than one key, we should also add that, so that you can choose the right key & avoid trying them all:

```javascript
export const decryptPgpSessionKey = async (encryptedData, keyFingerprint = undefined) => {
  // ?
  // return ?
};
```

This is just a rough example of what we'll need. For the encrypted data argument and return type, we could use:
Then there's signing, keygen, storing existing keys onto the token (?), etc. Sooner or later the OpenPGP.js maintainers will chip in here with some thoughts, I'm sure.
-
The bounty would help! :-) Yes, we could spend it on an external security review.
That would be very helpful.
Yes, we need the key's fingerprint and we expect to always get it. In which cases do you not know the fingerprint?
Yes.
-
Some implementations (not ours) will encrypt messages but don't say which keys they're encrypted for (for privacy). The recipient is supposed to try all available keys. This uses a "wild card" key ID, 0x0000000000000000, described at the bottom of https://tools.ietf.org/html/rfc4880#section-5.1
-
If the token can give us information about which key IDs are present on it, we could implement this on our end. When we encounter a wild card key ID, we'll ask the token which key IDs it has, then try them one by one.
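A sketch of how that fallback could look on our end. Note that `token.listKeyIds` and `token.decryptSessionKey` are hypothetical names for whatever the device library ends up exposing, not anything from the spec:

```javascript
// The all-zero key ID is the RFC 4880 "wild card" (speculative) key ID:
// the message doesn't say which key it's encrypted for.
const WILDCARD_KEY_ID = '0000000000000000';

// token.listKeyIds() and token.decryptSessionKey() are hypothetical
// placeholders for the eventual device API.
async function decryptWithToken(token, encryptedSessionKey, keyId) {
  const candidates = (!keyId || keyId === WILDCARD_KEY_ID)
    ? await token.listKeyIds()   // wild card: ask the token what it has
    : [keyId];                   // known key ID: try only that key
  for (const id of candidates) {
    try {
      return await token.decryptSessionKey(id, encryptedSessionKey);
    } catch (e) {
      // wrong key for this message; try the next candidate
    }
  }
  throw new Error('No key on the token could decrypt this message');
}
```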
-
We plan to primarily support dynamically derived keys (for various benefits), in which case we don't know which keys exist. We will also support resident keys, in which case we do know the keys, but that should be a secondary approach. Applications which don't know the key ID could only use resident keys.
-
I imagine the OpenPGP.js implementation would primarily target the use of resident keys (keys that are on the device & known). It depends on the API. If, from the perspective of OpenPGP.js, it makes no difference what kind of key it is (here is a key ID, here is an encrypted session key, decrypt this), then I imagine both will be supported. When, on the other hand, there are workflows that are not yet common in OpenPGP.js (I imagine during keygen, if the token insists on using derived keys with some additional parameters), then I imagine we would not target supporting derived keys in the initial phase. For that, I'd need to actually read your spec (which I haven't yet), get feedback from maintainers and from you, etc.
-
I would distinguish both use cases:
Please read our spec for further details.
-
I went through the spec. Looks good! I have a few questions about:
Thanks again. This looks really good.
-
Just to add, it would be really good if it could also support RSA decrypt and sign (and store, obviously). I understand you're targeting ECC first (Curve25519?) and I think it's a fine choice. I also understand one cannot do everything at once and still keep quality up (and budget down). So I appreciate that you've split it up into stages. Just saying... RSA would be really useful, and may allow you to target more enterprise-y customers, who may in turn help fund your efforts over time. Win-win.
-
To clarify, your efforts 100% qualify for the 5k bounty, once a PoC is available. You can spend the bounty on anything you want; you're the ones implementing this, so you know better where the money is needed. 👍
-
It's not the final thought yet, but here is my current thinking: The key ID should be a 128-bit value, perhaps the lower 128 bits of the SHA-256 hash. For consistency, the key ID is required for both resident keys and derived keys, and the application has to keep track of it. I assume that each email app contains at least basic key management functionality, including a keyring. In that case, it should be aware of its own public keys (hashes).

A list of derived key IDs can't be pulled from the device. Pulling a list of resident keys is technically possible. The only reason why this might not be desired is privacy, because such a list could be misused by malicious web apps to track individual devices/users. Not sure about this yet...

I think we will focus on NIST P-256 and/or Curve25519 initially. The bounty would be great. Many topics to spend the money on. Thanks.
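For illustration, truncating a SHA-256 hash to 128 bits could look like the sketch below. Whether "lower 128 bits" means the leading or the trailing 16 bytes of the digest would need to be pinned down in the spec; I assume the trailing bytes here:

```javascript
// Sketch: a 128-bit key ID taken as the lower 128 bits of SHA-256, as
// proposed above. Assumes "lower" means the trailing 16 bytes of the
// digest; the spec would have to fix this choice. Uses the Web Crypto
// API (crypto.subtle), available in browsers and recent Node.js.
async function keyIdFromPublicKey(publicKeyBytes) {
  const digest = new Uint8Array(
    await crypto.subtle.digest('SHA-256', publicKeyBytes));
  return Array.from(digest.slice(16))        // last 16 of 32 bytes
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}
```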
-
As long as we are talking about the public key hash (specifically, in OpenPGP we use the "Long ID", commonly represented as a 16-character string in hex form).

In regards to pulling available long IDs from the device, you should also think about this scenario:
Therefore device B would never know if there are any keys at all on the token? Or, if it knew that the token was initialized, e.g. there are some keys on it, it would never know which keys until it happened to decrypt a message with one of them (because the message provides the Long ID)? That sounds like a recipe for bad UX and confusing software (exactly what encrypted email is known for, and what we're trying to avoid). It would be really useful if the app could pull a list of long IDs, at least the resident ones, to know what it's working with.
Is it true that the user can only use the token on a particular website after clicking through some browser prompt? E.g. "Website XXX wants to use your hardware security device [allow] [deny]" or something like that. Therefore the user will know the website is accessing it? If that's true, and the user trusts the app to access the token and use the keys on it, I would say the user should also trust the app to have info about the available key long IDs, and to handle that info responsibly. You could further protect the "pull long IDs" operation with a key button touch/press by the user. I imagine this operation would be used during app setup time: the app should know what is available for use.

A related note: is there any way an app can pull the whole public key from the device? The use case would be the same: an app on a second device trying to recreate the setup from the original device (without a 3rd-party server brokering the info back and forth), just by using the token.
-
From our perspective, OpenPGP may be the first but not the only use case. Therefore I didn't have in mind to use OpenPGP long IDs (RFC 4880) exactly. I need to look deeper into our key derivation function in order to tell what the key ID has to look like.
The app would know, for each particular email, which key is required to decrypt it. It would simply try to decrypt it with the appropriate key on the device. It could also prompt the user: "Please insert your device with key ID XYZ". When the user tries to decrypt it with device B, it would simply fail, and the app could respond with something like "Wrong device. Please insert your device with key ID XYZ".
That's correct. The device would know whether it's initialized or not, but not whether any derived keys exist, or how many.
The above doesn't seem like bad UX to me. Also, the entire solution, including the device handling, needs to be considered. And derived keys have some significant advantages. I just added a section "Reasoning for Derived Keys" to the spec.
If we conclude that the workflow with resident keys would work fine, what is this operation still required for?
Yes. But for the browser, as well as for the user, such requests wouldn't be distinguishable from ordinary WebAuthentication requests to authenticate to a website. Assuming that WebAuthentication will become more and more popular in the future, the privacy risk would increase. Imagine that every 2nd website requires you to log in via WebAuthentication; for sure some dubious websites would do so too. And those dubious websites could misuse that information to track users.
Yes, it's specified already. However, I don't understand the use cases you have in mind. The app wouldn't know about individual devices A and B but would expect the user to insert any device with the required key material.
-
Use case for pulling public keys: when you're setting up an email client, you need to know the user's public key. Later the pubkey may be published on a keyserver, or it may be attached to an outgoing email, or attached as an Autocrypt header, etc.
The technical community is used to the notion of public/private keys, and is quite comfortable "fixing up a broken setup" in one way or another, because they have the technical chops to do it. I feel that we as a technical community lack empathy in terms of understanding what the experience is like for a non-technical person when things go wrong. We design systems for "when things are set up in the optimal way", and the moment they are not, we kind of throw the problem into the user's lap. I think that knowing the list of Long IDs on a particular device allows designing UX around being more helpful in guiding the user to resolve problems. Two example use cases:
I'm wondering if there is any technical possibility to allow certain APIs for web extensions, but not for websites. I understand your concern when the user navigates from website to website; if the prompt is not distinguishable from normal Web Authentication prompts, this could be used for fingerprinting (with some user interaction). But with Web Extensions, this is not a problem: the user makes a deliberate choice to install and use a particular Web Extension, and the overall security and privacy promises are different from the 100s of websites users browse every day. Would you be able to distinguish whether a call is made from a website vs a Web Extension?
While what I was mentioning above are (in my mind) serious inconveniences that at least some users could work around, this one could be a potential blocker. When decrypting an RFC 4880 message, all you have available is the message and the RFC 4880 Long IDs the message was encrypted for. That's all. If the hw token expects key IDs in some other format for OpenPGP keys, I don't see how the user could just respond to "please insert your device with key ID XYZ" and have the message actually decrypted, especially in combination with no way to pull the list of key IDs from the device.
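For reference, extracting that long ID from a message is mechanical: per RFC 4880 §5.1, a version 3 Public-Key Encrypted Session Key packet body starts with a one-octet version number followed by the eight-octet key ID. A minimal sketch of that parse:

```javascript
// Sketch: pull the 8-octet key ID out of a version 3 PKESK packet body
// (RFC 4880 section 5.1). The body is: version (1 octet), key ID
// (8 octets), public-key algorithm (1 octet), then the encrypted MPIs.
// Returns null for the all-zero "wild card" key ID.
function keyIdFromPkeskBody(body) {
  if (body.length < 10 || body[0] !== 3) {
    throw new Error('not a version 3 PKESK packet body');
  }
  const hex = Array.from(body.slice(1, 9))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase();
  return hex === '0000000000000000' ? null : hex;
}
```

So whatever ID scheme the token uses, the only identifier the decrypting application can hand it is this 8-octet value (or nothing, in the wild card case).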
I assume, here

Overall I'm happy we're having this conversation & thanks for the detailed responses above 👍
-
Hi!
-
@szszszsz good idea. To not break the current conversation thread, I'll still respond to some comments below.
During generation of a new key, the public key of course gets exported from the device to the application. I expect the application to store the user's public key in his account settings anyway, and to publish it on a keyserver or through Autocrypt or alike. BTW, even if the device allowed exporting of derived public keys, this wouldn't be the complete public key because it would lack other people's signatures. Therefore we can't rely on the device as an exclusive way to transfer public keys.
Note that FIDO/Web Authentication is designed the exact same way, allowing derivation of keys (for authentication). It works pretty well IMHO. In general, introducing new patterns doesn't imply they are bad; instead, they may require changing expectations. ;-)
Ideally, applications should not distinguish between derived keys and resident keys. Your two example use cases are basically about setting up a new computer: the user should have a proper backup of his account settings anyway. It should not be the job of a WebCrypt device to be a backup solution at the expense of sacrificing these advantages. The user would need to transfer or restore other information too, such as his keyring with recipients' PGP keys and settings. Technically, the application would need to know which key long ID to use. If it's not provided by the user directly (through his backup restore), the app could look up a keyserver or Web Key Directory, or check the last sent email (IMAP folder "sent") for which Autocrypt key was attached. When using webmail, I assume the user doesn't need to set up his account from scratch but would simply log in to it. Note, the spec draft is not final and perhaps it might look more like
Technically I believe it's possible. I need to look more into it and have created this ticket.

Using a key ID according to RFC 4880: good point. Still, it doesn't require using long IDs exactly; it might be sufficient to find a way to translate one into the other. I created this separate issue.
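One way to bridge the two ID schemes, if they can't be made identical, would be an app-side lookup table built whenever a key is generated or imported. This is just an illustrative sketch; the class and method names are not from the spec:

```javascript
// Sketch: map OpenPGP long IDs to device-specific key IDs on the
// application side. Entries would be recorded at key generation or
// import time and kept with the user's account settings/backup.
class KeyIdMap {
  constructor() {
    this.byLongId = new Map();
  }
  register(openpgpLongId, deviceKeyId) {
    this.byLongId.set(openpgpLongId.toUpperCase(), deviceKeyId);
  }
  lookup(openpgpLongId) {
    // null signals "unknown key": the app must fall back to asking the
    // user, a keyserver, or (if possible) the device itself
    return this.byLongId.get(openpgpLongId.toUpperCase()) ?? null;
  }
}
```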
-
Good idea @szszszsz to look at it issue by issue, thanks! I think I'm seeing a theme, and the theme is as follows: the Nitrokey team sees the hw token as not responsible for being the authoritative source of keys that the user could then use on another laptop (without some other form of external bookkeeping). You guys see it as the user's responsibility, or the responsibility of a particular app, or both, to worry about backups. Whereas I'm seeing a great opportunity to make it reasonably easy to get the user out of trouble. You know users. The trouble is: Help me! I'm an idiot! I lost all my backups and my laptop is toast! What now? From my viewpoint, with the current design, about the only thing standing in the way of offering a convenient way to resolve it is a list of Long IDs :)

Semi-related question: I assume there is no way whatsoever to store/retrieve an arbitrary data blob?
-
I don't see a way to provide such an option reliably without sacrificing these advantages. Also, after discussing the use cases above, I still think that it's not required. How would it work with FlowCrypt? Doesn't FlowCrypt store the user's public key, or at least its long ID, online?
That's correct. In our thinking, WebCrypt is designed for the Web, and the transfer of arbitrary data would be better performed through some online means.
This is why we are so keen on a proper backup strategy, which leads to derived keys.
-
I think this new option for Chrome (78+) is relevant, as it will allow constant communication with the file system from the browser: https://web.dev/native-file-system/
-
@rushglen Thank you for the suggestion, but the linked Native File System solution would only be as secure as the currently used storage means. What we want to do is move the private key, and the execution of the related operations, to the device, so that the secret material never leaves it, thus protecting it from leaking. This is why we need a direct communication channel between the browser and the device.
-
Further discussion and development: https://github.com/Nitrokey/nitrokey-webcrypt/
-
In the early days of OpenPGP.js, we were already discussing the idea of storing private keys in hardware devices. From that time originates this image, which shows the Crypto Stick (predecessor of Nitrokey) as a key storage. This is fundamentally different from storing keys as files, because the hardware would protect private keys from being extracted. Consequently, key operations such as decryption and signing would be computed in the hardware, instead of in the software/OpenPGP.js. So far this idea never materialized, AFAIK. Now we at Nitrokey plan to develop a device whose interface is available in any web browser and which would act as a secure key storage. The trick is to piggy-back on the WebAuthentication interface so that no additional device driver or browser add-on would be required. We plan to develop a JavaScript/TypeScript library which would provide a convenient API to the device and which should be used by all 3rd-party projects that want to use our interface. Please see our more detailed description.
Email encryption is not the only but the first use case we are going to support. Therefore an integration with OpenPGP.js would just be natural. Now I would like to find out if there is interest in the OpenPGP.js community to adopt our interface. Please let me know your thoughts.