
zero keys/secrets upon Drop #63

Closed

warner opened this issue Dec 11, 2018 · 7 comments

Comments

@warner
Collaborator

warner commented Dec 11, 2018

I've seen curve25519-dalek use some tools/libraries to zero out the memory of sensitive data structures (keys, intermediate values used to derive keys) just before they get dropped. We should do something similar. A lot of our temporaries are on the stack, so we should zero those too.

Basically, if the application above us is careful with its own memory, then our library should not introduce any additional exposure of secrets.

I think @david415 had some ideas on this one.
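
For illustration, the basic shape of what's being asked for is a Drop impl that wipes the bytes before the memory is released. This is only a sketch with a made-up SessionSecret type and size; the dedicated crates discussed below handle the tricky part (keeping the compiler from optimizing the wipe away) more carefully:

use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

// Hypothetical secret holder; the field name and size are illustrative.
struct SessionSecret {
    bytes: [u8; 32],
}

impl Drop for SessionSecret {
    fn drop(&mut self) {
        // Volatile writes discourage the compiler from treating the wipe
        // as a dead store and eliding it; dedicated crates go further.
        for b in self.bytes.iter_mut() {
            unsafe { ptr::write_volatile(b, 0) };
        }
        compiler_fence(Ordering::SeqCst);
    }
}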

@david415

It seems like clear_on_drop is the popular choice:
https://crates.io/crates/clear_on_drop

I have not yet tried clear_stack_on_return in my own project; so far I have only used this crate to clear secrets on the heap:

use clear_on_drop::ClearOnDrop;

#[derive(Clone, Debug, PartialEq)]
pub struct PrivateKey {
    public_key: PublicKey, // PublicKey and KEY_SIZE are defined elsewhere in that project
    private_bytes: ClearOnDrop<Box<[u8; KEY_SIZE]>>, // wiped when dropped
}

Whereas clear_stack_on_return seems straightforward:

https://docs.rs/clear_on_drop/0.2.3/clear_on_drop/fn.clear_stack_on_return.html
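
For the stack side, here is a minimal sketch of how that function could wrap a key-derivation call; the page count and the derive_phase_key function are made up, just to show the calling shape:

use clear_on_drop::clear_stack_on_return;

// Stand-in for the real KDF; only here to give the closure something to call.
fn derive_phase_key(master: &[u8]) -> Vec<u8> {
    master.to_vec()
}

fn derive_wiping_stack(master: &[u8]) -> Vec<u8> {
    // Runs the closure, then overwrites the given number of stack pages below the caller.
    clear_stack_on_return(1, || derive_phase_key(master))
}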

@warner
Collaborator Author

warner commented Sep 1, 2019

I also see Tony's zeroize, which is pure-Rust. clear_on_drop is used by curve25519-dalek (a hearty recommendation!) but we'd need to use its no_cc feature-flag to avoid causing problems compiling to WASM (#2).

@warner
Collaborator Author

warner commented Sep 1, 2019

It sounds like curve25519-dalek is planning to move from clear_on_drop to zeroize once the latter has a stable 1.0 release, which may be soon-ish. So let's plan on using zeroize.
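
As a rough sketch of what that could look like with zeroize's derive support (the Key type here is illustrative, not our actual struct):

use zeroize::Zeroize;

// Requires zeroize's derive feature.
#[derive(Zeroize)]
#[zeroize(drop)] // also wipe the bytes when the struct is dropped
pub struct Key(pub Vec<u8>);

For values we only hold temporarily, the Zeroizing wrapper type is the non-derive alternative.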

warner added a commit that referenced this issue Feb 5, 2020
warner added a commit that referenced this issue Feb 6, 2020
warner added a commit that referenced this issue Feb 6, 2020
This zeroes out the master "Key" struct (used to derive all phase keys), as well as
the derived phase keys.

We might also consider making the plaintext return values of Receive be
Zeroizing<Vec<u8>>, since we allocate them, but I don't know how much that
might impact callers.

We should also make sure that upstream SPAKE2 zeroes its internal secrets.

refs #63
@warner
Collaborator Author

warner commented Feb 6, 2020

Ok, that zeroes out the main Key struct, and all the "phase keys" we derive from it.

Should we change the Receive code to return Zeroizing<Vec<u8>> to the application? I'm thinking that if we allocate it, we should be responsible for making sure it gets erased. But I'm not sure if that would be a hassle for application authors. In the tests, I had to use a funny-looking hex::encode(&*derived_key) to deal with it not being a plain Vec<u8>. I don't know if I just couldn't find a better way, or if maybe &* is entirely normal and no big deal, but it looked pretty weird and redundant to me.

@piegamesde
Member

&* is called a "re-borrow" and is fairly common in Rust. If you don't like it, there is usually an alternative in the form of an explicit method call (.deref(), I think).

What I think we want, however, is for our key type to be as opaque as possible and to only implement AsRef<[u8]> (or maybe even Borrow?) for it. The usage would then simply be &key (or key.as_ref()).
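
A sketch of that suggestion (the Key wrapper here is illustrative, not our actual type):

use zeroize::Zeroizing;

// Opaque key type: the bytes stay private and get wiped on drop.
pub struct Key(Zeroizing<Vec<u8>>);

impl AsRef<[u8]> for Key {
    fn as_ref(&self) -> &[u8] {
        self.0.as_slice()
    }
}

Callers could then write hex::encode(&key) or hex::encode(key.as_ref()) instead of reaching for &*.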

@piegamesde
Member

As far as I can tell, we are blocked on RustCrypto/AEADs#65?

@piegamesde
Member

While this would be good to have, I don't think this is going to happen any time soon.
