Document our security limitations #845

Merged
merged 5 commits into pyca:master on Mar 25, 2014

Conversation

@public (Member) commented Mar 22, 2014

Alternative "fix" for #7.

@jenkins-cryptography

Test PASSed.
Refer to this link for build results: https://jenkins.cryptography.io/job/cryptography-pr-experimental/1249/

@jenkins-cryptography

Test FAILed.
Refer to this link for build results: https://jenkins.cryptography.io/job/cryptography-pr-experimental/1250/

uninitialized memory.

Python exposes no API for us to implement this reliably and as such most
software in Python is vulnerable to this attack. However we do not currently
Member

Do we have anything we can point to as support for the assertion that it's probably not high risk? If not, should we expand on this a little bit?

Member Author

The CERT secure coding guidelines categorise an attacker gaining access to private data via swap and "reusable resources" as "unlikely" and "Low severity, unlikely, expensive to repair."

I'll add those references too :)

@jenkins-cryptography

Test PASSed.
Refer to this link for build results: https://jenkins.cryptography.io/job/cryptography-pr-experimental/1252/

@jenkins-cryptography

Test PASSed.
Refer to this link for build results: https://jenkins.cryptography.io/job/cryptography-pr-experimental/1254/

@reaperhulk (Member)

I'm mostly okay with this, but I'm wondering if we should mention that in general before we release memory we do call cleanup commands that typically zero it...? As currently phrased it sounds like any use of key material will result in keys leaking to uninitialized memory.

@public (Member, Author) commented Mar 24, 2014

How do you get the key into OpenSSL without first making a Python object which will never be memset_s'd or otherwise cleaned of its contents before being released? I think currently we do always leak key material this way?
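
For illustration, a minimal sketch (not library code) of the problem being described: once key material lands in an immutable bytes object there is no supported way to scrub it before the interpreter hands the buffer back to the allocator.

```python
# Minimal sketch (not project code): why key material held in Python
# objects cannot be reliably wiped before the memory is released.
import os

key = os.urandom(32)       # the key now lives in an immutable bytes object

try:
    key[0] = 0             # bytes are immutable: no way to overwrite in place
except TypeError:
    pass

# A mutable copy *can* be zeroed by hand...
scratch = bytearray(key)
for i in range(len(scratch)):
    scratch[i] = 0

# ...but the original bytes object (and any further copies made by slicing,
# concatenating, or passing the key into other APIs) is simply handed back
# to the allocator unmodified once its reference count drops to zero.
del key
```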

@reaperhulk (Member)

Good point :) Presumably Python tends to reuse memory, so most of the time it will reclaim it for other uses within its own pid, but that's only helpful for long-lived processes.

@alex (Member) commented Mar 24, 2014

I think we need to better articulate a threat model here, particularly given that this is an issue no other Python library can solve either.


@public (Member, Author) commented Mar 24, 2014

I'm not sure what you mean? The threat model is "software is bad." This is a thing you do to protect yourself from bad software. Not doing it doesn't immediately mean your system is broken; it just means that an attacker has an extra place to hunt for exploits.

I think between the current wording and the references that's explained OK, but if you have some suggestions I'll work on it :)

@alex (Member) commented Mar 24, 2014

I mean, what position does the attacker need to have to be able to exploit this, or what other things need to happen? Like I said, no other Python software is capable of doing anything here either, so I want to be sure we're balancing complete info against just scaring our users.

(Did I really just type "we need to balance how much we tell people the
truth"? FML)


@public (Member, Author) commented Mar 24, 2014

The attacker needs some way of reading uninitialized memory. That's all they need. This is a pervasive security problem and isn't at all unique to us, but we do tell the user it's considered very low risk and unlikely to be exploited.

I guess I could add something like "we consider this unlikely to affect most users as it requires multiple additional security failures to exploit"?

@Ayrx (Contributor) commented Mar 25, 2014

I'd prefer "potentially vulnerable" instead of "vulnerable". I think the rest of the text is fine and has just enough information to be helpful without writing an entire thesis on memory wiping.

@public (Member, Author) commented Mar 25, 2014

That is a good suggestion :-)


@jenkins-cryptography

Test PASSed.
Refer to this link for build results: https://jenkins.cryptography.io/job/cryptography-pr-experimental/1257/

Python exposes no API for us to implement this reliably and as such almost all
software in Python is potentially vulnerable to this attack. However the
`CERT secure coding guidelines`_ consider this issue as "low severity,
unlikely, expensive to repair" and we do not consider this a high risk for most
Member

Nitpicking at this point, but should it just say "users" instead of "most users"?

Member Author

It could be a risk for some users?

Contributor

I think "most users" is fine. Hey, unlikely as it may be there might be some use-cases where this is a potential high risk issue.

reaperhulk added a commit that referenced this pull request Mar 25, 2014
Document our security limitations
@reaperhulk merged commit 0201c35 into pyca:master Mar 25, 2014
@public mentioned this pull request Mar 25, 2014
@github-actions bot locked as resolved and limited conversation to collaborators Oct 3, 2020