
Use a custom error type for invalid lengths, replacing fmt.Errorf #69

Merged
merged 2 commits into google:master on Dec 30, 2020

Conversation

joewreschnig
Contributor

This significantly improves the speed of failed parses due to wrong
lengths. Previously the fmt.Errorf call dominated, making this the
most expensive error and more expensive than successfully parsing:

BenchmarkParse-4                 29226529        36.1 ns/op
BenchmarkParseBadLength-4         6923106       174 ns/op
BenchmarkParseLen32Truncated-4   26641954        38.1 ns/op
BenchmarkParseLen36Corrupted-4   19405598        59.5 ns/op

When the formatting is not required and done on-demand, the failure per
se is much faster:

BenchmarkParse-4                 29641700        36.3 ns/op
BenchmarkParseBadLength-4        58602537        20.0 ns/op
BenchmarkParseLen32Truncated-4   30664791        43.6 ns/op
BenchmarkParseLen36Corrupted-4   18882410        61.9 ns/op

Add benchmarks for different kinds of invalid UUIDs, and a test case for too-short UUIDs to ensure the visible behavior doesn't change.
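
For context, the BenchmarkParseBadLength name in the output above suggests a benchmark shaped roughly like the sketch below; the PR's actual test inputs may differ.

    package uuid_test

    import (
        "testing"

        "github.com/google/uuid"
    )

    // A too-short input exercises the invalid-length path in Parse.
    func BenchmarkParseBadLength(b *testing.B) {
        const badLen = "badly-sized" // length matches none of the accepted UUID forms
        for i := 0; i < b.N; i++ {
            if _, err := uuid.Parse(badLen); err == nil {
                b.Fatal("expected Parse to fail on an invalid-length input")
            }
        }
    }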
@pborman pborman merged commit edef28d into google:master Dec 30, 2020
@pborman
Collaborator

pborman commented Dec 30, 2020

Thank you. I will cut release v1.1.3 to include these changes.

@inliquid

This is a micro-optimization at the cost of worse readability, and it makes absolutely zero sense.

@joewreschnig
Contributor Author

@inliquid I can provide some additional background on how this is helpful for me, but an 8x speedup in a package's top-level entry point is usually not considered a "micro" optimization by any measure.

I have two services this helps in a major way.

The first is a batch importer of records by ID, which receives a lot of garbage IDs. Parsing the ID is a substantial part, maybe 20%, of handling each record (there are 3-4 other pieces of data about the same size and complexity as the ID). But x% of the IDs are garbage for various reasons, and one thing our customers pay us for is to filter those out. So this is nearly an x% speedup, which is significant for some batches; in fact it was at the top of the profile. In this case we don't need the error message (it would be millions of lines long); we just report how many IDs were invalid.

The second service generates an internal 128-bit ID associated with lots of events in our system. If the trigger for that event was also a UUID, we generate the secondary ID differently (to handle varying case and dash placement, since different third-party implementations have different opinions on this) - conceptually something like if IsValidUUID(v) { return genFromUUID(v); } else { return genFromArbitraryID(v); }. genFromUUID is just going to parse the value anyway, and if the validation function differs from the parser in any way we have a big problem. A failed parse is as cheap as any other validation, except for this one specific case. Currently I optimize this with a length check before attempting to parse, which hurts readability even more. Again, we don't need the error message - we just take one path if the value parsed successfully and the other if it didn't.

UUID handling often sits at a very low level in systems that need to handle high throughput / low latency. I really appreciate this package's focus on performance and compatibility, thanks @pborman.
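
A minimal sketch of the "validate by parsing" pattern described above, using google/uuid's Parse; the fallback derivation is a hypothetical stand-in, not the commenter's real code.

    package main

    import (
        "crypto/sha256"
        "fmt"

        "github.com/google/uuid"
    )

    // deriveInternalID sketches the pattern: if the input parses as a UUID,
    // derive the internal 128-bit ID from its canonical bytes (so case and
    // dash variations collapse); otherwise fall back to hashing the arbitrary
    // ID. The failed Parse doubles as the validity check.
    func deriveInternalID(v string) [16]byte {
        if u, err := uuid.Parse(v); err == nil {
            return [16]byte(u) // canonical 16-byte form of the UUID
        }
        sum := sha256.Sum256([]byte(v)) // hypothetical fallback derivation
        var id [16]byte
        copy(id[:], sum[:16])
        return id
    }

    func main() {
        fmt.Printf("%x\n", deriveInternalID("6BA7B810-9DAD-11D1-80B4-00C04FD430C8"))
        fmt.Printf("%x\n", deriveInternalID("not-a-uuid"))
    }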

@joewreschnig joewreschnig deleted the invalid-optim branch December 31, 2020 12:01
johejo added a commit to johejo/uuid that referenced this pull request Jan 4, 2021
Zero allocation by using non-pointer error.

related google#69

name               old time/op    new time/op    delta
ParseBadLength-16    15.4ns ± 0%     3.5ns ± 0%   ~     (p=1.000 n=1+1)

name               old alloc/op   new alloc/op   delta
ParseBadLength-16     8.00B ± 0%     0.00B        ~     (p=1.000 n=1+1)

name               old allocs/op  new allocs/op  delta
ParseBadLength-16      1.00 ± 0%      0.00        ~     (p=1.000 n=1+1)
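
A rough sketch of the pointer-to-value change that commit message describes (the actual diff in google/uuid may differ in detail):

    package uuid

    import "fmt"

    // invalidLengthError is returned by value rather than by pointer, so the
    // failed length check no longer needs a per-call heap allocation (the
    // 8 B -> 0 B change in the benchstat output above).
    type invalidLengthError struct{ len int }

    func (err invalidLengthError) Error() string {
        return fmt.Sprintf("invalid UUID length: %d", err.len)
    }

    // Inside Parse, the error is then constructed as a value:
    //
    //     return uuid, invalidLengthError{len(s)}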
pborman pushed a commit that referenced this pull request Jan 4, 2021
@inliquid

inliquid commented Jan 5, 2021

@joewreschnig it's easy to measure some Xes when you benchmark a particular function; it can be 8x or even 800x, since it's just a ratio of two values. What makes it a micro-optimization is the lack of an actual problem. This package already has quite decent performance and no one has ever complained about it. There is always room for some small "improvement" everywhere: don't use fmt.Errorf to get "better performance", don't use interfaces for the same reason; what's next, avoid using the fmt package at all? Avoid writing functions, because a function call is overhead as well? This road leads nowhere.

You're gaining just a few nanoseconds per call. To see even negligible microseconds you'd need thousands of invocations per call, and to reach the order of milliseconds (still negligible in most cases) millions of invocations per run. So unless you're writing some very special application, this enhancement amounts to nothing, and in that case you'd probably use the math or rand packages directly. On the other hand, what you get is

return uuid, fmt.Errorf("invalid UUID length: %d", len(s))

versus

return uuid, &invalidLengthError{len(s)}

plus

type invalidLengthError struct{ len int }

func (err *invalidLengthError) Error() string {
	return fmt.Sprintf("invalid UUID length: %d", err.len)
}

which is bad: less readable code with redundant entities.

@pborman
Collaborator

pborman commented Jan 5, 2021

I approved this change because it has an identifiable impact on a use case I had not considered. Normally I see the error path as not performance critical, but if this package is used to determine whether something is or is not a UUID, this reduces the time quite a bit on the failure side. Further, the second patch, going from a pointer to the direct value, eliminates the memory allocation. If all your input is valid it will not make a difference; however, if you have a large amount of invalid data this will help both in CPU time and in reduced GC load.

Just because something is micro doesn't mean it is useless. I disagree that the code is less readable in any meaningful way.

The truth is, return uuid, errors.New("invalid UUID format") should also be replaced with a predeclared error, e.g. var invalidFormat = errors.New("Invalid UUID format") and then return the global invalidFormat.
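
A minimal sketch of that suggestion, following the naming in the comment above (not the package's actual code):

    package uuid

    import "errors"

    // A single predeclared error value, created once at package init rather
    // than on every failed parse.
    var invalidFormat = errors.New("invalid UUID format")

    // Inside Parse, the format-error sites would then become:
    //
    //     return uuid, invalidFormat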

@inliquid

inliquid commented Jan 5, 2021

@pborman what would happen to this performance improvement if the caller were actually interested in what the error is? In that case it would call the Error method on that type to get the actual message, right? Right now you see these gains (still just nanoseconds, and only in some cases) only because the returned error in the benchmark is dropped entirely, which is far from real life.
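
For reference, with the deferred-formatting error the fmt.Sprintf cost is only paid at the point where the message is actually rendered; illustrative caller code, not from the thread:

    package example

    import (
        "log"

        "github.com/google/uuid"
    )

    // parseOrLog shows where the deferred formatting cost lands: a caller that
    // only branches on err pays nothing for the message, while a caller that
    // logs it pays the fmt.Sprintf cost inside Error() at that point.
    func parseOrLog(input string) (uuid.UUID, bool) {
        id, err := uuid.Parse(input)
        if err != nil {
            log.Printf("skipping record %q: %v", input, err) // Error() formats here
            return uuid.UUID{}, false
        }
        return id, true
    }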
