
fix: matrixToTransform() #1935

Merged
merged 24 commits into from Feb 5, 2024

Conversation

johnkenny54
Contributor

@johnkenny54 johnkenny54 commented Jan 13, 2024

Closes #1268

The matrixToTransform() function in _transforms.js was causing mismatches in some of the regression tests. I tracked this down to some conditions where the code was trying to move the scale() transform to a different place. I tried fixing the original code, but the scale adjustments were woven all through the function, so I ended up rewriting it.

The new version fixes 23 regression errors compared to the original version, and reduces the compressed file size of the regression files by an additional 153,223 bytes over the original version.

Changes to the code:

  • The new code closely follows the algorithm referenced in the original code at
    https://frederic-wang.fr/decomposition-of-2d-transform-matrices.html.
  • The algorithm referenced above always decomposes a matrix in the form translate()rotate()scale()skewX(). The original code
    sometimes tried to move the scale() in front of the rotate(), but this is not always safe. The new code always preserves the
    decomposition order.
  • The code to convert translate(tx,ty)rotate(a) to rotate(a,cx,cy) has been rewritten. This code is independent of the decomposition algorithm. There are comments in the code describing the calculation.
  • The new version does not do any rounding, so no longer uses params.floatPrecision or params.transformPrecision.
  • There are some optimizations performed by the original code that are not in the current code. These could be
    added in the future as special cases, but given that this version reduces regression errors and reduces optimized file size compared to the original code, it would be best to keep this initial version as simple as possible.
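For reference, the QR-like decomposition described on that page can be sketched as a standalone helper (hypothetical code, not the PR's actual implementation; an SVG matrix(a,b,c,d,e,f) maps (x,y) to (ax+cy+e, bx+dy+f)):

```javascript
// QR-like decomposition of an SVG matrix into
// translate() rotate() scale() skewX(), per the frederic-wang article.
// Assumes Math.hypot(a, b) !== 0.
const decomposeQR = ([a, b, c, d, e, f]) => {
  const r = Math.hypot(a, b);
  const delta = a * d - b * c; // determinant
  const rad2deg = (rad) => (rad * 180) / Math.PI;
  return [
    { name: 'translate', data: [e, f] },
    { name: 'rotate', data: [rad2deg(Math.atan2(b, a))] },
    { name: 'scale', data: [r, delta / r] },
    { name: 'skewX', data: [rad2deg(Math.atan2(a * c + b * d, r * r))] },
  ];
};

// matrix(-1,0,-0.3,0.9,0,0) decomposes to
// translate(0 0) rotate(180) scale(1 -0.9) skewX(16.699...)
```

Applied to the klipper-derived example discussed later in this thread, matrix(-1,0,-0.3,0.9,0,0) yields rotate(180) scale(1 -0.9) skewX(16.699…).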

Many of the expected test results changed:

  • in general, any expected result which starts with scale followed by a rotate will change
  • some expected results were replaced by equivalent non-matrix transforms
  • some transforms which were optimized by the previous code are not optimized by this code, and the expected result is now a matrix (generally the original matrix unchanged)

@johnkenny54 johnkenny54 marked this pull request as ready for review January 13, 2024 02:59
if (r === 0) {
return [transform];
}
const angle_cos = a / r;
Contributor


Nitpicking, but could you use camelcase variables in your patch for consistency?

Contributor Author


Will do. Would be nice if the linter could catch this.

@SethFalco
Member

SethFalco commented Jan 15, 2024

sometimes tried to move the scale() in front of the rotate(), but this is not always safe.

Can you share a concrete example where this is not safe?

I've spent a lot of time looking at that in #1916 and was confident that the logic was sound. 🤔

The new version does not do any rounding, so no longer uses params.floatPrecision or params.transformPrecision.

Of the 23 images you fixed, most of them were just from no longer rounding, which introduced minor shifts in the image before. We shouldn't drop transformPrecision altogether, but rather change how we define the default value.

This alone fixes 21 images:

-   let sx = toFixed(Math.hypot(data[0], data[1]), params.transformPrecision);
-   let sy = toFixed(
-     (data[0] * data[3] - data[1] * data[2]) / sx,
-     params.transformPrecision,
-   );
+   let sx = Math.hypot(data[0], data[1]);
+   let sy = (data[0] * data[3] - data[1] * data[2]) / sx;

The rest of the changes fix three images (**/klipper.svg) and break one image (view-certificate-sign.svg).


It looks like the issue you were trying to fix is ultimately the following, an MRE derived from klipper.svg:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
  <text x="-32" y="32" transform="matrix(-1,0,-0.3,0.9,0,0)">uwu</text>
</svg>

The transform matrix(-1,0,-0.3,0.9,0,0) was being optimized to skewX(18)scale(1 -.9) when it should be scale(-1 .9)skewX(18).

The referenced website for decomposing matrices yields a similar result with the LU-like algorithm: scale(-1,0.9) skewX(16.69924423399362).

In this case, we should be investigating the algorithm, not taking parts out. I don't believe your initial statement is true regarding it being unsafe to move scale, but even if it were, the problem is that in this case, scale is specifically meant to be moved to the front.
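The website's LU-like result can be sanity-checked by multiplying it back out (a quick standalone sketch, not project code):

```javascript
// Multiply two SVG transform matrices [a, b, c, d, e, f]: result = m1 * m2.
const mul = ([a1, b1, c1, d1, e1, f1], [a2, b2, c2, d2, e2, f2]) => [
  a1 * a2 + c1 * b2,
  b1 * a2 + d1 * b2,
  a1 * c2 + c1 * d2,
  b1 * c2 + d1 * d2,
  a1 * e2 + c1 * f2 + e1,
  b1 * e2 + d1 * f2 + f1,
];
const scaleM = (sx, sy) => [sx, 0, 0, sy, 0, 0];
const skewXM = (deg) => [1, 0, Math.tan((deg * Math.PI) / 180), 1, 0, 0];

// scale(-1 0.9) skewX(16.69924423399362) multiplies back to the input matrix:
const composed = mul(scaleM(-1, 0.9), skewXM(16.69924423399362));
// composed ~ [-1, 0, -0.3, 0.9, 0, 0]
```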

reduces the compressed file size of the regression files by an additional 153,223 bytes over the original version

This is quite odd to me. Removing an optimization, which we can see from the tests only ever makes the matrices larger, is improving compression in regression? 🤔

Are you able to identify where the bytes are shaved from? Are you running the same script from #1932 or have you made changes?

@johnkenny54
Contributor Author

I'm working on responses to the issues you raised, but there are a lot of rabbit holes to go down, and I want to take the time to hopefully be mostly correct, so will address these as I come up with coherent answers.

I didn't start out intending to rewrite this, but in trying to fix a couple of the regression errors I kept breaking almost as much as I fixed, and I found the logic really hard to follow - it seems there are a lot of adjustments to the basic decomposition algorithm, and fewer comments than I would like. So I wrote my version mostly so I could understand the decomposition algorithm better. When I realized it actually seemed to be working better than the current implementation, I figured I would throw it out there and see what I could stir up.

@SethFalco
Member

SethFalco commented Jan 17, 2024

I want to take the time to hopefully be mostly correct, so will address these as I come up with coherent answers

Take your time!

and I found the logic really hard to follow - it seems there are a lot of adjustments to the basic decomposition algorithm, and fewer comments than I would like

I really can relate to this. ^-^'

To fix #1916, I spent 3 whole days (30+ hours) staring at the code, the algorithm, reading about matrices and matrix transforms, etc. to even get the fix that I did and understand why it's the solution.

If it only applied one of the algorithms from the website as is, it wouldn't be an issue. But because we're doing both the QR-like and LU-like depending on what's shorter, plus some custom bits to reduce bytes, it's quite a headache without documentation.

This weekend, I'd be happy to help with solving the issue with klipper.svg, or at least review the code and find out what's going wrong. However, I do believe most of the fixes in your current solution came from no longer applying precision, which shows we do need a better default value for transformPrecision. That might be worth investigating first so we can take that factor out of this discussion.

In general, I think the issue with klipper.svg might be that we need some of the scale inverting logic that we already have on rotate, but on skew as well. 🤔

As you've said though, this is a very complex area of the project, hence I can't provide an immediate solution. 😅

@KTibow
Contributor

KTibow commented Jan 17, 2024

which shows we do need a better default value for transformPrecision

I wonder how hard it would be to do something like "for all of this node's rendered children, what is the resulting matrix? and how much precision can we lose on this matrix without going beyond the precision?"
(If that's too complex, you could still get gains if you only implemented it for one rendered child and used a high transformPrecision in other cases)

@johnkenny54
Contributor Author

We shouldn't drop transformPrecision altogether, but rather change how we define the default value.

I'm not saying we should drop transformPrecision, just pointing out that in this version it wasn't necessary. The rounding gets done by the caller, in convertTransforms.js. I didn't use the params because the minimal number of optimizations I was doing in the initial version didn't require it.

Thinking about this more, I realized a couple of things. One is that I need to round at some point in this function to reliably do some optimizations (e.g., translate(.000001)).

But more importantly, there's a much better way to refactor this code. One reason the existing code is so hard to follow is that the decomposition and the optimization are intertwined. This leads to a ton of conditional logic. Even with the minimal number of optimizations my code is doing, it's starting to get confusing. A better way to organize this would be to split into two steps:

  • first decompose at full precision, with no attempt to optimize
  • then round and optimize

This makes both the decomposition and the optimization much simpler and more accurate, and it's also pretty straightforward to extend it to attempting multiple decompositions to see which one is best.
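A minimal sketch of what the second step might look like (hypothetical helper names and shapes, not the actual PR code):

```javascript
// Step 2 of the proposed split: round a copy of the full-precision
// decomposition, then drop components that rounded to an identity.
const roundTo = (n, precision) =>
  Math.round(n * 10 ** precision) / 10 ** precision;

const roundAndOptimize = (decomposition, precision) => {
  const rounded = decomposition.map((t) => ({
    name: t.name,
    data: t.data.map((n) => roundTo(n, precision)),
  }));
  // e.g. translate(0.000001) rounds to translate(0) and is removed.
  return rounded.filter((t) => {
    switch (t.name) {
      case 'translate':
        return t.data.some((n) => n !== 0);
      case 'rotate':
      case 'skewX':
      case 'skewY':
        return t.data[0] !== 0;
      case 'scale':
        return t.data.some((n) => n !== 1);
      default:
        return true;
    }
  });
};
```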

@johnkenny54
Contributor Author

It looks like the issue you were trying to fix is ultimately the following, an MRE derived from klipper.svg:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">
  <text x="-32" y="32" transform="matrix(-1,0,-0.3,0.9,0,0)">uwu</text>
</svg>

Here's what I'm seeing with this:

  • If you do the decomposition with the QR algorithm, you get rotate(180)scale(1,-.9)skewX(16.7).
  • My algorithm processes this by removing the rotate and inverting the scale. This is one of the few optimizations the initial version of this PR performs - this construct seems to come up frequently.
  • So you end up with scale(-1,.9)skewX(16.7), which is correct.
  • In general, matrix multiplication is not commutative, so you can't change the order. But in this case you can change the order if you adjust the angle, and if you work through the math the answer you want is skewX(-18.435)scale(-1 .9).
  • The original code is not actually doing a decomposition in this case. It's calculating sx/sy as if it were doing a decomposition, but then the skew is calculated based on pattern matching of the input matrix rather than on the decomposition. It looks like the problem in this case is that since the algorithm has bailed on the decomposition, it can't use the sx/sy calculated based on the decomposition, and should be using the values from the input matrix.
  • This all seems really sketchy - interleaving decomposition based code and special cases is confusing at best.

In addition to all of the above, I discovered an unrelated problem. When I optimized this example using my code, I got scale(-1,.9)skewX(17). Looking at this vs. the original in a browser, there's a significant shift in the output. I traced this back to what seems to me like a very questionable heuristic in definePrecision() - there's a comment that says "No sense in angle precision more then number of significant digits in matrix". It looks like unless there's at least one element in the matrix with more than 2 digits, it rounds all angles to integers. I was able to defeat this by changing the -1 to something like -1.000000001 (close enough to -1 to not affect the calculations, but with enough digits to keep the heuristic happy). I suspect this is why the klipper.svg file failed on my version, though haven't tested it yet.

@SethFalco
Member

SethFalco commented Jan 19, 2024

a very questionable heuristic in definePrecision() - there's a comment that says "No sense in angle precision more then number of significant digits in matrix"

I've run into problems with this before.

@johnkenny54
Contributor Author

I just updated the branch in this PR with refactored code which works as follows:

  • First decompose the matrix at full precision. There are no optimizations here other than omitting any components that are obvious identity matrices.
  • Then make a copy of the decomposition matrix and round the data.
  • Optimize the rounded matrix. I believe this does everything the current version does except for dealing with things like skewY() which require an alternate decomposition in some cases. This should be easy to add.

This has better compression than any of the other versions I've looked at - 557,903 bytes vs. SVGO 3.2 (292,246) or SVGO 3.2 without initial rounding of sx/sy (487,448).

There were a couple of changes in regression:

  • svgs/oxygen-icons-5.113.0/scalable/actions/small/32x32/view-certificate-sign.svg pass
  • svgs/oxygen-icons-5.113.0/scalable/apps/hidef/kmail2.svg mismatch
  • svgs/oxygen-icons-5.113.0/scalable/apps/kmail2.svg mismatch

The change in the kmail2.svg file was a small shift in one component. I could just barely see it in my browser at 500% zoom. With the smallest version I could find that still showed the error, there were two matrices in the original file that were optimized to:

  • rotate(-73.507 1352.152 564.808)scale(.61677)
  • rotate(-73.507 1371.857 591.191)scale(.61677)

When these get multiplied out, you get

  • matrix(.1751 -.5914 .5914 .1751 426.71 1700.977)
  • matrix(.1751 -.5914 .5914 .1751 415.523 1738.764)
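These products can be reproduced with a small sketch (hypothetical helpers; rotate(a,cx,cy) expands to translate(cx,cy) rotate(a) translate(-cx,-cy)):

```javascript
// Multiply two SVG transform matrices [a, b, c, d, e, f]: result = m1 * m2.
const mul = ([a1, b1, c1, d1, e1, f1], [a2, b2, c2, d2, e2, f2]) => [
  a1 * a2 + c1 * b2,
  b1 * a2 + d1 * b2,
  a1 * c2 + c1 * d2,
  b1 * c2 + d1 * d2,
  a1 * e2 + c1 * f2 + e1,
  b1 * e2 + d1 * f2 + f1,
];

// rotate(a, cx, cy) multiplied out as a single matrix:
const rotateAboutPoint = (deg, cx, cy) => {
  const rad = (deg * Math.PI) / 180;
  const cos = Math.cos(rad);
  const sin = Math.sin(rad);
  return [
    cos,
    sin,
    -sin,
    cos,
    cx - cos * cx + sin * cy,
    cy - sin * cx - cos * cy,
  ];
};

// rotate(-73.507 1352.152 564.808) scale(.61677), close to
// matrix(.1751 -.5914 .5914 .1751 426.71 1700.977) but not exact,
// because the rounded angle and scale shift the product slightly:
const product = mul(
  rotateAboutPoint(-73.507, 1352.152, 564.808),
  [0.61677, 0, 0, 0.61677, 0, 0],
);
```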

If you just return the original matrix, the regressions pass, and the rounded matrices are:

  • matrix(.1751 -.59139 .59139 .1751 426.703 1700.972)
  • matrix(.1751 -.59139 .59139 .1751 415.516 1738.758)

Which looks darn close to me. But apparently not close enough. I tried changing transformPrecision from 4 to 8, which added a bunch of digits to the tx/ty components, but had no effect on regression. When I changed transformPrecision from 5 to 6, that passed regression - and the only change to the optimized result was that the scale changed from .61677 to .616768 - so for whatever reason this particular matrix is hypersensitive to the scale component.

One thing I realized when looking into this was that whenever you do an optimization like this where you're splitting a matrix into multiple components, you're introducing error. Whatever is rendering the SVG will have to multiply the transforms. So there's a difference between "round a matrix" and "round things that will then be multiplied". Possibly the solution is to increment the transformPrecision when you're doing an optimization with more than one component. But that seems out of scope for this PR.

Lastly, I moved a bunch of code from convertTransform.js to _transforms.js so I could do the rounding within this file. Probably the whole convertToShorts() function should be moved - the current version does some redundant rounding, so it would be more efficient to have everything in one place.

@johnkenny54
Contributor Author

Are you able to identify where the bytes are shaved from?

I just compared the output from the SVGO 3.2 version with the latest code for this PR. I looked at the files with the biggest reduction and the biggest increase in size.

The biggest reduction was 3866 bytes. This was almost all from things where the 3.2 version used the original matrix and the new version optimized it to either translate()scale() or rotate(a,cx,cy)scale().

I think the current code doesn't work in a lot of cases because for some reason it really doesn't want to put the scale where the decomposition puts it. But it mostly detects the situations where this won't work, and just defaults to the original matrix.

The biggest increase was 191 bytes. This all seems to be because if Math.hypot(a, b)===0 the latest version just returns the original decomposition rather than doing the alternate decomposition you need to do in this case. It should be easy to add this if we decide to move ahead with this PR.

@johnkenny54
Contributor Author

Are you running the same script from #1932 or have you made changes?

Mostly the same. I added logging of the number of pixel mismatches. I've run this against several different versions, here's what I've seen so far:

Version                                   | Regression failures | Bytes reduced from baseline | Pixel mismatch change from baseline
Baseline – matrixToTransform does nothing | 174                 | 0                           | 0
matrixToTransform returns original matrix | 174                 | 16,544                      | 0
SVGO 3.2 with default options             | 194                 | 292,246                     | 906
PR 1935 – initial version                 | 171                 | 445,469                     | -583
SVGO 3.2 with sx/sy not rounded           | 173                 | 487,448                     | -531
PR 1935 – refactored version              | 172                 | 557,903                     | -554

@SethFalco
Member

SethFalco commented Jan 26, 2024

Sorry for being slow to come back to you, and thank you for being patient with it. Since it's quite complex, I'm just having trouble setting a block of time aside to take an in-depth look. (Hence, you may find me handle smaller PRs, while seemingly neglecting this one.)

Thank you for continuing to look at this and answering questions, though.

I just updated the branch in this PR with refactored code which works as follows:

Great idea/execution, the code looks much better.

so for whatever reason this particular matrix is hypersensitive to the scale component

The only instance I've seen of this is where the scale of the x/y attributes is very different from the scale of the transform multipliers. For example, x="1035489583" and transform="scale(0.0000042524)". Not sure if that's what's happened here, just noting that observation.

This was almost all from things where the 3.2 version used the original matrix and the new version optimized it to either translate()scale() or rotate(a,cx,cy)scale().

Thank you for noting that.


Overall, I am still wary of increasing the length of some of the unit tests where we knew the optimization was lossless without a concrete example that that optimization was flawed. However, next weekend, I'll review the code in more detail, run it locally to inspect the differences in files, etc.

In the end, if this simplifies the code, while fixing bugs, while also reducing bytes on average, then I can't argue with facts, so I'd be happy to merge it. We can explore adding back those optimizations later when we find out in what boundaries are they actually safe and can be applied again.

@johnkenny54
Contributor Author

I am still wary of increasing the length of some of the unit tests where we knew the optimization was lossless without a concrete example that that optimization was flawed.

I looked into the differences in expected results of the unit tests between the PR and original code. There are currently very few differences in the expected results (more details below). In general the reason for the differences is that there are often numerous equivalent decompositions, and you can rarely guarantee that any is the shortest, so you would have to try them all to find the shortest. This would add a lot of complexity to the code, and a lot of execution time to the algorithm.

We should keep in mind that there is not a lot of compression coming out of this algorithm. Running the current PR code against the regression files, there is a total compression of 764,476,085 bytes, of which only 557,903 is contributed by matrixToTransform(). I haven't seen any evidence that this compression can be increased significantly. There are definitely incremental improvements that can be made, but they come at a significant cost in complexity and execution time, so it doesn't seem like the highest priority to me.

I knew that the current implementation in this PR wasn't catching things that decomposed to skewY(), so I added the alternate decomposition from the Wang page. I haven't merged this into the PR yet (more discussion below), but this version restored at least one of the unit test results, and increased compression by this function from 557,903 bytes to 563,476 bytes (5573 bytes more compression). There was no change in pixel mismatches. To get these extra 5573 bytes, the function has to evaluate both decompositions to figure out which is shortest, so it is doing roughly twice the work.

With this change, there are very few differences in the expected results. Many are equivalent optimizations of the same length, several are shorter in the PR version, and I think there are two where the PR version is one byte longer.

The one notable exception was in convertTransform.02, where there was an input of scale(2 1)rotate(15) with the same output in the original version, and output of matrix(1.93185 .25882 -.51764 .96593 0 0) in the PR version.

  • In the PR version, both decompositions give a result of rotate()scale()skew() with lots of decimals, and thus are longer than the matrix.
  • In the original version, it works because it is pattern matching. Basically it's saying "if I have a matrix in the form scale(sx,sy)rotate(angle), then it will be the case that ab+cd===0" (rowsSum===0 in the original code).
  • This only works if you have really high precision. If instead of scale(2 1)rotate(15) as the input, you have matrix(1.93185 .25882 -.51764 .96593 0 0), it doesn't optimize, because rowsSum ~= -0.000002588.
  • But the question all this raises for me is "why are we converting scale(2 1)rotate(15) to a matrix and then trying to convert it back"? If we already have a decomposition, that's a good starting point - in some cases it can be improved, but it should definitely be a candidate for the shortest option. This isn't a problem with matrixToTransform(), it's higher up in the plugin. Probably not too difficult to improve, but it should be a separate PR.
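The rowsSum behavior described above can be illustrated numerically (a sketch, not the plugin's actual code):

```javascript
// For M = scale(sx sy) rotate(t) multiplied out:
//   a = sx*cos(t), b = sy*sin(t), c = -sx*sin(t), d = sy*cos(t),
// so a*b + c*d = sx*sy*(cos*sin - sin*cos) = 0 exactly.
const rowsSum = ([a, b, c, d]) => a * b + c * d;

const t = (15 * Math.PI) / 180;
// scale(2 1) rotate(15) at full precision:
const exact = [2 * Math.cos(t), Math.sin(t), -2 * Math.sin(t), Math.cos(t)];

// The same matrix rounded to five decimal places no longer satisfies the check:
const rounded = [1.93185, 0.25882, -0.51764, 0.96593];
// rowsSum(rounded) ~ -0.0000025882
```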

Given all of the above, I would like to merge the skewY() changes into the branch for this PR - they don't add much compression, but it's a good framework for making incremental improvements going forward. At that point I feel like we've got everything we need to merge this PR. There are definitely improvements that can be made, but they should be evaluated separately to determine cost/benefit. With the skewY() changes:

  • the new code has almost double the compression of the current version
  • has 22 fewer regression failures and 1460 fewer pixel mismatches
  • is easier to extend, either by adding decompositions or optimizations

@johnkenny54
Contributor Author

Merged the code to handle alternate decompositions, and added test case 13 to test for converting to rotate()scale(), rotate()skewX() when starting with matrix. Both of these tests fail on SVGO 3.2 (it just returns the matrix).

Member

@SethFalco SethFalco left a comment


The rounding gets done by the caller, in convertTransforms.js

Seems I missed that statement when I went through this before. This makes sense!

There are definitely incremental improvements that can be made, but they come at a significant cost in complexity and execution time, so it doesn't seem like the highest priority to me.

My initial thought was that I didn't want us to effectively take logic out for a problem that was shared, but without a reproducible example.

(i.e. I didn't want us to fix a problem that didn't have a test case to prove it was a real problem.)

However, my tone has changed given that, with all of your revisions, you're reducing more bytes, including in the test cases this was regarding.

To get these extra 5573 bytes, the function has to evaluate both decompositions to figure out which is shortest, so it is doing roughly twice the work.

To make sure we don't drag this PR on too long, we can focus on just the functionality/bytes for now. We can tackle performance after this is merged.

We can run a benchmark and see what performance is like relative to the original implementation, and if this does indeed take longer, we can either comb through it to make performance optimizations or introduce options.

In my tests so far between main and your branch, it performs reasonably well.

main: 13.7 minutes
matrixToTransform (rebased with main): 14 minutes

But the question all this raises for me is "why are we converting scale(2 1)rotate(15) to a matrix and then trying to convert it back"?

Because it's easier to work with a single consistent input, rather than deal with every permutation of possible decomposed matrices.

This is mostly an issue because we have a flaw in how we check the length of transform before applying it. We should be checking the length of the resulting string, not the length of the array of transform items.

We could do something like this in #convertTransform:

const convertTransform = (item, attrName, params) => {
  const originalAttrValue = item.attributes[attrName];
  const originalTransform = transform2js(originalAttrValue);
  let data = originalTransform.map((transformItem) => ({
    name: transformItem.name,
    data: [...transformItem.data]
  }));
  params = definePrecision(data, params);

  if (params.collapseIntoOne && data.length > 1) {
    data = [transformsMultiply(data)];
  }

  if (params.convertToShorts) {
    data = convertToShorts(data, params);
  } else {
    data.forEach((item) => roundTransform(item, params));
  }

  if (params.removeUseless) {
    data = removeUseless(data);
  }

  if (data.length) {
    const shortest = [
      js2transform(originalTransform, params),
      js2transform(data, params),
    ].reduce(
      (acc, val) => val.length < acc.length ? val : acc, 
      originalAttrValue
    );

    item.attributes[attrName] = shortest;
  } else {
    delete item.attributes[attrName];
  }
};

This effectively makes it so instead of checking the number of transform items, it checked the overall length of the transform string, which solves the issue. (And is probably what was intended.)

has 22 fewer regression failures and 1460 fewer pixel mismatches

On my side, I can see the following images "break":

  • media-optical-dvd.svg which is a false positive imo
  • kmail2.svg, which you've raised already; the visible difference in shading is clear to me

Despite kmail2.svg actually having a visual difference, I would say it's non-blocking. The issue has been flagged, the result is acceptable, and overall this pull request is a big improvement. All we need is better, not perfect. 👍🏽


Overall, this PR is awesome. I'd like if we could get at least two things addressed before merging, then the rest is optional, whatever you don't do, we'll put an issue to tackle later.

  • We should only apply the "optimized" transform if it's actually shorter. (Refer to my implementation of #convertTransform for one way to do that.)
  • The issue with copying the array raised in one of the suggestions.

I'm still reviewing the code in general, but I'm happy enough that there's no need to block this pull request. 👍🏽

const decomposeQRAB = (matrix) => {
const data = matrix.data;

// Where applicable, variables are named in accordance with the frederic-wang document referenced above.
Member


Suggested change:

-   // Where applicable, variables are named in accordance with the frederic-wang document referenced above.

Imo there's no need for this comment. We already have the site references in the JSDoc. If you think it's helpful though, could you at least move it to the JSDoc?

Contributor Author


I moved it to the JSDoc. I'm very much in agreement that code should be self-documenting as much as possible, but in this case a, b, c, d etc. are not variable names I would typically choose.

Comment on lines 479 to 485
// Make a copy of the decomposed matrix, and round all data.
const roundedTransforms = decomposition.map((a) => {
return { ...a };
});
roundedTransforms.map((transform) => {
roundTransform(transform, params);
});
Member

@SethFalco SethFalco Feb 4, 2024


I'm trying to get us away from adding comments that literally just describe the line of code beneath it, and instead have clearer code.

Also, I think there's a mistake here.

This is probably what you were actually going for, which now actually does copy both properties of TransformIndex by value. Before, you were copying the array by reference. ({ ...xyz } shallow copies!)

Suggested change:

-   // Make a copy of the decomposed matrix, and round all data.
-   const roundedTransforms = decomposition.map((a) => {
-     return { ...a };
-   });
-   roundedTransforms.map((transform) => {
-     roundTransform(transform, params);
-   });
+   const roundedTransforms = decomposition.map((transformItem) => {
+     const transformCopy = {
+       name: transformItem.name,
+       data: [...transformItem.data],
+     };
+     return roundTransform(transformCopy, params);
+   });

This has a minor impact on our tests. (rounding)
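For anyone reading along, the pitfall looks like this (illustrative snippet, not project code):

```javascript
// Object spread copies the nested `data` array by reference, so
// rounding the "copy" also mutates the original transform.
const original = { name: 'rotate', data: [45.00001] };

const shallow = { ...original };
shallow.data[0] = Math.round(shallow.data[0]);
// original.data[0] has changed too - both objects share one array.

// Copying the array explicitly keeps the source intact:
const source = { name: 'rotate', data: [45.00001] };
const deep = { name: source.name, data: [...source.data] };
deep.data[0] = Math.round(deep.data[0]);
// source.data[0] is untouched.
```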

Contributor Author


Thanks for catching this. I updated the code and the comment to clarify why the matrix is being copied. If you're really opposed to the comment I can take it out, but I've had enough experiences coming back to code a year later and wondering what I was thinking that I try to comment where I can't make things clear with variable or function names.

@SethFalco
Member

SethFalco commented Feb 4, 2024

I ran the regression tests after all local changes I made (all noted in suggestions/comments) and it seems one of them unintentionally fixed one more regression failure btw:

  • svgs/oxygen-icons-5.113.0/scalable/mimetypes/text-x-po.svg

johnkenny54 and others added 9 commits February 4, 2024 12:07
Co-authored-by: Seth Falco <seth@falco.fun>
@johnkenny54
Contributor Author

I ran the regression tests after all local changes I made (all noted in suggestions/comments) and it seems one of them unintentionally fixed one more regression failure btw:

  • svgs/oxygen-icons-5.113.0/scalable/mimetypes/text-x-po.svg

I found the same thing - this was due to the fix in copying the matrix correctly before rounding. Unsurprisingly (to me at least), there were 1,740 other files that were affected. I've found that small changes to the algorithm or changing the precision a little bit can have a big impact on behavior. In many cases the size difference between the optimized/unoptimized versions is small, so it's easy to make a change that flips a bunch of matrices between using the matrix to the transforms or vice-versa. In this case the impact was pretty benign - only 2 other files changed in pixel mismatches (from 0 to 1) and there was an almost even split between files increasing and decreasing compression - net loss in compression was 217 bytes.

@johnkenny54
Contributor Author

I'd like if we could get at least two things addressed before merging, then the rest is optional, whatever you don't do, we'll put an issue to tackle later.

  • We should only apply the "optimized" transform if it's actually shorter. (Refer to my implementation of #convertTransform for one way to do that.)
  • The issue with copying the array raised in one of the suggestions.

The second issue should be taken care of with the latest commit. As for the first, I'd like to defer that for another PR. A couple of times I was tempted to mess around with both the plugin itself and the convertToShorts() function, but in the end I intentionally left these unchanged because there was enough going on with matrixToTransform() that I didn't want to introduce more complexity, particularly changing the basic behavior of the plugin. I don't think any of the changes in this PR introduce any problems into the plugin itself.

If you're OK with deferring changes to the plugin itself, I think I've addressed the other issues you raised - let me know if there's something I missed.

@SethFalco
Member

Yeah sure! Let's get this merged then.

I will propose some refactors later this week to clean things up, but it'll be great to get the benefits of this already. Thanks for working on it!

I'll CC you when I open that PR in case you want to chime in. 👍🏽

@SethFalco SethFalco merged commit 4884d59 into svg:main Feb 5, 2024
8 checks passed
@johnkenny54 johnkenny54 deleted the matrixToTransform branch February 16, 2024 02:56

Successfully merging this pull request may close these issues.

convertTransform changes gradientTransform and breaks file
4 participants