
Update rules_python to 0.16.1 #7160

Merged
2 commits merged into tensorflow:master on Dec 12, 2022

Conversation

@mattsoulanille (Member) commented Dec 10, 2022

This update adds lock files for PyPI packages so that their versions don't change between builds. The lock files can be regenerated with the update_locked_deps.sh script.

As part of this update, the PR pins flax to 0.6.2. It does not fix the circular dependency issue in flax 0.6.3.

Additionally, Python dependencies are now fetched only when a build requires them, so first-time JavaScript-only builds should see a speedup.

To see the logs from the Cloud Build CI, please join either our discussion or announcement mailing list.


@mattsoulanille (Member, Author) commented:

Most of the lines in this PR are in the lock files.

@pyu10055 (Collaborator) left a comment

Reviewed 7 of 7 files at r1.
Reviewable status: :shipit: complete! 2 of 1 approvals obtained

@mattsoulanille mattsoulanille merged commit aea97c4 into tensorflow:master Dec 12, 2022
AdamLang96 added a commit to CodeSmithDSMLProjects/tfjs that referenced this pull request Dec 13, 2022
* started resize bicubic

* started padding algorithm for bicubic forward pass in cpu backend

* started padding algorithm for bicubic forward pass in cpu backend

* Mark all calls to 'op()' as pure (tensorflow#7155)

Mark calls to the `op()` function that creates the exported op as pure by using [`/* @__PURE__ */` annotations](https://esbuild.github.io/api/#ignore-annotations) (this also works for Rollup, but I can't find the docs). This comment instructs bundlers that the function call has no side-effects, so it can be removed if the result is not used.

This is safe for the `op` function because, although it references `ENGINE`, it does so [in a closure](https://github.com/tensorflow/tfjs/blob/master/tfjs-core/src/ops/operation.ts#L48-L61) that it never invokes; the returned op may cause side effects when called, but creating it does not.

This has no immediate effect because we still maintain a list of `sideEffects` in the package.json, but it is a step towards removing that list.

Co-authored-by: Linchenn <40653845+Linchenn@users.noreply.github.com>
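
For illustration, a minimal sketch of the annotation pattern; the op name and wrapper below are hypothetical stand-ins, not the actual tfjs-core source:

```
// Hypothetical op factory, standing in for tfjs-core's op() wrapper.
// Calling it only wraps the function; it does not touch global state.
function op<T extends (...args: never[]) => unknown>(fn: T): T {
  return fn;
}

function squareImpl(x: number): number {
  return x * x;
}

// The /* @__PURE__ */ annotation tells esbuild (and Rollup) that this call
// has no side effects, so the whole assignment can be dropped if `square`
// is never imported by the final bundle.
export const square = /* @__PURE__ */ op(squareImpl);
```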

* need to fix padding algo

* Update rules_python to 0.16.1 (tensorflow#7160)

This update adds lock files for PyPI packages so that their versions don't change between builds. The lock files can be regenerated with the update_locked_deps.sh script.

As part of this update, the PR pins flax to 0.6.2.

Additionally, Python dependencies are now fetched only when a build requires them, so first-time JavaScript-only builds should see a speedup.

* Register optimizers in a centralized location (tensorflow#7157)

Optimizers are currently registered in the same file they are defined in, which makes importing that file a side effect. This would make it impossible to tree-shake them in a custom bundle once `sideEffects` is removed from the package.json.

This PR moves Optimizer registration to the `register_optimizers.ts` file, so the files the Optimizers are defined in no longer have side effects. The `registerOptimizers` function is called from `index.ts`; custom bundles replace `index.ts` with a different entry file that does not call it, which excludes the Optimizers from the bundle.
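
A self-contained sketch of that pattern, with a stand-in registry and simplified optimizer classes (illustrative only, not the actual tfjs modules):

```
// Stand-in for tfjs's serialization registry (simplified).
type SerializableClass = {new (): object; readonly className: string};
const registry = new Map<string, SerializableClass>();

function registerClass(ctor: SerializableClass): void {
  registry.set(ctor.className, ctor);
}

// Simplified optimizer classes; their modules contain no registration calls,
// so importing them has no side effects and they stay tree-shakable.
class SGDOptimizer {
  static get className() { return 'SGD'; }
}
class AdamOptimizer {
  static get className() { return 'Adam'; }
}

// register_optimizers.ts: the single place where registration happens.
export function registerOptimizers(): void {
  registerClass(SGDOptimizer);
  registerClass(AdamOptimizer);
}

// index.ts calls registerOptimizers(); a custom bundle ships an entry file
// that omits the call, so unused optimizers can be dropped by the bundler.
```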

* Simplify how Optimizers are re-exported in train.ts (tensorflow#7156)

`train.ts` exports optimizers by copying them from the `OptimizerConstructors` class onto a `train` object. This is unnecessary because the `OptimizerConstructors` class constructor is a subtype of the `train` object's type (i.e. it has all the properties that `train` has). Instead of creating a new `train` object, this PR re-exports `OptimizerConstructors` as `train`.

This has no direct effect now, but if / when we remove the `sideEffects` field from `package.json`, it helps some bundlers (e.g. esbuild) do tree-shaking.
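
Roughly, the before/after looks like this (illustrative sketch, not the exact tfjs source):

```
class OptimizerConstructors {
  static sgd(learningRate: number) { /* construct an SGD optimizer */ }
  static adam(learningRate: number) { /* construct an Adam optimizer */ }
}

// Before: copy each constructor onto a fresh object. These property
// assignments are statements the bundler has to treat as side effects.
// export const train = {
//   sgd: OptimizerConstructors.sgd,
//   adam: OptimizerConstructors.adam,
// };

// After: re-export the class itself. It already has every property `train`
// needs, and there are no extra assignments left to keep.
export const train = OptimizerConstructors;
```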

* Use static getters to get optimizer class names (tensorflow#7168)

Each `Optimizer` lists its class name as a static property of the class so it can be serialized and deserialized. This prevents the class from being tree-shaken, because bundlers compile the static property like this:

```
class SomeOptimizer {
  ...
}

// The bundler can not remove this assignment because
// SomeOptimizer.className could be a setter with a side effect.
SomeOptimizer.className = 'SomeOptimizer';
```

This PR uses a static getter for the class name instead, which bundlers can tree-shake properly.
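
A minimal sketch of the getter-based version (the class name is illustrative):

```
export class SomeOptimizer {
  // A static getter lives inside the class body, so there is no separate
  // `SomeOptimizer.className = ...` assignment left over after compilation.
  // If the class is unused, the bundler can drop it, getter and all.
  static get className(): string {
    return 'SomeOptimizer';
  }
}
```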

* need corners

* padding is functional

* debugging padding tool for multiple channels

Co-authored-by: Matthew Soulanille <msoulanille@google.com>
Co-authored-by: Linchenn <40653845+Linchenn@users.noreply.github.com>
Linchenn pushed a commit to Linchenn/tfjs that referenced this pull request Jan 9, 2023