
Releases: pytorch/TensorRT

Torch-TensorRT v2.2.0

14 Feb 01:49

Dynamo Frontend for Torch-TensorRT, PyTorch 2.2, CUDA 12.1, TensorRT 8.6

Torch-TensorRT 2.2.0 targets PyTorch 2.2, CUDA 12.1 (builds for CUDA 11.8 are available via the PyTorch package index - https://download.pytorch.org/whl/cu118) and TensorRT 8.6. This is the second major release of Torch-TensorRT: the default frontend has changed from TorchScript to Dynamo, allowing users to more easily control and customize the compiler in Python.

The Dynamo frontend supports both JIT workflows through torch.compile and AOT workflows through torch.export + torch_tensorrt.compile. It targets the Core ATen Opset (https://pytorch.org/docs/stable/torch.compiler_ir.html#core-aten-ir) and currently has 82% coverage. Just as with TorchScript, graphs will be partitioned based on the ability to map operators to TensorRT, in addition to any graph surgery done in Dynamo.
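As a rough illustration of the two entry points, here is a minimal sketch; the toy model, input shapes, and the "torch_tensorrt" backend string are assumptions for illustration rather than excerpts from this release.

import torch
import torch_tensorrt

# Toy model and input, purely for illustration
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU()).eval().cuda()
inputs = [torch.randn(8, 64).cuda()]

# JIT workflow: torch.compile with the Torch-TensorRT backend (backend name assumed)
jit_model = torch.compile(model, backend="torch_tensorrt")
jit_model(*inputs)  # engines are built lazily on the first call

# AOT workflow: torch.export-based compilation through torch_tensorrt.compile
aot_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)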

Output Format

Through the Dynamo frontend, different output formats can be selected for AOT workflows via the output_format kwarg. The choices are torchscript, where the resulting compiled module is traced with torch.jit.trace and is suitable for Python-less deployments; exported_program, a new serializable format for PyTorch models; and graph_module, which returns a torch.fx.GraphModule if you would like to run further graph transformations on the resulting model.
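For example, a hedged sketch of selecting each format (placeholder model and shapes; the output_format values are the names listed above):

import torch
import torch_tensorrt

model = torch.nn.Linear(16, 4).eval().cuda()  # placeholder model
inputs = [torch.randn(2, 16).cuda()]

# Serializable torch.export.ExportedProgram
exp_prog = torch_tensorrt.compile(model, inputs=inputs, output_format="exported_program")

# torch.jit.trace'd module, suitable for Python-less deployments
ts_mod = torch_tensorrt.compile(model, inputs=inputs, output_format="torchscript")

# torch.fx.GraphModule, for further graph transformations
graph_mod = torch_tensorrt.compile(model, inputs=inputs, output_format="graph_module")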

Multi-GPU Safety

To address a long-standing source of overhead, single-GPU systems will now operate without the device checks that were previously required. This check can be re-added when multiple GPUs are available to the host process using torch_tensorrt.runtime.set_multi_device_safe_mode.

# Enables Multi Device Safe Mode
torch_tensorrt.runtime.set_multi_device_safe_mode(True)

# Disables Multi Device Safe Mode [Default Behavior]
torch_tensorrt.runtime.set_multi_device_safe_mode(False)

# Enables Multi Device Safe Mode, then resets the safe mode to its prior setting
with torch_tensorrt.runtime.set_multi_device_safe_mode(True):
    ...

More information can be found here: https://pytorch.org/TensorRT/user_guide/runtime.html

Capability Validators

In the Dynamo frontend, tests can be written and associated with converters to dynamically enable or disable them based on conditions in the target graph.

For example, the convolution converter in Dynamo only supports 1D, 2D, and 3D convolution. We can therefore create a lambda which, given a convolution FX node, can determine whether the convolution is supported:

@dynamo_tensorrt_converter(
    torch.ops.aten.convolution.default,
    capability_validator=lambda conv_node: conv_node.args[7] in ([0], [0, 0], [0, 0, 0]),
)  # type: ignore[misc]
def aten_ops_convolution(
    ctx: ConversionContext,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
) -> Union[TRTTensor, Sequence[TRTTensor]]:
    ...  # converter implementation elided

In cases where the node is not supported, it will be partitioned out and run in PyTorch.
All capability validators are run prior to partitioning, after the lowering phase.

More information on writing converters for the Dynamo frontend can be found here: https://pytorch.org/TensorRT/contributors/dynamo_converters.html

Breaking Changes

  • Dynamo (torch.export) is now the default frontend for Torch-TensorRT, and the TorchScript and FX frontends are now in maintenance mode. Therefore, any torch.nn.Module or torch.fx.GraphModule provided to torch_tensorrt.compile will by default be exported using torch.export and then compiled. This default can be overridden by setting the ir=[torchscript|fx] kwarg. Any bugs reported will first be triaged against the Dynamo stack before the other frontends are attempted; however, pull requests from the community for additional functionality in the TorchScript and FX frontends will still be accepted.
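For reference, a minimal sketch of overriding the new default (the model and input spec are placeholders):

import torch
import torch_tensorrt

model = torch.nn.Conv2d(3, 8, kernel_size=3).eval().cuda()  # placeholder model

# Opt back into the TorchScript frontend instead of the new Dynamo default
trt_ts_mod = torch_tensorrt.compile(
    model,
    ir="torchscript",
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
)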

What's Changed

  • chore: Update Torch and Torch-TRT versions and docs on main by @gs-olive in #1784
  • fix: Repair invalid schema arising from lowering pass by @gs-olive in #1786
  • fix: Allow full model compilation with collection inputs (input_signature) by @gs-olive in #1656
  • feat(//core/conversion): Add support for aten::size with dynamic shaped models for Torchscript backend. by @peri044 in #1647
  • feat: add support for aten::baddbmm by @mfeliz-cruise in #1806
  • [feat] Add dynamic conversion path to aten::mul evaluator by @mfeliz-cruise in #1710
  • [fix] aten::stack with dynamic inputs by @mfeliz-cruise in #1804
  • fix undefined attr issue by @bowang007 in #1783
  • fix: Out-Of-Bounds bug in Unsqueeze by @gs-olive in #1820
  • feat: Upgrade Docker build to use custom TRT + CUDNN by @gs-olive in #1805
  • fix: include str ivalue type conversion by @bowang007 in #1785
  • fix: dependency order of inserted long input casts by @mfeliz-cruise in #1833
  • feat: Add ts converter support for aten::all.dim by @mfeliz-cruise in #1840
  • fix: Error caused by invalid binding name in TRTEngine.to_str() method by @gs-olive in #1846
  • fix: Implement aten.mean.default and aten.mean.dim converters by @gs-olive in #1810
  • feat: Add converter for aten::log2 by @mfeliz-cruise in #1866
  • feat: Add support for aten::where with scalar other by @mfeliz-cruise in #1855
  • feat: Add converter support for logical_and by @mfeliz-cruise in #1856
  • feat: Refactor FX APIs under dynamo namespace for parity with TS APIs by @peri044 in #1807
  • fix: Add version checking for torch._dynamo import in __init__ by @gs-olive in #1881
  • fix: Improve Docker build robustness, add validation by @gs-olive in #1873
  • fix: Improve input weight handling to acc_ops convolution layers in FX by @gs-olive in #1886
  • fix: Upgrade main to TRT 8.6, CUDA 11.8, CuDNN 8.8, Torch Dev by @gs-olive in #1852
  • feat: Wrap dynamic size handling in a compilation flag by @peri044 in #1851
  • fix: Add torchvision legacy CI parameter by @gs-olive in #1918
  • Sync fb internal change to OSS by @wushirong in #1892
  • fix: Reorganize Dynamo directory + backends by @gs-olive in #1928
  • fix: Improve partitioning + lowering systems in torch.compile path by @gs-olive in #1879
  • fix: Upgrade TRT to 8.6.1, parallelize FX tests in CI by @gs-olive in #1930
  • feat: Add issue template for Story by @gs-olive in #1936
  • feat: support type promotion in aten::cat converter by @mfeliz-cruise in #1911
  • Reorg for converters in (FX Converter Refactor [1/N]) by @narendasan in #1867
  • fix: Add support for default dimension in aten.cat by @gs-olive in #1863
  • Relaxing glob pattern for CUDA12 by @borisfom in #1950
  • refactor: Centralizing sigmoid implementation (FX Converter Refactor [2/N]) <Target: converter_reorg_proto> by @narendasan in #1868
  • fix: Address .numpy() issue on fake tensors by @gs-olive in #1949
  • feat: Add support for passing through build issues in Dynamo compile by @gs-olive in #1952
  • fix: int/int=float division by @mfeliz-cruise in #1957
  • fix: Support dims < -1 in aten::stack converter by @mfeliz-cruise in #1947
  • fix: Resolve issue in isInputDynamic with mixed static/dynamic shapes by @mfeliz-cruise in #1883
  • DLFW changes by @apbose in #1878
  • feat: Add converter for aten::isfinite by @mfeliz-cruise in #1841
  • Reorg for converters in hardtanh(FX Converter Refactor [5/N]) <Target: converter_reorg_proto> by @apbose in #1901
  • fix/feat: Add lowering pass to resolve most aten::Int.Tensor uses by @gs-olive in #1937
  • fix: Add decomposition for aten.addmm by @gs-olive in #1953
  • Reorg for converters tanh (FX Converter Refactor [4/N]) <Target: converter_reorg_proto> by @apbose in #1900
  • Reorg for converters leaky_relu (FX Converter Refactor [6/N]) <Target: converter_reorg_proto> by @apbose in #1902
  • Upstream 3 features to fx_ts_compat: MS, VC, Optimization Level by @wu6u3tw in #1935
  • fix: Add lowering pass to remove output repacking in convert_method_to_trt_engine calls by @gs-olive in #1945
  • Fixing aten::slice invalid schema and i...

Torch-TensorRT v1.4.0

03 Jun 04:05

PyTorch 2.0, CUDA 11.8, TensorRT 8.6, Support for the new torch.compile API, compatibility mode for FX frontend

Torch-TensorRT 1.4.0 targets PyTorch 2.0, CUDA 11.8 and TensorRT 8.6. This release introduces a number of beta features to set the stage for working with PyTorch and TensorRT in the 2.0 ecosystem. Primarily, this includes a new torch.compile backend targeting Torch-TensorRT. It also adds a compatibility layer that allows users of the TorchScript frontend for Torch-TensorRT to seamlessly try FX and Dynamo.

torch.compile Backend for Torch-TensorRT

One of the most prominent new features in PyTorch 2.0 is the torch.compile workflow, which enables users to accelerate code easily by specifying a backend of their choice. Torch-TensorRT 1.4.0 introduces a new backend for torch.compile as a beta feature, including a convenience frontend to perform accelerated inference. This frontend can be accessed in one of two ways:

import torch_tensorrt

torch_tensorrt.dynamo.compile(model, inputs, ...)

##### OR #####

torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)

For more examples, see the sample scripts provided in the repository.

This compilation method has a few key considerations (a usage sketch follows this list):

  1. It can handle models with data-dependent control flow
  2. It automatically falls back to Torch if the TRT Engine Build fails for any reason
  3. It uses the Torch FX aten library of converters to accelerate models
  4. Recompilation can be caused by changing the batch size of the input, or providing an input which enters a new control flow branch
  5. Compiled models cannot be saved across Python sessions (yet)

    The feature is currently in beta, and we expect updates, changes, and improvements to the above in the future.
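Putting the above together, a hedged end-to-end sketch of the beta workflow (model, shapes, and settings are placeholders, not taken from this release):

import torch
import torch_tensorrt

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU()).eval().cuda()
inputs = [torch.randn(1, 3, 224, 224).cuda()]

# Compile through the new torch.compile-based backend
trt_model = torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs)

# Run inference; a new batch size or control-flow branch may trigger recompilation,
# and the compiled model cannot yet be saved across Python sessions
out = trt_model(*inputs)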

fx_ts_compat Frontend

As the ecosystem transitions from TorchScript to Dynamo, users of Torch-TensorRT may want to start experimenting with this stack. As such, we have introduced a new frontend for Torch-TensorRT which exposes the same APIs as the TorchScript frontend but uses the FX/Dynamo compiler stack. You can try this frontend by using the ir="fx_ts_compat" setting:

torch_tensorrt.compile(..., ir="fx_ts_compat")

What's Changed


Torch-TensorRT v1.3.0

01 Dec 02:36

PyTorch 1.13, CUDA 11.7, TensorRT 8.5, Support for Dynamic Batch for Partially Compiled Modules, Engine Profiling, Experimental Unified Runtime for FX and TorchScript Frontends

Torch-TensorRT 1.3.0 targets PyTorch 1.13, CUDA 11.7, cuDNN 8.5 and TensorRT 8.5. This release focuses on adding support for dynamic batch sizes for partially compiled modules using the TorchScript frontend (this is also supported with the FX frontend). It also introduces a new execution profiling utility for understanding the execution of specific engine sub-blocks, which can be used in conjunction with PyTorch profiling tools to understand the performance of your model post-compilation. Finally, this release introduces a new experimental unified runtime shared by both the TorchScript and FX frontends. This allows you to start using the FX frontend to generate torch.jit.trace-able compiled modules.

Dynamic Batch Sizes for Partially Compiled Modules via the TorchScript Frontend

A long-standing limitation of the partitioning system in the TorchScript frontend is its lack of support for dynamic shapes. In this release we address a major subset of these use cases with support for dynamic batch sizes for modules that will be partially compiled. Usage is the same as in the fully compiled workflow: using the torch_tensorrt.Input class, you may define the range of shapes that an input may take during runtime. This is represented as a set of three shapes: min, max and opt. min and max define the dynamic range of the input tensor, while opt informs TensorRT what size to optimize for, provided there are multiple valid kernels available. TensorRT will select kernels that are valid for the full range of input shapes but most efficient at the opt size. In this release, partially compiled module inputs can vary in shape only in the highest-order dimension.

For example:

min_shape: (1, 3, 128, 128)
opt_shape: (8, 3, 128, 128)
max_shape: (32, 3, 128, 128)

is a valid shape range, however:

min_shape: (1, 3, 128, 128)
opt_shape: (1, 3, 256, 256)
max_shape: (1, 3, 512, 512)

is still not supported.
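To make this concrete, a sketch of describing the valid range above with torch_tensorrt.Input (the module and dtype are placeholders):

import torch
import torch_tensorrt

model = torch.nn.Conv2d(3, 16, kernel_size=3).eval().cuda()  # placeholder module

# Only the highest-order (batch) dimension varies, matching the valid range above
dyn_input = torch_tensorrt.Input(
    min_shape=(1, 3, 128, 128),
    opt_shape=(8, 3, 128, 128),
    max_shape=(32, 3, 128, 128),
    dtype=torch.float,
)

trt_mod = torch_tensorrt.compile(model, inputs=[dyn_input])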

Engine Profiling [Experimental]

This release introduces a number of profiling tools to measure the performance of TensorRT sub-blocks in compiled modules. These can be used in conjunction with PyTorch profiling tools to get a picture of the performance of your model. Profiling for any particular sub-block can be enabled via the enable_profiling() method of any __torch__.classes.tensorrt.Engine attribute, or of any torch_tensorrt.TRTModuleNext. The profiler will dump trace files in /tmp by default, though this path can be customized either by setting the profile_path_prefix of __torch__.classes.tensorrt.Engine or by passing it as an argument to torch_tensorrt.TRTModuleNext.enable_profiling(profiling_results_dir=""). Traces can be visualized using the Perfetto tool (https://perfetto.dev)


Engine layer information can also be accessed using get_layer_info, which returns a JSON string with the layers / fusions that the engine contains.
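A rough sketch of reading that layer information, assuming trt_fx_module holds a torch_tensorrt.TRTModuleNext produced by one of the workflows below; the accessor name follows the description above and the exact signature is an assumption:

import json

# get_layer_info returns a JSON string describing the layers / fusions in the engine
layer_info = json.loads(trt_fx_module.get_layer_info())
print(json.dumps(layer_info, indent=2))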

Unified Runtime for FX and TorchScript Frontends [Experimental]

In previous versions of Torch-TensorRT, the FX and TorchScript frontends were mostly separate, and each had its own distinct benefits and limitations. Torch-TensorRT 1.3.0 introduces a new unified runtime to support both FX and TorchScript, meaning that you can choose the compilation workflow that makes the most sense for your particular use case, be it pure Python conversion via FX or C++ TorchScript compilation. Both frontends use the same primitives to construct their compiled graphs, be they fully or only partially compiled.

Basic Usage

The TorchScript frontend uses the new runtime by default. No additional workflow changes are necessary.

Note: The runtime ABI version was increased to support this feature; as such, models compiled with previous versions of Torch-TensorRT will need to be recompiled

For the FX frontend, the new runtime can be chosen by setting use_experimental_fx_rt=True as part of your compile settings, with either torch_tensorrt.compile(my_mod, ir="fx", use_experimental_fx_rt=True, explicit_batch_dimension=True) or torch_tensorrt.fx.compile(my_mod, use_experimental_fx_rt=True, explicit_batch_dimension=True)

Note: The new runtime only supports explicit batch dimension

TRTModuleNext

The FX frontend will return a torch.nn.Module containing torch_tensorrt.TRTModuleNext submodules instead of torch_tensorrt.fx.TRTModule submodules. The features of these modules are nearly identical, with a few key improvements.

  1. TRTModuleNext profiling dumps a trace visualizable with Perfetto (see above for more details).
  2. TRTModuleNext modules are torch.jit.trace-able, meaning you can save FX compiled modules as TorchScript for python-less / C++ deployment scenarios. Traced compiled modules have the same deployment instructions as compiled modules produced by the TorchScript frontend.
  3. TRTModuleNext maintains the same serialization workflows TRTModule supports as well (state_dict / extra_state, torch.save/torch.load)

Examples

model_fx = model_fx.cuda()
inputs_fx = [i.cuda() for i in inputs_fx]
trt_fx_module_f16 = torch_tensorrt.compile(
    model_fx,
    ir="fx",
    inputs=inputs_fx,
    enabled_precisions={torch.float16},
    use_experimental_fx_rt=True,
    explicit_batch_dimension=True
)

# Save model using torch.save 

torch.save(trt_fx_module_f16, "trt.pt")
reload_trt_mod = torch.load("trt.pt")

# Trace and save the FX module in TorchScript
scripted_fx_module = torch.jit.trace(trt_fx_module_f16, example_inputs=inputs_fx)
scripted_fx_module.save("/tmp/scripted_fx_module.ts")
scripted_fx_module = torch.jit.load("/tmp/scripted_fx_module.ts")
... #Get a handle for a TRTModuleNext submodule

# Extract state dictionary
st = trt_mod.state_dict()

# Load the state dict into a new module
new_trt_mod = TRTModuleNext()
new_trt_mod.load_state_dict(st)

Using TRTModuleNext as an arbitrary TensorRT engine holder

With TorchScript you have long been able to embed an arbitrary TensorRT engine from any source in a TorchScript module using torch_tensorrt.ts.embed_engine_in_new_module. Now you can do this at the torch.nn.Module level by directly using TRTModuleNext, and access all the benefits enumerated above.

trt_mod = TRTModuleNext(
    serialized_engine,
    name="TestModule",
    input_binding_names=input_names,
    output_binding_names=output_names,
)

The intention is for torch_tensorrt.TRTModuleNext to replace torch_tensorrt.fx.TRTModule as the default TensorRT module implementation in a future release. Feedback on this class and how it is used, on the runtime in general, or on associated features (profiler, engine inspector) is welcome.

What's Changed


Torch-TensorRT v1.2.0

14 Sep 03:48

PyTorch 1.12, Collections based I/O, FX Frontend, torchtrtc custom op support, CMake build system and Community Windows Support

Torch-TensorRT 1.2.0 targets PyTorch 1.12, CUDA 11.6, cuDNN 8.4 and TensorRT 8.4. This release focuses on a couple of key new APIs to handle function I/O that uses collection types, which should enable whole new model classes to be compiled by Torch-TensorRT without source code modification. It also introduces the "FX Frontend", a new frontend for Torch-TensorRT which leverages FX, a high-level IR built into PyTorch with extensive Python APIs. For use cases which do not need to run outside of Python, this may be a strong option to try, as it is easily extensible in a familiar development environment. In Torch-TensorRT 1.2.0, the FX frontend should be considered beta-level in stability. torchtrtc has received improvements which target the ability to handle operators outside of the core PyTorch op set. This includes custom operators from libraries such as torchvision and torchtext. Similarly, users can provide custom converters to torchtrtc to extend the compiler's support from the command line instead of having to write an application to do so. Finally, Torch-TensorRT introduces community-supported Windows and CMake support.

New Dependencies

nvidia-tensorrt

For previous versions of Torch-TensorRT, users had to install TensorRT via the system package manager and modify their LD_LIBRARY_PATH in order to set up Torch-TensorRT. Now users should install the TensorRT Python API as part of the installation procedure. This can be done via the following steps:

pip install nvidia-pyindex
pip install nvidia-tensorrt==8.4.3.1
pip install torch-tensorrt==1.2.0 -f https://github.com/pytorch/tensorrt/releases

Installing the TensorRT pip package will allow Torch-TensorRT to automatically load the TensorRT libraries without any modification to environment variables. It is also a necessary dependency for the FX frontend.

torchvision

Some FX frontend converters are designed to target operators from 3rd party libraries like torchvision. As such, you must have torchvision installed in order to use them. However, this dependency is optional for cases where you do not need this support.

Jetson

Starting from this release, we will be distributing precompiled binaries of our NGC release branches for aarch64 (as well as x86_64), starting with ngc/22.11. These releases are designed to be paired with NVIDIA-distributed builds of PyTorch, including the NGC containers and Jetson builds, and are equivalent to the prepackaged distribution of Torch-TensorRT that comes in the containers. They represent the state of the master branch at the time of branch cutting, so they may lag in features by a month or so. These releases will come separately from minor version releases like this one. Therefore, going forward, these NGC releases should be the primary release channel used on Jetson (including for building from source).

NOTE: NGC PyTorch builds are not identical to builds you might install through normal channels like pytorch.org. In the past this has caused portability issues between pytorch.org builds and NGC builds. Therefore, for workflows such as exporting a TorchScript module on an x86 machine and then compiling on Jetson, we strongly recommend using the NGC container release on x86 for your host machine operations. More information about Jetson support can be found alongside the 22.07 release (https://github.com/pytorch/TensorRT/releases/tag/v1.2.0a0.nv22.07)

Collections based I/O [Experimental]

Torch-TensorRT has previously operated under the assumption that nn.Module forward functions can trivially be reduced to the form forward([Tensor]) -> [Tensor]. Typically this implies functions of the form forward(Tensor, Tensor, ... Tensor) -> (Tensor, Tensor, ..., Tensor). However, as model complexity increases, grouping inputs may make it easier to manage many inputs. Therefore, function signatures similar to forward([Tensor], (Tensor, Tensor)) -> [Tensor] or forward((Tensor, Tensor)) -> (Tensor, (Tensor, Tensor)) may be more common. In Torch-TensorRT 1.2.0, more of these kinds of use cases are supported using the new experimental input_signature compile spec API. This API allows users to group Input specs similarly to how they might group the input Tensors they would use to call the original module's forward function. This informs Torch-TensorRT on how to map a Tensor input from its location in a group to the engine, and from the engine back into its grouping returned to the user.

To make this concrete consider the following standard case:

class StandardTensorInput(nn.Module):
    def __init__(self):
        super(StandardTensorInput, self).__init__()

    def forward(self, x, y):
        r = x + y
        return r

x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = StandardTensorInput().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    inputs=[
        torch_tensorrt.Input(x.shape),
        torch_tensorrt.Input(y.shape)
    ],
    min_block_size=1
)

out = trt_module(x,y)
print(out)

Here a user has defined two explicit tensor inputs and used the existing list based API to define the input specs.

With Torch-TensorRT the following use cases are now possible using the new input_signature API:

  • Tuple based input collection
class TupleInput(nn.Module):
    def __init__(self):
        super(TupleInput, self).__init__()

    def forward(self, z: Tuple[torch.Tensor, torch.Tensor]):
        r = z[0] + z[1]
        return r

x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = TupleInput().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    input_signature=((x, y),), # Note how inputs are grouped with the new API
    min_block_size=1
)

out = trt_module((x,y))
print(out)
  • List based input collection
class ListInput(nn.Module):
    def __init__(self):
        super(ListInput, self).__init__()

    def forward(self, z: List[torch.Tensor]):
        r = z[0] + z[1]
        return r

x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = ListInput().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    input_signature=([x,y],), # Again, note how inputs are grouped with the new API
    min_block_size=1
)

out = trt_module([x,y])
print(out)

Note how the input specs (in this case just example tensors) are provided to the compiler. The input_signature argument expects a Tuple[Union[torch.Tensor, torch_tensorrt.Input, List, Tuple]] grouped in a format representative of how the function would be called. In these cases it is just a list or tuple of specs.

More advanced cases are supported as well:

  • Tuple I/O
class TupleInputOutput(nn.Module):
    def __init__(self):
        super(TupleInputOutput, self).__init__()

    def forward(self, z: Tuple[torch.Tensor, torch.Tensor]):
        r1 = z[0] + z[1]
        r2 = z[0] - z[1]
        r1 = r1 * 10
        r = (r1, r2)
        return r

x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = TupleInputOutput().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    input_signature=((x,y),), # Again, note how inputs are grouped with the new API
    min_block_size=1
)

out = trt_module((x,y))
print(out)
  • List I/O
class ListInputOutput(nn.Module):
    def __init__(self):
        super(ListInputOutput, self).__init__()

    def forward(self, z: List[torch.Tensor]):
        r1 = z[0] + z[1]
        r2 = z[0] - z[1]
        r = [r1, r2]
        return r

x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = ListInputOutput().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    input_signature=([x,y],), # Again, note how inputs are grouped with the new API
    min_block_size=1
)

out = trt_module([x,y])
print(out)
  • Multiple Groups of Mixed Types
class MultiGroupIO(nn.Module):
    def __init__(self):
        super(MultiGroupIO, self).__init__()

    def forward(self, z: List[torch.Tensor], a: Tuple[torch.Tensor, torch.Tensor]):
        r1 = z[0] + z[1]
        r2 = a[0] + a[1]
        r3 = r1 - r2
        r4 = [r1, r2]
        return (r3, r4)
    
x = torch.Tensor([1,2,3]).to("cuda")
y = torch.Tensor([4,5,6]).to("cuda")
module = MultiGroupIO().eval().to("cuda")

trt_module = torch_tensorrt.compile(
    module,
    input_signature=([x,y],(x,y)), # Again, note how inputs are grouped with the new API
    min_block_size=1
)

out = trt_module([x,y],(x,y))
print(out)   

These features are also supported in C++:

torch::jit::Module mod;
try {
  // Deserialize the ScriptModule from a file using torch::jit::load().
  mod = torch::jit::load(path);
} catch (const c10::Error& e) {
  std::cerr << "error loading the model\n";
}
mod.eval();
mod.to(torch::kCUDA);

std::vector<torch::jit::IValue> inputs_;

for (auto in : inputs) {
  inputs_.push_back(torch::jit::IValue(in.clone()));
}

std::vector<torch::jit::IValue> complex_inputs;
auto input_list = c10::impl::GenericList(c10::TensorType::get());
input_list.push_back(inputs_[0]);
input_list.push_back(inputs_[0]);

torch::jit::IValue input_list_ivalue = torch::jit::IValue(input_list);

complex_inputs.push_back(input_list_ivalue);

auto input_shape = torch_tensorrt::Input(in0.sizes(), torch_tensorrt::DataType::kHalf);
auto input_shape_ivalue = torch::jit::IValue(std::move(c10::make_intrusive<torch_tensorrt::Input>(input_shape)));

c10::TypePtr elementType = input_shape_ivalue.type();
auto ...

Torch-TensorRT v1.1.1

16 Jul 01:58

Adding support for Torch-TensorRT on Jetpack 5.0 Developer Preview

Torch-TensorRT 1.1.1 is a patch release for Torch-TensorRT 1.1 that targets PyTorch 1.11, CUDA 11.4/11.3, TensorRT 8.4 EA/8.2 and cuDNN 8.3/8.2, intended to add support for Torch-TensorRT on Jetson / JetPack 5.0 DP. As this release is primarily targeted at adding JetPack 5.0 DP support for the 1.1 feature set, we will not be distributing pre-compiled binaries for this release, so as not to break compatibility with the current stack for existing users who install directly from GitHub. Please follow the instructions for installation on Jetson in the documentation to install this release: https://pytorch.org/TensorRT/tutorials/installation.html#compiling-from-source

Known Limitations

  • In testing, we have observed higher-than-normal numerical instability on JetPack 5.0 DP. These issues are not observed on x86_64-based platforms. This numerical instability has not been found to decrease model accuracy in our test suite.

What's Changed

Full Changelog: v1.1.0...v1.1.1

Operators Supported

Operators Currently Supported Through Converters

  • aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> (Tensor)
  • aten::_convolution.deprecated(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor)
  • aten::abs(Tensor self) -> (Tensor)
  • aten::acos(Tensor self) -> (Tensor)
  • aten::acosh(Tensor self) -> (Tensor)
  • aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor)
  • aten::adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor)
  • aten::adaptive_avg_pool3d(Tensor self, int[3] output_size) -> (Tensor)
  • aten::adaptive_max_pool1d(Tensor self, int[2] output_size) -> (Tensor, Tensor)
  • aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor)
  • aten::adaptive_max_pool3d(Tensor self, int[3] output_size) -> (Tensor, Tensor)
  • aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
  • aten::add.Tensor(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor)
  • aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!))
  • aten::asin(Tensor self) -> (Tensor)
  • aten::asinh(Tensor self) -> (Tensor)
  • aten::atan(Tensor self) -> (Tensor)
  • aten::atanh(Tensor self) -> (Tensor)
  • aten::avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], bool ceil_mode=False, bool count_include_pad=True) -> (Tensor)
  • aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::batch_norm(Tensor input, Tensor? gamma, Tensor? beta, Tensor? mean, Tensor? var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::bmm(Tensor self, Tensor mat2) -> (Tensor)
  • aten::cat(Tensor[] tensors, int dim=0) -> (Tensor)
  • aten::ceil(Tensor self) -> (Tensor)
  • aten::clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> (Tensor)
  • aten::clamp_max(Tensor self, Scalar max) -> (Tensor)
  • aten::clamp_min(Tensor self, Scalar min) -> (Tensor)
  • aten::constant_pad_nd(Tensor self, int[] pad, Scalar value=0) -> (Tensor)
  • aten::cos(Tensor self) -> (Tensor)
  • aten::cosh(Tensor self) -> (Tensor)
  • aten::cumsum(Tensor self, int dim, *, int? dtype=None) -> (Tensor)
  • aten::div.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::div.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::div.Tensor_mode(Tensor self, Tensor other, *, str? rounding_mode) -> (Tensor)
  • aten::div_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!))
  • aten::div_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!))
  • aten::elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor)
  • aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> (Tensor)
  • aten::eq.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::erf(Tensor self) -> (Tensor)
  • aten::exp(Tensor self) -> (Tensor)
  • aten::expand(Tensor(a) self, int[] size, *, bool implicit=False) -> (Tensor(a))
  • aten::expand_as(Tensor(a) self, Tensor other) -> (Tensor(a))
  • aten::fake_quantize_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor)
  • aten::fake_quantize_per_tensor_affine(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor)
  • aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)
  • aten::floor(Tensor self) -> (Tensor)
  • aten::floor_divide(Tensor self, Tensor other) -> (Tensor)
  • aten::floor_divide.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ge.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ge.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor)
  • aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::gt.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor)
  • aten::hardtanh_(Tensor(a!) self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor(a!))
  • aten::index.Tensor(Tensor self, Tensor?[] indices) -> (Tensor)
  • aten::instance_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool use_input_stats, float momentum, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::layer_norm(Tensor input, int[] normalized_shape, Tensor? gamma, Tensor? beta, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::le.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::le.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::leaky_relu(Tensor self, Scalar negative_slope=0.01) -> (Tensor)
  • aten::leaky_relu_(Tensor(a!) self, Scalar negative_slope=0.01) -> (Tensor(a!))
  • aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> (Tensor)
  • aten::log(Tensor self) -> (Tensor)
  • aten::lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor, Tensor)
  • aten::lt.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::lt.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::masked_fill.Scalar(Tensor self, Tensor mask, Scalar value) -> (Tensor)
  • aten::matmul(Tensor self, Tensor other) -> (Tensor)
  • aten::max(Tensor self) -> (Tensor)
  • aten::max.dim(Tensor self, int dim, bool keepdim=False) -> (Tensor values, Tensor indices)
  • aten::max.other(Tensor self, Tensor other) -> (Tensor)
  • aten::max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[], int[1] dilation=[], bool ceil_mode=False) -> (Tensor)
  • aten::max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor)
  • aten::max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[], int[3] dilation=[], bool ceil_mode=False) -> (Tensor)
  • aten::mean(Tensor self, *, int? dtype=None) -> (Tensor)
  • aten::mean.dim(Tensor self, int[] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor)
  • aten::min(Tensor self) -> (Tensor)
  • aten::min.other(Tensor self, Tensor other) -> (Tensor)
  • aten::mul.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::mul.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::mul_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!))
  • aten::narrow(Tensor(a) self, int dim, int start, int length) -> (Tensor(a))
  • aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> (Tensor(a))
  • aten::ne.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ne.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::neg(Tensor self) -> (Tensor)
  • aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> (Tensor)
  • aten::permute(Tensor(a) self, int[] dims) -> (Tensor(a))
  • aten::pixel_shuffle(Tensor self, int upscale_factor) -> (Tensor)
  • aten::pow.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor)
  • aten::pow.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor)
  • aten::prelu(Tensor self, Tensor weight) -> (Tensor)
  • aten::prod(Tensor self, *, int? dtype=None) -> (Tensor)
  • aten::prod.dim_int(Tensor self, int dim, bool keepdim=False, *, int? dtype=None) -> (Tensor)
  • aten::reciprocal(Tensor self) -> (Tensor)
  • aten::reflection_pad1d(Tensor self, int[2] padding) -> (Tensor)
  • aten::reflection_pad2d(Tensor self, int[4] padding) -> (Tensor)
  • aten::relu(Tensor input) -> (Tensor)
  • aten::relu_(Tensor(a!) self) -> (Tensor(a!))
  • aten::repeat(Tensor self, int[] repeats) -> (Tensor)
  • aten::replication_pad1d(Tensor self, int[2] padding) -> (Tensor)
  • aten::replication_pad2d(Tensor self, int[4] padding) -> (Tensor)
  • aten::replication_pad3d(Tensor self, int[6] padding) -> (Tensor)
  • aten::reshape(Tensor self, int[] shape) -> (Tensor)
  • aten::roll(Tensor self, int[1] shifts, int[1] dims=[]) -> (Tensor)
  • aten::rsub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
  • aten::rsub.Tensor(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor)
  • aten::select.int(Tensor(a) self, int dim, int i...

Torch-TensorRT v1.1.0

10 May 08:23

Support for PyTorch 1.11, Various Bug Fixes, Partial aten::Int support, New Debugging Tools, Removing Max Batch Size

Torch-TensorRT 1.1.0 targets PyTorch 1.11, CUDA 11.3, cuDNN 8.2 and TensorRT 8.2. Due to recent JetPack upgrades, this release does not support Jetson (JetPack 5.0 DP or otherwise). JetPack 5.0 DP support will arrive in a mid-cycle release (Torch-TensorRT 1.1.x) along with support for TensorRT 8.4. 1.1.0 also drops support for Python 3.6, as it has reached end of life. Following 1.0.0, this release is focused on stabilizing and improving the core of Torch-TensorRT. Many improvements have been made to the partitioning system, addressing limitations many users hit while trying to partially compile PyTorch modules. Torch-TensorRT 1.1.0 also addresses a long-standing issue with aten::Int operators, albeit partially. Now certain common patterns which use aten::Int can be handled by the compiler without resorting to partial compilation. Most notably, this means that models like BERT can be run end to end with Torch-TensorRT, resulting in significant performance gains.

New Debugging Tools

With this release we are introducing new syntactic sugar that can be used to more easily debug Torch-TensorRT compilation and execution through the use of context managers. For example, in Torch-TensorRT 1.0.0 this was a common pattern for turning debug info on and then off:

import torch_tensorrt
...
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Debug)
trt_module = torch_tensorrt.compile(my_module, ...)
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Warning)
results = trt_module(input_tensors)

With Torch-TensorRT 1.1.0, this now can be done with the following code:

import torch_tensorrt
...
with torch_tensorrt.logging.debug():
    trt_module = torch_tensorrt.compile(my_module,...)
results = trt_module(input_tensors)

You can also use this API to debug the Torch-TensorRT runtime:

import torch_tensorrt
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Error)
...
trt_module = torch_tensorrt.compile(my_module,...)
with torch_tensorrt.logging.warnings():
    results = trt_module(input_tensors)

The following levels are available:

# Only internal TensorRT failures will be logged
with torch_tensorrt.logging.internal_errors():

# Internal TensorRT failures + Torch-TensorRT errors will be logged
with torch_tensorrt.logging.errors():

# All Errors plus warnings will be logged
with torch_tensorrt.logging.warnings():

# First verbosity level, information about major steps occurring during compilation and execution
with torch_tensorrt.logging.info():

# Second verbosity level, each step is logged + information about compiler state will be outputted
with torch_tensorrt.logging.debug():

# Third verbosity level, all above information + intermediate transformations of the graph during lowering
with torch_tensorrt.logging.graphs():

Removing Max Batch Size, Strict Types

In this release we are removing the max_batch_size and strict_types settings. These settings corresponded directly to TensorRT settings but were not always respected, which often led to confusion. We therefore thought it best to disable these features, as deterministic behavior could not be ensured.

Porting forward from max_batch_size, strict_types:

  • max_batch_size: The first dim in shapes provided to Torch-TensorRT is considered the batch dimension, so instead of setting max_batch_size you can just use the Input objects directly (see the sketch after this list)
  • strict_types: A replacement with more deterministic behavior will come with an upcoming TensorRT release.
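A hedged sketch of the max_batch_size port: instead of a fixed batch limit, the batch range is expressed directly in the Input spec (module and shapes are placeholders):

import torch
import torch_tensorrt

model = torch.nn.Linear(128, 64).eval().cuda()  # placeholder module

# Previously: a fixed input shape plus max_batch_size
# Now: the batch dimension is part of the shape range itself
trt_mod = torch_tensorrt.compile(
    model,
    inputs=[
        torch_tensorrt.Input(
            min_shape=(1, 128),
            opt_shape=(16, 128),
            max_shape=(32, 128),
        )
    ],
)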

Dependencies

- Bazel 5.1.1
- LibTorch 1.11.0
- CUDA 11.3 (on x86_64, by default, newer CUDA 11 supported with compatible PyTorch Build)
- cuDNN 8.2.4.15
- TensorRT 8.2.4.2

1.1.0 (2022-05-10)

Bug Fixes

  • add at::adaptive_avg_pool1d in interpolate plugin and fix #791 (deb9f74)
  • Added ipywidget dependency to notebook (0b2040a)
  • Added test case names (296e98a)
  • Added truncate_long_and_double (417c096)
  • Adding truncate_long_and_double to ptq tests (3a0640a)
  • Avoid resolving non-tensor inputs to torch segment_blocks unneccessarily (3e090ee)
  • Considering rtol and atol in threshold comparison for floating point numbers (0b0ba8d)
  • Disabled mobilenet_v2 test for DLFW CI (40c611f)
  • fix bug that python api doesn't pass truncate_long_and_double value to internal.partition_info (828336d)
  • fix bugs in aten::to (2ecd187)
  • Fix BUILD file for tests/accuracy (8b0170e)
  • Fix existing uninstallation of Torch-TRT (9ddd7a8)
  • Fix for torch scripted module faiure with DLFW (88c02d9)
  • Fix fuse addmm pass (58e9ea0)
  • Fix pre_built name change in bazelrc (3ecee21)
  • fix the bug that introduces kLong Tensor in prim::NumToTensor (2c3e1d9)
  • Fix when TRT prunes away an output (9465e1d)
  • Fixed bugs and addressed review comments (588e1d1)
  • Fixed failures for host deps sessions (ec2232f)
  • Fixed typo in the path (43fab56)
  • Getting unsupported ops will now bypass non-schema ops avoiding redundant failures (d7d1511)
  • Guard test activation for CI testing (6d1a1fd)
  • Implement a patch for gelu schema change in older NGC containers (9ee3a04)
  • Missing log severity (6a4daef)
  • Preempt torch package override via timm in nox session (8964d1b)
  • refactor the resegmentation for TensorRT segments in ResolveNonTensorInput (3cc2dfb)
  • remove outdated member variables (0268da2)
  • Removed models directory dependencies (c4413e1)
  • Resolve issues in exception elmination pass (99cea1b)
  • Review comments incorporated (962660d)
  • Review comments incorporated (e9865c2)
  • support dict type for input in shape analysis (630f9c4)
  • truncate_long_and_double incur torchscript inference issues (c83aa15)
  • Typo fix for test case name (2a516b2)
  • Update "reduceAxes" variable in GlobalPoolingConverter function and add corresponding uTests (f6f5e3e)
  • //core/conversion/evaluators: Change how schemas are handled (20e5d41)
  • Update base container for dockerfile (1b3245a)
  • //core: Take user setting in the case we can't determine the (01c89d1), closes #814
  • Update test for new Exception syntax (2357099)
  • //core/conversion: Add special case for If and Loop (eacde8d)
  • //core/runtime: Support more delimiter variants (819c911)
  • //cpp/bin/torchtrtc: Fix mbs (aca175f)
  • //docsrc: Fix dependencies for docgen (806e663)
  • //notebooks: Render citrinet (12dbda1)
  • //py: Constrain the CUDA version in container builds (a21a045)
  • Use user provided dtype when we can't infer it from the graph (14650d1)

Code Refactoring

  • removing the strict_types and max_batch_size apis (b30cbd9)
  • Rename enabled precisions arugment to (10957eb)
  • Removing the max-batch-size argument (03bafc5)

Features

  • //core/conversion: Better tooling for debugging (c5c5c47)
  • //core/conversion/evaluators: aten::pow support (c4fdfcb)
  • //docker: New base container to let master build in container ([446bf18](https://github.com...

Torch-TensorRT v1.0.0

09 Nov 08:26

New Name!, Support for PyTorch 1.10, CUDA 11.3, New Packaging and Distribution Options, Stabilized APIs, Stabilized Partial Compilation, Adjusted Default Behavior, Usability Improvements, New Converters, Bug Fixes

This is the first stable release of Torch-TensorRT, targeting PyTorch 1.10, CUDA 11.3 (on x86_64; CUDA 10.2 on aarch64), cuDNN 8.2 and TensorRT 8.0, with backwards-compatible source for TensorRT 7.1. On aarch64, Torch-TensorRT primarily targets JetPack 4.6, with backwards-compatible source for JetPack 4.5. This version also removes deprecated APIs such as InputRange and op_precision.

New Name

TRTorch is now Torch-TensorRT! TRTorch started out as a small experimental project compiling TorchScript to TensorRT almost two years ago, and now, as we hit v1.0.0 with APIs and major features stabilizing, we felt that the name of the project should reflect the ecosystem of tools it is joining with this release, namely TF-TRT (https://blog.tensorflow.org/2021/01/leveraging-tensorflow-tensorrt-integration.html) and MXNet-TensorRT (https://mxnet.apache.org/versions/1.8.0/api/python/docs/tutorials/performance/backend/tensorrt/tensorrt). Since we were already significantly changing APIs with this release to reflect what we learned over the last two years of using TRTorch, we felt this was the right time to change the name as well.

The overall process to port forward from TRTorch is as follows:

  • Python

    • The library has been renamed from trtorch to torch_tensorrt
    • Components that used to all live under the trtorch namespace have now been separated. IR-agnostic components (torch_tensorrt.Input, torch_tensorrt.Device, torch_tensorrt.ptq, torch_tensorrt.logging) will continue to live under the top-level namespace. IR-specific components like torch_tensorrt.ts.compile, torch_tensorrt.ts.convert_method_to_trt_engine and torch_tensorrt.ts.TensorRTCompileSpec will live in a TorchScript-specific namespace. This gives us space to explore the other IRs that might be relevant to the project in the future. In place of the old top-level compile and convert_method_to_trt_engine are new versions which call the IR-specific implementations based on what is provided to them. This also means that you can now provide a raw torch.nn.Module to torch_tensorrt.compile and Torch-TensorRT will handle the TorchScripting step for you. For the most part, the sole change needed to move over to the new namespaces is to exchange trtorch for torch_tensorrt.
  • C++

    • Similar to Python, the namespaces in C++ have changed from trtorch to torch_tensorrt, and components specific to the IR, like compile, convert_method_to_trt_engine and CompileSpec, are in a torchscript namespace, while agnostic components are at the top level. Namespace aliases for torch_tensorrt -> torchtrt and torchscript -> ts are included. Again, the port-forward process for namespaces should be a find-and-replace. Finally, the libraries libtrtorch.so, libtrtorchrt.so and libtrtorch_plugins.so have been renamed to libtorchtrt.so, libtorchtrt_runtime.so and libtorchtrt_plugins.so respectively.
  • CLI:

    • trtorch has been renamed to torchtrtc

New Distribution Options and Packaging

Starting with nvcr.io/nvidia/pytorch:21.11, Torch-TensorRT will be distributed as part of the container (https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch). The version of Torch-TensorRT in the container will be the state of the master branch at the time of building. Torch-TensorRT is validated to run correctly with the versions of PyTorch, CUDA, cuDNN and TensorRT in the container. This serves as the easiest way to have a fully validated PyTorch end-to-end training-to-inference stack and is a great starting point for building DL applications.

Also, as part of Torch-TensorRT we are now starting to distribute the full C++ package within the wheel files for the Python packages. By installing the wheel you now get the Python API, the C++ libraries and headers, and the CLI binary. This is going to be the easiest way to install Torch-TensorRT on your stack. Install with pip:

pip3 install torch-tensorrt -f https://github.com/NVIDIA/Torch-TensorRT/releases

You can add the following to your PATH to set up the CLI

PATH=$PATH:<PATH TO TORCHTRT PYTHON PACKAGE>/bin

Stabilized APIs

Python

Many of the APIs have changed slightly in this release to be more self-consistent and more usable. These changes begin with the Python API, where compile, convert_method_to_trt_engine and TensorRTCompileSpec now use kwargs instead of dictionaries. As many features came out of beta and experimental status, the need for multiple levels of nesting in settings has decreased, so kwargs make much more sense. You can simply port forward to the new APIs by unwrapping your existing compile_spec dict in the arguments to compile or similar functions.

Example:
compile_settings = {
    "inputs": [torch_tensorrt.Input(
        min_shape=[1, 3, 224, 224],
        opt_shape=[1, 3, 512, 512],
        max_shape=[1, 3, 1024, 1024],
        # For static size shape=[1, 3, 224, 224]
        dtype=torch.half, # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)
    )],
    "enabled_precisions": {torch.half}, # Run with FP16
}

trt_ts_module = torch_tensorrt.compile(torch_script_module, **compile_settings)

This release also introduces support for providing tensors as examples to Torch-TensorRT. In place of a torch_tensorrt.Input in the list of inputs you can pass a Tensor. This can only be used to set a static input size. There are also some things to be aware of which will be discussed later in the release notes.

Now that Torch-TensorRT separates components specific to particular IRs into their own namespaces, there are replacements for the old compile and convert_method_to_trt_engine functions at the top level. These functions take any PyTorch-generated format, including torch.nn.Modules, and decide the best way to compile it down to TensorRT. In v1.0.0 this means going through TorchScript and returning a torch.jit.ScriptModule. You can specify the IR to try using the ir arg for these functions.
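A brief sketch combining the two points above: a raw nn.Module goes straight to the top-level API, and an example tensor stands in for a torch_tensorrt.Input to fix a static input size (the model is a placeholder):

import torch
import torch_tensorrt

model = torch.nn.Conv2d(3, 8, kernel_size=3).eval().cuda()  # raw nn.Module; TorchScripting is handled for you

# An example tensor in place of a torch_tensorrt.Input implies a static input size
example = torch.randn(1, 3, 224, 224).cuda()

trt_mod = torch_tensorrt.compile(model, inputs=[example], enabled_precisions={torch.float})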

Due to partial compilation becoming stable in v1.0.0, there are now four new fields which replace the old torch_fallback struct.

  • old:
compile_spec = {
  "torch_fallback": {
      "enabled": True, # Turn on or turn off falling back to PyTorch if operations are not supported in TensorRT
      "force_fallback_ops": [
          "aten::max_pool2d" # List of specific ops to require running in PyTorch
      ],
      "force_fallback_modules": [
          "mypymod.mytorchmod" # List of specific torch modules to require running in PyTorch
      ],
      "min_block_size": 3 # Minimum number of ops an engine must incapsulate to be run in TensorRT
  }
}
  • new:
torch_tensorrt.compile(...,
    require_full_compilation=False, 
    min_block_size=3, 
    torch_executed_ops=[ "aten::max_pool2d" ], 
    torch_executed_modules=["mypymod.mytorchmod"])

C++

Other than the reorganization and renaming of the namespaces, the changes to the C++ API mostly serve to make Torch-TensorRT consistent between Python and C++, namely by renaming trtorch::CompileGraph to torch_tensorrt::ts::compile and trtorch::ConvertGraphToTRTEngine to torch_tensorrt::ts::convert_method_to_trt_engine. Beyond that, similar to Python, the partial compilation struct TorchFallback has been removed and replaced by four fields in torch_tensorrt::ts::CompileSpec

  • old:
  /**
   * @brief A struct to hold fallback info
   */
  struct TRTORCH_API TorchFallback {
    /// enable the automatic fallback feature
    bool enabled = false;

    /// minimum consecutive operation number that needs to be satisfied to convert to TensorRT
    uint64_t min_block_size = 1;

    /// A list of names of operations that will explicitly run in PyTorch
    std::vector<std::string> forced_fallback_ops;

    /// A list of names of modules that will explicitly run in PyTorch
    std::vector<std::string> forced_fallback_modules;

    /**
     * @brief Construct a default Torch Fallback object, fallback will be off
     */
    TorchFallback() = default;

    /**
     * @brief Construct from a bool
     */
    TorchFallback(bool enabled) : enabled(enabled) {}

    /**
     * @brief Constructor for setting min_block_size
     */
    TorchFallback(bool enabled, uint64_t min_size) : enabled(enabled), min_block_size(min_size) {}
  };
  • new:
  /**
   * Require the full module be compiled to TensorRT instead of potentially running unsupported operations in PyTorch
   */
  bool require_full_compilation = false;

  /**
   * Minimum number of contiguous supported operators to compile a subgraph to TensorRT
   */
  uint64_t min_block_size = 3;

  /**
   * List of aten operators that must be run in PyTorch. An error will be thrown if this list is not empty but
   * ``require_full_compilation`` is True
   */
  std::vector<std::string> torch_executed_ops;

  /**
   * List of modules that must be run in PyTorch. An error will be thrown if this list is not empty but
   * ``require_full_compilation`` is True
   */
  std::vector<std::string> torch_executed_modules;

CLI

Similarly these partial compilation fields have been renamed in torchtrtc:

    --require-full-compilation        Require that the model should be fully
                                      compiled to TensorRT or throw an error
    --teo=[torch-executed-ops...],
    --torch-executed-ops=[torch-executed-ops...]
                                      (Repeatable) Operator in the graph that
              ...

TRTorch v0.4.1

06 Oct 19:14


Bug Fixes for Module Ignorelist for Partial Compilation, trtorch.Device, Version updates for PyTorch, TensorRT, cuDNN

Target Platform Changes

This is the first patch release for TRTorch v0.4. It now targets, by default, PyTorch 1.9.1, TensorRT 8.0.3.4, cuDNN 8.2.4.15 and CUDA 11.1. Older versions of PyTorch, TensorRT and cuDNN are still supported in the same manner as in TRTorch v0.4.0

Module Ignorelist for Partial Compilation

There was an issue with the pass that marks modules to be ignored during compilation, where it unsafely assumed that methods are named forward all the way down the module tree. While this was fine for PyTorch 1.8.0, with PyTorch 1.9.0 the TorchScript codegen changed slightly to sometimes use methods of other names for modules which reduce trivially to a functional API. This fix now identifies method calls as the recursion point and then uses those method calls to select modules to recurse on. It will also verify the existence of these modules and methods before recursing. Finally, this pass was previously run by default even if the ignore list was empty, causing issues for users not using the feature; it is now disabled unless explicitly enabled.

trtorch.Device

Some of the constructors for trtorch.Device would not work or would incorrectly configure the device. This patch fixes those issues.
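For context, a small sketch of constructing trtorch.Device objects of the kind covered by this fix; the constructor forms shown are assumptions based on the later torch_tensorrt.Device API and may not match 0.4.1 exactly:

import trtorch

# Assumed constructor forms; consult the trtorch documentation for exact signatures
gpu_dev = trtorch.Device(gpu_id=0)
dla_dev = trtorch.Device(dla_core=0, allow_gpu_fallback=True)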

Dependencies

- Bazel 4.0.0
- LibTorch 1.9.1
- CUDA 11.1 (on x86_64, by default, newer CUDA 11 supported with compatible PyTorch Build), 10.2 (on aarch64)
- cuDNN 8.2.3.4
- TensorRT 8.0.3.4

0.4.1 (2021-10-06)

Bug Fixes

  • //core/lowering: Fixes module level fallback recursion (2fc612d)
  • Move some lowering passes to graph level logging (0266f41)
  • //py: Fix trtorch.Device alternate contructor options (ac26841)

Operators Supported

Operators Currently Supported Through Converters

  • aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> (Tensor)
  • aten::_convolution.deprecated(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor)
  • aten::abs(Tensor self) -> (Tensor)
  • aten::acos(Tensor self) -> (Tensor)
  • aten::acosh(Tensor self) -> (Tensor)
  • aten::adaptive_avg_pool1d(Tensor self, int[1] output_size) -> (Tensor)
  • aten::adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor)
  • aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor)
  • aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
  • aten::add.Tensor(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor)
  • aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!))
  • aten::asin(Tensor self) -> (Tensor)
  • aten::asinh(Tensor self) -> (Tensor)
  • aten::atan(Tensor self) -> (Tensor)
  • aten::atanh(Tensor self) -> (Tensor)
  • aten::avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], bool ceil_mode=False, bool count_include_pad=True) -> (Tensor)
  • aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::batch_norm(Tensor input, Tensor? gamma, Tensor? beta, Tensor? mean, Tensor? var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::bmm(Tensor self, Tensor mat2) -> (Tensor)
  • aten::cat(Tensor[] tensors, int dim=0) -> (Tensor)
  • aten::ceil(Tensor self) -> (Tensor)
  • aten::clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> (Tensor)
  • aten::clamp_max(Tensor self, Scalar max) -> (Tensor)
  • aten::clamp_min(Tensor self, Scalar min) -> (Tensor)
  • aten::constant_pad_nd(Tensor self, int[] pad, Scalar value=0) -> (Tensor)
  • aten::cos(Tensor self) -> (Tensor)
  • aten::cosh(Tensor self) -> (Tensor)
  • aten::cumsum(Tensor self, int dim, *, int? dtype=None) -> (Tensor)
  • aten::div.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::div.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::div_.Scalar(Tensor(a!) self, Scalar other) -> (Tensor(a!))
  • aten::div_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!))
  • aten::elu(Tensor self, Scalar alpha=1, Scalar scale=1, Scalar input_scale=1) -> (Tensor)
  • aten::embedding(Tensor weight, Tensor indices, int padding_idx=-1, bool scale_grad_by_freq=False, bool sparse=False) -> (Tensor)
  • aten::eq.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::eq.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::erf(Tensor self) -> (Tensor)
  • aten::exp(Tensor self) -> (Tensor)
  • aten::expand(Tensor(a) self, int[] size, *, bool implicit=False) -> (Tensor(a))
  • aten::expand_as(Tensor(a) self, Tensor other) -> (Tensor(a))
  • aten::fake_quantize_per_channel_affine(Tensor self, Tensor scale, Tensor zero_point, int axis, int quant_min, int quant_max) -> (Tensor)
  • aten::fake_quantize_per_tensor_affine(Tensor self, float scale, int zero_point, int quant_min, int quant_max) -> (Tensor)
  • aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)
  • aten::floor(Tensor self) -> (Tensor)
  • aten::floor_divide(Tensor self, Tensor other) -> (Tensor)
  • aten::floor_divide.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ge.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ge.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::gelu(Tensor self) -> (Tensor)
  • aten::gru_cell(Tensor input, Tensor hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor)
  • aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::gt.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor)
  • aten::hardtanh_(Tensor(a!) self, Scalar min_val=-1, Scalar max_val=1) -> (Tensor(a!))
  • aten::instance_norm(Tensor input, Tensor? weight, Tensor? bias, Tensor? running_mean, Tensor? running_var, bool use_input_stats, float momentum, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::layer_norm(Tensor input, int[] normalized_shape, Tensor? gamma, Tensor? beta, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::le.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::le.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::leaky_relu(Tensor self, Scalar negative_slope=0.01) -> (Tensor)
  • aten::leaky_relu_(Tensor(a!) self, Scalar negative_slope=0.01) -> (Tensor(a!))
  • aten::linear(Tensor input, Tensor weight, Tensor? bias=None) -> (Tensor)
  • aten::log(Tensor self) -> (Tensor)
  • aten::lstm_cell(Tensor input, Tensor[] hx, Tensor w_ih, Tensor w_hh, Tensor? b_ih=None, Tensor? b_hh=None) -> (Tensor, Tensor)
  • aten::lt.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::lt.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::masked_fill.Scalar(Tensor self, Tensor mask, Scalar value) -> (Tensor)
  • aten::matmul(Tensor self, Tensor other) -> (Tensor)
  • aten::max(Tensor self) -> (Tensor)
  • aten::max.other(Tensor self, Tensor other) -> (Tensor)
  • aten::max_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[], int[1] dilation=[], bool ceil_mode=False) -> (Tensor)
  • aten::max_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], int[2] dilation=[1, 1], bool ceil_mode=False) -> (Tensor)
  • aten::max_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[], int[3] dilation=[], bool ceil_mode=False) -> (Tensor)
  • aten::mean(Tensor self, *, int? dtype=None) -> (Tensor)
  • aten::mean.dim(Tensor self, int[] dim, bool keepdim=False, *, int? dtype=None) -> (Tensor)
  • aten::min(Tensor self) -> (Tensor)
  • aten::min.other(Tensor self, Tensor other) -> (Tensor)
  • aten::mul.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::mul.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::mul_.Tensor(Tensor(a!) self, Tensor other) -> (Tensor(a!))
  • aten::narrow(Tensor(a) self, int dim, int start, int length) -> (Tensor(a))
  • aten::narrow.Tensor(Tensor(a) self, int dim, Tensor start, int length) -> (Tensor(a))
  • aten::ne.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::ne.Tensor(Tensor self, Tensor other) -> (Tensor)
  • aten::neg(Tensor self) -> (Tensor)
  • aten::norm.ScalarOpt_dim(Tensor self, Scalar? p, int[1] dim, bool keepdim=False) -> (Tensor)
  • aten::permute(Tensor(a) self, int[] dims) -> (Tensor(a))
  • aten::pixel_shuffle(Tensor self, int upscale_factor) -> (Tensor)
  • aten::pow.Tensor_Scalar(Tensor self, Scalar exponent) -> (Tensor)
  • aten::pow.Tensor_Tensor(Tensor self, Tensor exponent) -> (Tensor)
  • aten::prelu(Tensor self, Tensor weight) -> (Tensor)
  • aten::prod(Tensor self, *, int? dtype=None) -> (Tensor)
  • aten::prod.dim_int(Tensor self, int dim, bool keepdim=False, *, int? dtype=None) -> (Tensor)
  • aten::reciprocal(Tensor self) -> (Tensor)
  • aten::relu(Tensor input) -> (Tensor)
  • aten::relu_(Tensor(a!) self) -> (Tensor(a!))
  • aten::repeat(Tensor self, int[] repeats) -> (Tensor)
  • aten::replication_pad1d(Tensor self, int[2] padding) -> (Tensor)
  • aten::replication_pad2d(Tensor self, int[4] padding) -> (Tensor)
  • aten::replication_pad3d(Tensor self, int[6] padding) -> (Tensor)
  • aten::reshape(Tensor self, int[] shape) -> (Tensor)
  • aten::rsub.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
  • aten::rsub.Tensor(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor)
  • aten::select.int(Tensor(a) self, int dim, int index) -> (Tensor(a))
  • aten::sig...
Read more

TRTorch v0.4.0

24 Aug 21:49

TRTorch v0.4.0

Support for PyTorch 1.9, TensorRT 8.0. Introducing INT8 Execution for QAT models, Module Based Partial Compilation, Auto Device Configuration, Input Class, Usability Improvements, New Converters, Bug Fixes

Target Platform Changes

This is the fourth beta release of TRTorch, targeting PyTorch 1.9, CUDA 11.1 (on x86_64, CUDA 10.2 on aarch64), cuDNN 8.2 and TensorRT 8.0, with backwards-compatible source for TensorRT 7.1. On aarch64, TRTorch primarily targets JetPack 4.6, with backwards-compatible source for JetPack 4.5. When building on Jetson, the flag --platforms //toolchains:jetpack_4.x must now be provided for C++ compilation to select the correct dependency paths. For Python, the JetPack version is assumed to be 4.6 by default; to override this, add the --jetpack-version 4.5 flag when building.

TensorRT 8.0

This release adds support for compiling models trained with quantization aware training (QAT), allowing users of the TensorRT PyTorch Quantization Toolkit (https://github.com/NVIDIA/TensorRT/tree/master/tools/pytorch-quantization) to compile their models with TRTorch. For more information and a tutorial, refer to https://www.github.com/NVIDIA/TRTorch/tree/v0.4.0/examples/int8/qat. It also adds support for sparsity via the sparse_weights flag in the compile spec. This allows TensorRT to utilize the specialized hardware in Ampere GPUs to minimize unnecessary computation and therefore increase computational efficiency.
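As a sketch of how a QAT model might be compiled with INT8 kernels and sparse weights, using the compile spec fields described in this release (the model file and input shape are placeholders):

import torch
import trtorch

# A TorchScript module exported from a network trained with the TensorRT
# PyTorch Quantization Toolkit (placeholder path)
qat_model = torch.jit.load("qat_resnet50.jit.pt").eval().cuda()

compile_spec = {
    "inputs": [trtorch.Input((8, 3, 224, 224))],
    # Allow FP32, FP16 and INT8 kernels; QAT models carry their own scales,
    # so no calibrator is needed
    "enabled_precisions": {torch.float, torch.half, torch.int8},
    # Let TensorRT use the sparsity hardware on Ampere GPUs
    "sparse_weights": True,
}

trt_model = trtorch.compile(qat_model, compile_spec)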

Partial Compilation

In v0.4.0 the partial compilation feature of TRTorch can now be considered beta level stability. New in this release is the ability to specify entire PyTorch modules to run in PyTorch explicitly as part of partial compilation. This should let users isolate troublesome code easily when compiling. Again, feedback on this feature is greatly appreciated.

Automatic Device Configuration at Runtime

v0.4.0 also changes the "ABI" of TRTorch to include information about the target device for the program. Programs compiled with v0.4.0 will look for and select the most compatible available device. The rules used are: any valid device option must have the same SM capability as the device that built the engine; from there, TRTorch prefers the same device model (e.g. an engine built on an A100 prefers an A100 over an A30) and finally prefers the same device ID. Users will be warned if the selected device is not the current active device during execution, as overhead may be incurred in transferring input tensors from the current device to the target device; users can then modify their code to avoid this. Due to this ABI change, existing compiled TRTorch programs are incompatible with the TRTorch v0.4.0 runtime. From v0.4.0 onwards, an internal ABI version will check program compatibility. This ABI version is only incremented on breaking changes to the ABI.

API Changes (Input, enabled_precisions, Device)

TRTorch v0.4.0 changes the API for specifying input shapes and data types, giving users more control over configuration. The new API makes use of the class trtorch.Input, which lets users set the shape (or shape range) as well as the memory layout and expected data type. These input specs are set in the inputs field of the CompileSpec.

"inputs": [
        trtorch.Input((1, 3, 224, 224)), # Static input shape for input #1
        trtorch.Input(
            min_shape=(1, 224, 224, 3),
            opt_shape=(1, 512, 512, 3),
            max_shape=(1, 1024, 1024, 3),
            dtype=torch.int32,
            format=torch.channels_last,
        ), # Dynamic input shape for input #2, int32 dtype and channels-last format
    ],

The legacy input_shapes field and its associated usage with lists of tuples/InputRanges should now be considered deprecated. They remain usable in v0.4.0 but will be removed in the next release.

Similarly, the compile spec field op_precision is now deprecated in favor of enabled_precisions. enabled_precisions is a set containing the data types that kernels are allowed to use. Whereas setting op_precision = torch.int8 would implicitly enable FP32 and FP16 kernels as well, enabled_precisions should now be set to {torch.float32, torch.float16, torch.int8} to achieve the same behavior. To maintain behavior similar to normal PyTorch, if FP16 is the lowest precision enabled but no explicit data type is set for the model inputs, the inputs are expected to be in FP16. For other cases (FP32, INT8), FP32 is the default, as in PyTorch and previous versions of TRTorch.

Finally, the Python API adds a class trtorch.Device. While users can continue to use torch.device or other torch APIs, trtorch.Device allows better control for the specific use cases of compiling with TRTorch (e.g. setting the DLA core and GPU fallback). This class is very similar to the C++ version, with some added syntactic sugar to make it easier and more familiar to use:

trtorch.Device("dla:0", allow_gpu_fallback=False) #Set device as DLA Core 0 (implicitly sets the GPU managing DLA cores as the GPU and sets fallback to false)

trtorch.Device can be used instead of a dictionary in the compile spec if desired.
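Putting these pieces together, a sketch of a complete v0.4.0 compile spec using the new inputs, enabled_precisions and device fields (the model is a placeholder):

import torch
import trtorch

scripted_model = torch.jit.script(MyModel().eval().cuda())  # MyModel is a placeholder module

compile_spec = {
    # No dtype is set, so with FP16 as the lowest enabled precision the
    # inputs are expected to be FP16
    "inputs": [trtorch.Input((1, 3, 224, 224))],
    "enabled_precisions": {torch.float, torch.half},
    # trtorch.Device can be used directly in place of a device dictionary
    "device": trtorch.Device("gpu:0"),
}

trt_model = trtorch.compile(scripted_model, compile_spec)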

trtorchc has been updated to reflect these API changes. Users can set the shape, dtype and format of inputs from the command line using the format "[(MIN_N,..,MIN_C,MIN_H,MIN_W);(OPT_N,..,OPT_C,OPT_H,OPT_W);(MAX_N,..,MAX_C,MAX_H,MAX_W)]@DTYPE%FORMAT", e.g. (3, 3, 32, 32)@f16%NHWC. -p is now a repeatable flag to enable multiple precisions. Also added are the repeatable flags --ffm and --ffo, which mark specific modules and operators, respectively, for running in PyTorch; to use these two options, --allow-torch-fallback should be set. Options for embedding serialized engines (--embed-engine) and enabling sparsity (--sparse-weights) have been added as well.

Usability

Finally, TRTorch v0.4.0 also now includes the ability to provide backtraces for locations in your model which TRTorch does not support. This can help in identifying locations in the model that might need to change for TRTorch support or modules which should run fully in PyTorch via partial compilation.

Dependencies

- Bazel 4.0.0
- LibTorch 1.9.0
- CUDA 11.1 (on x86_64, by default, newer CUDA 11 supported with compatible PyTorch Build), 10.2 (on aarch64)
- cuDNN 8.2.2.3
- TensorRT 8.0.1.6

0.4.0 (2021-08-24)

  • feat(serde)!: Refactor CudaDevice struct, implement ABI versioning, (9327cce)
  • feat(//py)!: Implementing top level python api changes to reflect new (482265f)
  • feat(//cpp)!: Changes to TRTorch C++ api reflecting Input and (08b4942)
  • feat!: Pytorch 1.9 version bump (a12d249)
  • feat(//core/runtime)!: Better and more portable names for engines (6eb3bb2)

Bug Fixes

  • //core/conversion/conversionctx: Guard final engine building (dfa9ae8)
  • //core/lowering: use lower_info as parameter (370aeb9)
  • //cpp/ptq: fixing bad accuracy in just the example code (7efa11d)
  • //py: Fix python setup.py with new libtrtorch.so location (68ba63c)
  • //tests: fix optional jetson tests (4c32a83)
  • //tests: use right type for masked_fill test (4a5c28f)
  • aten::cat: support neg dim for cat (d8ca182)
  • aten::select and aten::var: Fix converters to handle negative axes (3a734a2)
  • aten::slice: Allow slicing of pytorch tensors (50f012e)
  • aten::tensor: Last dim doesnt always get written right (b68d4aa)
  • aten::tensor: Last dim doesnt always get written right (38744bc)
  • Address review comments, fix failing tests due to bool mishandling (13eef91)
  • Final working version of QAT in TRTorch (521a0cb)
  • fix aten::sub.scalar operator (9a09514)
  • Fix linear lowering pass, lift layer_norm scale layer restriction and matmul layer nbdims restriction (930d582)
  • Fix testcases using old InputRange API (ff87956)
  • Fix TRT8 engine capability flags (2b69742)
  • Fix warnings thrown by noexcept functions (c5f7eea)
  • Fix warnings thrown by noexcept functions (ddc8950)
  • Minor fixes to qat scripts (b244423)
  • Restrict TRTorch to compile only forward methods (9f006d5)
  • Transfer calibration data to gpu when it is not a batch (23739cb)
  • typo in aten::batch_norm (d47f48f)
  • qat: Rescale input data for C++ application (9dc6061)
  • Use len() to get size of datase...
Read more

TRTorch v0.3.0

14 May 00:55

TRTorch v0.3.0

Support for PyTorch 1.8.x (by default 1.8.1), Introducing Plugin Library, PTQ from Python, Arbitrary TRT engine embedding, Preview Release of Partial Compilation, New Converters, Bug Fixes

This is the third beta release of TRTorch, targeting PyTorch 1.8.x, CUDA 11.1 (on x86_64), TensorRT 7.2 and cuDNN 8. TRTorch 0.3.0 binary releases target PyTorch 1.8.1 specifically; these builds are not compatible with 1.8.0, though the source code remains compatible with any PyTorch 1.8.x version. On aarch64, TRTorch targets JetPack 4.5.x.

This release introduces libtrtorch_plugins.so, a portable distribution of all TensorRT plugins used in TRTorch. The intended use case is to support TRTorch programs that utilize TensorRT plugins and are deployed on systems where only the runtime library is available, or cases where TRTorch was used to create a TensorRT engine that makes use of TRTorch plugins and is run outside the TRTorch runtime. An example of how to use this library can be found here: https://www.github.com/NVIDIA/TRTorch/tree/v0.3.0/examples/sample_rt_app.

TRTorch 0.3.0 also allows users to repurpose PyTorch DataLoaders to do post training quantization in Python, similar to the workflow currently supported in C++. It also introduces a new API to wrap arbitrary TensorRT engines in a PyTorch module wrapper, making them serializable with torch.jit.save and completely compatible with other PyTorch modules.

Finally, TRTorch 0.3.0 includes a preview of the new partial compilation capability of the TRTorch compiler. With this feature, users can instruct TRTorch to keep operations that are not supported by TRTorch/TensorRT in PyTorch. Partial compilation should be considered alpha stability and we are seeking feedback on bugs, pain points and feature requests surrounding this feature.
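Below is a rough sketch of the new Python PTQ workflow built on a PyTorch DataLoader. The DataLoaderCalibrator arguments, the dataset, and MyModel are assumptions for illustration; see the PTQ examples in the repository for the exact API:

import torch
import torchvision
import torchvision.transforms as transforms
import trtorch

# Calibration data reused from a standard PyTorch DataLoader (illustrative dataset)
calib_dataset = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transforms.ToTensor()
)
calib_loader = torch.utils.data.DataLoader(calib_dataset, batch_size=32, shuffle=False)

# Assumed DataLoaderCalibrator signature; calibration scales are cached between runs
calibrator = trtorch.ptq.DataLoaderCalibrator(
    calib_loader,
    cache_file="./calibration.cache",
    use_cache=False,
    algo_type=trtorch.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
    device=torch.device("cuda:0"),
)

compile_spec = {
    "input_shapes": [(32, 3, 32, 32)],  # 0.3.0 still uses the legacy input_shapes field
    "op_precision": torch.int8,         # INT8 kernels are enabled via op_precision in 0.3.0
    "calibrator": calibrator,
}

scripted_model = torch.jit.script(MyModel().eval().cuda())  # MyModel is a placeholder module
trt_model = trtorch.compile(scripted_model, compile_spec)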

Dependencies

- Bazel 4.0.0
- LibTorch 1.8.1 (on x86_64), 1.8.0 (on aarch64)
- CUDA 11.1 (on x86_64, by default, newer CUDA 11 supported with compatible PyTorch Build), 10.2 (on aarch64)
- cuDNN 8.1.1
- TensorRT 7.2.3.4

0.3.0 (2021-05-13)

Bug Fixes

  • //plugins: Readding cuBLAS BUILD to allow linking of libnvinfer_plugin on Jetson (a8008f4)

  • //tests/../concat: Concat test fix (2432fb8)

  • //tests/core/partitioning: Fixing some issues with the partition (ff89059)

  • erase the repetitive nodes in dependency analysis (80b1038)

  • fix a typo for debug (c823ebd)

  • fix typo bug (e491bb5)

  • aten::linear: Fixes new issues in 1.8 that cause script based (c5057f8)

  • register the torch_fallback attribute in Python API (8b7919f)

  • support expand/repeat with IValue type input (a4882c6)

  • support shape inference for add_, support non-tensor arguments for segmented graphs (46950bb)

  • feat!: Updating versions of CUDA, cuDNN, TensorRT and PyTorch (71c4dcb)

  • feat(WORKSPACE)!: Updating PyTorch version to 1.8.1 (c9aa99a)

Features

  • //.github: Linter throws 1 when there needs to be style changes to (a39dea7)
  • //core: New API to register arbitrary TRT engines in TorchScript (3ec836e)
  • //core/conversion/conversionctx: Adding logging for truncated (96245ee)
  • //core/partitioing: Adding ostream for Partition Info (b3589c5)
  • //core/partitioning: Add an ostream implementation for (ee536b6)
  • //core/partitioning: Refactor top level partitioning API, fix a bug with (abc63f6)
  • //core/plugins: Gating plugin logging based on global config (1d5a088)
  • added user level API for fallback (f4c29b4)
  • allow users to set fallback block size and ops (6d3064a)
  • insert nodes by dependencies for nonTensor inputs/outputs (4e32eff)
  • support aten::arange converter (014e381)
  • support aten::transpose with negative dim (4a1d2f3)
  • support Int/Bool and other constants' inputs/outputs for TensorRT segments (54e407e)
  • support prim::Param for fallback inputs (ec2bbf2)
  • support prim::Param for input type after refactor (3cebe97)
  • support Python APIs for Automatic Fallback (100b090)
  • support the case when the injected node is not supported in dependency analysis (c67d8f6)
  • support truncate long/double to int/float with option (740eb54)
  • Try to submit review before exit (9a9d7f0)
  • update truncate long/double python api (69e49e8)
  • //docker: Adding Docker 21.03 (9b326e8)
  • update truncate long/double warning message (60dba12)
  • //docker: Update CI container (df63467)
  • //py: Allowing people using the PyTorch backend to use TRTorch/TRT (6c3e0ad)
  • //py: Catch when bazel is not in path and error out when running (1da999d)
  • //py: Gate partial compilation from to_backend API (bf1b2d8)
  • //py: New API to embed engine in new module (88d07a9)
  • aten::floor: Adds floor.int evaluator (a6a46e5)

BREAKING CHANGES

  • PyTorch version has been bumped to 1.8.0
    Default CUDA version is CUDA 11.1
    TensorRT version is TensorRT 7.2.3.4
    cuDNN version is now cuDNN 8.1


  • Due to issues with compatibility between PyTorch 1.8.0
    and 1.8.1 in the Torch Python API, TRTorch 0.3.0 compiled for 1.8.0 does not
    work with PyTorch 1.8.1 and will show an error about use_input_stats.
    If you see this error, make sure the version of libtorch you are
    compiling with is PyTorch 1.8.1.

TRTorch 0.3.0 will target PyTorch 1.8.1. There is no backwards
compatibility with 1.8.0. If you need this specific version, compile from
source with the dependencies in WORKSPACE changed.


Supported Operators in TRTorch v0.3.0

Operators Currently Supported Through Converters

  • aten::_convolution(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled, bool allow_tf32) -> (Tensor)
  • aten::_convolution.deprecated(Tensor input, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, bool transposed, int[] output_padding, int groups, bool benchmark, bool deterministic, bool cudnn_enabled) -> (Tensor)
  • aten::abs(Tensor self) -> (Tensor)
  • aten::acos(Tensor self) -> (Tensor)
  • aten::acosh(Tensor self) -> (Tensor)
  • aten::adaptive_avg_pool2d(Tensor self, int[2] output_size) -> (Tensor)
  • aten::add.Scalar(Tensor self, Scalar other, Scalar alpha=1) -> (Tensor)
  • aten::add.Tensor(Tensor self, Tensor other, Scalar alpha=1) -> (Tensor)
  • aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> (Tensor(a!))
  • aten::asin(Tensor self) -> (Tensor)
  • aten::asinh(Tensor self) -> (Tensor)
  • aten::atan(Tensor self) -> (Tensor)
  • aten::atanh(Tensor self) -> (Tensor)
  • aten::avg_pool1d(Tensor self, int[1] kernel_size, int[1] stride=[], int[1] padding=[0], bool ceil_mode=False, bool count_include_pad=True) -> (Tensor)
  • aten::avg_pool2d(Tensor self, int[2] kernel_size, int[2] stride=[], int[2] padding=[0, 0], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::avg_pool3d(Tensor self, int[3] kernel_size, int[3] stride=[], int[3] padding=[], bool ceil_mode=False, bool count_include_pad=True, int? divisor_override=None) -> (Tensor)
  • aten::batch_norm(Tensor input, Tensor? gamma, Tensor? beta, Tensor? mean, Tensor? var, bool training, float momentum, float eps, bool cudnn_enabled) -> (Tensor)
  • aten::cat(Tensor[] tensors, int dim=0) -> (Tensor)
  • aten::ceil(Tensor self) -> (Tensor)
  • aten::clamp(Tensor self, Scalar? min=None, Scalar? max=None) -> (Tensor)
  • aten::cos(Tensor self) -> (Tensor)
  • aten::cosh(Tensor self) -> (Tensor)
  • aten::div.Scalar(Tensor self, Scalar other) -> (Tensor)
  • aten::div.Tensor(Tensor self, Tensor other) -> (Tensor)
  • ate...
Read more