newton.utils.OnnxRuntime

class newton.utils.OnnxRuntime(path, device=None, batch_size=1, input_batch_axes=None)

Bases: object

Lightweight ONNX inference engine for graph-capturable MLP policies.

Parameters:
  • path (str) – Path to an .onnx file.

  • device (str | None) – Warp device string (e.g. "cuda:0"). None uses the current default device.

  • batch_size (int) – Fixed batch dimension used to pre-allocate intermediate buffers. Defaults to 1.

  • input_batch_axes (int | dict[str, int] | None) – Optional batch-axis override for graph inputs. If an integer is provided, it is applied to every graph input; if a dictionary is provided, it maps graph input names to their batch axis. The selected axes are replaced with batch_size even when the ONNX model exported them as fixed dimensions.
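The override rules for input_batch_axes described above can be sketched as a small standalone helper. This is a hypothetical illustration, not part of the newton API; the helper name resolve_batch_axes and the fallback to axis 0 for unlisted inputs are assumptions made for this sketch.

```python
def resolve_batch_axes(input_names, input_batch_axes):
    """Resolve the batch axis for each graph input, mirroring the
    documented semantics of OnnxRuntime's input_batch_axes parameter.

    Hypothetical helper for illustration only; the axis-0 default for
    unspecified inputs is an assumption, not documented behavior.
    """
    if input_batch_axes is None:
        # No override: assume axis 0 for every input (assumption).
        return {name: 0 for name in input_names}
    if isinstance(input_batch_axes, int):
        # A single integer is applied to every graph input.
        return {name: input_batch_axes for name in input_names}
    # A dict maps graph input names to their batch axis; inputs not
    # listed in the dict fall back to axis 0 (assumption).
    return {name: input_batch_axes.get(name, 0) for name in input_names}
```

Whichever axes are selected, the runtime then substitutes batch_size at those positions, even if the exported ONNX model declared them as fixed dimensions.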

__init__(path, device=None, batch_size=1, input_batch_axes=None)