newton.actuators.ControllerNeuralMLP#
- class newton.actuators.ControllerNeuralMLP(model_path)[source]#
Bases: Controller
MLP-based neural network controller.
Uses a pre-trained MLP to compute joint effort from position error and velocity error history.
The network receives concatenated, scaled position-error and velocity-error history as input. The output is multiplied by effort_scale to convert from network units to physical effort [N or N·m]. All three scale factors default to 1.0 (no scaling).
Configuration parameters (input_order, input_idx, pos_scale, vel_scale, effort_scale) are read from the checkpoint metadata, falling back to defaults when absent. Supported checkpoint formats: TorchScript (.pt saved with torch.jit.save) and state-dict bundles ({"model": state_dict, "metadata": {...}} saved with torch.save).
- classmethod resolve_arguments(args)#
- __init__(model_path)#
Initialize MLP controller from a checkpoint file.
Supported checkpoint formats: TorchScript (.pt saved with torch.jit.save) and state-dict bundles ({"model": state_dict, "metadata": {...}} saved with torch.save).
Configuration is read from checkpoint metadata:
- input_order (str): "pos_vel" or "vel_pos" (default "pos_vel").
- input_idx (list[int]): history timestep indices (default [0]).
- pos_scale (float): position-error scaling (default 1.0).
- vel_scale (float): velocity-error scaling (default 1.0).
- effort_scale (float): output effort scaling (default 1.0).
- Parameters:
model_path (str) – Path to the checkpoint (.pt).
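The metadata-with-fallback behavior described above can be sketched in plain Python. This is an illustrative reconstruction, not Newton's implementation: plain dicts stand in for torch state_dicts, and read_config is a hypothetical helper. With PyTorch installed, such a bundle would be written with torch.save(bundle, "controller.pt").

```python
# Defaults documented for ControllerNeuralMLP's metadata parameters.
DEFAULTS = {
    "input_order": "pos_vel",
    "input_idx": [0],
    "pos_scale": 1.0,
    "vel_scale": 1.0,
    "effort_scale": 1.0,
}

def read_config(bundle):
    """Read controller configuration from checkpoint metadata,
    falling back to the documented defaults when a key is absent."""
    metadata = bundle.get("metadata", {})
    return {key: metadata.get(key, default) for key, default in DEFAULTS.items()}

# A state-dict bundle that overrides only two of the five parameters
# (placeholder strings stand in for real tensors):
bundle = {
    "model": {"fc1.weight": "<tensor>", "fc1.bias": "<tensor>"},
    "metadata": {"input_order": "vel_pos", "effort_scale": 50.0},
}

config = read_config(bundle)
print(config["input_order"])  # overridden -> "vel_pos"
print(config["pos_scale"])    # absent -> default 1.0
```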
- compute(positions, velocities, target_pos, target_vel, feedforward, pos_indices, vel_indices, target_pos_indices, target_vel_indices, forces, state, dt, device=None)#
- finalize(device, num_actuators)#
- is_graphable()#
- is_stateful()#
- state(num_actuators, device)#
- update_state(current_state, next_state)#
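The scaling and ordering semantics of the metadata parameters read in __init__ can be illustrated with a small sketch. build_input and apply_effort_scale are hypothetical helpers, not part of the Newton API; in the real controller, the concatenated vector would be fed through the MLP between these two steps.

```python
# Illustrative sketch (not Newton's implementation) of how input_order,
# input_idx, pos_scale, vel_scale, and effort_scale could shape the
# network input and output.

def build_input(pos_err_hist, vel_err_hist, input_order="pos_vel",
                input_idx=(0,), pos_scale=1.0, vel_scale=1.0):
    """Select history timesteps, scale each error channel, and concatenate
    in the order given by input_order."""
    pos = [pos_err_hist[i] * pos_scale for i in input_idx]
    vel = [vel_err_hist[i] * vel_scale for i in input_idx]
    return pos + vel if input_order == "pos_vel" else vel + pos

def apply_effort_scale(network_output, effort_scale=1.0):
    """Convert network units to physical effort [N or N*m]."""
    return [y * effort_scale for y in network_output]

# Two history timesteps per channel; use both (input_idx=[0, 1]) and
# order velocity errors first:
x = build_input([0.5, 0.25], [1.0, 2.0], input_order="vel_pos",
                input_idx=[0, 1], pos_scale=2.0, vel_scale=0.5)
print(x)  # [0.5, 1.0, 1.0, 0.5]
```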