ParticleProjection

SmoothParticleNets

Description

The ParticleProjection layer is designed to allow comparison of the particle state with a camera image. It does this by projecting the particles onto a virtual camera image, which can then be compared to other camera images as desired. Each particle is projected onto the virtual image as a small Gaussian, which allows for smooth gradients with respect to the particle positions or the camera pose. The layer computes the image coordinates of a given particle location using the pinhole camera model and does not model lens distortions (e.g., radial distortion). ParticleProjection currently only supports 3D particle locations.
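
To make the computation concrete, here is a rough sketch (not the library's implementation) of how a single 3D point already expressed in the camera frame is projected with the pinhole model and rendered as a Gaussian; the transformation from world to camera frame by the camera pose is omitted. The function name splat_particle and the image-center principal point are assumptions of the sketch, while focal_length, filter_std, and filter_scale mirror the constructor arguments used in the Example below.

import torch

def splat_particle(p, focal_length, height, width, filter_std, filter_scale):
    # Pinhole projection to pixel coordinates, with the principal point assumed at the image center.
    u = focal_length * p[0] / p[2] + width / 2.0
    v = focal_length * p[1] / p[2] + height / 2.0
    # Render the particle as a Gaussian of standard deviation filter_std (in pixels) centered at (u, v).
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)
    dist2 = (xs - u) ** 2 + (ys - v) ** 2
    return filter_scale * torch.exp(-dist2 / (2.0 * filter_std ** 2))

# A single particle 1 m in front of the camera, slightly to its right.
img = splat_particle(torch.tensor([0.1, 0.0, 1.0]), 540.0, 480, 640, 5.0, 10.0)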

ParticleProjection is implemented as a subclass of torch.nn.Module. This allows it to be used in the same manner as any other PyTorch layer (e.g., conv2d). ParticleProjection can compute gradients with respect to the camera pose or the particle positions, and is implemented with CUDA support for efficient computation.
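
For example, gradients with respect to the particle positions or the camera pose can be obtained with standard autograd. The sketch below assumes the proj, locs, camera_pose, and camera_rotation objects constructed as in the Example section further down.

locs.requires_grad_(True)           # gradients w.r.t. the particle positions
camera_pose.requires_grad_(True)    # gradients w.r.t. the camera translation
image = proj(locs, camera_pose, camera_rotation)
image.sum().backward()              # any scalar objective works here
# locs.grad and camera_pose.grad now hold the gradients.
# Moving the layer and its inputs to the GPU (.cuda()) uses the CUDA implementation.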

Example

Assume locs is a BxNxD tensor containing the locations of N D-dimensional particles across B batches (for this layer, D must be 3).

# Import PyTorch and the ParticleProjection layer.
import torch
from SmoothParticleNets import ParticleProjection

# First create the ParticleProjection layer.
proj = ParticleProjection(camera_fl=540, camera_size=(480, 640), filter_std=5.0, filter_scale=10.0)
# Set up the camera pose (translation) and rotation (quaternion; the identity rotation here).
camera_pose = torch.Tensor([0.0, 0.0, 0.0])
camera_rotation = torch.Tensor([0.0, 0.0, 0.0, 1.0])
# Project the particles onto the virtual camera image.
image = proj(locs, camera_pose, camera_rotation)
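
To compare the rendered particle image against an observed camera image (the use case described above), the projection can be fed into an ordinary loss. observed_image below is a hypothetical placeholder standing in for a real camera image at the virtual camera's resolution.

# Compare the rendered particle image to an observed camera image.
observed_image = torch.zeros_like(image)                    # placeholder for a real camera image
loss = torch.nn.functional.mse_loss(image, observed_image)
# If locs or camera_pose were created with requires_grad=True, calling
# loss.backward() propagates gradients back to them.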

Documentation

ParticleProjection provides two functions: a constructor and forward. forward is invoked by calling the layer object itself (in the same manner as any standard PyTorch layer).