ConvSP

SmoothParticleNets

Description

The ConvSP layer is the main workhorse layer of SmoothParticleNets. ConvSP stands for Smooth Particle Convolution. The ConvSP layer operates on unordered particle sets. Each particle has a feature vector associated with it, and ConvSP performs a convolution on these features, similar to how a Conv2D layer performs a convolution on the channels of a feature image. However, unlike in a standard convolution on a grid, the features associated with each particle here create a continuous vector field across space.

More formally, a set of particles represents a continuous vector field in space. That is, at every point in space it is possible to evaluate the features represented by the particle set. This is illustrated in the following diagram and equation.

Given an arbitrary query location (the red dot), the features of each nearby particle (x_j) are averaged together, weighted based on their distance to the query point using a kernel function W.
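In symbols, this field lookup can be written as follows (a reconstruction from the description above, not necessarily the exact form used in the implementation; W is the smoothing kernel, h its radius, x_j the particle locations, and f_j their feature vectors):

```latex
F(\mathbf{x}) = \frac{\sum_j W\left(\lVert \mathbf{x} - \mathbf{x}_j \rVert,\, h\right)\, \mathbf{f}_j}{\sum_j W\left(\lVert \mathbf{x} - \mathbf{x}_j \rVert,\, h\right)}
```

The denominator normalizes the weights so that the result is a weighted average of the nearby particles' features.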

This is then used to perform convolutions. Unlike in the standard convolution, here there isn’t a well-defined grid to convolve on. Instead, the ConvSP layer convolves in free space. This is illustrated in the following diagram.

In the above 2D case, the kernel used is 3x3. Given a query location (the large red dot), the kernel is placed on top of that location. Then the above field lookup equation is used to evaluate the continuous vector field at the center of each kernel cell (small red dots). The resulting values are then multiplied by kernel weights and summed in the same manner as a standard convolution. The key difference between ConvSP and a standard convolution is the use of the smoothing kernel average above to allow evaluating the kernel at any arbitrary point in space.
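The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the library's implementation: the kernel function, the normalization of the weighted average, and the treatment of empty neighborhoods are all assumptions made for the sketch.

```python
import numpy as np

def smoothing_kernel(d, radius):
    # Illustrative smoothing kernel (an assumption -- not necessarily the
    # exact kernel ConvSP implements). Falls to zero outside the radius.
    return np.clip(1.0 - d / radius, 0.0, None) ** 3

def field_lookup(q, locs, feats, radius):
    # Kernel-weighted average of nearby particle features at query point q.
    d = np.linalg.norm(locs - q, axis=1)   # distance from q to each particle
    w = smoothing_kernel(d, radius)
    if w.sum() == 0.0:
        return np.zeros(feats.shape[1])    # no particles in range
    return (w[:, None] * feats).sum(axis=0) / w.sum()

def convsp_at(q, locs, feats, weights, bias, radius, dilation):
    # One ConvSP evaluation at query q with a 3x3 kernel in 2D.
    # weights has shape (3, 3, in_channels, out_channels).
    out = bias.astype(float).copy()
    for i in range(3):
        for j in range(3):
            # Center of kernel cell (i, j), offset from q by the dilation.
            cell = q + dilation * np.array([i - 1, j - 1])
            # Evaluate the continuous field there, then apply the kernel weights.
            out += field_lookup(cell, locs, feats, radius) @ weights[i, j]
    return out

rng = np.random.default_rng(0)
locs = rng.random((20, 2))           # 20 particles in 2D
feats = rng.random((20, 4))          # 4 input channels
weights = rng.random((3, 3, 4, 5))   # 3x3 kernel, 5 output channels
out = convsp_at(np.array([0.5, 0.5]), locs, feats, weights,
                np.zeros(5), radius=0.1, dilation=0.05)
# out is the 5-channel convolution result at the query point.
```

The double loop over kernel cells is exactly the part that a standard convolution performs over grid cells; the only difference is that each cell's input value comes from the smoothed field lookup rather than a grid index.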

ConvSP is implemented as a subclass of torch.nn.Module. This allows it to be used in the same manner as any other PyTorch layer (e.g., Conv2D). ConvSP is implemented with gradients so that it can be used in a backward pass. ConvSP is implemented in native code with CUDA support, so it can be evaluated efficiently.

Example

Assume locs is a BxNxD tensor containing the locations of N D-dimensional particles across B batches and data is a BxNxC tensor containing a C-dimensional feature vector for each particle.
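Concretely, tensors with these shapes could be constructed as follows (a sketch; NumPy arrays stand in for torch tensors here so the shapes are explicit, and the specific sizes are arbitrary):

```python
import numpy as np

B, N, D, C = 2, 100, 3, 8             # batches, particles, spatial dims, channels
rng = np.random.default_rng(0)
locs = rng.random((B, N, D))          # particle locations: BxNxD
data = rng.random((B, N, C))          # per-particle features: BxNxC
```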

# Create a ConvSP layer with 5 output channels, a 3x3 kernel with dilation 0.05, and a radius of 0.1.
conv = ConvSP(in_channels=data.shape[2], out_channels=5, ndim=locs.shape[2], kernel_size=3, dilation=0.05, radius=0.1, dis_norm=False, with_params=True, kernel_fn='spiky')
# The ConvSP layer requires a ParticleCollision layer to generate the neighbor list. The radius of the neighbor list should be the maximum distance a neighbor of any kernel cell could be from the center of the kernel, which is radius + (kernel_size - 1)/2*dilation.
coll = ParticleCollision(ndim=locs.shape[2], radius=(0.1 + 0.05))
# ParticleCollision reorders locs and data.
locs, data, idxs, neighbors = coll(locs, data)
# Get the new features. We'll use the particle locations as the query locations, so we won't be passing anything for qlocs.
new_data = conv(locs, data, neighbors)
# new_data is ordered to match the reordered locs, but we might want it in the original order.
reorder = ReorderData(reverse=True)
locs, new_data = reorder(idxs, locs, new_data)

Documentation

ConvSP provides two functions: a constructor and forward. forward is invoked by calling the layer object itself, in the same manner as any standard PyTorch layer.