Polka: A unified node for all pointcloud pre-processing/merging

Hello folks,

Point cloud pre-processing, including deskewing, merging, and filtering, traditionally requires a chain of nodes working in tandem, many of which are no longer actively maintained. Setting up these individual filtering stages often consumes excessive CPU cycles and precious DDS bandwidth.

What if you had a single, low-latency node that could voxelize, deskew, downsample, and merge scans in one pass? By passing only mission-critical features to your odometry nodes and everything downstream, you significantly reduce lag and bandwidth usage across your entire navigation or SLAM stack.

I developed Polka to solve this. It’s a drop-in replacement for multiple pre-processing nodes, and if you need to save CPU, you can run the entire pipeline on your GPU.

Latency is ~40 ms on both pipelines.

Current features:

  • Merging point clouds and laser scans

  • Input/output frame filtering

  • Configurable footprint, height, angular, and box filters

  • Voxel downsampling

  • GPU acceleration support

  • Point cloud deskewing (WIP)
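
If you want to try it out, a minimal ROS 2 launch file looks something like the sketch below (the package and executable names here are assumptions for illustration; check the repo for the actual ones):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='polka',                   # package name assumed from the repo
            executable='polka_node',           # executable name assumed; see the repo
            parameters=['config/polka.yaml'],  # your Polka params file
            output='screen',
        ),
    ])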

I’d love your feedback, and if you find the project useful, please consider leaving a star on GitHub!

GitHub - Pana1v/polka

Thanks! What speedup is achievable with the GPU? Do you have any measurements or estimates?

Regarding the CPU pipeline: if you need to save bandwidth and avoid unnecessary copies, that’s what the filters package is for.

For a simple filter chain that processes PointCloud2 messages via filters, there is (our) sensor_filters package. @solonovamax is helping us get the ROS 2 port working, and it’s around the corner! (ros2 port by solonovamax · Pull Request #7 · ctu-vras/sensor_filters · GitHub)

Does Polka do something that would not be representable as a filter? (Except GPU support, which is currently out of scope.)

Is there any reason why the link in your post leads to Google and not directly to GitHub?

Hi,

Polka was meant to be primarily a laser scan and point cloud merger, which happens to come with its own deskewing, voxel downsampling, filters, and an alternate GPU pipeline.

A YAML file would give a good idea of its capabilities, so I’m attaching one here:

polka:
  ros__parameters:
    # --------------------------------------------------
    # General
    # --------------------------------------------------

    output_frame_id: "base_link"     # all points are transformed into this frame
    output_rate: 20.0                # Hz - merged output publish rate
    source_timeout: 0.5              # seconds - drop source if no data within this window
    enable_gpu: true                 # use CUDA GPU pipeline when available (falls back to CPU)
    timestamp_strategy: "latest"     # "earliest" | "latest" | "average" | "local"
    max_source_spread_warn: 0.05     # seconds - warn if source timestamps differ by more

    # --------------------------------------------------
    # Motion Compensation (IMU-based deskewing)
    # --------------------------------------------------
    # Corrects for robot motion during LiDAR scans using IMU data.
    # Per-point deskewing uses per-point timestamps from the LiDAR driver
    # and the SE(3) exponential map motion model (angular velocity + acceleration).
    # Inter-source alignment corrects for timing offsets between different sensors.
    # Motion model inspired by rko_lio (Malladi et al., 2025, arXiv:2509.06593).

    motion_compensation:
      enabled: false
      imu_topic: ""                  # sensor_msgs/Imu topic (empty = disabled)
      max_imu_age: 0.2               # seconds - reject stale IMU data
      imu_buffer_size: 200           # ring buffer capacity (~1s at 200Hz)
      per_point_deskew: true         # per-point correction within each scan
                                     # requires per-point timestamps in PointCloud2
                                     # (auto-detects 'time', 't', 'timestamp', etc.)
      deskew_timestamp_field: "auto" # "auto" or specific field name (e.g. "time")

    # --------------------------------------------------
    # Outputs
    # --------------------------------------------------

    outputs:

      # === Merged PointCloud2 ===
      cloud:
        enabled: true
        topic: "~/merged_cloud"

        # Post-merge filters (applied to the merged cloud before publishing)
        # All filters are independent and applied in this order:
        #   range -> angular -> box -> height filter -> footprint filter -> voxel
        filters:

          # Range filter - keep points within a spherical distance from the origin
          range:
            enabled: false
            min: 0.1                 # meters - minimum distance
            max: 30.0                # meters - maximum distance

          # Angular filter - keep or exclude points by horizontal angle (yaw)
          # Angles are in degrees, measured counter-clockwise from +X axis (0 = forward)
          # Ranges are [start, end] pairs; multiple pairs can be specified
          angular:
            enabled: false
            invert: false            # false = KEEP listed ranges, true = EXCLUDE them
            ranges: [0.0, 360.0]     # degrees - pairs of [start, end]
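            # Example with hypothetical values (assuming pairs are given as one
            # flat list): keep two windows, one around the front and one around
            # the rear; a pair may wrap through 0:
            # ranges: [350.0, 10.0, 170.0, 190.0]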

          # Box filter - keep points within an axis-aligned bounding box
          box:
            enabled: false
            x_min: -20.0             # meters (behind / forward)
            x_max:  20.0
            y_min: -20.0             # meters (right / left)
            y_max:  20.0
            z_min:  -2.0             # meters (below / above)
            z_max:   5.0

        # Height filter - clip the output cloud to a z-range (in output frame)
        # Useful for removing ground and overhead points
        height_cap:
          enabled: false
          z_min: -1.0                # meters - lowest point to keep
          z_max:  3.0                # meters - highest point to keep

        # Footprint filter - remove points that hit the robot body
        # Define one or more axis-aligned exclusion boxes (in output frame).
        # Any point inside ANY box is removed. Name each box in box_names
        # and define its bounds below.
        self_filter:
          enabled: false
          box_names: ["chassis", "sensor_mast"]
          chassis:                   # main robot body
            x_min: -0.30
            x_max:  0.30
            y_min: -0.25
            y_max:  0.25
            z_min: -0.10
            z_max:  0.50
          sensor_mast:               # vertical pole holding sensors
            x_min: -0.05
            x_max:  0.05
            y_min: -0.05
            y_max:  0.05
            z_min:  0.50
            z_max:  1.20

        # Voxel downsampling - reduce point density uniformly
        # Set leaf_size for uniform voxels, or leaf_x/y/z for per-axis control
        voxel:
          enabled: false
          leaf_size: 0.05            # meters - uniform voxel size
          # leaf_x: 0.05             # per-axis overrides (take priority over leaf_size)
          # leaf_y: 0.05
          # leaf_z: 0.10
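          # With leaf_size 0.05, all points falling in the same 5 cm cube are
          # merged into one representative point, i.e. at most one point per
          # 0.05^3 = 1.25e-4 m^3 of space, whatever the sensor resolution.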

      # === Merged LaserScan ===
      # Flattens the merged 3D cloud into a 2D scan (closest point per angular bin)
      scan:
        enabled: true
        topic: "~/merged_scan"
        z_min: -0.10                 # meters - bottom of horizontal slice
        z_max:  0.50                 # meters - top of horizontal slice
        angle_min: -3.14159265       # radians - scan start angle (full 360)
        angle_max:  3.14159265       # radians - scan end angle
        angle_increment: 0.00436332  # radians per bin (0.25 degrees = 1440 bins)
        range_min: 0.10              # meters - minimum valid range
        range_max: 30.0              # meters - maximum valid range
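        # Sanity check: (angle_max - angle_min) / angle_increment
        #             = 6.28318531 / 0.00436332 ≈ 1440 bins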

    # --------------------------------------------------
    # Sources
    # --------------------------------------------------
    # Each source is an independent LiDAR input.
    #   type: "pointcloud2" (3D LiDAR) or "laserscan" (2D LiDAR)
    #
    # Per-source filters run BEFORE merging, in each sensor's own frame.
    # This lets you crop irrelevant data early (less work for the merge step).
    # Available per-source filters: range, angular, box
    # (same parameters as the output filters above)

    source_names: ["front_3d", "rear_2d"]

    sources:

      # --- 3D LiDAR (PointCloud2) ---
      front_3d:
        topic: "/front_lidar/points"
        type: "pointcloud2"
        qos_reliability: "best_effort"   # "best_effort" or "reliable"
        qos_history_depth: 1
        filters:
          # Drop self-reflections too close to the sensor
          range:
            enabled: true
            min: 0.30                # meters - ignore returns within 30 cm
            max: 25.0
          # Keep only the front 180 degrees
          angular:
            enabled: true
            invert: false
            ranges: [270.0, 90.0]    # wraps around 0 to cover front hemisphere
          # Not using box filter on this source
          box:
            enabled: false

      # --- 2D LiDAR (LaserScan, auto-converted to PointCloud2) ---
      rear_2d:
        topic: "/rear_lidar/scan"
        type: "laserscan"
        qos_reliability: "best_effort"
        qos_history_depth: 1
        filters:
          # Tighter range for the rear sensor
          range:
            enabled: true
            min: 0.20
            max: 15.0
          # Keep only the rear 180 degrees
          angular:
            enabled: true
            invert: false
            ranges: [90.0, 270.0]    # rear hemisphere
          # Example: crop to a box behind the robot (disabled here)
          box:
            enabled: false
            # x_min: -5.0
            # x_max:  0.0
            # y_min: -3.0
            # y_max:  3.0
            # z_min: -1.0
            # z_max:  2.0
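
To give an intuition for the per-point deskewing configured under motion_compensation above, here is a minimal numpy sketch of a constant-velocity correction: each point is re-expressed in the sensor pose at scan end, using its own timestamp. It is only an illustration of the idea under simplified assumptions (constant angular velocity and acceleration over the scan), not the actual implementation.

import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: rotation vector (rad) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def deskew(points, t_rel, omega, vel, acc):
    """Move each point into the sensor frame at scan end (illustrative only).

    points: (N, 3) xyz, each in the sensor frame at its capture time
    t_rel:  (N,)   capture time minus scan-end time, so <= 0
    omega:  (3,)   IMU angular velocity [rad/s], assumed constant
    vel:    (3,)   linear velocity at scan end [m/s]
    acc:    (3,)   IMU linear acceleration [m/s^2]
    """
    out = np.empty_like(points)
    for i in range(len(points)):
        dt = t_rel[i]
        R = so3_exp(omega * dt)             # sensor rotation over dt
        t = vel * dt + 0.5 * acc * dt * dt  # sensor translation over dt
        out[i] = R @ points[i] + t
    return out

# Toy check: a point 5 m ahead, captured 50 ms before scan end,
# while the robot yaws at 1 rad/s and moves forward at 1 m/s.
pts = np.array([[5.0, 0.0, 0.0]])
print(deskew(pts, np.array([-0.05]),
             omega=np.array([0.0, 0.0, 1.0]),
             vel=np.array([1.0, 0.0, 0.0]),
             acc=np.zeros(3)))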

Performance metrics (built with the Release flag):

  • GPU pipeline: ~45 ms processing time, ~3% overall CPU usage

  • CPU pipeline: ~45 ms processing time, ~12.5% overall CPU usage

PS: I’ve fixed the link :slight_smile: