From raw bytes to 3D worlds: visualizing LiDAR data with Python

Ștefan Donosă | 2025

Introduction

In my previous article, I detailed the detective work required to reverse-engineer the M1C1 rotational LiDAR. We cracked the protocol, identified the "ghost" packets that were breaking our navigation stack, and learned how to parse the raw hex data.

But a stream of numbers in a terminal isn't enough for a robot—or a human—to navigate. We need a map.

In this article, I will walk you through the exact Python implementation used to transform those raw bytes into:

  1. A high-precision 2D grid map for obstacle avoidance.
  2. A cinematic 3D visualization (5K resolution) that mimics a SLAM point cloud.

Part 1: The mathematical foundation

Before writing code, we need to handle the geometry. The LiDAR doesn't give us X and Y coordinates; it gives us polar coordinates.

Polar to Cartesian conversion

The sensor reports:

  • Angle (theta): The direction the laser is pointing (0° to 360°).
  • Distance (r): How far away the obstacle is.

To plot this on a computer screen (which uses a grid of pixels), we must convert these into Cartesian coordinates (x, y):

    x = r * cos(theta)
    y = r * sin(theta)
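In Python this is a two-line helper. Here is a minimal sketch using the textbook convention (the complete script at the end of the article uses a rotated variant of this, so that 0° points "up" on screen):

    import math

    def polar_to_cartesian_2d(theta_deg, r):
        """Convert a polar reading (degrees, meters) into Cartesian meters."""
        rad = math.radians(theta_deg)
        return r * math.cos(rad), r * math.sin(rad)

    # Example: an obstacle at 30 degrees, 2.0 m away
    # polar_to_cartesian_2d(30, 2.0) -> (1.732..., 1.000...)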

The scaling challenge

A raw coordinate like (3.5m, 1.2m) is useless for an image library like OpenCV. We need to map the physical world to pixels.

  • Map size: 800x800 pixels.
  • Center point: The robot sits at pixel (400, 400).
  • Scale: If we want to see 5 meters in every direction, we calculate a PIXELS_PER_METER factor.
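Plugging in the numbers used throughout this article, the whole scaling layer boils down to a few constants (these are the same values the full script below uses):

    MAP_SIZE = 800                                      # canvas is 800 x 800 px
    MAX_METERS = 5                                      # visible radius in meters
    PIXELS_PER_METER = MAP_SIZE / (MAX_METERS * 2)      # 800 / 10 = 80 px per meter
    CENTER_X, CENTER_Y = MAP_SIZE // 2, MAP_SIZE // 2   # robot at (400, 400)

    # An obstacle 3.5 m to the right of the robot lands at column 400 + 3.5 * 80 = 680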

Part 2: The code architecture

The script is divided into three main "engines":

  1. The decoder: Reads serial data and extracts points.
  2. The 2D mapper: Plots points on a top-down grid.
  3. The 3D renderer: Extrudes the 2D data into a 3D mesh.

1. The decoder & "ghost" filtering

This is where we apply the lessons from Article 1. We define a PACKET_TYPES dictionary so we can instantly distinguish valid scan data from status updates.

    # Packet Definitions: (Header Byte 3, Header Byte 4) -> (Total Length, Num Measurements, Has Checksum)
    PACKET_TYPES = {
        (0x01, 0x01): (12, 0, False),   # Type B: Status Packet (IGNORE THIS)
        (0x00, 0x19): (40, 14, False),  # Type A: Standard Scan
    }

The logic:
When we read a packet, we check whether its header exists in our PACKET_TYPES dictionary. If it's a Type B packet (the one causing "ghost obstacles"), we skip it entirely. If it's Type A, we extract the 14 measurements inside using bitwise operations.
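Condensed into a helper (the function name is illustrative; the full script below does this inline in its read loop), the dispatch looks like this:

    def handle_packet(packet):
        """Decode one framed packet; returns [] for status ("ghost") packets."""
        header = (packet[2], packet[3])       # header bytes 3 and 4
        if header not in PACKET_TYPES:
            return []                         # unknown type: skip it
        length, count, _ = PACKET_TYPES[header]
        if count == 0:
            return []                         # Type B status packet -- the "ghost" source
        return decode_lidar_packet(packet[:length], count, False)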

  • Key detail: The sensor provides a Start Angle and an End Angle for each packet, yet the packet contains 14 distance points. We use linear interpolation to calculate the precise angle of every intermediate point, as shown in the sketch below.
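A minimal sketch of that interpolation (the helper name is illustrative; the full decoder below performs the same computation inline):

    def interpolate_angles(start_angle, end_angle, n):
        """Spread n beam angles evenly between a packet's start and end angle."""
        diff = end_angle - start_angle
        if diff < 0:
            diff += 360.0                     # handle the 359 deg -> 1 deg wrap-around
        step = diff / (n - 1) if n > 1 else 0.0
        return [(start_angle + i * step) % 360.0 for i in range(n)]

    # interpolate_angles(358.0, 4.0, 4) -> [358.0, 0.0, 2.0, 4.0]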

2. The 2D mapper (OpenCV)

For the 2D map, I used OpenCV. It's fast and efficient for manipulating pixel arrays.

    def draw_2d_map(points):
        # Create a black canvas
        map_image = np.zeros((800, 800, 3), dtype=np.uint8)
        for angle, dist in points:
            # 1. Math: Convert to Cartesian
            x, y = polar_to_cartesian_2d(angle, dist)
            # 2. Scale: Convert meters to pixels
            px = int(CENTER_X + x * PIXELS_PER_METER)
            py = int(CENTER_Y - y * PIXELS_PER_METER)
            # 3. Draw: Plot a blue dot (OpenCV uses BGR order)
            cv2.circle(map_image, (px, py), 2, (255, 0, 0), -1)
        return map_image
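Saving or displaying the frame is then a one-liner; for instance (illustrative usage, mirroring what main() in the full script does):

    frame = draw_2d_map(points)
    cv2.imwrite("lidar_2d_map.png", frame)    # or cv2.imshow("map", frame) for a live view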

3. The 3D generator (Matplotlib & image processing)

This is the most complex part. Since the LiDAR is 2D (it scans a single flat plane), how do we get a 3D image? We use a technique called 2.5D Extrusion.

The process:

  1. Grid generation: We take the 2D points and place them on a "floor" grid.
  2. Artificial height: We assign a fixed height (ALTITUDE_OBSTACLE) to every point where an obstacle was detected.
  3. Organic smoothing (The "Secret Sauce"): Raw LiDAR data is noisy and scattered. If we plot it directly, it looks like jagged spikes. To fix this, I applied two image processing techniques to the altitude grid:
    • Dilation: I used a 7x7 kernel to "fatten" the points. This connects adjacent dots, turning scattered points into solid walls.
    • Gaussian blur: I applied a 21x21 blur filter. This smooths the sharp edges, making the terrain look like rolling hills or organic structures rather than digital noise.
    # Smoothing the terrain for a better visual
    kernel = np.ones((7, 7), np.uint8)
    grid_dilated = cv2.dilate(altitude_grid, kernel, iterations=1)
    grid_smooth = cv2.GaussianBlur(grid_dilated, (21, 21), 0)
  4. Rendering: Finally, we use Matplotlib to render this surface. I enabled antialiasing and set the resolution to 5K for a crisp, professional output.
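The "5K" output is nothing exotic: Matplotlib sizes figures in inches, so we simply pick a figsize whose product with the DPI equals 5120 x 2880 pixels, exactly as the full script does:

    import matplotlib.pyplot as plt

    dpi = 300
    fig = plt.figure(figsize=(5120 / dpi, 2880 / dpi))   # 5120 x 2880 px when saved
    plt.savefig("lidar_3d_render.png", dpi=dpi)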

The complete code

Here is the full, open-source implementation. You can run this on a Raspberry Pi or any PC with a USB-to-UART converter.

    #!/usr/bin/env python3
    # -----------------------------------------------------------------
    # Copyright (c) 2025 Ștefan Donosă
    # Licensed under the MIT License.
    # -----------------------------------------------------------------

    import serial
    import time
    import math
    import numpy as np
    import cv2
    import argparse
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D

    # --- Configuration ---
    SERIAL_PORT = "/dev/serial0"
    BAUD_RATE = 115200
    DEFAULT_SCAN_DURATION_SECONDS = 30

    # Packet Definitions based on Reverse Engineering
    PACKET_TYPES = {
        (0x01, 0x01): (12, 0, False),   # Status Packet (Ignore)
        (0x00, 0x19): (40, 14, False),  # Standard Scan
    }
    # Add variable length packets (Type D)
    for i in range(1, 25):
        if i == 0x19:
            continue
        PACKET_TYPES[(0x00, i)] = (10 + (i * 2), i, False)

    # Map Settings
    MAP_SIZE = 800
    MAX_METERS = 5
    PIXELS_PER_METER = MAP_SIZE / (MAX_METERS * 2)
    CENTER_X, CENTER_Y = MAP_SIZE // 2, MAP_SIZE // 2
    POINT_COLOR_2D = (255, 0, 0)  # BGR: blue dots on the 2D map


    def decode_lidar_packet(packet, num_measurements, has_checksum):
        """
        Extracts distance and angle data from the raw bytes.
        Uses Linear Interpolation to fill in angles between Start and End.
        """
        points = []
        try:
            # Bytes 6-7: Start Angle (in 0.01 degrees)
            start_angle = (packet[6] | (packet[7] << 8)) / 100.0
            # Bytes 8-9: End Angle
            end_angle = (packet[8] | (packet[9] << 8)) / 100.0

            # Handle the wrap-around (e.g., 359 deg -> 1 deg)
            angle_diff = end_angle - start_angle
            if angle_diff < 0:
                angle_diff += 360.0

            step = angle_diff / (num_measurements - 1) if num_measurements > 1 else 0

            for i in range(num_measurements):
                current_angle = (start_angle + i * step) % 360
                # Distance is at offset 10, 2 bytes per measurement
                idx = 10 + i * 2
                if idx + 1 >= len(packet):
                    break
                # Little-Endian decoding of distance
                dist_mm = packet[idx] | (packet[idx + 1] << 8)
                if dist_mm > 0:
                    points.append((current_angle, dist_mm / 1000.0))
        except Exception as e:
            print(f"Decoding error: {e}")
            return points
        return points


    def polar_to_cartesian(angle, dist):
        """Converts Polar (Angle, Dist) to Cartesian (X, Y)."""
        rad = math.radians(angle)
        # Note: Y is inverted for screen coordinates usually, but kept standard here
        return -dist * math.sin(rad), dist * math.cos(rad)


    def generate_3d_map(frontal_points):
        """
        Generates a 5K resolution 3D surface map using Gaussian smoothing.
        """
        print("Generating 3D Mesh...")

        # 1. Create the Grid
        altitude_grid = np.zeros((MAP_SIZE, MAP_SIZE), dtype=np.float32)
        obstacle_height = 13.5  # cm

        # 2. Populate Grid
        for angle, dist in frontal_points:
            if dist > MAX_METERS:
                continue
            x, y = polar_to_cartesian(angle, dist)
            px = int(CENTER_X + x * PIXELS_PER_METER)
            py = int(CENTER_Y - y * PIXELS_PER_METER)
            if 0 <= px < MAP_SIZE and 0 <= py < MAP_SIZE:
                altitude_grid[py, px] = obstacle_height

        # 3. Apply Image Processing (Dilation + Blur)
        # This connects individual points into solid walls
        kernel = np.ones((7, 7), np.uint8)
        dilated = cv2.dilate(altitude_grid, kernel, iterations=1)
        # This makes the walls look "organic" rather than pixelated
        smooth_grid = cv2.GaussianBlur(dilated, (21, 21), 0)

        # 4. Render with Matplotlib
        print("Rendering 5K Image...")
        dpi = 300
        fig = plt.figure(figsize=(5120 / dpi, 2880 / dpi))
        ax = fig.add_subplot(111, projection='3d')

        # Subsample grid for performance (render every 8th pixel)
        stride = 8
        X, Y = np.meshgrid(np.arange(MAP_SIZE), np.arange(MAP_SIZE))
        ax.plot_surface(
            X[::stride, ::stride],
            Y[::stride, ::stride],
            smooth_grid[::stride, ::stride],
            cmap=plt.cm.Reds,
            antialiased=True,
            alpha=0.9
        )

        # Set view angle (Front-Left)
        ax.view_init(elev=25, azim=165)
        ax.set_facecolor('#111111')
        fig.set_facecolor('#111111')
        plt.savefig("lidar_3d_render.png", dpi=dpi, facecolor='#111111')
        print("Saved: lidar_3d_render.png")


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('--duration', type=int, default=DEFAULT_SCAN_DURATION_SECONDS)
        args = parser.parse_args()

        all_points = []

        # --- Serial Collection Loop ---
        try:
            ser = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=0.1)
            print(f"Scanning for {args.duration}s...")
            start_time = time.time()
            buffer = bytearray()

            while time.time() - start_time < args.duration:
                if ser.in_waiting:
                    buffer.extend(ser.read(ser.in_waiting))

                # Find Header 0xAA 0x55
                idx = buffer.find(b'\xaa\x55')
                if idx == -1:
                    if len(buffer) > 100:
                        buffer = buffer[-100:]
                    continue
                buffer = buffer[idx:]

                if len(buffer) < 4:
                    continue

                # Check Packet Type
                header = (buffer[2], buffer[3])
                if header in PACKET_TYPES:
                    length, count, _ = PACKET_TYPES[header]
                    if len(buffer) >= length:
                        if count > 0:
                            pts = decode_lidar_packet(buffer[:length], count, False)
                            all_points.extend(pts)
                        buffer = buffer[length:]
                else:
                    buffer = buffer[2:]  # Unknown packet, skip header

            ser.close()
        except Exception as e:
            print(f"Error: {e}")
            return

        # --- Generate Outputs ---
        if all_points:
            # 1. Generate 2D Map
            img_2d = np.zeros((MAP_SIZE, MAP_SIZE, 3), dtype=np.uint8)
            for a, d in all_points:
                x, y = polar_to_cartesian(a, d)
                px = int(CENTER_X + x * PIXELS_PER_METER)
                py = int(CENTER_Y - y * PIXELS_PER_METER)
                cv2.circle(img_2d, (px, py), 2, POINT_COLOR_2D, -1)
            cv2.imwrite("lidar_2d_map.png", img_2d)
            print("Saved: lidar_2d_map.png")

            # 2. Generate 3D Map (Frontal Arc Only)
            # We filter for 315deg to 45deg to get a clean "Driver's View"
            frontal = [p for p in all_points if p[0] >= 315 or p[0] <= 45]
            if frontal:
                generate_3d_map(frontal)


    if __name__ == "__main__":
        main()

[Image: the generated 2D map]

Conclusion

By combining low-level byte parsing with high-level computer vision techniques (OpenCV and Matplotlib), we transformed a cheap sensor into a powerful visualization tool.

The transition from "ghost obstacles" to a clear, high-resolution 3D mesh validates the importance of understanding your hardware protocols. This script now serves as the foundation for the robot's local path planner.

Have you ever had to reverse engineer a hardware component? Or maybe you have a different approach to visualizing LiDAR data? Let's discuss in the comments below! 👇
