RDK-X5 Review: An AI Board for Robotics?


Introduction

If you follow my YouTube channel, you know I like testing edge AI devices, especially those with NPUs. While CPUs and GPUs have their place, NPUs offer a distinct mix of performance and efficiency for running deep learning models on-device.

Today we're looking at something new: the RDK-X5 from D-Robotics, a Chinese company that previously released the RDK-X3. I skipped reviewing the X3 because of its limited documentation, but the X5 caught my interest: it's inexpensive, has noticeably better support, and is positioned as a robotics-focused AI board.

I bought it myself (no sponsorship here) and put it through my usual tests.

First Impressions & Setup

Out of the box, the experience is pretty standard for a dev board. I installed the system image provided by the vendor (which came with ROS, Jupyter, and other tools preinstalled), but quickly discovered something odd: no NPU inference libraries were included.

On top of that, the first boot experience was unusual: the board ships with a static IP (192.168.127.10) and behaves like a router. If you plug it into a LAN expecting it to show up as a new device, it won't; you need to connect via HDMI or directly to a computer to access it.

Lesson learned: for the cleanest start, grab the official image from the D-Robotics site (though there's no guarantee it won't have the same issues).

Specifications at a Glance

According to official sources, the RDK-X5 features:

  • SoC: Sunrise X5
  • 8 × Arm Cortex-A55 @ 1.5 GHz
  • 10 TOPS NPU, 32 GFLOPS GPU
  • 4K video encode/decode
  • RAM: 4 GB or 8 GB LPDDR4
  • Storage: micro-SD slot (no onboard eMMC)

Connectivity:

  • Gigabit Ethernet (PoE)
  • Wi-Fi 6, Bluetooth 5.4

I/O:

  • Dual MIPI-CSI (supports stereo cameras)
  • MIPI-DSI, HDMI 1080p60
  • 4 × USB 3.0, USB-C, micro-USB debug
  • 40-pin GPIO header, CAN FD, audio jack

Software:

  • Ubuntu 22.04 with ROS, TensorFlow, PyTorch; ~200 open-source algorithms available via NodeHub

Documentation: Useful but Inconsistent

One of the main obstacles: the official documentation and the GitHub ModelZoo examples don't always match. Often the GitHub code works while the docs' instructions do not, which suggests some disconnect between the development and documentation teams.

On top of that, there are several independent repositories, and the information in them often differs from the guides.

Inference Capabilities

The biggest limitation: the NPU supports only INT8 inference. There is no FP16 or FP32 execution.

That said, D-Robotics provides a number of tools to improve quantization results, including post-training quantization (PTQ) and quantization-aware training (QAT) via PyTorch. The PTQ tooling is surprisingly configurable, comparable to what higher-end vendors like Hailo offer.
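To make the idea concrete, here's a minimal PTQ sketch in plain PyTorch (FX graph mode). This is not the D-Robotics toolchain's own API; it only illustrates the generic calibrate-then-quantize flow that such PTQ tooling builds on, and the model and random calibration data are placeholders.

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torchvision.models import resnet18

# Placeholder model; the point is the calibrate-then-quantize flow,
# not the vendor's converter itself.
model = resnet18(weights="IMAGENET1K_V1").eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# Insert observers that record activation ranges (INT8 "fbgemm" backend).
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)

# Calibrate on a small, representative set (random tensors as a stand-in).
with torch.no_grad():
    for _ in range(32):
        prepared(torch.randn(1, 3, 224, 224))

# Fold the observed ranges into INT8 weights and activation scales.
int8_model = convert_fx(prepared)
print(int8_model.graph)  # inspect where quant/dequant nodes were inserted
```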

The export process relies mainly on ONNX (opset 11 or earlier), with some support for Caffe and TensorFlow models. The pipeline appears to internally convert ONNX to a PyTorch-like intermediate format before generating the final binary.
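For reference, pinning the opset during export looks like this; the model and input shape are just torchvision placeholders:

```python
import torch
from torchvision.models import resnet18

# Placeholder model; the key detail is keeping the opset at 11 or lower.
model = resnet18(weights="IMAGENET1K_V1").eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18_opset11.onnx",
    opset_version=11,          # older opset for the board's converter
    input_names=["input"],
    output_names=["logits"],
)
```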

One neat feature: unsupported layers can automatically fall back to the CPU. This is uncommon among budget NPU boards and makes deployment more forgiving.

Development Workflow

D-Robotics provides two main Docker images:

  • CPU-only (~1.5 GB): enough for ONNX exports and basic model preparation.
  • GPU-enabled (~12 GB): needed for advanced PyTorch workflows and the integrated development tools.

The GPU image is the more important one: it includes the libraries for direct PyTorch model export and debugging. However, examples are scarce, and my own attempts to run QAT pipelines from scratch failed due to layer and quantization compatibility issues.

Performance Benchmarks

Here's what I measured on my RDK-X5:

ResNet-18 (224 × 224)

  • Total frame latency: 1759.11 ms
  • Average latency: 8.80 ms
  • Frame rate: ≈ 449 FPS

INT8 YOLOv5 (640 × 640)

  • Forward pass: 33.99 ms
  • Pre-processing: 16.48 ms
  • C → NumPy conversion: 16.99 ms
  • Post-processing: ~30 ms

EfficientNet-B2 (224 × 224)

  • Total frame latency: 2094.880615 ms
  • Average latency: 10.474403 ms
  • Frame rate: 377.831850 FPS

ResNeXt-50 (224 × 224)

  • Total frame latency: 4155.889648 ms
  • Average latency: 20.779448 ms
  • Frame rate: 190.626331 FPS

These results are faster than most sub-$100 AI boards I've tested, which is impressive at this price point.
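For context, these are plain wall-clock measurements averaged over a run of frames. A minimal timing harness along these lines shows the method; the `infer` callable is a placeholder for the board's inference API, and a pipelined or multi-core setup can report higher FPS than the single-threaded 1000/latency computed here.

```python
import time
import numpy as np

def benchmark(infer, frames, warmup=10):
    """Time a single-threaded inference loop over preprocessed frames."""
    frames = list(frames)
    # Warm-up passes so clocks and caches settle before timing.
    for f in frames[:warmup]:
        infer(f)

    latencies_ms = []
    for f in frames:
        t0 = time.perf_counter()
        infer(f)
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)

    avg = sum(latencies_ms) / len(latencies_ms)
    print(f"Total frame latency: {sum(latencies_ms):.2f} ms")
    print(f"Average latency:     {avg:.2f} ms")
    print(f"Frame rate (single-threaded): {1000.0 / avg:.1f} FPS")

if __name__ == "__main__":
    # Dummy inputs and a dummy "model" standing in for NPU inference.
    dummy_frames = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(200)]
    benchmark(lambda x: np.tanh(x).sum(), dummy_frames)
```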

You can check out the other boards I've compared below.

Stereo Depth: The Untapped Potential

On paper, the RDK-X5 supports stereo perception, an important feature for robotics. The website even shows stereo depth examples.

In practice? There's only the ROS 2 example with StereoNet (one of the worst depth estimation networks).

Yes, they've tried to optimise it. But its limitations show.
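If you want to poke at the stereo output yourself, the usual approach is to subscribe to whatever depth or disparity topic the ROS 2 sample publishes. Here's a minimal rclpy sketch; the topic name is an assumption, so check `ros2 topic list` on your board:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

# Topic name is a placeholder; check `ros2 topic list` for the actual
# topic the StereoNet sample publishes on your image.
DEPTH_TOPIC = "/stereonet_node/depth"

class DepthListener(Node):
    def __init__(self):
        super().__init__("depth_listener")
        self.sub = self.create_subscription(Image, DEPTH_TOPIC, self.on_depth, 10)

    def on_depth(self, msg: Image):
        # Log the frame geometry and encoding to confirm data is flowing.
        self.get_logger().info(
            f"depth frame {msg.width}x{msg.height}, encoding={msg.encoding}"
        )

def main():
    rclpy.init()
    node = DepthListener()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```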

Pros & Cons

✅ Pros:

  • Strong performance for the price
  • Rich I/O for robotics (stereo cameras, CAN FD, GPIO)
  • CPU fallback for unsupported layers
  • Active, growing documentation base
  • Modular toolchain with Docker images

⚠ Cons:

  • INT8-only inference
  • LLM support is marketing hype: no NPU acceleration
  • Export tooling is still rough, with limited examples
  • Some initial setup quirks (static IP behaviour)

Final Thoughts

The RDK-X5 is a solid budget AI board with real robotics potential. Its INT8-only constraint and immature tooling hold it back, but the mix of performance, price, and stereo-capable hardware makes it a serious contender in the edge AI space.

If D-Robotics improves PyTorch integration, updates ONNX support, and delivers on stereo perception, the X5 could become one of the best-value AI boards for robotics in 2025.

Do you have any questions left? Let's discuss!

And, of course, if you want to follow my posts about computer vision boards, subscribe to my LinkedIn, YouTube, or Twitter. If you have a question, ask it in the comments or via email ([email protected]), or we can consult on your case.

