M.W. Sohan Janaka, B.Tech (Hons) in Engineering, works as a Specialist in Digital Solutions at Lapland University of Applied Sciences


Lapland University of Applied Sciences (Lapland UAS) is developing an autonomous robot platform to automate maintenance operations in arctic winter conditions. The implementation is being carried out by the Robotics & Cyber-physical Systems (ARC) team at Lapland UAS in Rovaniemi. The platform is currently deployed and undergoing testing on the Snower robot, and it is designed to be configured and deployed to other robots as well.

The development of the autonomous robot platform is enabled and resourced by the Arctic Artificial Intelligence and Robotics (AI.R) project, which promotes autonomous and semi-autonomous platforms that utilize robotics and artificial intelligence (Lapland UAS 2024). The AI.R project includes several use cases for maintenance robots, particularly road maintenance robots operating in Arctic winter conditions (Lapland UAS 2024). The project also explores whether this platform can be used to implement robot prototypes for logistics automation in the warehouse industry. AI.R is a collaborative effort with the University of Lapland and is funded by the Regional Council of Lapland under a grant agreement with the European Regional Development Fund (ERDF) (Project Code: A80741).

What is an autonomous robot?

In our project, we are exploring a variety of robot types, including ground robots, water drones, robot dogs and more. An autonomous robot is a specialized robot capable of understanding its environment, making decisions and navigating from its current location to a given destination without any human intervention.

Figure 1: Testing Snower in arctic winter conditions (Photo: Tuuli Nivala, Lapland University of Applied Sciences)

In this article, we will mainly discuss Snower, since the autonomous robot platform is currently deployed on it. Snower is a ground robot built from scratch at Lapland UAS. The development of this robot began as part of the Lapland Robotics project. By the end of that project, the robot could be driven with a remote controller. Later, during the AI.R project, the robot was heavily upgraded. We equipped it with advanced sensors and a more powerful onboard computer. We also added essential safety equipment. Additionally, we replaced the electrical system with a custom-designed power distribution PCB, laying the foundation for autonomous driving capabilities.

Video 1: Testing the capabilities of Snower. To see Snower in action during our testing, check out our video playlist:
https://youtube.com/playlist?list=PLWsViRuTU0_vVEbYQ7IAlROAIAhUaG95m&si=Au6lkwk6SyMrrUuX

These videos showcase Snower’s performance across different testing scenarios, including the challenges and successes we’ve encountered during development.

What is ROS and how does it help to build robots?

ROS (Robot Operating System) is a free and open-source, fully featured robotics SDK (Software Development Kit). From my point of view, even though it is called an ”operating system”, it is not a traditional operating system like Windows or macOS. Instead, it is more like a middleware platform for robotics that sits between the hardware and your applications. Think of ROS as a communication layer that runs on a traditional operating system and allows different parts of a robot to talk to each other seamlessly.

ROS provides a structured way to organize robot software through nodes (individual programs that handle specific tasks) and topics (communication channels between nodes). This modular approach makes it much easier to implement, test and maintain complex robotic systems. This approach closely resembles event-driven microservice architecture in enterprise software development. ROS is ready for use across a wide array of robotics applications, from indoor to outdoor, home to automotive, underwater to space and consumer to industrial (ROS.org 2024). It is compatible with Linux, Windows and macOS as well as various embedded platforms via Micro-ROS (ROS.org 2024).
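To make the node and topic idea concrete, below is a minimal sketch of a ROS2 node written with rclpy, the standard ROS2 Python client library. The node name, topic name and one-second publishing rate are illustrative choices for this example, not part of Snower’s actual codebase.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class StatusPublisher(Node):
    """A node: a single program responsible for one task."""

    def __init__(self):
        super().__init__('status_publisher')
        # A topic: a named channel that any other node can subscribe to.
        self.publisher = self.create_publisher(String, 'robot_status', 10)
        self.create_timer(1.0, self.publish_status)

    def publish_status(self):
        msg = String()
        msg.data = 'all systems nominal'
        self.publisher.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())
    rclpy.shutdown()


if __name__ == '__main__':
    main()

Any other node, even one running on a different computer in the same ROS2 network, could subscribe to robot_status and receive these messages without either program knowing anything else about the other.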

Our platform is built on top of ROS2, the latest generation of ROS, which offers greater flexibility and reliability. Since ROS2 is built on DDS (Data Distribution Service), its distributed architecture allows us to run different components on different computers and have them communicate easily over the network, which is essential for our Arctic field operations.

How do robots sense the world?

Just as humans rely on their senses to understand their surroundings, robots use various sensors to perceive and interpret the environment. Snower relies on a selected set of sensors for autonomous operation:

LiDAR – Ouster OS1 LiDAR 
Cameras – ZED 2 Stereo Camera, SEEK Thermal Camera 
GPS – SparkFun ZED-F9P GNSS module and Taoglas GNSS Antenna 
Speed Sensors – Magnetic Hall sensors

All these sensors are integrated into the ROS2 ecosystem and can be easily monitored using RViz2 (the visualization tool for ROS), which provides a real-time 3D view of what the robot is sensing.

Figure 2: Real-time visualization of Snower’s sensor data in RViz2

LiDAR (Light Detection and Ranging) is a crucial sensor for autonomous robots, as it creates detailed 3D maps of the surroundings using laser pulses. Ouster LiDARs are popular for their durability in rugged, all-weather environments and their resistance to real-world shock and vibration (Ouster 2024). The OS1 provides 32 beams of vertical resolution and has a built-in IMU (Inertial Measurement Unit) (Ouster 2024). The LiDAR is connected to the main brain through a network interface via the router.

The ZED camera provides wide-angle vision and depth perception, essentially giving the robot stereo vision similar to human eyesight (Stereolabs 2024). It also has built-in IMU, barometer and magnetometer sensors (Stereolabs 2024). The SEEK Thermal Camera enables thermal vision that is essential for Snower’s Arctic operations. In snowy environments with limited visibility, low light conditions and camouflaged obstacles, the thermal camera can detect heat signatures from animals, vehicles, buildings or warm machinery that might be invisible to other sensors. Both the ZED and SEEK cameras are connected directly to the main brain via USB.

The GPS (Global Positioning System) module is paired with a microcontroller (ESP32 MicroMod) to program the behavior of the GPS system and handle network connectivity. This microcontroller can connect to the Wi-Fi router and process RTK (Real-Time Kinematic) corrections, which significantly improves positioning accuracy beyond standard GPS precision. RTK corrections work by comparing GPS signals with data from a nearby base station, allowing the system to correct for atmospheric interference and satellite orbit errors. The processed high-precision location data is then transmitted to the main brain via serial interface using Micro-ROS, a specialized version of ROS2 designed for microcontrollers.
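For illustration, the sketch below shows how a node on the main brain could subscribe to the position stream coming from the GPS microcontroller. The topic name /gps/fix and the use of the standard NavSatFix message are assumptions made for this example; the actual topic naming on Snower may differ.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import NavSatFix


class GpsMonitor(Node):
    """Listens to the RTK-corrected position stream published over Micro-ROS."""

    def __init__(self):
        super().__init__('gps_monitor')
        # '/gps/fix' is a hypothetical topic name used only for this example.
        self.create_subscription(NavSatFix, '/gps/fix', self.on_fix, 10)

    def on_fix(self, msg: NavSatFix):
        self.get_logger().info(
            f'lat={msg.latitude:.7f}, lon={msg.longitude:.7f}, alt={msg.altitude:.2f} m')


def main():
    rclpy.init()
    rclpy.spin(GpsMonitor())
    rclpy.shutdown()


if __name__ == '__main__':
    main()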

The speed sensors are processed by the main microcontroller (ESP32 WROOM), which is the same unit responsible for motor control and driving operations. This microcontroller processes the magnetic hall sensor data and converts it into meaningful speed measurements for both the left and right tracks. The hall sensors detect magnetic fields from the roller chain, generating pulses that correspond to wheel rotations. The microcontroller processes these pulses to calculate wheel speed and direction, then transmits this odometry data to the main brain via Micro-ROS.
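The conversion from pulses to speed is essentially a counting exercise. The snippet below is a simplified sketch of the idea; the pulses per revolution and the wheel circumference are hypothetical values, not Snower’s actual calibration.

PULSES_PER_REV = 12          # hall pulses per wheel revolution (hypothetical value)
WHEEL_CIRCUMFERENCE = 1.05   # metres travelled per wheel revolution (hypothetical value)


def track_speed(pulse_count: int, dt: float) -> float:
    """Convert hall pulses counted over dt seconds into a track speed in m/s."""
    revolutions = pulse_count / PULSES_PER_REV
    return revolutions * WHEEL_CIRCUMFERENCE / dt


# Example: 6 pulses in 0.5 s -> 0.5 revolutions -> about 1.05 m/s.
print(track_speed(6, 0.5))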

To truly understand the world around us, we need to observe it from several different perspectives. Therefore, combining data from multiple sensors to create a more accurate and reliable view of reality is crucial. We always try to think beyond a single sensor, because we believe the key to better autonomy lies in sensor fusion!

How do robots react?

Understanding the surroundings is only half the challenge; Snower also needs the physical capability to move and respond to what it perceives. Snower uses a differential drive system with two 600W DC motors as actuators for movement. In differential drive, the robot steers by varying the speeds of the left and right wheels independently, turning left by slowing the left wheel or speeding up the right wheel, and vice versa. As a result, differential drive vehicles can turn on the spot. The motors are connected to a dual-channel motor controller, which converts digital commands into the electrical power needed to control motor speed and direction. The main microcontroller handles the navigation commands coming from the main brain and converts them into signals that the motor controller can understand. All these components are powered by a 36V LiFePO4 battery via a custom-built power distribution PCB. This power distribution PCB was designed and integrated into Snower by Dries Nuttin from Hogeschool PXL in Belgium, who joined our lab as an intern specifically to develop this PCB.
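The mapping from a single velocity command to individual track speeds is simple geometry. The sketch below illustrates it in Python; the track width is a hypothetical value, not Snower’s measured dimension.

TRACK_WIDTH = 0.6  # distance between the tracks in metres (hypothetical value)


def cmd_vel_to_track_speeds(linear: float, angular: float) -> tuple[float, float]:
    """Convert a body velocity command (m/s, rad/s) into left/right track speeds (m/s)."""
    left = linear - angular * TRACK_WIDTH / 2.0
    right = linear + angular * TRACK_WIDTH / 2.0
    return left, right


# Turning on the spot: zero forward speed, pure rotation.
print(cmd_vel_to_track_speeds(0.0, 1.0))   # (-0.3, 0.3): tracks run in opposite directions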

How does Snower’s brain work?

Behind every intelligent decision of Snower lies a powerful computer that processes sensor data and coordinates all systems in real time. We initially used an Nvidia Jetson Orin Nano computer (6-core ARM64, 1024-core GPU, 8GB RAM) as the main brain of Snower. However, the Orin Nano was easily overwhelmed by the high computational workload required for simultaneous sensor processing, mapping and navigation. We then replaced it with the Jetson AGX Orin computer (12-core ARM64, 2048-core GPU, 32GB RAM). The AGX is powerful enough to handle Snower’s demanding workload.

All digital sensors are connected to the main brain, where ROS2 runs and integrates them into the ecosystem. The Jetson AGX’s CUDA-enabled GPU comes in handy, especially for computer vision and machine learning tasks.


Figure 3: Snower’s internal components

Navigation commands are transmitted to the main microcontroller (ESP32 WROOM), and wheel feedback data is transmitted back from it, both via Micro-ROS. Snower controls its motor controller through the main microcontroller. The microcontroller also listens to the signals coming from the magnetic hall sensors, converts them into feedback messages for the left and right wheels, and sends this data back to the main brain to generate wheel odometry. This creates a complete feedback loop for the navigation stack, allowing the robot to know how fast it is moving and adjust accordingly.
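On the main brain side, the per-track feedback can be turned back into a body velocity, which is what the wheel odometry is built from. The sketch below shows the idea; the track width is again a hypothetical value.

TRACK_WIDTH = 0.6  # distance between the tracks in metres (hypothetical value)


def track_speeds_to_twist(v_left: float, v_right: float) -> tuple[float, float]:
    """Recover body velocities from measured track speeds (the inverse of the drive command)."""
    linear = (v_right + v_left) / 2.0            # forward speed in m/s
    angular = (v_right - v_left) / TRACK_WIDTH   # yaw rate in rad/s
    return linear, angular

Integrating these velocities over time gives the wheel odometry that feeds the filtering described in the next section.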

How does Snower find its way in the snow?

Just like humans use maps and landmarks to navigate, robots need their own navigation systems to know where they are and how to reach their destination. The robot uses both odometry and maps to navigate accurately, much like you might use both a compass and a map when hiking. Odometry provides fast, real-time updates of the robot’s position by calculating movement from wheel rotations and IMU data, but it can drift over time due to wheel slippage or sensor errors. The map gives a globally accurate position reference, but it updates more slowly. By combining both, the robot moves smoothly using odometry and corrects its global position against the map created by SLAM (simultaneous localization and mapping), so it does not get lost.

Odometry Generation

Snower generates odometry using an EKF (Extended Kalman Filter), a mathematical method for combining multiple sensor readings to estimate the robot’s position and orientation more accurately than a single sensor could. The EKF takes inputs from the speed sensors (wheel odometry) and the IMUs to generate filtered odometry transformations and odometry topics in ROS2.
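To give a feel for what the filter does, here is a heavily simplified planar EKF sketch in Python: a prediction step driven by wheel-odometry velocities and a correction step using an absolute yaw measurement from an IMU. The state layout, noise values and the choice to fuse only yaw are illustrative assumptions, not the configuration actually running on Snower.

import numpy as np


def wrap(angle: float) -> float:
    """Wrap an angle to the range [-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi


class PlanarEKF:
    """Minimal EKF over the state [x, y, yaw]."""

    def __init__(self):
        self.x = np.zeros(3)                      # pose estimate
        self.P = np.eye(3) * 0.1                  # estimate covariance
        self.Q = np.diag([0.02, 0.02, 0.01])      # process noise (assumed values)
        self.R = np.array([[0.05]])               # IMU yaw measurement noise (assumed)

    def predict(self, v: float, omega: float, dt: float):
        """Propagate the pose using wheel-odometry velocities."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           wrap(yaw + omega * dt)])
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def correct_yaw(self, measured_yaw: float):
        """Correct the heading with an absolute yaw reading from the IMU."""
        H = np.array([[0.0, 0.0, 1.0]])
        innovation = np.array([wrap(measured_yaw - self.x[2])])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

In practice we do not hand-roll this filter; ROS2 ships with well-tested implementations. The sketch is only meant to show what fusing wheel odometry with IMU data means mathematically.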

Mapping and Localization

Snower can automatically create a map and localize itself in the environment using its sensors and SLAM algorithms. SLAM is like exploring an unknown building while drawing a map and keeping track of where you are at the same time. However, if we already have a pre-mapped area, we can switch to the AMCL (Adaptive Monte Carlo Localization) algorithm, which is more efficient when the environment is already known.

Autonomous Navigation

The navigation system is where all the components come together to enable autonomous movement. Using the sensor data, the Nav2 navigation stack generates local costmaps and global costmaps. Think of costmaps as maps where different values represent how ”expensive” or dangerous it is for the robot to move through different areas. Obstacles are marked as high-cost areas, while free space is low-cost.

Figure 4: Visualization of the local and global costmaps and the planned trajectory to the goal

When a user gives a goal to the robot through ROS2, the robot autonomously calculates the best trajectory according to the costmaps. The global planner finds the overall path from start to goal, while the local planner handles real-time obstacle avoidance and smooth movement. The robot can modify its trajectory dynamically if it encounters moving obstacles or people while navigating to the goal.
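Sending such a goal programmatically is straightforward with Nav2’s Python API. The sketch below assumes a standard Nav2 setup with the nav2_simple_commander package; the goal coordinates are arbitrary example values in the map frame.

import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()          # wait until the Nav2 stack is up

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 5.0               # example goal: 5 m ahead in the map frame
    goal.pose.position.y = 0.0
    goal.pose.orientation.w = 1.0            # keep the current heading

    navigator.goToPose(goal)                 # global planner plans, local planner follows
    while not navigator.isTaskComplete():
        pass                                 # progress could be read via navigator.getFeedback()

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached')
    rclpy.shutdown()


if __name__ == '__main__':
    main()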

The Nav2 stack uses several algorithms working together: a global planner for long-term path planning, a local planner for immediate obstacle avoidance and a recovery system for handling situations where the robot gets stuck. This layered approach ensures robust navigation in complex environments.

How do we stay connected with Snower?

Building an autonomous robot is one thing, but being able to monitor, control and debug it remotely is crucial for real-world operations. Snower can be accessed through Wi-Fi and the Lapland UAS private 5G network (WAN). Since Lapland UAS is an independent, non-commercial 5G service provider, we have our own 5G SIM cards and base stations in Kemi and Rovaniemi. As a result, Snower offers flexible connectivity options for different operational scenarios.

Local Area Network (LAN)

Snower can create its own wireless LAN using an onboard router (WLink G930). Once Snower is started, the router starts automatically, and the robot can then be accessed by connecting to its Wi-Fi network. When a developer connects a ROS2-enabled computer to the Snower LAN, all the topic data (sensor data, status information and control commands) is distributed to the new device automatically thanks to the distributed nature of ROS2. This is particularly useful for field operations where external network infrastructure may not be available.

Wide Area Network (WAN)

Snower’s router is also connected to the Lapland UAS private 5G network via a SIM card with a static IP address. This allows us to connect to Snower’s computer from anywhere within Lapland UAS 5G coverage. Furthermore, using the Zenoh protocol, we can configure ROS2 communication between robots and remote computers. Zenoh is a high-performance protocol that enables efficient data distribution across WAN networks, making it ideal for robotic applications requiring real-time communication beyond Wi-Fi range.

With either of the above connectivity options, developers can debug, troubleshoot, monitor and control Snower.

Future work

Future development of the autonomous robot platform will continue with several planned improvements:

  1. Full 3D Navigation Map: Currently, the robot navigation stack (Nav2) uses a 2D map derived from the 3D point cloud (from LiDAR), which works well for ground robots on flat terrain but limits the robot’s understanding of its environment since it cannot fully account for elevation changes and overhead obstacles. We plan to upgrade to full 3D mapping capability, which will enable better obstacle avoidance and path planning in complex terrain.
  2. GPS Integration: We aim to integrate the GPS topic into the navigation stack, allowing destinations to be provided as GPS coordinates. This will allow the robot to navigate to specific geographic locations rather than just relative positions on a map.
  3. IoT Dashboard: We are planning to finalize the development of an IoT dashboard and connect Snower to the dashboard via Azure cloud. This will enable remote monitoring and control of multiple robots from a central location.

Some of this future work is already planned for implementation in other projects currently ongoing at Lapland UAS. These improvements will enhance the robot’s capabilities for autonomous operation in challenging Arctic conditions, making it more suitable for real-world applications in Lapland.

Open-source collaboration

We believe in the power of collaborative development and open-source innovation. Our autonomous robot platform is available for the open-source community to explore, adapt, and enhance. We welcome you to check out our GitHub repository: https://github.com/Lapland-Robotics/AI.R-Autonomous_Robot

Whether you’re looking to adapt our platform for your own robotics projects, contribute improvements or learn from our implementation, the complete source code and documentation are available for you.

References:

Lapland UAS 2024. AI-R-Arctic: AI & Robotics. Accessed 29 July 2024 https://lapinamk.fi/hanke/ai-r-arctic-ai-robotics/.

Ouster 2024. OS1 LiDAR Sensor. Accessed 29 July 2024 https://ouster.com/products/hardware/os1-lidar-sensor.

ROS.org 2024. Why ROS? Accessed 29 July 2024 https://www.ros.org/blog/why-ros/.

Stereolabs 2024. ZED 2 Camera. Accessed 29 July 2024 https://www.stereolabs.com/en-fi/products/zed-2.