The study uses point cloud data, captured by multiple Azure Kinect RGB-D cameras, to create a detailed and realistic virtual representation of the operational environment. The point cloud data is integrated into VR through the following steps:
Point Cloud Mapping with External Calibration:
- The study proposes deriving normal vectors from the point clouds to estimate the extrinsic parameters (external calibration), eliminating the need for physical markers or additional reference information. This approach allows a simpler yet accurate mapping of the physical environment into the VR workspace.
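The paper's exact procedure is not reproduced here; as a rough illustration of the idea, the sketch below estimates a relative rotation between two cameras by aligning the dominant plane normals of their point clouds. It assumes Open3D is available, and the `.ply` file names are hypothetical.

```python
# Illustrative sketch, not the paper's exact procedure: estimate a relative
# rotation between two RGB-D cameras by aligning dominant plane normals.
import numpy as np
import open3d as o3d

def dominant_normal(cloud: o3d.geometry.PointCloud) -> np.ndarray:
    """Fit the largest plane (e.g., floor or table) and return its unit normal.
    Note: RANSAC leaves the normal's sign ambiguous; in practice it should be
    disambiguated, e.g., flipped to point toward the camera."""
    plane, _ = cloud.segment_plane(distance_threshold=0.01,
                                   ransac_n=3, num_iterations=1000)
    n = np.asarray(plane[:3])
    return n / np.linalg.norm(n)

def rotation_between(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):  # antiparallel: rotate 180 deg about an axis perpendicular to a
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

cloud_a = o3d.io.read_point_cloud("camera_a.ply")  # hypothetical captures
cloud_b = o3d.io.read_point_cloud("camera_b.ply")
R = rotation_between(dominant_normal(cloud_a), dominant_normal(cloud_b))
print("estimated relative rotation:\n", R)
```

A single normal constrains only two of the three rotational degrees of freedom and none of the translation, which fits with the pipeline also applying an iterative refinement step such as ICP, shown later in this section.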
Data Integration into VR:
- The collected point cloud data is mapped into the virtual environment using the Unity game engine. The study employs BAD SLAM (Bundle Adjusted Direct RGB-D SLAM) to refine mapping accuracy; a background sketch of depth-to-point-cloud conversion follows the figure below.

Figure: Point cloud scanned via BAD SLAM
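For background, converting each RGB-D frame into a point cloud follows the standard pinhole unprojection. A minimal sketch, using placeholder intrinsics rather than the Azure Kinect's actual calibration:

```python
# Standard pinhole unprojection from a depth image to 3-D camera-space points.
# Intrinsics below are placeholders, not actual Azure Kinect calibration.
import numpy as np

fx, fy = 600.0, 600.0        # focal lengths in pixels (placeholder values)
cx, cy = 320.0, 288.0        # principal point (placeholder values)

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Convert an HxW depth image (meters) to an Nx3 array of points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

# Example: a synthetic flat depth image one meter from the camera
points = depth_to_points(np.full((576, 640), 1.0))
print(points.shape)
```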
- The VR environment uses a global coordinate system to maintain consistency between the virtual and physical setups. The research demonstrates how iterative registration methods such as ICP (Iterative Closest Point) align the multiple camera feeds accurately within the virtual space; a minimal ICP example follows the figure below.
Figure: ICP results
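A minimal example of this alignment step, using Open3D's point-to-point ICP with illustrative parameter values (the initial guess could come from the normal-based calibration above); the file names are hypothetical:

```python
# Illustrative ICP refinement between two camera views (Open3D; parameter
# values are illustrative, not taken from the paper).
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("camera_a.ply")   # hypothetical captures
target = o3d.io.read_point_cloud("camera_b.ply")

init = np.eye(4)   # e.g., rotation from the normal-based calibration above
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,              # 5 cm search radius
    init=init,
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())

print("fitness:", result.fitness)                  # overlap ratio in [0, 1]
print("RMSE:", result.inlier_rmse)
source.transform(result.transformation)            # bring A into B's frame
```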
Experimental Setup
For experimental validation, the research used two Azure Kinect RGB-D cameras and a Meta Quest Pro VR headset. The point cloud data was processed and visualized in Unity, achieving a rendering rate of approximately 16-17 FPS for computationally intensive operations; a generic streaming sketch follows the figure below.

Figure: Point cloud streaming
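The paper's Unity streaming pipeline is not reproduced here; the sketch below only illustrates the general shape of per-frame point cloud streaming (serialization plus an FPS counter). The frame format and point count are invented for illustration.

```python
# Generic sketch of per-frame point cloud serialization with an FPS counter.
# Not the paper's Unity pipeline; network transport is omitted.
import time
import numpy as np

def pack_frame(points: np.ndarray, colors: np.ndarray) -> bytes:
    """Serialize Nx3 float32 positions and Nx3 uint8 colors, length-prefixed."""
    payload = (points.astype(np.float32).tobytes()
               + colors.astype(np.uint8).tobytes())
    return len(payload).to_bytes(4, "little") + payload

n = 200_000                                   # points per frame (illustrative)
points = np.random.rand(n, 3).astype(np.float32)
colors = (np.random.rand(n, 3) * 255).astype(np.uint8)

start, frames = time.perf_counter(), 0
while time.perf_counter() - start < 2.0:      # run for ~2 seconds
    frame = pack_frame(points, colors)        # in practice: socket.sendall(frame)
    frames += 1
print(f"~{frames / (time.perf_counter() - start):.1f} FPS, "
      f"{len(frame) / 1e6:.1f} MB/frame")
```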
Key Innovations and Results
1. Improved Visualization Quality:
The study focused on enhancing visualization quality within VR by addressing common challenges such as data occlusion and mesh rendering. It explored approaches such as adjusting the size and shape of rendered points and generating mesh structures for a clearer representation of objects; see the mesh-reconstruction sketch after this list.
2. Remote Manipulation Experiment:
The study conducted a peg-in-hole task using BTSM and the Meta Quest Pro VR headset. The experiment showed that the proposed visualization techniques improved task completion efficiency and accuracy. However, occlusion caused by certain end-effector positions was identified as a challenge, calling for further optimization of camera placement.
3. Enhanced Operator Experience:
By leveraging VR and point cloud data, the study demonstrated a significant improvement in the operator’s spatial awareness and task precision compared to traditional CCTV-based systems. The research emphasized the importance of accurate camera placement and proposed future studies to optimize configurations based on collected experimental data.
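As one illustrative way to realize the mesh-generation idea mentioned above (not necessarily the paper's method), Open3D's Poisson surface reconstruction turns a point cloud with normals into a triangle mesh; the file names and parameter values are assumptions:

```python
# Illustrative mesh generation from a point cloud via Poisson surface
# reconstruction; a stand-in for the paper's mesh approach, not a copy of it.
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("scene.ply")       # hypothetical capture
cloud.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    cloud, depth=9)                                # depth controls detail level

# Trim low-density vertices, which correspond to poorly supported surface.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.02))
o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)
```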
Conclusions
The research successfully demonstrated the feasibility of using point clouds in VR to enhance remote robotic control in nuclear facilities. It highlights the potential of this approach to transform the operator experience by providing a more immersive and intuitive control environment. Future research directions include refining visualization techniques, optimizing camera setups, and improving system performance through extensive real-world testing.
