Artificial Intelligence – Rapid Component Identification Using AI-trained Algorithms to
Rapidly Build a Virtual Vessel Component List
The ability to efficiently and accurately survey and inventory offshore facilities is critical for owners and operators. Proper accounting of assets and equipment can be accomplished by analyzing images and videos. A project involving Texas A&M and ABS seeks to use artificial intelligence (AI) for rapid component detection and labeling.
Experimental data sets are being analyzed by Dr. Paul Koola and his Texas A&M team, which includes Harsh Mattoo and Madhulika Dey. They are working with Subrat Nanda, ABS' Chief Data Scientist, on case development and refinement. The team is also developing the computational process and testing and quantifying its accuracy and level of detail. The work is being conducted under a Texas A&M-ABS research agreement covering the Laboratory for Ocean Innovation.
The research effort used 360-degree fisheye camera video footage of Kennedy Ship assets to extract image frames for classification. Each frame was labeled according to its most prominent object of interest, forming the basis of an initial image classification phase.
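For context, frame extraction from walkthrough video can be done with a short script. The following is a minimal sketch using OpenCV; the file names, output directory, and sampling interval are illustrative assumptions, not details from the project.

```python
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every Nth frame of the fisheye walkthrough video as a JPEG for labeling."""
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Example: sample roughly one frame per second from 30 fps footage (paths are hypothetical)
# extract_frames("walkthrough_360.mp4", "frames", every_n=30)
```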
These image frames were then methodically annotated with bounding boxes highlighting each object of interest, a process known as object detection. The annotation effort initially covered approximately 12,000 image frames across more than 30 object categories. Because annotation is time-intensive and demands precision, the dataset was processed carefully; the final set contains more than 12,000 annotated frames spanning over 40 unique object classes.
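To make the annotation step concrete, the sketch below converts pixel-coordinate bounding boxes, as exported by a typical annotation tool, into the YOLO label format that Ultralytics trainers expect (one text file per image, one normalized box per line). The record layout, class indices, and file names are assumptions for illustration, not the project's actual export format.

```python
from pathlib import Path

def to_yolo_line(class_id: int, x_min: float, y_min: float, x_max: float, y_max: float,
                 img_w: int, img_h: int) -> str:
    # YOLO labels store the class id plus box center/size normalized to [0, 1]
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

def write_label_file(label_path: Path, boxes, img_w: int, img_h: int) -> None:
    # One .txt file per image frame, one line per annotated object
    lines = [to_yolo_line(*box, img_w, img_h) for box in boxes]
    label_path.write_text("\n".join(lines) + "\n")

# Example: two illustrative boxes on a 1920x1080 frame
write_label_file(Path("frame_000042.txt"),
                 [(3, 250, 400, 820, 760), (17, 1100, 200, 1250, 900)],
                 img_w=1920, img_h=1080)
```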
For object detection, we are using YOLOv11 (You Only Look Once) from Ultralytics. YOLOv11 was the latest official release at the time of our testing and represents a significant advancement in real-time object detection. It is known for its speed and accuracy, and its streamlined architecture makes it adaptable across hardware platforms, from edge devices to cloud-based APIs. (Note: YOLOv12 has since been released.)
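Fine-tuning and running YOLOv11 through the Ultralytics Python API follows the library's standard pattern. The sketch below is a minimal example; the dataset YAML name, checkpoint size, and training parameters are assumptions and would be tuned to the project's hardware and data.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv11 checkpoint (nano size chosen here for illustration)
model = YOLO("yolo11n.pt")

# Fine-tune on the annotated vessel-component dataset described by a YOLO-format data YAML
model.train(data="vessel_components.yaml", epochs=100, imgsz=640)

# Detect components in a new frame extracted from the 360-degree footage
results = model.predict("frame_000042.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(cls_name, float(box.conf), box.xyxy.tolist())
```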
In general, larger objects are detected well, with good precision and recall. These results demonstrate the feasibility of applying AI to inventory management, and the effort continues. Objects with large aspect ratios, such as pipes and shafts, perform poorly: it is nearly impossible to fit rectangular bounding boxes to such objects in a 360-degree camera image.
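Precision and recall per object class can be quantified with the Ultralytics validation routine; weak categories such as long, thin pipes and shafts show up directly in the per-class scores. The checkpoint path and dataset YAML below are assumed from a default training run, not taken from the project.

```python
from ultralytics import YOLO

# Evaluate the fine-tuned model on the held-out validation split
model = YOLO("runs/detect/train/weights/best.pt")
metrics = model.val(data="vessel_components.yaml")

# Mean precision, mean recall, and mAP@0.5 across all classes
print("precision:", metrics.box.mp, "recall:", metrics.box.mr, "mAP50:", metrics.box.map50)

# Per-class mAP@0.5:0.95 highlights categories that need more or better annotations
for class_idx, ap in enumerate(metrics.box.maps):
    print(model.names[class_idx], float(ap))
```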
What has been demonstrated is that object detection, even on 360-degree fisheye camera images, is feasible when data annotation is carried out carefully.
