The article presents a new dataset for deep learning-based identification of surface defects on wind turbines using unmanned aerial vehicles (UAVs). The dataset consists of 1437 images of wind turbine surface defects, captured by UAVs in Northwest China and supplemented with publicly available UAV imagery. After preprocessing, the images were categorized into five defect types: dirt, leakage, erosion, cracks, and paint off, yielding 775, 717, 704, 748, and 789 images, respectively. Image blurring and brightness/contrast transformations were applied to simulate photographs taken in adverse weather conditions. The dataset is divided into training, validation, and testing sets at a ratio of 8:1:1.
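
To make the preprocessing step concrete, the following is a minimal sketch of how the bad-weather augmentation (blurring plus brightness/contrast shifts) and the 8:1:1 split could be implemented. It assumes OpenCV-style operations and an illustrative directory layout; the function names, parameter values, and folder structure are not taken from the paper.

```python
import random
from pathlib import Path

import cv2  # OpenCV for image I/O and blurring
import numpy as np


def augment_for_bad_weather(image: np.ndarray) -> list[np.ndarray]:
    """Return blurred and brightness/contrast-shifted copies of an image,
    loosely mimicking photos taken in adverse weather."""
    blurred = cv2.GaussianBlur(image, (7, 7), 0)                # simulate haze / motion blur
    darker = cv2.convertScaleAbs(image, alpha=0.7, beta=-30)    # lower contrast and brightness
    brighter = cv2.convertScaleAbs(image, alpha=1.3, beta=30)   # over-exposed variant
    return [blurred, darker, brighter]


def split_8_1_1(paths: list[Path], seed: int = 0):
    """Shuffle image paths and split them 8:1:1 into train/val/test."""
    rng = random.Random(seed)
    paths = paths.copy()
    rng.shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])


if __name__ == "__main__":
    # "defects/" with one sub-folder per class (dirt, leakage, erosion,
    # crack, paint_off) is a hypothetical layout, not the dataset's actual structure.
    all_images = sorted(Path("defects").rglob("*.jpg"))
    train, val, test = split_8_1_1(all_images)
    print(len(train), len(val), len(test))
```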

The article also describes the evaluation metrics used to measure the model's detection performance: precision, recall, model size, frame rate, and mean average precision (mAP), including mAP at an IoU threshold of 0.5 (mAP@0.5). The improved model is compared with previous defect detection methods and achieves higher detection accuracy while remaining more lightweight. Deployed on the Jetson Nano, a low-power AI development board, it delivers faster inference and a higher frame rate than other YOLO algorithms. These results demonstrate the effectiveness and reliability of the proposed model for wind turbine surface defect identification.
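
As an illustration of the evaluation side, the sketch below computes precision and recall from the IoU-based matching at the 0.5 threshold that underlies mAP@0.5, and times a detector to estimate frame rate. The greedy matching and the `detector` callable are simplifying assumptions; the full per-class mAP computation over a precision-recall curve is omitted.

```python
import time

import numpy as np


def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truth at IoU >= 0.5."""
    matched = set()
    tp = 0
    for p in pred_boxes:                      # ideally sorted by confidence, descending
        best_j, best_iou = -1, iou_thresh
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
    fp = len(pred_boxes) - tp                 # unmatched predictions
    fn = len(gt_boxes) - tp                   # missed ground-truth boxes
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return precision, recall


def measure_fps(detector, images, warmup=5):
    """Average frames per second of `detector` over a list of images."""
    for img in images[:warmup]:               # warm-up runs excluded from timing
        detector(img)
    start = time.perf_counter()
    for img in images:
        detector(img)
    return len(images) / (time.perf_counter() - start)
```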

Source: https://www.nature.com/articles/s41598-024-74798-3