Elephas-SAM: segmentation performance of Sumatran Elephant in captivity with Segment Anything Model

Authors

  • Fortuno Ery Faqih, Institut Sains dan Teknologi Terpadu Surabaya
  • Lukman Zaman P.C.S.W, Institut Sains dan Teknologi Terpadu Surabaya

DOI:

https://doi.org/10.33795/jartel.v14i1.863

Keywords:

Artificial Intelligence, CCTV, Prediction Score, Segment Anything Model, SAM-Point Prompt, SAM-Box Prompt

Abstract

Surabaya Zoo is a conservation institution in Surabaya whose collection includes the Sumatran elephant, an endemic Indonesian species. The Indonesian government protects this animal because of its endangered status. The CCTV cameras installed in the enclosure enabled us to create Elephas-SAM, which uses the Segment Anything Model (SAM) as the initial foundation for an artificial intelligence (AI) system for monitoring animals in captivity. Our investigation differs from past research in that, instead of publicly available datasets, we use 60 exclusive images obtained from CCTV footage of an elephant enclosure at Surabaya Zoo over a 30-day period. The image set was partitioned into 30 instances captured under low-light conditions (01:00 WIB) and 30 instances captured under high-light conditions (15:00 WIB). We evaluate SAM's prediction scores using the SAM-Point Prompt and SAM-Box Prompt techniques. On average, the segmentation prediction scores for the 30 low-light images are higher with the SAM-Point Prompt (0.941) than with the SAM-Box Prompt (0.939), a difference of only 0.002. For the 30 high-light images, the SAM-Point Prompt again produces a higher average score (0.989) than the SAM-Box Prompt (0.968), a difference of 0.021. These results highlight the effectiveness of the SAM-Point Prompt over the SAM-Box Prompt for predicting segmentation scores of Sumatran elephant objects under different illumination conditions.
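For readers who want to reproduce this kind of prompt comparison, the sketch below shows how prediction scores are obtained from SAM for both prompt types, assuming Meta's open-source segment-anything package and its released ViT-H checkpoint. The frame path, click coordinates, and bounding box are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: comparing SAM point- and box-prompt prediction scores
# on a single CCTV frame, using Meta's official segment-anything package.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H SAM checkpoint (downloaded separately from the SAM repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read one enclosure frame; SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("cctv_frame.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# SAM-Point Prompt: a single foreground click on the elephant.
# (The coordinates here are hypothetical placeholders.)
masks_pt, scores_pt, _ = predictor.predict(
    point_coords=np.array([[640, 360]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=True,
)

# SAM-Box Prompt: a bounding box around the elephant, in XYXY pixels.
masks_box, scores_box, _ = predictor.predict(
    box=np.array([400, 200, 900, 600]),  # hypothetical placeholder box
    multimask_output=True,
)

# The returned scores are SAM's own predicted mask-quality (IoU) values;
# averaging the best score per image over each 30-image subset yields the
# kind of comparison reported in the abstract.
print(f"point prompt best score: {scores_pt.max():.3f}")
print(f"box prompt best score:   {scores_box.max():.3f}")
```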

Published

2024-03-29

How to Cite

[1] F. Ery Faqih and L. Z. P.C.S.W, “Elephas-SAM: segmentation performance of Sumatran Elephant in captivity with Segment Anything Model”, Jartel, vol. 14, no. 1, pp. 8–14, Mar. 2024.