EXPLORING TRANSFER LEARNING AND CONVOLUTIONAL AUTOENCODER FOR EFFECTIVE KITCHEN UTENSILS CLASSIFICATION

Authors

  • Hashim Rosli
  • Rosziza Ali
  • Muhamad Suzuri Hitam
  • Ashanira Mat Deris
  • Noor Hafhizah Abd Rahim

DOI:

https://doi.org/10.24191/mjoc.v10i1.4533

Abstract

Effective classification of kitchen utensils is crucial for advancing assistive technologies and enhancing daily living for individuals with visual impairments. This study investigates the use of transfer learning and convolutional autoencoders to improve classification accuracy. We integrate pre-trained networks into an autoencoder framework to enhance feature extraction and image reconstruction. Models including ResNet50, DenseNet121, and their autoencoder variants were evaluated using precision, recall, accuracy, Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR). Results show that DenseNet121 outperforms ResNet50 with a classification accuracy of 72% and shorter training time. When combined with autoencoders, DenseNet121-Autoencoder achieves the highest classification accuracy of 76% and superior image reconstruction quality, as indicated by higher SSIM and PSNR scores. This improvement highlights DenseNet121’s effectiveness in handling complex, high-dimensional classification tasks and noise reduction. The study underscores the model’s potential for enhancing assistive technologies and sustainable learning by providing more accurate and reliable object recognition. This advancement supports greater independence for visually impaired users and promotes more inclusive learning environments.
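The abstract evaluates reconstruction quality with SSIM and PSNR. As an illustrative sketch only (not the authors' implementation), the two metrics can be computed from first principles with NumPy; note that the standard SSIM averages the statistic over a sliding Gaussian window, whereas this simplified version uses a single global window:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means the reconstruction
    is closer to the reference image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified global-window SSIM with the usual stabilizing constants
    C1 = (0.01*L)^2 and C2 = (0.03*L)^2; returns 1.0 for identical images."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In practice a library implementation (e.g. scikit-image's windowed SSIM) would be used for reported scores; the sketch above only shows what the two numbers measure: PSNR penalizes mean-squared pixel error, while SSIM compares local luminance, contrast, and structure.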

Published

2025-04-01

How to Cite

EXPLORING TRANSFER LEARNING AND CONVOLUTIONAL AUTOENCODER FOR EFFECTIVE KITCHEN UTENSILS CLASSIFICATION. (2025). Malaysian Journal of Computing, 10(1), 2012-2025. https://doi.org/10.24191/mjoc.v10i1.4533