ABUAD Journal of Engineering Research and Development
https://www.ajol.info/index.php/abuadjerd

ABUAD Journal of Engineering Research and Development (AJERD) is an international peer-reviewed open access journal domiciled in the College of Engineering of Afe Babalola University, Ado-Ekiti (ABUAD), Ekiti State, Nigeria. The aim of AJERD is to promote the discovery, advancement, and dissemination of innovative and novel original research and development results in different branches of engineering to the wider public. AJERD provides a platform for fast publication of research and development outputs. All papers are freely available online with a permanent web identifier, and abstracts are submitted for indexing in major academic databases. The journal accepts original research contributions that have not been published or submitted for publication elsewhere. The scope of AJERD includes, but is not limited to, the following branches of engineering: Agricultural Engineering, Biomedical Engineering, Bioresources Engineering, Chemical Engineering, Civil Engineering, Computer Engineering, Electrical Engineering, Electronics Engineering, Environmental Engineering, Mechanical Engineering, Mechatronics Engineering, Petroleum Engineering, and Systems Engineering.

The journal's website is available at http://journals.abuad.edu.ng/index.php/ajerd.

College of Engineering, Afe Babalola University, Ado-Ekiti, Ekiti State, Nigeria
en-US
ABUAD Journal of Engineering Research and Development 2756-6811

All published articles are available on the internet to all users at no cost. Upon article publication, everyone is free to copy and redistribute the material in any medium or format for any purpose, provided that proper citation of the original publication is given. Material may not be used for commercial purposes, and if you remix, transform, or build upon the material, your contributions must be distributed under the same license as the original. All articles in all journals are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, the modified material must be licensed under identical terms. CC BY-NC-SA includes the following elements:

BY: Credit must be given to the creator.
NC: Only noncommercial uses of the work are permitted.
SA: Adaptations must be shared under the same terms.

Robotic Assistant for Object Recognition Using Convolutional Neural Network
https://www.ajol.info/index.php/abuadjerd/article/view/264674

Visually impaired persons encounter challenges that include access to information, environmental navigation, and obstacle detection. Navigating daily life becomes difficult when searching for misplaced personal items and trying to stay aware of objects in the environment to avoid collisions.
This necessitates automated solutions to facilitate object recognition. While traditional aids such as guide dogs, white canes, and Braille have offered valuable solutions, recent technological solutions, including smartphone-based recognition systems and portable cameras, have encountered limitations such as cultural specificity, device specificity, and a lack of system autonomy. This study addressed these limitations by introducing a Convolutional Neural Network (CNN) object recognition system integrated into a mobile robot designed to function as a robotic assistant for visually impaired persons. The robotic assistant is capable of moving around a confined environment. It incorporates a Raspberry Pi with a camera programmed to recognize three objects: mobile phones, mice, and chairs. A Convolutional Neural Network model was trained for object recognition, with 30% of the images used for testing. The training was conducted using the YOLOv3 model in Google Colab. Quantitative evaluation of the recognition system yielded a precision of 79%, recall of 96%, and accuracy of 80% for the robotic assistant. The system also includes a Graphical User Interface through which users can easily control the movement and speed of the robotic assistant. The developed robotic assistant significantly enhances autonomy and object recognition, promising substantial benefits in the daily navigation of visually impaired individuals.

Sunday Oluyele, Ibrahim Adeyanju, Adedayo Sobowale
Copyright (c) 2024 https://creativecommons.org/licenses/by-nc-sa/4.0
2024-02-12
Vol. 7, No. 1, pp. 1-13
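For readers curious how the kind of pipeline described in the abstract can be wired together, the sketch below shows one common way to run a trained YOLOv3 model on a Raspberry Pi camera frame using OpenCV's DNN module. It is an illustration only, not the paper's implementation: the file names (yolov3_custom.cfg, yolov3_custom.weights), thresholds, and camera index are assumptions.

```python
# Minimal sketch: YOLOv3-style detection of the three target objects on one camera frame.
# File names and thresholds are hypothetical; they are not taken from the paper.
import cv2
import numpy as np

CLASSES = ["mobile phone", "mouse", "chair"]   # the three objects named in the abstract
CONF_THRESHOLD = 0.5
NMS_THRESHOLD = 0.4

# Load a trained Darknet/YOLOv3 network (hypothetical file names).
net = cv2.dnn.readNetFromDarknet("yolov3_custom.cfg", "yolov3_custom.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect(frame):
    """Return (class_name, confidence, box) tuples for one BGR frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_layers)

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:                 # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < CONF_THRESHOLD:
                continue
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    return [(CLASSES[class_ids[i]], confidences[i], boxes[i]) for i in np.array(keep).flatten()]

# Example: grab one frame from the Raspberry Pi camera (index 0) and print detections.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    for name, conf, box in detect(frame):
        print(f"{name}: {conf:.2f} at {box}")
```

In a robot such as the one described, a loop of this kind would typically run continuously and feed detections to the navigation and user-feedback components; those parts are outside the scope of this sketch.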