Intelligent Head-bot, towards the Development of an AI Based Cognitive Platform

  • Ramisha Fariha Baki, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • M. Akhtaruzzaman, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • Tahsin Ahmed Refat, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • Mouneeta Rahman, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • Md Abdur Razzak, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • Md Mahfuzul Karim Majumder, Department of Computer Science and Engineering, Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
  • Md Adnanul Islam, Department of Human-Centred Computing (HCC), Action Lab, Monash University (MU), Australia
  • Md Meftahul Ferdaus, Department of Computer Science, University of New Orleans (UNO), USA
  • Muhammad Towfiqur Rahman, Department of Computer Science and Engineering, University of Asia Pacific (UAP), Dhaka, Bangladesh
  • Quadri Noorulhasan Naveed, College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
Keywords: Cognitive humanoid head, Head-bot, Social robot, AI chat-bot, Machine intelligence


A cognitive humanoid head is an AI-enabled head-bot platform that mimics human cognitive abilities such as perception, thinking, learning, and decision-making. The platform can interact with humans through natural language processing and recognize individuals, allowing seamless communication between the two parties. No such cognitive platform has yet been introduced in Bangladesh, leaving an open field for contributions to machine intelligence. This study aims to develop an AI-based humanoid head (head-bot) capable of imitating a range of expressions, recognizing individuals, and interacting with visitors through general conversation. The head-bot skeleton is built from a number of hexagonal blocks of PVC sheet to mimic a human-head-like structure on which an LCD, camera, microphones, and speaker are mounted. Two separate machine learning models are designed: one for face detection and recognition, and one for a voice-enabled chat-bot. The head-bot platform incorporates 2-DoF neck movements for various head gestures and face tracking. The artificial neural network models achieve accuracies of 95.05% for face detection and recognition and 99.0% for speech recognition and response generation. The overall results and system performance suggest that the proposed system has strong potential for real-life applications such as entertainment, guidance, conversation, interactive reception, personal companionship, and medical assistance.
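A minimal sketch of the face-tracking idea described in the abstract: converting a detected face's position in the camera frame into pan/tilt corrections for the 2-DoF neck. The frame size, field of view, and function name here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: parameters below are assumptions, not the paper's values.
FRAME_W, FRAME_H = 640, 480   # assumed camera resolution (pixels)
FOV_H, FOV_V = 60.0, 45.0     # assumed horizontal/vertical field of view (degrees)

def face_tracking_correction(face_cx, face_cy):
    """Return (pan, tilt) angle corrections in degrees that would
    re-center a face detected at pixel (face_cx, face_cy)."""
    # Offset of the face centre from the image centre, as a fraction of the frame.
    dx = (face_cx - FRAME_W / 2) / FRAME_W
    dy = (face_cy - FRAME_H / 2) / FRAME_H
    # Scale the fractional offset by the field of view to obtain an angle.
    pan = dx * FOV_H       # positive: turn the head right
    tilt = -dy * FOV_V     # positive: tilt the head up (image y grows downward)
    return pan, tilt
```

In a full system, the face centre would come from the face-detection model on each camera frame, and the corrections would drive the two neck servos; a gain below 1 is typically applied so the head converges on the face smoothly rather than oscillating.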





How to Cite
Baki, R. F., Akhtaruzzaman, M., Refat, T. A., Rahman, M., Razzak, M. A., Majumder, M. M. K., Islam, M. A., Ferdaus, M. M., Rahman, M. T., & Naveed, Q. N. (2023). Intelligent Head-bot, towards the Development of an AI Based Cognitive Platform. MIST INTERNATIONAL JOURNAL OF SCIENCE AND TECHNOLOGY, 11(2), 01-14.