Artificial intelligence-controlled pole balancing using an Arduino board

José Luis Revelo
Oscar Chang
Abstract

Process automation is an important issue in today's digitized world and generally yields higher-quality, more productive operation than manual control. Balance is a natural human capacity tied to complex operations and intelligence, and balance control poses an extra challenge in automation because of the many variables that may be involved. This work presents a physical balancing pole on which a Reinforcement Learning (RL) agent explores its environment, senses its position through accelerometers, communicates wirelessly, and eventually learns by itself how to keep the pole balanced under noise disturbances. The agent applies RL principles to explore positions and corrections, seeking those that yield greater rewards in terms of pole equilibrium. Using a Q-matrix, the agent evaluates future conditions and acquires the policy information needed to maintain stability. An Arduino microcontroller handles all training and testing. Sensors, servo motors, wireless communication, and artificial intelligence merge into a system that consistently recovers equilibrium after random position changes. The results show that, through RL, an agent can learn by itself to use generic sensors and actuators and to solve balancing problems even within the limitations of a microcontroller.
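The abstract describes a tabular Q-learning loop running entirely on the microcontroller. As a rough illustration of how such a loop can fit on an Arduino, the sketch below implements a generic epsilon-greedy Q-matrix update; the state discretization, pin assignments, action set, reward shaping, and all constants are assumptions chosen for illustration, not the paper's actual parameters.

// Minimal tabular Q-learning sketch in Arduino C++. All sizes, pins, and
// the reward below are hypothetical; the paper's abstract does not give them.
#include <Servo.h>

const int NUM_STATES  = 16;   // assumed: discretized pole-angle bins
const int NUM_ACTIONS = 3;    // assumed: servo left / hold / right
const float ALPHA   = 0.1;    // learning rate
const float GAMMA   = 0.9;    // discount factor
const float EPSILON = 0.2;    // exploration probability

float Q[NUM_STATES][NUM_ACTIONS];  // the Q-matrix, held in RAM
Servo servo;

// Map a raw accelerometer reading to a discrete state bin (assumed range).
int readState() {
  int raw = analogRead(A0);   // hypothetical accelerometer pin
  return constrain(map(raw, 0, 1023, 0, NUM_STATES - 1), 0, NUM_STATES - 1);
}

// Epsilon-greedy action selection over the Q-matrix.
int chooseAction(int s) {
  if (random(1000) < EPSILON * 1000) return random(NUM_ACTIONS);
  int best = 0;
  for (int a = 1; a < NUM_ACTIONS; a++)
    if (Q[s][a] > Q[s][best]) best = a;
  return best;
}

void setup() {
  servo.attach(9);            // hypothetical servo pin
  randomSeed(analogRead(A1)); // unconnected pin as a noise source
}

void loop() {
  int s = readState();
  int a = chooseAction(s);
  servo.write(90 + (a - 1) * 20);  // assumed: corrections of +/-20 degrees
  delay(50);                       // let the pole respond

  int s2 = readState();
  // Assumed reward: highest when the pole sits in the central (upright) bins.
  float r = -abs(s2 - NUM_STATES / 2);

  // Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
  float maxQ = Q[s2][0];
  for (int a2 = 1; a2 < NUM_ACTIONS; a2++)
    if (Q[s2][a2] > maxQ) maxQ = Q[s2][a2];
  Q[s][a] += ALPHA * (r + GAMMA * maxQ - Q[s][a]);
}

With these assumed sizes, the Q-matrix occupies only 192 bytes, which is what makes on-board training plausible within the roughly 2 KB of RAM available on a typical Arduino Uno.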

How to Cite
Revelo Orellana, J. L., & Chang, O. (2021). Artificial intelligence-controlled pole balancing using an Arduino board. Revista Tecnológica - ESPOL, 33(2), 189-204. https://doi.org/10.37815/rte.v33n2.852
Author Biography

Oscar Chang

I am an advocate of Artificial Intelligence and firmly believe we humans will soon be dealing, in terms of advanced software programming, with treacherous characters like HAL or faithful partners like "R2-D2". The era of clever machines has just begun; I feel like part of it, and I am trying hard to leave traces of humanity in them. My specialties are Artificial Neural Networks (ANN), Genetic Algorithms, and Deep Learning. I have introduced these topics into the academic programs of several universities where I have lectured as a professor, including IVIC, USB, and UCV in Caracas; UPM in Madrid; and ESPE in Ecuador. I have also participated in major offshore oil and gas projects and amusement theme park developments.
