The Automation of Large-Scale Painting
Justin Myalil
Thomas Jefferson High School for Science and Technology
This article was originally included in the 2019 print publication of the Teknos Science Journal.
As I stood on the grass field on a blistering May day, waiting for the other players to stroll onto the practice field, Coach Rechin approached me and asked if I could help him with a task. He handed me a stake connected by a long rope to another stake and asked me to drive it upright into the ground at a marked spot. The rope stretched across the field, perpendicular to the already-painted sidelines. Coach Rechin then took the lawnmower-shaped paint-dispensing machine and pushed it back and forth along the rope. Painting that single yard line took two people nearly ten minutes in 90-plus-degree weather. I went home that day thinking about all of the field workers who must perform this extensive labor far more frequently. It made me wonder whether there was any way to make this job more efficient. It was then that I decided to develop an autonomous, mobile painting robot with my research partner for our senior research project in the Automation and Robotics lab.
One of the most inefficient forms of labor is the manual painting of large-scale horizontal surfaces, such as athletic fields and street lines. Painting a professional football field requires groundwork by a crew of twelve or more people working roughly eight hours [1]. Robots have been built to autonomously paint long, straight lines, such as the Intelligent One (ION) robot by Turf Tank, but painting more complex designs on large horizontal surfaces remains unaddressed. Those who paint street markings are also exposed to risks such as traffic hazards. By developing a device that combines software and mechanical components to automate the painting process, we aim to ease complex, labor-intensive paint jobs and render them safer for the laborer.
One of our earliest concerns was the painting function. Our robot’s main medium is spray paint, and we needed to simulate spray painting electronically before attempting to paint anything physically with the robot. We looked into various paint-dispensing models and found research discussing the “accuracy of paint thickness simulations” and their importance to the “validity of planned painting trajectories” [2]. That review concluded that the most accurate paint simulation model is the computational fluid dynamics (CFD) model [2]. Unfortunately, the researchers applied these simulations only in industrial settings, where the paint dispenser sits farther from the target surface than our robot’s would. As a result, we determined that this simulation is not pertinent to our current design, though it remains an option if necessary.
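To reason about coverage before building hardware, a much simpler model than the CFD approach in [2] can still be useful. The Python sketch below approximates each instant of spraying as a 2-D Gaussian deposition footprint and accumulates thickness as the nozzle moves along a pass; the nozzle rate, spread, and grid resolution are illustrative values of our own choosing, not measured parameters.

```python
import numpy as np

# Hypothetical parameters for a nozzle whose deposition footprint
# is approximated as a 2-D Gaussian.
NOZZLE_RATE = 1.0   # paint volume dispensed per second (arbitrary units)
SIGMA = 0.02        # spread of the spray footprint, in meters
GRID_RES = 0.005    # simulation cell size, in meters

def simulate_pass(surface, path, dt=0.01):
    """Accumulate paint thickness on `surface` as the nozzle traverses `path`.

    surface: 2-D array of thickness values, indexed [y, x] in grid cells.
    path: iterable of (x, y) nozzle positions in meters, one per dt seconds.
    """
    ys, xs = np.indices(surface.shape)
    xs = xs * GRID_RES
    ys = ys * GRID_RES
    for px, py in path:
        dist_sq = (xs - px) ** 2 + (ys - py) ** 2
        # Gaussian deposition: most paint lands under the nozzle,
        # tapering off with distance from it.
        surface += NOZZLE_RATE * dt * np.exp(-dist_sq / (2 * SIGMA ** 2))
    return surface

# Example: one straight pass across a 1 m x 1 m patch.
patch = np.zeros((200, 200))
line = [(x, 0.5) for x in np.linspace(0.0, 1.0, 100)]
simulate_pass(patch, line)
print(f"peak thickness: {patch.max():.3f}")
```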
The automation of painting has seldom been researched, but there has been some progress in the field, such as a technologically advanced “smart spray paint can” [5]. The can employs a wireless connection to a real-time algorithm, webcams, and QR codes to autonomously spray any image of the user’s choosing, track the can’s location relative to the painting in progress, and determine which color it should be spraying [5]. While implemented differently, both that research and our project aim to automate the spray-painting process. The research offered another perspective on how we could feasibly track the robot’s location while it paints a football field or a street, keeping track of its position relative to the painted picture and ensuring the accuracy of the image’s resolution.
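The sketch below illustrates the same idea using ArUco fiducial markers, a QR-like tag supported by OpenCV’s aruco module. It is our own simplified analogue of the tracking in [5], not that system’s implementation; it assumes opencv-contrib-python 4.7 or later, and the marker ids and field coordinates are hypothetical.

```python
import cv2

# Build a detector for a small dictionary of 4x4 fiducial markers.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def locate_robot(frame, marker_positions):
    """Return the known field coordinate of the first recognized marker.

    frame: BGR camera image.
    marker_positions: dict mapping marker id -> (x, y) field coordinate
        in meters, surveyed in advance (an assumption of this sketch).
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None  # no marker visible in this frame
    for marker_id in ids.flatten():
        if int(marker_id) in marker_positions:
            return marker_positions[int(marker_id)]
    return None

# Example usage with two hypothetical markers at known field positions:
# pos = locate_robot(camera_frame, {0: (0.0, 0.0), 1: (9.14, 0.0)})
```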
Another study details a novel way to optimize images. The authors’ goal was to remove defects in the image caused by unfavorable environmental conditions, enhance the desired characteristics, and generate the optimized picture as a sketch [6]. This research enumerates various texture-detection and edge-detection methods and mathematical models we can use to optimize the target image before the robot paints it. We hope to avoid painting any unwanted shadows or blemishes from the original image onto the athletic field, road, or court.
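As one concrete example of this kind of preprocessing (our own illustration, not the specific method of [6]), the snippet below blurs a photograph to suppress noise such as shadows, then runs Canny edge detection to produce a paintable line sketch. The threshold values are illustrative defaults.

```python
import cv2

def image_to_sketch(path, blur_ksize=5, low=50, high=150):
    """Reduce a photograph to a clean line sketch suitable for path planning.

    A Gaussian blur suppresses fine noise (e.g., shadows or grass texture)
    before Canny edge detection extracts the outlines to be painted.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
    edges = cv2.Canny(blurred, low, high)
    return edges

# Example usage with a hypothetical input file:
# sketch = image_to_sketch("team_logo.png")
# cv2.imwrite("team_logo_sketch.png", sketch)
```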
Significant research has also taken place in the field of autonomy, particularly the development of a “low cost unmanned ground vehicle” [4]. That research discusses various edge-detection techniques and algorithms that can be incorporated into a robot’s motion system, ultimately allowing a robot like ours to detect and locate obstacles blocking its path while it is painting [4]. This gave us additional insight into the possible avenues we could take for the autonomous aspect of our project.
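In its simplest form, this kind of obstacle handling reduces to mapping a range reading to a motion command. The sketch below is a hypothetical reactive layer in that spirit; the sensor driver and distance thresholds are placeholders of ours, not values from [4].

```python
# A minimal reactive obstacle-avoidance layer for a painting robot.
# `read_range_cm` stands in for whatever ultrasonic, infrared, or lidar
# driver the platform provides.
STOP_DISTANCE_CM = 40    # halt if an obstacle is closer than this
SLOW_DISTANCE_CM = 100   # creep forward and stop spraying inside this radius

def motion_command(read_range_cm):
    """Map the forward range reading to a (speed, spray_enabled) command."""
    distance = read_range_cm()
    if distance < STOP_DISTANCE_CM:
        return 0.0, False   # stop and pause painting
    if distance < SLOW_DISTANCE_CM:
        return 0.3, False   # creep forward with the sprayer off
    return 1.0, True        # clear path: full speed, spray on

# Example with a stubbed sensor reading:
# speed, spray = motion_command(lambda: 75)   # -> (0.3, False)
```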
Further in the realm of autonomy is research conducted at the University of Michigan on a construction robot able to adapt to changes in its workpiece. The researchers developed robots that use the location, orientation, and a preprogrammed model of the target piece to adapt their plans and programming to the piece’s current state. They argued that robots, construction robots in particular, do not achieve their full potential when they work on inconsistent worksites: if a piece changes in ways the robot’s model does not capture, such as being chipped or flipped over, the inconsistencies can lead the robot to make undesired alterations [3]. Their research used algorithms and laser sensors to locate and record any physical discrepancies between the object and the preprogrammed model [3]. This capability would allow a robot like ours to work on a football field or street and adapt to any changes it encounters there.
We reached out to Professor Vineet Kamat, one of the lead researchers, for his opinion on using the adaptability framework to aid a robot’s autonomous mobility. He referred us to a colleague with more expertise in autonomous motion, Lichao Xu. Mr. Xu advised us to “look into Lidar-based Simultaneous Localization and Mapping” (SLAM), which uses laser sensors to detect and record objects and their distances from the robot (personal communication, February 18, 2019). Through this interaction, we found a feasible framework for detecting and adapting to obstacles.
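To make the idea concrete, the sketch below implements only the mapping half of SLAM under the simplifying assumption that the robot’s pose is already known; a full SLAM system estimates the pose and the map jointly. The cell size and data layout are illustrative choices of ours.

```python
import math
import numpy as np

CELL_SIZE = 0.1  # meters per grid cell

def update_occupancy(grid, pose, scan):
    """Mark lidar returns in an occupancy grid (the mapping half of SLAM).

    grid: 2-D int array of hit counts, indexed [y, x].
    pose: (x, y, theta) robot pose in meters/radians, assumed known here.
    scan: list of (angle, distance) lidar returns relative to the robot.
    """
    x, y, theta = pose
    for angle, dist in scan:
        # Project each return into field coordinates, then into a grid cell.
        ox = x + dist * math.cos(theta + angle)
        oy = y + dist * math.sin(theta + angle)
        col = int(ox / CELL_SIZE)
        row = int(oy / CELL_SIZE)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] += 1  # accumulate evidence of an obstacle
    return grid

# Example: a 10 m x 10 m field, robot at its center facing +x,
# with two lidar returns (straight ahead at 2 m, to the left at 3.5 m).
field_map = np.zeros((100, 100), dtype=int)
update_occupancy(field_map, (5.0, 5.0, 0.0), [(0.0, 2.0), (math.pi / 2, 3.5)])
```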
The ultimate goal for our robot is to take an uploaded image, process it, and autonomously move and paint the image onto a horizontal canvas, such as a street or an athletic field. Our biggest obstacle will be getting the individual components to work in concert. This robot would be highly beneficial, as manually painting fields, streets, and other horizontal surfaces is arduous and inefficient, and in extreme weather it can be dangerous as well. We believe our robot could make tasks like these safer and more efficient.
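Putting the pieces together, the outline below shows how the subsystems discussed above would chain into a single pipeline. Every function here is a simplified stand-in of our own devising, not working robot code.

```python
# An illustrative end-to-end outline of the planned pipeline.
def process_image(image):
    return image                      # stand-in for edge detection and cleanup [6]

def plan_path(sketch):
    return [(0.0, 0.0), (1.0, 0.0)]   # stand-in for converting edges to waypoints

def drive_and_paint(waypoints):
    for x, y in waypoints:
        # Real code would interleave localization, obstacle checks, and
        # sprayer control while moving between waypoints.
        print(f"drive to ({x}, {y}) and spray")

def paint_surface(image):
    sketch = process_image(image)
    drive_and_paint(plan_path(sketch))

paint_surface("uploaded_image")
```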
References
[1] Brown, H. (2014, September 16). Good question: How do grounds crews flip a field? Retrieved January 21, 2019, from https://minnesota.cbslocal.com/2014/09/16/good-question-how-do-grounds-crews-flip-a-field/
[2] Chen, Y., Chen, W., Li, B., Zhang, G., & Zhang, W. (2017). Paint thickness simulation for painting robot trajectory planning: A review. Industrial Robot, 44(5), 629-638. Retrieved from ProQuest Databases.
[3] Lundeen, K. M., Kamat, V. R., Menassa, C. C., & McGee, W. (2019). Autonomous motion planning and task execution in geometrically adaptive robotized construction work. Automation in Construction, 100, 24-45. https://doi.org/10.1016/j.autcon.2018.12.020
[4] Man, C. K. Y. L., Koonju, Y., & Nagowah, L. (2018). A low cost autonomous unmanned ground vehicle. Future Computing and Informatics Journal, 3(2), 304-320. https://doi.org/10.1016/j.fcij.2018.10.001
[5] Prévost, R., Jacobson, A., Jarosz, W., & Sorkine-Hornung, O. (2016). Large-scale painting of photographs by interactive optimization. Computers & Graphics, 55, 108-117. https://doi.org/10.1016/j.cag.2015.11.001
[6] Tiwari, M., Lamba, S. S., & Gupta, B. (2018). An image processing and computer vision framework for efficient robotic sketching. Procedia Computer Science, 133, 284-289. https://doi.org/10.1016/j.procs.2018.07.035