Project Summary
The CALI (Computer-Assisted Living Interface) system is a low-cost, assistive robotic feeding solution designed to improve independence for individuals with motor impairments. Built on the AR4 6-DOF robotic arm platform, the system integrates computer vision, depth sensing, and custom control software to detect food items, track user position, and autonomously guide a utensil for feeding. A neural network processes visual data in real time, enabling accurate food localization, while a depth camera provides spatial awareness for safe and precise motion. The system is controlled through a custom user interface and implemented using ROS 2 (Robot Operating System 2) for modular, scalable operation. By combining accessible hardware with intelligent software, CALI demonstrates a practical and affordable approach to assistive robotics, with strong potential for further development in healthcare and at-home support applications.
Project Objective
The objective of the CALI project is to design and develop a low-cost, assistive robotic feeding system that enables individuals with motor impairments to eat independently and safely. This system aims to integrate computer vision, depth sensing, and robotic manipulation to detect food, track user position, and execute precise, autonomous movements using a 6-DOF robotic arm. A key goal is to create a modular and scalable platform controlled through a custom user interface and built on ROS 2, allowing for real-time operation and future expansion. Ultimately, the project seeks to demonstrate that advanced assistive technology can be both accessible and effective, providing a practical foundation for further development in healthcare and at-home support environments.
Manufacturing Design Methods
The CALI system was developed using an iterative design process that combined mechanical fabrication, electronics integration, and software testing to create a functional assistive feeding prototype. The project was built around the AR4 robotic arm platform, with custom-designed components modeled in CAD and fabricated primarily through 3D printing to support rapid prototyping, low cost, and easy modification. Multiple utensil-mount concepts and support components were designed, tested, and refined to improve fit, usability, and overall appearance, including custom housings and attachments tailored to the system’s needs. On the electrical and control side, the design integrated depth-sensing hardware, embedded control components, and a custom interface to connect sensing, processing, and robotic motion into a single system. Throughout development, subsystems were repeatedly evaluated and adjusted based on testing results, allowing the team to improve reliability, functionality, and manufacturability while maintaining a modular design approach.
Specification
The CALI system is designed as a 6-degree-of-freedom (6-DOF) assistive robotic platform built on the AR4 robotic arm, capable of precise and repeatable motion suitable for feeding tasks. The system operates using stepper-motor-driven joints with encoder feedback for improved positioning accuracy, achieving sub-centimeter end-effector precision within its working envelope. A depth-sensing camera provides real-time spatial data, enabling accurate food localization and user tracking within a typical operating range of approximately 0.2 to 1.0 meters. The control architecture is implemented in ROS 2, allowing modular communication between perception, planning, and actuation nodes. The vision system uses a trained neural network for object detection, running on a standard computing platform capable of real-time inference. Custom 3D-printed PLA components are used for the utensil mount and protective enclosures, ensuring lightweight and cost-effective fabrication. The system is powered by standard DC power supplies and interfaces with a custom-built user interface for manual control, calibration, and automated operation modes.
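The perception chain described above combines a 2D food detection from the neural network with a depth reading to produce a 3D target for the arm. A minimal sketch of that back-projection using the standard pinhole camera model is shown below; the intrinsic parameters (fx, fy, cx, cy) and the example pixel are hypothetical stand-ins, as the real values would come from the depth camera's calibration:

```python
# Back-project a detected food item's pixel (u, v) plus its depth reading
# into a 3D point in the camera frame, using the standard pinhole model.
# All intrinsics below are assumed placeholders, not CALI's calibrated values.

def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Convert pixel coordinates + depth (meters) to a camera-frame XYZ point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for illustration only.
fx, fy = 615.0, 615.0   # focal lengths in pixels (assumed)
cx, cy = 640.0, 360.0   # principal point (assumed)

# A detection 60 px right and 40 px below the image center, 0.5 m away,
# maps to a small lateral offset in the camera frame.
point = pixel_to_camera_point(700.0, 400.0, 0.5, fx, fy, cx, cy)
```

In a full system this camera-frame point would still need a calibrated camera-to-arm transform before the planner can use it; the sketch covers only the geometric back-projection step.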
Analysis
The CALI system demonstrates that a low-cost, modular assistive robotic platform can effectively integrate computer vision, depth sensing, and robotic control to perform feeding tasks with reasonable accuracy and reliability. By leveraging ROS 2, the system achieves real-time communication between perception and actuation, validating the feasibility of combining modern software frameworks with accessible hardware like the AR4 arm. While testing showed strong performance in detecting food and guiding motion, limitations such as sensitivity to lighting conditions, system latency, and the constraints of an arm platform not originally designed for close human interaction highlight areas for improvement. Overall, the project confirms the potential for affordable assistive robotics while identifying key opportunities for enhancing robustness, safety, and user adaptability.
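The latency limitation noted above is easiest to address once it is measured per stage. A minimal sketch of per-stage timing for a perception-to-actuation pipeline is shown below; the stage names and placeholder functions are illustrative only, since the actual pipeline runs as separate ROS 2 nodes:

```python
import time

def timed_pipeline(stages, frame):
    """Run pipeline stages in order, recording each stage's wall-clock latency."""
    latencies = {}
    result = frame
    for name, fn in stages:
        start = time.perf_counter()
        result = fn(result)
        latencies[name] = time.perf_counter() - start
    return result, latencies

# Placeholder stages standing in for the real detection/localization/planning
# nodes; each just returns a canned value so the timing harness can run.
stages = [
    ("detect",   lambda f: {"bbox": (700, 400)}),            # food detection
    ("localize", lambda d: {"target": (0.05, 0.03, 0.5)}),   # depth back-projection
    ("plan",     lambda t: {"trajectory": [t["target"]]}),   # motion planning
]

result, latencies = timed_pipeline(stages, frame=None)
total_ms = sum(latencies.values()) * 1000.0  # end-to-end latency in ms
```

Instrumenting each stage this way identifies whether detection inference, depth processing, or planning dominates the end-to-end delay, which is the first step toward the optimizations discussed in Future Work.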
Future Work
Future work for the CALI system will focus on improving reliability, safety, and user adaptability to move closer to real-world deployment. Enhancements to the computer vision pipeline, including more robust neural network training and expanded datasets, will improve accuracy across varying lighting conditions and a wider range of food types. Reducing system latency through optimized processing and more efficient communication within ROS 2 will enable smoother and more responsive motion. Mechanical upgrades to the AR4 platform, such as improved safety features, softer end-effectors, and more ergonomic utensil designs, will make the system better suited for direct human interaction. Additional developments may include user-specific calibration profiles, voice or gesture-based controls, and expanded autonomy for tasks beyond feeding. Ultimately, future iterations aim to refine the system into a more robust, intuitive, and clinically viable assistive technology.
Acknowledgements
The team would like to thank the Machine Learning Team (Aruna Dookeran, Michael Yanke, Kari Voelstad Bogen, Levent Kahveci), as well as Dr. Caraway and TA Elis for their support throughout the senior design process.