Mission

The mission of the Department of Computer Engineering and Sciences is to prepare computing, engineering and systems students for success and leadership in the conception, design, management, implementation and operation of complex engineering problems, and to expand knowledge and understanding of computing and engineering through research, scholarship and service.

Electrical and Computer Engineering

Aurora Display



Team Leader(s)
Isabella Bretthauer

Team Member(s)
Isabella Bretthauer, Zachary Collins

Faculty Advisor
Lee Caraway




Aurora Display
Project Summary
Aurora Display is an API-driven, autonomous dashboard platform designed for deployment in public smart-space environments such as airports, university campuses, hotels, and entertainment venues. Running on low-cost edge hardware, the system uses a Raspberry Pi to boot directly into kiosk mode and display modular dashboards optimized for public viewing. A backend service layer retrieves, normalizes, and formats real-time data from multiple external sources, while the frontend presents that information through a clear, tile-based interface designed for readability and usability in high-traffic environments. Aurora Display also integrates interactive hardware components, including time-of-flight sensors, an NFC reader, and a camera, to support proximity-based interaction, scan-based workflows, and personalized information delivery. To improve reliability, the platform incorporates local caching, fallback display states, and automatic recovery routines so the system remains usable even during network or API interruptions. Overall, Aurora Display demonstrates a low-cost and scalable way to make public displays more intelligent, autonomous, and useful.
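
The fallback behavior described above can be sketched as a small fetch-with-cache routine. This is a minimal illustration only; the endpoint, cache policy, and normalization step are assumptions, not the project's actual service layer.

    import time
    import requests

    CACHE = {}          # url -> (timestamp, normalized payload)
    CACHE_TTL = 300     # hypothetical freshness window in seconds

    def fetch_tile_data(url, normalize):
        """Return live data when the API is reachable; otherwise fall back to the cache."""
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()
            data = normalize(resp.json())
            CACHE[url] = (time.time(), data)
            return {"status": "live", "data": data}
        except requests.RequestException:
            cached = CACHE.get(url)
            if cached and time.time() - cached[0] < CACHE_TTL:
                return {"status": "cached", "data": cached[1]}
            return {"status": "fallback", "data": None}   # frontend shows a fallback tile state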


Project Objective
The objective of Aurora Display is to design and demonstrate an autonomous dashboard platform that automatically boots into a full-screen kiosk interface, retrieves real-time information through a backend service layer, and presents that information using a modular tile-based layout optimized for public display. The system is intended to support multiple deployment profiles, maintain usability during partial failures through fallback states and caching, and reduce the need for routine manual updates or human intervention. Aurora Display was not designed to compete with existing digital signage companies, but rather to enhance their displays by adding an intelligent system layer.

Manufacturing Design Methods
Aurora Display was developed as a modular embedded kiosk system built from off-the-shelf hardware and web-based software technologies. The platform is centered around a Raspberry Pi 4B connected to a ViewSonic TD2430 touchscreen display and integrates a Pi camera, an NFC reader, and two VL53L0X time-of-flight sensors to support proximity detection, scan-based interaction, and personalized workflows. The software architecture combines a frontend dashboard interface, a backend service layer for API integration and data normalization, and hardware-interface services that coordinate sensors and peripherals. The system was designed to boot directly into Chromium kiosk mode and operate autonomously with minimal user intervention. Throughout development, emphasis was placed on low-cost deployment, modularity, reliability, and seamless interaction between hardware and software.

Specification
Aurora Display is implemented on a Raspberry Pi 4B and deployed on a 24-inch touchscreen kiosk display in portrait orientation. The system includes a backend API layer, a modular web-based frontend dashboard, kiosk-mode startup automation, and integrated peripherals including a Pi camera (V1 Module), NFC reader, and two time-of-flight sensors. Key functional capabilities include automatic startup in kiosk mode, retrieval of live data from external APIs, support for multiple dashboard deployment profiles, and graceful fallback behavior when live data is unavailable. Key non-functional goals include reliability, maintainability, low cost, public-display readability, and reduced routine maintenance through autonomous startup and recovery behavior.

Analysis
Aurora Display demonstrates that a low-cost edge-based system can provide intelligent, context-aware signage behavior without depending on expensive proprietary infrastructure. The project successfully showed a functioning prototype with multiple deployment profiles built on a shared architecture, real-time API integration, and resilient operation suitable for continuous public-display use. Through its use of local caching, fallback states, and automated recovery behavior, the platform remains usable even when external services are interrupted. Overall, Aurora Display validates the feasibility of a modular, autonomous signage platform that improves clarity, usability, and adaptability in public environments.

Future Works
Future work for Aurora Display focuses on improving scalability, deployment flexibility, and system intelligence. Hardware improvements could include designing a custom PCB based on the Raspberry Pi compute module or MCU to reduce unused components, simplify packaging, and support a more compact enclosure. On the software side, future iterations may incorporate Docker-based containerization, CI/CD pipelines, and remote system monitoring to make deployment and maintenance more consistent across multiple devices. Aurora Display could also be expanded through AI-assisted configuration and decision systems that help adapt content based on context, user behavior, or environment-specific needs. In the long term, cloud integration through services such as AWS would support centralized management, remote updates, and fleet-wide monitoring, allowing Aurora Display to scale from a single kiosk into a distributed platform for campuses, airports, hospitals, shopping centers, and other public spaces.

Other Information
Aurora Display is intended as a flexible autonomy layer for public digital signage rather than a replacement for traditional displays. Its value lies in making information more intelligent, autonomous, user-friendly, reliable, and useful through real-time updates, edge deployment, and interactive user-aware behavior.

Acknowledgement
We would like to thank our faculty advisor(s), Florida Institute of Technology, and everyone who supported the development, testing, and presentation of Aurora Display. We also sincerely thank our family and friends for supporting us throughout this year-long project. Their encouragement and support played an important role in helping us complete this work.




CALI - Cobot Autonomous Living Interface



Team Leader(s)
Nicholas Santamaria

Team Member(s)
Heber Lopez, Berke Dogan

Faculty Advisor
Dr. Edward L. Caraway




CALI - Cobot Autonomous Living Interface
Project Summary
The CALI (Computer-Assisted Living Interface) system is a low-cost, assistive robotic feeding solution designed to improve independence for individuals with motor impairments. Built on the AR4 6-DOF robotic arm platform, the system integrates computer vision, depth sensing, and custom control software to detect food items, track user position, and autonomously guide a utensil for feeding. A neural network processes visual data in real time, enabling accurate food localization, while a depth camera provides spatial awareness for safe and precise motion. The system is controlled through a custom user interface and implemented using ROS 2 (Robot Operating System 2) for modular, scalable operation. By combining accessible hardware with intelligent software, CALI demonstrates a practical and affordable approach to assistive robotics, with strong potential for further development in healthcare and at-home support applications.


Project Objective
The objective of the CALI project is to design and develop a low-cost, assistive robotic feeding system that enables individuals with motor impairments to eat independently and safely. This system aims to integrate computer vision, depth sensing, and robotic manipulation to detect food, track user position, and execute precise, autonomous movements using a 6-DOF robotic arm. A key goal is to create a modular and scalable platform controlled through a custom user interface and built on ROS 2, allowing for real-time operation and future expansion. Ultimately, the project seeks to demonstrate that advanced assistive technology can be both accessible and effective, providing a practical foundation for further development in healthcare and at-home support environments.

Manufacturing Design Methods
The CALI system was developed using an iterative design process that combined mechanical fabrication, electronics integration, and software testing to create a functional assistive feeding prototype. The project was built around the AR4 robotic arm platform, with custom-designed components modeled in CAD and fabricated primarily through 3D printing to support rapid prototyping, low cost, and easy modification. Multiple utensil-mount concepts and support components were designed, tested, and refined to improve fit, usability, and overall appearance, including custom housings and attachments tailored to the system’s needs. On the electrical and control side, the design integrated depth-sensing hardware, embedded control components, and a custom interface to connect sensing, processing, and robotic motion into a single system. Throughout development, subsystems were repeatedly evaluated and adjusted based on testing results, allowing the team to improve reliability, functionality, and manufacturability while maintaining a modular design approach.

Specification
The CALI system is designed as a 6-degree-of-freedom (6-DOF) assistive robotic platform built on the AR4 robotic arm, capable of precise and repeatable motion suitable for feeding tasks. The system operates using stepper motor-driven joints with encoder feedback for improved positioning accuracy, achieving sub-centimeter end-effector precision within its working envelope. A depth-sensing camera provides real-time spatial data, enabling accurate food localization and user tracking within a typical operating range of approximately 0.2 to 1.0 meters. The control architecture is implemented using ROS 2, allowing modular communication between perception, planning, and actuation nodes. The vision system utilizes a trained neural network for object detection, running on a standard computing platform capable of real-time inference. Custom 3D-printed PLA components are used for the utensil mount and protective enclosures, ensuring lightweight and cost-effective fabrication. The system is powered by standard DC power supplies and interfaces with a custom-built user interface for manual control, calibration, and automated operation modes.
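
To illustrate the ROS 2 node structure mentioned in the specification, the sketch below shows a minimal rclpy node that links a perception topic to an actuation target. The topic names, message type, and offset are placeholders for illustration, not the project's actual interfaces.

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Point  # placeholder message type

    class FeedingPlanner(Node):
        """Toy planning node: listens for a detected food position and publishes a utensil target."""

        def __init__(self):
            super().__init__('feeding_planner')
            self.sub = self.create_subscription(Point, 'food_position', self.on_food, 10)
            self.pub = self.create_publisher(Point, 'utensil_target', 10)

        def on_food(self, msg):
            # Hypothetical approach offset above the detected food item
            self.pub.publish(Point(x=msg.x, y=msg.y, z=msg.z + 0.05))

    def main():
        rclpy.init()
        rclpy.spin(FeedingPlanner())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()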

Analysis
The CALI system demonstrates that a low-cost, modular assistive robotic platform can effectively integrate computer vision, depth sensing, and robotic control to perform feeding tasks with reasonable accuracy and reliability. By leveraging ROS 2, the system achieves real-time communication between perception and actuation, validating the feasibility of combining modern software frameworks with accessible hardware like the AR4 arm. While testing showed strong performance in detecting food and guiding motion, limitations such as sensitivity to lighting conditions, system latency, and the inherent constraints of a non-human-centered robotic platform highlight areas for improvement. Overall, the project confirms the potential for affordable assistive robotics while identifying key opportunities for enhancing robustness, safety, and user adaptability.

Future Works
Future work for the CALI system will focus on improving reliability, safety, and user adaptability to move closer to real-world deployment. Enhancements to the computer vision pipeline, including more robust neural network training and expanded datasets, will improve accuracy across varying lighting conditions and a wider range of food types. Reducing system latency through optimized processing and more efficient communication within ROS 2 will enable smoother and more responsive motion. Mechanical upgrades to the AR4 platform, such as improved safety features, softer end-effectors, and more ergonomic utensil designs, will make the system better suited for direct human interaction. Additional developments may include user-specific calibration profiles, voice or gesture-based controls, and expanded autonomy for tasks beyond feeding. Ultimately, future iterations aim to refine the system into a more robust, intuitive, and clinically viable assistive technology.


Acknowledgement
The team would like to thank the Machine Learning Team (Aruna Dookeran, Michael Yanke, Kari Voelstad Bogen, Levent Kahveci), as well as Dr. Caraway and TA Elis for their support throughout the senior design process.




Cobot



Team Leader(s)
Logan Steele

Team Member(s)
Brandon Goerz, Jeremy Hay, Luke Jacobsen, Sean Kalkiewicz, Logan Steele

Faculty Advisor
Dr. Caraway




Cobot
Project Summary
This project addresses the challenges of creating an autonomous robotic system capable of physically interacting with a chessboard and maintaining an accurate understanding of the current game state. A traditional chess engine operates purely in software; translating its logic into real-world movements introduces challenges such as board recognition, piece identification, and coordinating human and robot movements. The problem therefore requires integrating robotics, computer vision, and game logic into a single system that enables the robot to play accurately against a human opponent.

The proposed solution is a robotic chess system with a single robotic arm equipped with a claw capable of accurately grabbing and moving chess pieces. The system calculates its first move using a chess engine and executes it physically, then waits for the human to make a responding move. Once the human has moved, the system reads the board’s current state, computes its reply, and repeats this cycle until the game is complete. To maintain an accurate understanding of the board state, the system uses two vision systems in sync: a depth camera mounted on the robotic claw provides close-range perception for precise piece detection, used when picking up pieces and confirming their placement, while an overhead camera continuously monitors the entire board to determine the game state and detect human moves.

This project integrates robotic control, computer vision algorithms, and chess engine logic to coordinate the arm's gameplay and movement with the human opponent. The resulting prototype demonstrates reliable piece manipulation and movement, accurate board-state reading and verification, and autonomous turn-based interaction with a human player while avoiding potential injury to the player. This demonstrates the viability of combining the perception, planning, and physical movement required to create an interactive robotic chess platform.
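
The turn-based loop described above can be sketched as follows. The python-chess library is used here only as an illustration of game-state tracking; the report does not name the team's chess engine or vision code, and the arm, engine, and camera calls below are placeholder parameters.

    import chess

    def infer_human_move(board, occupied_after):
        """Find the legal move whose resulting set of occupied squares matches the
        occupancy observed by the overhead camera after the human's turn."""
        for move in board.legal_moves:
            board.push(move)
            predicted = {sq for sq in chess.SQUARES if board.piece_at(sq)}
            board.pop()
            if predicted == occupied_after:
                return move
        return None

    def play_game(pick_engine_move, move_piece_with_arm, read_board_occupancy, wait_for_player):
        """Turn loop: the robot moves, waits for the human, reads the board, and responds."""
        board = chess.Board()
        while not board.is_game_over():
            robot_move = pick_engine_move(board)   # chess engine chooses the robot's move
            move_piece_with_arm(robot_move)        # arm executes the physical move
            board.push(robot_move)
            if board.is_game_over():
                break
            wait_for_player()                      # e.g. detected stillness or a turn signal
            human_move = infer_human_move(board, read_board_occupancy())
            if human_move is not None:
                board.push(human_move)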


Project Objective
The objective of this project is to design and implement a robotic chess-playing system capable of detecting board states, identifying human moves, planning legal responses, and physically executing piece movement using a robotic arm. The system is intended to demonstrate the integration of computer vision, robotic control, and decision-making into a single interactive platform.




Future Works
Improved Board Detection: Make the overhead camera work better in different lighting so it can more reliably recognize the board and pieces.
Two-Camera Vision System: Add a second, side-mounted camera to help double-check the board state and improve accuracy.
Faster Robot Movement: Improve how the robot arm moves so it picks up and places pieces faster and more smoothly.






Edge Vision - Machine Learning & AI



Team Leader(s)
Michael Yanke

Team Member(s)
Aruna Dookeran, Kari V. Bogen, Levent Kahveci

Faculty Advisor
Edward Lee Caraway




Project Summary
The Machine Learning & AI team developed the Edge Vision system, a computer vision subsystem for the CALI (Cobot Autonomous Living Interface) project, as part of the senior design program at Florida Institute of Technology. Running entirely on an NVIDIA Jetson Orin Nano, the system performs real-time face detection, facial recognition, and face tracking through a multi-model pipeline. Face detection is handled by a TensorRT-optimized YOLOv12 convolutional neural network with approximately 22ms of GPU inference time, while facial recognition is powered by InsightFace, achieving a mAP@0.5 of 66.5%. A servo-driven pan-tilt camera mount physically orients the camera toward the detected subject, with three configurable tracking modes and manual controls integrated into the interface. The system is presented through a PyQt-based GUI running directly on the Jetson. Hardware platforms, including the Raspberry Pi and the Hailo-8L AI accelerator, were evaluated before the Jetson Orin Nano was selected as the optimal deployment target for its CUDA-enabled GPU and native AI acceleration.


Project Objective
The objective of this project was to design and deploy a real-time computer vision system capable of detecting, recognizing, and continuously tracking human faces on the NVIDIA Jetson Orin Nano. The system needed to maintain smooth tracking across frames, correctly identify enrolled individuals with reasonable confidence, and physically orient a USB camera toward the detected subject using a servo-controlled pan-tilt mount. The interface needed to display live detection results, including labels, confidence scores, and bounding boxes, refreshing at a rate suitable for real-time use. A secondary objective was to evaluate multiple hardware platforms and select the one best suited to these performance and deployment requirements.

Manufacturing Design Methods
The software pipeline was designed around a multi-model architecture, with each model selected to balance accuracy and computational cost on embedded hardware. Face detection is performed by a YOLOv12-based convolutional neural network optimized with TensorRT, leveraging the Jetson's GPU for fast face localization in each frame. Facial recognition is handled by InsightFace, which extracts and compares identity embeddings against enrolled individuals. Mouth detection uses a classical Haar Cascade approach running on the CPU, providing a lightweight, fast solution with approximately 10ms of inference time. TensorRT optimization was applied to the GPU-based models to maximize throughput within the Jetson's hardware constraints.

Hardware platform selection was an iterative process. The Raspberry Pi was evaluated first, but proved insufficient without a GPU. Adding the Hailo-8L AI accelerator brought performance to an acceptable level, but the NVIDIA Jetson Orin Nano was ultimately selected as the primary platform due to its CUDA-enabled GPU, which delivered the best sustained inference performance across all candidates.

The facial tracking system was implemented with three configurable modes (tracking any face, known faces only, or a specific individual) and always targets the largest face in the frame. Manual controls and a tracking toggle are integrated directly into the GUI. The interface is built with PyQt, runs entirely on the Jetson, and refreshes detections every 250ms to display each detection's label, confidence score, and bounding box live.

On the physical side, the servo pan-tilt camera mount was wired to the Jetson Orin Nano's GPIO pins, allowing real-time camera orientation commands based on detected face position. A custom 3D-printed enclosure was designed for the Jetson, with the pan-tilt mount installed on top, creating a compact self-contained unit. A generic USB camera was mounted on the pan-tilt platform to provide the pipeline with a video feed.
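
The largest-face targeting and pan-tilt steering described above reduce to a small amount of arithmetic, sketched below. The field-of-view values and gain are illustrative assumptions rather than measured parameters of the project hardware.

    def pick_largest_face(detections):
        """detections: list of (x, y, w, h, label, confidence) boxes in pixels."""
        return max(detections, key=lambda d: d[2] * d[3], default=None)

    def pan_tilt_step(face, frame_w, frame_h, fov_h_deg=60.0, fov_v_deg=40.0, gain=0.5):
        """Return (pan_delta_deg, tilt_delta_deg) nudging the camera toward the face center."""
        x, y, w, h, _label, _conf = face
        err_x = (x + w / 2) / frame_w - 0.5   # -0.5 .. 0.5, left/right of frame center
        err_y = (y + h / 2) / frame_h - 0.5   # -0.5 .. 0.5, above/below frame center
        return gain * err_x * fov_h_deg, gain * err_y * fov_v_deg

    # Example: a 200x240 px face right of center in a 1280x720 frame
    face = (800, 240, 200, 240, "known", 0.91)
    print(pan_tilt_step(face, 1280, 720))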

Specification
Primary hardware: NVIDIA Jetson Orin Nano Super Developer Kit
Camera: Generic USB camera on servo-driven pan-tilt mount
Face detection model: YOLOv12 CNN, TensorRT optimized (~22ms GPU inference)
Mouth detection model: Haar Cascade, CPU-based (~10ms inference)
Facial recognition: InsightFace, mAP@0.5: 66.5%
Tracking modes: Any face / known faces only / specific face
GUI: PyQt-based, hosted on Jetson
Actuation: Servo motors in pan-tilt configuration, Jetson GPIO controlled
Enclosure: Custom 3D-printed case
Deployment environment: Edge device, no cloud dependency

Analysis
The NVIDIA Jetson Orin Nano proved to be the appropriate platform for this application, outperforming both the standalone Raspberry Pi and the Raspberry Pi with Hailo-8L accelerator in sustained inference throughput. The TensorRT-optimized YOLOv12 face detection model achieves approximately 22ms of GPU inference time, enabling the system to maintain a 250ms GUI refresh cycle with headroom for the additional recognition and mouth detection stages. The classical Haar Cascade approach for mouth detection was an effective design choice, offloading a simpler detection task to the CPU and freeing GPU resources for the more demanding models. The facial recognition mAP@0.5 of 66.5% reflects the inherent challenge of identity matching on an embedded system where model size and compute budget must be balanced against accuracy. This is a reasonable result for an edge-deployed system and leaves clear room for improvement through expanded enrollment datasets and potential model fine-tuning. The three-mode tracking system offers practical flexibility, allowing it to be configured for different use cases, from general presence detection to tracking a specific known individual.


Future Works
Several directions could meaningfully improve upon the current system. Recognition accuracy could be increased by expanding per-individual enrollment data and fine-tuning the recognition model on domain-specific examples. Integrating temporal tracking methods, such as Kalman filtering, would improve robustness during occlusions and reduce the computational cost of full recognition on every frame. Pan-tilt control could be refined using PID-based servo control for smoother, more precise camera movement. On a broader scale, expanding the system to reliably handle multiple simultaneous subjects and integrating additional sensor modalities would increase its utility in real assistive robotics applications.


Acknowledgement
The team would like to thank Dr. Edward Lee Caraway and GSA Elis Karcini for their guidance and support throughout the senior design process. Additional thanks to the CALI team (Nicholas Santamaria, Heber Lopez, Berke Dogan) for their assistance and collaboration on the project, and the Department of Electrical Engineering and Computer Science at Florida Institute of Technology for providing the resources and framework that made this project possible.




Electric Vehicle



Team Leader(s)
Terence Lee

Team Member(s)
Troy Stephens, Corbin Williams

Faculty Advisor
Dr. Edward L. Caraway




Electric Vehicle
Project Summary
The mission of this multi-generational project is to research, design, and implement a traction inverter system for an electric vehicle drivetrain, specifically utilizing the M-9000-MACHE "Eluminator" front drive unit. The project focuses on developing a comprehensive hardware and software framework capable of driving a modern 3-phase motor using high-efficiency Silicon Carbide (SiC) MOSFET semiconductor technology. The scope of this work encompasses the design of the traction inverter, the implementation of an integrated thermal management system, and the development of a dedicated power supply system. Additionally, the project will integrate real-time condition monitoring to track motor health and operational status.


Project Objective
The primary objective of this project phase is to advance the development of the traction inverter system by transitioning from custom-prototype hardware to industrial-grade evaluation platforms. Building upon the control architecture established by previous teams, this project aims to achieve a controlled motor spin of the electric drive unit. Specific objectives include:
• Hardware Modernization: Replacing legacy custom-designed gate-driver boards with Texas Instruments UCC21520EVM evaluation modules and utilizing the LAUNCHXL-F280025C (C2000™ microcontroller) to improve signal reliability and system protection.
• System Validation: Verifying the operational integrity of the existing SiC MOSFET power stage and ensuring compatibility with the new gate-drive signals and the EVAL-AD2S1210SDZ resolver interface.
• High-Voltage Isolation: Implementing bus terminal connections for the MOSFETs using polyimide film for robust electrical insulation and thermal stability between high-voltage conductors.
• Mechanical Integration: Designing and fabricating a custom, CAD-validated mounting board to securely house the 3-phase power electronics and control modules while ensuring proper alignment for torque transfer.
• Circuit Protection: Implementing an RCD snubber circuit to suppress high-frequency voltage transients and protect power semiconductors from inductive kickback.
• Controlled Bench Spin: Executing a phased bring-up and testing plan to confirm proper gate-drive behavior, leading to the successful rotation of the Ford Mustang Mach-E electric motor under deterministic PWM control.

Manufacturing Design Methods
• 3-Phase System CAD & Mounting: Developed custom 3D models for a dedicated mounting board and the 3-phase system layout. The CAD work focused on optimizing the placement of the inverter bridge and control boards to minimize electromagnetic interference (EMI) and ensure secure mechanical fastening.
• High-Voltage Terminal Construction: Utilized bus terminal connections for the MOSFET power stage. Polyimide insulation was applied to isolate the voltage bus bars from the mounting infrastructure, providing high dielectric strength and heat resistance.
• Protection Circuitry Fabrication: Engineered and integrated an RCD (Resistor-Capacitor-Diode) snubber circuit specifically designed to damp voltage spikes caused by parasitic inductance in the 3-phase bridge during high-speed switching.
• Control & Sensing Integration: Interfaced the LAUNCHXL-F280025C real-time controller with the EVAL-AD2S1210SDZ evaluation board to enable high-precision resolver-to-digital conversion for accurate rotor position tracking.
• Gate Drive Implementation: Integrated the UCC21520EVM dual-channel gate drivers to provide the necessary isolation and drive current to reliably toggle the SiC MOSFETs while maintaining safety between the control and power tiers.

Specification
• Motor: Ford Mustang Mach-E Front Drive Unit (M-9000-MACHE)
• Control MCU: TI LAUNCHXL-F280025C (C2000™ Real-time Controller)
• Gate Driver: TI UCC21520EVM (Isolated Dual-Channel Evaluation Module)
• Position Sensing: EVAL-AD2S1210SDZ (High-resolution Resolver-to-Digital Converter)
• Power Stage Protection: Integrated RCD Snubber Circuit
• Switching Technology: Silicon Carbide (SiC) MOSFETs
• Isolation Material: Polyimide (Kapton) film
• Mechanical Infrastructure: Custom CAD-validated 3-phase mounting board

Analysis
• Transient Mitigation: Identified the need for an RCD snubber to clamp voltage spikes caused by parasitic inductance in the 3-phase bridge (a first-pass sizing sketch follows this list).
• Signal Compatibility: Verified that the LAUNCHXL-F280025C PWM outputs match the input requirements and dead-time logic of the UCC21520EVM drivers.
• Material Insulation: Selected polyimide for its high dielectric strength to prevent arcing between high-voltage terminals and the chassis.
• Resolver Mapping: Configured the EVAL-AD2S1210SDZ resolution to match the Ford Mach-E resolver characteristics for accurate commutation.
• Mechanical Clearance: Used CAD models to ensure proper "creepage and clearance" distances between high-voltage conductors for safety.
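
As a first-pass illustration of the snubber sizing reasoning, the short calculation below sizes the capacitor to absorb the energy stored in the parasitic loop inductance and places the resistor near the loop's characteristic impedance. All numbers are hypothetical; the report does not list measured inductance, current, or allowed overshoot.

    import math

    # Hypothetical values, not measurements from the project hardware
    L_loop = 50e-9    # parasitic loop inductance (H)
    I_peak = 100.0    # peak switched current (A)
    dV_max = 50.0     # allowed voltage overshoot above the DC bus (V)

    # Capacitor sized so 0.5*C*dV^2 >= 0.5*L*I^2, resistor near sqrt(L/C)
    C_snub = L_loop * I_peak**2 / dV_max**2
    R_snub = math.sqrt(L_loop / C_snub)

    print(f"C_snub = {C_snub * 1e9:.0f} nF, R_snub = {R_snub:.2f} ohm")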

Future Works
• Thermal Design Research: Continue investigating advanced cooling strategies to manage heat dissipation during high-load operation.
• Proprietary System Integration: Research methods to interface with Ford’s proprietary communication and safety protocols.
• Power Optimization: Develop strategies to refine power delivery efficiency while reducing the overall physical footprint of the system.
• Mechanical Advisement: Seek expert consultation on mechanical impacts and structural stresses on the system.


Acknowledgement
Special thanks to our faculty advisor, Dr. Edward L. Caraway, and Graduate Student Assistant Elis Karcini for their guidance and support. We also acknowledge the dedicated efforts of the project team members whose contributions were essential to the advancement of this system.




Microgravity Simulator



Team Leader(s)
Alexander Montano

Team Member(s)
Aruna Dookeran, Elias Orellana, Aiden Smart

Faculty Advisor
Dr. Andrew G. Palmer

Secondary Faculty Advisor
Dr. Edward L. Caraway



Microgravity Simulator
Project Summary
This project involves the design and development of an automated two-axis microgravity simulator (clinostat) for biological research. The system uses continuous dual-axis rotation to simulate microgravity conditions on Earth. It integrates mechanical, electrical, and software components, including stepper motors, slip rings, LED lighting, and a Raspberry Pi-controlled touchscreen interface.


Project Objective
The objective of this project is to design and build an automated microgravity simulator capable of continuous dual-axis rotation. The system aims to provide consistent lighting, real-time user control, and reduced manual intervention through an integrated touchscreen interface.

Manufacturing Design Methods
The system was designed using Fusion 360 and fabricated primarily through 3D printing. A dual-axis frame was developed to support continuous rotation using two NEMA 17 stepper motors. Slip rings were incorporated to allow power transfer during rotation, while electrical components such as the motor driver HAT, MOSFET, and power supplies were integrated into a compact enclosure. A Raspberry Pi was used for system control and user interaction.
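
A minimal sketch of the LED dimming path described above, assuming the 24V LED string is switched by the MOSFET from a Raspberry Pi GPIO PWM signal. The pin number and the gpiozero library choice are illustrative; the project's actual wiring and control code are not reproduced here.

    from time import sleep
    from gpiozero import PWMLED

    led_channel = PWMLED(18)   # GPIO18 drives the MOSFET gate (hypothetical pin)

    def set_brightness(percent):
        """Set LED brightness as a duty-cycle percentage, e.g. from the touchscreen UI."""
        led_channel.value = max(0.0, min(1.0, percent / 100.0))

    set_brightness(75)   # 75% brightness requested from the interface
    sleep(5)
    set_brightness(0)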

Specification
Dual-axis rotation system
Two NEMA 17 stepper motors
24V LED lighting with PWM control
Raspberry Pi-based control system
Touchscreen interface for user interaction
Slip rings for continuous rotation
Separate 12V (motors) and 24V (LEDs) power systems

Analysis
The system successfully integrates mechanical rotation, electrical control, and user interface design into a single platform. The use of slip rings allows continuous operation without wire interference, while the touchscreen interface improves usability by enabling real-time adjustments. Initial testing demonstrates stable LED control and system responsiveness, with ongoing work focused on motor performance and full system integration.

Future Works
Future work includes completing system integration, performing extended testing, and optimizing performance for long-duration operation. Additional improvements may include enhanced automation features, improved thermal management, and further refinement of the user interface.

Other Information
This project provides an accessible and cost-effective platform for simulating microgravity conditions on Earth. The system is designed to support continued development and future enhancements for expanded research applications.

Acknowledgement
The team would like to thank Dr. Palmer for his guidance and project direction, as well as Dr. Caraway and TA Elis for their support throughout the senior design process. Additional thanks to Florida Tech and the OEC lab for providing resources for fabrication and testing.




Ortega Observatory



Team Leader(s)
Elijah Kornbau

Team Member(s)
Elijah Kornbau, Logan Collins, Bradley Erskine, Luz Houck





Project Summary
Our project covers the work performed for the Ortega Observatory, including maintenance, assistance with the design of the circuit board for the lower control arms, and the construction of a weather station that monitors and displays real-time weather data.












Rapid Reach



Team Leader(s)
Peyton Hay, Ryan Matthews

Team Member(s)
Rishi Ammanabrolu, Alex Dumbell, Peyton Hay, Ashley Hurtado, Ryan Matthews, Mac McHale, Sean Miller, Dan Zschau

Faculty Advisor
Dr. E. Lee Caraway




Project Summary
We developed an autonomous drone system to improve emergency response times for sudden cardiac arrest incidents on the Florida Tech campus. We identified that limited AED availability and reliance on bystander retrieval significantly delayed treatment, so we designed a quadcopter capable of delivering an AED within a half-mile in under three minutes. Our system integrated a centrally located Housing and Launch System (HLS) that stored and deployed the drone upon request, enabling rapid, on-demand response without requiring users to locate existing AEDs.

We engineered the drone for reliability and efficiency, achieving a 2:1 thrust-to-weight ratio, stable autonomous flight, and GPS-based navigation with obstacle avoidance and failsafe behaviors. Through iterative CAD design, material optimization, and ANSYS simulations, we refined the structure using lightweight carbon fiber components while ensuring safety and compliance with FAA regulations. The final system demonstrated a scalable, cost-effective solution that could extend beyond campus use to broader emergency response applications, with the potential to significantly improve survival outcomes in time-critical situations.

Problem Statement: Survival of out-of-hospital cardiac arrest depends on rapid heart defibrillation, yet automated external defibrillator (AED) retrieval, high acquisition costs, and maintenance requirements often limit unit availability. Consequently, the delay inherent in traditional manual retrieval and EMS arrival frequently prevents aid from reaching victims in time.
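
As a quick illustration of the 2:1 thrust-to-weight figure, the check below uses hypothetical masses and per-motor thrust; the actual airframe, AED, and motor numbers are not listed in this summary.

    # Hypothetical numbers for illustration only
    airframe_kg = 2.0           # frame, battery, and delivery mechanism
    aed_kg = 1.5                # AED payload
    thrust_per_motor_kg = 1.8   # static thrust of one motor/propeller combination
    motors = 4

    total_weight_kg = airframe_kg + aed_kg
    total_thrust_kg = motors * thrust_per_motor_kg
    print(f"thrust-to-weight = {total_thrust_kg / total_weight_kg:.2f}:1")   # ~2.06:1 here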


Project Objective
Our primary objective was to design, build, and test an autonomous drone system capable of delivering an AED within a half-mile radius in under three minutes. We also aimed to ensure the system was always mission-ready through a centralized housing and launch system that maintained power, protection, and communication capabilities. Additional objectives included meeting safety and regulatory requirements and staying within the project budget.




Future Works
For future development, we planned to enhance the system’s reliability and usability by integrating advanced obstacle detection capabilities, allowing the drone to dynamically identify and avoid unexpected hazards during flight. We also aimed to implement real-time automated flight planning, enabling the drone to adjust its route on-the-fly based on environmental conditions or mission constraints rather than relying solely on pre-programmed paths. Additionally, we proposed the development of a user-facing application for dispatch and tracking, which would allow emergency requests to be initiated quickly while providing live updates on drone status and location, improving overall system responsiveness and user confidence.


Acknowledgement
Special thanks to Elis Karcini, Felix Gabriel, Dr. Firat Irmak, Terence Lee, and Alex Lacy for their contributions and assistance with this project.




Computer Science and Software Engineering

BrainBench - LLM Evaluation




Team Member(s)
Orion Powers and Daniella Seum

Faculty Advisor
Dr. Khaled Slhoub

Secondary Faculty Advisor
Dr. Philip Chan



BrainBench - LLM Evaluation
Project Summary
BrainBench is an evaluation framework designed to analyze the performance of locally deployed large language models (LLMs) on mathematical reasoning tasks. The system evaluates 3 free, downloadable models: Qwen3:4b (Alibaba), Gemma3:4b (Google), and Phi3:3.8b (Microsoft). By integrating automated testing, answer verification, and performance tracking into a unified pipeline, the project enables transparent, reproducible comparisons across models. Results are presented through an interactive web dashboard for easy analysis. BrainBench Website: https://dseum2023.github.io/senior_design_project/index.html


Project Objective
The primary objective is to develop a standardized, automated framework for evaluating LLMs across multiple dimensions, including accuracy, efficiency, and consistency. The project aims to enable fair comparisons by testing models on identical datasets with repeated runs and statistical validation. Another key objective is to present results in a clear, user-friendly dashboard, allowing users to easily interpret model performance and make informed decisions based on their specific needs.

Manufacturing Design Methods
The system was designed as a modular pipeline consisting of dataset ingestion, prompt generation, model execution, answer extraction, and result storage. Three locally deployable models (Qwen3:4b, Gemma3:4b, and Phi3:3.8b) were selected because their similar parameter sizes allow for fair comparisons by minimizing differences caused purely by model scale, instead emphasizing architectural and training differences. Their relatively small size also makes them well-suited for local deployment, as they require less storage, memory, and computational power, enabling efficient execution on consumer-grade hardware. Mathematical datasets were curated and processed through each model using identical prompts to ensure consistency and fairness. A multi-stage parsing system extracts and normalizes answers, allowing reliable comparisons against the solutions. Results are then stored and visualized through a web-based dashboard, enabling cross-model analysis and interactive exploration.
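
One step of the pipeline above (prompting a model, extracting the answer, and recording latency) might look like the sketch below. It assumes the models are served through a local Ollama-style HTTP endpoint, which the summary does not confirm, and the regex answer extraction is a stand-in for the project's multi-stage parser.

    import re
    import time
    import requests

    def ask_model(model, question, host="http://localhost:11434"):
        """Send one prompt to a locally served model; return (answer_text, latency_seconds)."""
        start = time.time()
        resp = requests.post(f"{host}/api/generate",
                             json={"model": model, "prompt": question, "stream": False},
                             timeout=300)
        resp.raise_for_status()
        return resp.json().get("response", ""), time.time() - start

    def extract_number(text):
        """Rough stand-in for the multi-stage parser: take the last number in the reply."""
        numbers = re.findall(r"-?\d+(?:\.\d+)?", text)
        return float(numbers[-1]) if numbers else None

    answer, latency = ask_model("qwen3:4b", "What is 17 * 24? Answer with only a number.")
    print(extract_number(answer) == 408.0, f"{latency:.1f}s")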


Analysis
The system evaluates models using metrics such as correctness, latency, token usage, and hardware efficiency, including energy consumption and resource utilization. Experiments are conducted across multiple datasets and repeated trials to ensure statistical reliability. Analysis includes accuracy comparisons, efficiency trade-offs, and significance testing to validate observed differences. Results show that no single model dominates across all tasks, highlighting trade-offs between accuracy and efficiency and reinforcing the need for multi-dimensional evaluation.







Cardiac Image Reconstruction From Low-Dosage SPECT Scans via Deep Learning



Team Leader(s)
Timothy Shane

Team Member(s)
Evan Gunderson, Alex Thomas

Faculty Advisor
Dr. Marius Silaghi

Secondary Faculty Advisor
Dr. Philip Chan



Cardiac Image Reconstruction From Low-Dosage SPECT Scans via Deep Learning
Project Summary
Single Photon Emission Computed Tomography (SPECT) scans are a type of medical scan where a patient is injected with a radiotracer which emits gamma rays that are detected by gamma ray cameras that rotate around the patient. The result of this is a series of images taken from different angles, collectively referred to as a sinogram. The sinogram is then converted into a 3D volume of the patient using techniques such as Maximum Likelihood Expectation Maximization (MLEM). This takes a long time, and using a neural network would speed it up and allow for a lower dosage of radiotracer to be used. An autoencoder with attention was trained on 72,000 simulated patients. This model can be used to reconstruct cardiac images instantaneously.


Project Objective
Quickly and accurately reconstruct SPECT scans with low dosages of radiotracer via a neural network.

Manufacturing Design Methods
For generating the training data, a simulation of the GE Infinia SPECT scanner was created in Gate10, a medical physics simulator. eXtended CArdiac-Torso (XCAT) patient phantoms with low radiotracer dosage were scanned in the simulation, with simulated rotating gamma ray cameras capturing an image every 3 degrees, producing 120 images, or 120-slice sinograms. Augmentations were performed on the sinograms to increase the amount of training data, including translations, z-axis rotations, subresolution, and Poisson noise. An autoencoder with attention was then trained using the sinograms as inputs and volumes reconstructed via MLEM as outputs. The Structural Similarity Index Measure (SSIM) was used to evaluate the accuracy of the reconstructed scans.
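
The SSIM-based evaluation mentioned above can be sketched with scikit-image and NumPy as shown below; the random volumes and shapes are placeholders standing in for real network and MLEM reconstructions.

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def evaluate_reconstruction(predicted_volume, reference_volume):
        """Compare a network-reconstructed volume against the MLEM reference volume."""
        rmse = float(np.sqrt(np.mean((predicted_volume - reference_volume) ** 2)))
        data_range = float(reference_volume.max() - reference_volume.min())
        score = ssim(reference_volume, predicted_volume, data_range=data_range)
        return rmse, score

    # Toy example: random volumes standing in for real reconstructions
    reference = np.random.rand(64, 64, 64).astype(np.float32)
    predicted = reference + 0.01 * np.random.randn(64, 64, 64).astype(np.float32)
    print(evaluate_reconstruction(predicted, reference))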

Specification
Model: The model trained for this project is an autoencoder with self-attention blocks.
Simulation: In the simulation, a patient phantom, cropped to the target organs using a fixed window that always contained them, was dosed with 7 MBq of Tc99m, and 120 images were taken, one every 3 degrees, each exposed for 5 seconds. This is distributed across 4 cameras, leading to a total simulated scan time of 150 seconds and full 360-degree patient coverage. The gamma camera is an exact replica of the GE Infinia Hawkeye 4 SPECT camera with the LEHR collimator and a ⅜”-thick thallium-activated sodium iodide (NaI(Tl)) crystal.

Analysis
We achieved an RMSE of 0.002743 and an SSIM of 0.943770, demonstrating that neural networks are competent at sinogram reconstruction. Our inference time is near-instant, vastly surpassing the standard 45 minutes required by MLEM reconstructions. The visual comparison shows a clearer reconstruction than traditional iterative methods, especially for the heart. Overall, if further training and tuning is pursued with real patient data, it is feasible that neural network reconstruction could replace MLEM.

Future Works
Training our model on large amounts of real data could allow it to be used by clinicians to reconstruct SPECT scans in much shorter timeframes. It would also allow doctors to use lower dosages of radiotracer while still getting a quality scan.

Other Information
https://athomas2022.github.io/SeniorDesignSite/

Acknowledgement
Thank you to our advisors, Dr. Marius Silaghi and Dr. Debasis Mitra; to our mentors, Sammy Morries Boddepalli, Tommy Galetta, and Youngho Seo; and to the National Institutes of Health for financial support.




Competitive Programming Primer




Team Member(s)
Pedro Marcet, Ivan Marriott, Jon Ayuco

Faculty Advisor
Dr. Raghuveer Mohan

Secondary Faculty Advisor
Dr. Philip Chan



Competitive Programming Primer
Project Summary
Competitive programming has little popularity despite how helpful and fun it can be for CS students and other programmers. We tackle this issue with three products: an algorithm visualizer, a problem repository, and a problem cataloguer. Our algorithm visualizer is meant to be used with already existing code: with minimal modification to a Python script, one can visualize arrays, pointers, and graphs as the code runs. The problem repository is a database made to contain catalogued versions of all past International Collegiate Programming Contest (ICPC) problems. It allows searching by year, competition, region, level, and even problem type (Dynamic Programming, Ad Hoc, etc.). The problem cataloguer is a simple way to add and catalogue new problems into the database. It comes with a built-in scraper for Kattis (the website that hosts ICPC problems) to auto-fill most information about any given problem, leaving just the categorizing job. All these features are made available in a single website.




Specification
The visualizer was made in Python utilizing the Manim and NumPy libraries. It uses a custom array and graph-node implementation that automatically creates the visual representations. For non-tree graphs, a gradient descent algorithm is used to keep the rendered graph at a viable size. The database is hosted on Supabase. The cataloguer is made in Java utilizing these tools and libraries: Maven, JavaFX, Jackson, Jsoup, and the PostgreSQL JDBC Driver. The website is hosted on Netlify. GitHub was used as the main version control system.
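
For flavor, the sketch below is a minimal Manim (community edition) scene that draws an integer array as labeled squares. It is a generic illustration only, not the project's custom array, pointer, or graph-node classes.

    from manim import ORIGIN, RIGHT, Create, Scene, Square, Text, VGroup

    class ArrayScene(Scene):
        def construct(self):
            values = [3, 1, 4, 1, 5]
            cells = VGroup()
            for i, v in enumerate(values):
                cell = Square(side_length=1.0).shift(RIGHT * i * 1.1)
                label = Text(str(v)).scale(0.6).move_to(cell.get_center())
                cells.add(VGroup(cell, label))
            cells.move_to(ORIGIN)
            self.play(Create(cells))
            self.wait()

A scene like this would be rendered with, for example, manim -pql array_scene.py ArrayScene.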


Future Works
How the cataloguer is used to maintain the database, and by whom, is left to the discretion of Dr. Mohan, our advisor and client. The scraper does not support non-standard problems (standard problems have one input from standard input and one output to standard output). The website could also be extended to support LaTeX.


Acknowledgement
We would like to express our gratitude to Dr. Raghuveer Mohan, our academic advisor and client, who inspired us to make this project. Further thanks to Dr. Philip Chan, our secondary advisor, and the developers of all the tools, software, and libraries mentioned in the Specification section.




FITARNA



Team Leader(s)
Jacob Hall-Burns

Team Member(s)
Vincenzo Barager, Dathan Dixon, Jacob Hall-Burns, Ethan Wadley

Faculty Advisor
Eraldo Ribeiro

Secondary Faculty Advisor
Philip Chan



FITARNA
Project Summary
FIT AR Navigation App (FITARNA) is an indoor augmented reality navigation system designed to help students and visitors navigate complex campus buildings such as the Evans Library. The project uses AR-based spatial mapping, indoor localization, and real-time route visualization to provide an intuitive wayfinding experience where traditional GPS systems fail. By combining Unity, Vuforia Area Targets, AR Foundation, and custom pathfinding, the system overlays directional guidance and navigation markers directly onto the real-world environment.

Problem Statement: Large academic buildings can be difficult for new students and visitors to navigate. Existing outdoor navigation systems, such as GPS, do not work reliably indoors due to signal attenuation and limited floor-level accuracy. As a result, users often struggle to find specific rooms, study areas, help desks, or library wings efficiently.

Project Objective
The objective of FITARNA is to create an indoor AR wayfinding application that provides accurate, real-time navigation without GPS. The system aims to achieve high-precision indoor localization, display intuitive 3D guidance overlays, and allow users to search for points of interest within a campus building.

Manufacturing Design Methods
The project was developed by first scanning the Evans Library using Vuforia Area Targets to build a high-fidelity digital representation of the environment. Unity’s AR Foundation was integrated with the Vuforia Engine to support augmented reality features and robust area recognition. A custom NavMesh was implemented in Unity for shortest-path route calculation, and a spatial UI was designed to project AR markers and breadcrumb-style directional indicators into the user’s physical surroundings.

Specification
The system is designed to support indoor navigation in complex academic environments with sub-meter localization accuracy. It includes searchable campus points of interest such as library wings, study rooms, and help desks. The software stack includes Unity, AR Foundation, Vuforia Engine, ARCore/ARKit support, and a custom Unity NavMesh-based routing system.

Analysis
The project demonstrates that augmented reality can improve indoor wayfinding by more naturally bridging digital maps and physical spaces than conventional navigation tools. Using spatial mapping and area targets allows the system to maintain persistent tracking and deliver more precise indoor guidance than GPS. The custom route calculation and 3D overlays create a more intuitive experience for users navigating complex buildings.

Future Works
Future work could include expanding the system to additional campus buildings, improving route adaptability in changing indoor conditions, enhancing the searchable point-of-interest database, and refining localization accuracy and user interface responsiveness. Additional features such as accessibility-aware routing or multi-floor route optimization could also strengthen the system.

Acknowledgement
This project was completed by Vincenzo Barager, Dathan Dixon, Jacob Hall-Burns, and Ethan Wadley, with faculty advising from Eraldo Ribeiro in the Department of Computer Science at Florida Institute of Technology.

Other Information
FITARNA focuses on solving indoor navigation challenges in environments where GPS is unreliable. Its main innovation is the use of AR-based spatial understanding and real-time visual guidance to create a seamless indoor wayfinding experience for campus users.












Sloan Hatter, Blake Gisclair



Team Leader(s)
Sloan Hatter

Team Member(s)
Sloan Hatter, Blake Gisclair

Faculty Advisor
Dr. Ryan T. White

Secondary Faculty Advisor
Dr. Philip Chan



Sloan Hatter, Blake Gisclair
Project Summary
Orbital object detection is a vital aspect of space operations, particularly for identifying satellite components. Convolutional Neural Networks (CNNs) are typically used for such operations, running on-board models directly on satellite systems. However, a newer neural network architecture, the Vision Transformer (ViT), has shown greater effectiveness due to its ability to capture global context. One main issue in deploying systems with such capabilities is resource allocation. One solution is to run models on a Low-SWaP (Low Size, Weight, and Power) system; however, this typically results in inefficient performance. To enable efficient ViT operation on Low-SWaP systems, the model must be scaled down via quantization, reducing its bit representation and improving performance on smaller systems. The proposed solution enables highly autonomous operations on Low-SWaP systems.





Analysis
Across all models, YOLOv5s provides the strongest overall balance in performance, achieving the highest accuracy (mAP@0.5 = 0.853, mAP@[0.5:0.95] = 0.667) while also delivering the highest throughput (~16 FPS), despite its relatively large model size (~26 MB). However, this performance comes at a higher computational cost (15.8 GFLOPs) and higher runtime memory usage (~230 MB).

Model performance falters when assessing the YOLOv5s Reduced and YOLOv5n variants. The YOLOv5s Reduced model performs the worst out of all five models, showing a significant drop in accuracy (0.701 mAP@0.5 and 0.491 mAP@[0.50:0.95]). Even with the reduced computational cost (~10 GFLOPs), model size (~14 MB), and runtime memory (~184 MB), the trade-off is not advantageous, as the resulting loss in accuracy outweighs the efficiency gains. The YOLOv5n model further reduces computational cost (4.1 GFLOPs), model size (6.75 MB), and runtime memory (~111 MB), but this comes at the cost of lower accuracy (0.808 mAP@0.5 and 0.57 mAP@[0.50:0.95]) and reduced throughput (~10 FPS). Nevertheless, it offers a more practical trade-off for scenarios where resource constraints are more critical than top performance.

The FP32 ViT model performs well at 0.848 mAP@0.5, with a slight drop-off at 0.53 mAP@[0.50:0.95]. However, the model incurs a higher computational cost (18.6 GFLOPs), requires more runtime memory (~288 MB), and shows a substantial drop in throughput (~6 FPS). The model is slightly smaller than the YOLOv5s model (~24 MB); however, this reduction is not substantial enough to offset its inefficiencies. These results indicate that the FP32 ViT achieves competitive coarse detection performance but does so inefficiently, requiring higher computational cost and memory while delivering low throughput, making its use less practical for real-time deployment.

The FP16 ViT model also performs well at 0.848 mAP@0.5, with some drop-off at 0.577 mAP@[0.50:0.95], while cutting model size (~12 MB) and runtime memory (~144 MB) in half, achieving a substantial efficiency gain in storage and memory footprint while still preserving its detection capabilities. However, this comes with two important trade-offs: a slight drop in localization precision (mAP@[0.5:0.95]) and a notable decrease in inference speed (~1 FPS). The latter two points suggest that the reduced bit representation improves memory efficiency, but does not translate into faster inference without appropriate hardware or optimization. That said, despite the strong performance of the CNN models, the FP16 ViT model offers competitive accuracy, particularly at mAP@0.5, while reducing model size and runtime memory, and could improve further with advanced training techniques and optimized quantization pipelines.
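
The FP16 result discussed above corresponds to the kind of half-precision conversion sketched below. PyTorch and a torchvision ViT classifier are used purely as stand-ins; the project's actual detection model and quantization pipeline are not reproduced here.

    import torch
    import torchvision

    # Stand-in model: a ViT classifier, not the project's ViT-based detector
    model = torchvision.models.vit_b_16(weights=None).eval()

    def parameter_size_mb(m):
        return sum(p.numel() * p.element_size() for p in m.parameters()) / 1e6

    fp32_mb = parameter_size_mb(model)
    model = model.half()               # convert weights to FP16 in place
    print(f"FP32: {fp32_mb:.1f} MB -> FP16: {parameter_size_mb(model):.1f} MB")

    # Half-precision inference is typically run on a GPU (or via TensorRT on a Jetson)
    if torch.cuda.is_available():
        model = model.cuda()
        with torch.no_grad():
            _ = model(torch.randn(1, 3, 224, 224, device="cuda", dtype=torch.half))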

Future Works
Future work and analysis can be done by achieving true 8-, 4-, and 2-bit representations of ViTs for orbital object detection tasks. Even further, a true 1-bit representation may be achieved to sufficiently increase model performance on even smaller computers. Improvements to the outlined models can also help achieve efficient satellite component object detection for diagnosing broken components, docking to satellites, and identifying space debris. Further documented comparison between CNNs and ViTs can also be achieved through the training and quantization of models for analysis.

Acknowledgement
A big thank you to Dr. White for his guidance and foundational research insights, and Arianna Issitt for her help with the Jetson computer, datasets, and workflow knowledge.






Java OO Visualizer



Team Leader(s)
Ashley McKim

Team Member(s)
Ashley McKim, Darian Dean, Simon Gardling

Faculty Advisor
David Luginbuhl

Secondary Faculty Advisor
Philip Chan



Java OO Visualizer
Project Summary
The Java OO Visualizer is a browser-based educational tool built for students in introductory Java courses. It automatically parses Java source code, generates class-relationship diagrams, and animates execution step-by-step using memory diagrams, showing object creation, reference assignment, and method invocation in real time. The tool also includes a manual diagram creator and an automated diagram comparer that gives students instant feedback on their work. No installation is required; everything runs in the browser via a Rust backend compiled to WebAssembly.


Project Objective
The goal of the Java OO Visualizer is to bridge the gap between abstract OO concepts and concrete understanding by providing an interactive, animated memory diagram tool driven by real Java source code. Specifically, the project aimed to build a system that automatically generates class diagrams from any Java input, simulates execution of a main method step-by-step with visible heap and stack state, provides a drag-and-drop diagram creator for students to draw their own OO diagrams, compares student-drawn diagrams against actual code and reports mistakes with hints, and runs entirely in the browser with no setup required on the student's part.

Manufacturing Design Methods
The backend is written in Rust and uses the tree-sitter library with a Java grammar to parse source code into a concrete syntax tree. A two-pass static analyzer extracts classes, interfaces, fields, methods, constructors, and inter-class relationships. A separate execution flow engine symbolically simulates the main method, stepping into user-defined method and constructor bodies, evaluating expressions, tracking field mutations, and resolving branch conditions and loop counts. Both pipelines emit DOT graph descriptions, which are serialized to JSON and passed to the frontend. The backend is compiled to WebAssembly using Emscripten, exposing three C-ABI functions callable from JavaScript.

The frontend is built with vanilla JavaScript ES modules. It loads the WASM module at startup, calls the exported functions on every code change, and renders the resulting SVG diagrams. The Diagram Creator is a custom canvas-based drawing engine with quadratic Bézier connectors, continuous border snapping, zoom/pan, and lasso selection. The Diagram Comparer runs a lightweight client-side Java parser and diff engine entirely in the browser.

Builds are automated through a GitHub Actions CI workflow using a NixOS flake for reproducible Emscripten and Rust toolchain management.





Acknowledgement
The team would like to thank Dr. David Luginbuhl of the Florida Institute of Technology Department of Computer Sciences and Cybersecurity for his guidance and support throughout the project as faculty advisor. The project makes use of several open-source libraries and tools whose authors deserve recognition: the tree-sitter project and its Java grammar maintainers, the Graphviz teams, the CodeMirror editor project, the Bootstrap framework, the pako compression library, the Emscripten toolchain, and the Rust programming language and its ecosystem.




LTE and Wifi Operated Car




Team Member(s)
Nicholas Shenk, Christian Prieto, Joseph Digafe, Donoven Nicolas

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



LTE and Wifi Operated Car  File Download
Project Summary
The Remotely Controlled Car via LTE/Wi-Fi project is a distributed, low-latency teleoperation system designed to enable safe exploration of hazardous or inaccessible environments. The system allows an operator to remotely control a rover while receiving a real-time first-person video feed and live telemetry, reducing the need for humans to physically enter dangerous areas. The architecture consists of a rover-side system (Raspberry Pi, camera, LTE/Wi-Fi module, and motor controller) and a client-side operator application. The rover captures and encodes video, transmits it over a custom UDP-based protocol, and receives encrypted control commands. The operator interface displays live video, system metrics (latency, jitter, packet loss), and provides control via keyboard, gamepad, or steering wheel. A key focus of the project is achieving low-latency communication.
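
For a rough sense of what a custom UDP control path can look like, the following Python sketch packs a small control frame and sends it to the rover; the packet layout, field names, address, and port are assumptions, and the project's encryption and video pipeline are omitted:

    import socket
    import struct
    import time

    ROVER_ADDR = ("192.168.1.50", 5005)  # hypothetical rover address and port

    def send_drive_command(sock, seq, throttle, steering):
        """Pack a small fixed-size control frame and send it over UDP.

        Assumed layout: sequence number, send timestamp, throttle, steering.
        The timestamp lets both ends estimate latency and jitter for telemetry.
        """
        frame = struct.pack("!Idff", seq, time.time(), throttle, steering)
        sock.sendto(frame, ROVER_ADDR)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_drive_command(sock, seq=1, throttle=0.5, steering=-0.2)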


Project Objective
The objective of this project is to design and build a secure, low-latency remotely controlled car that can operate over both Wi-Fi and LTE. The system should provide real-time video, reliable control, and live telemetry, while maintaining stable performance even under poor network conditions. Ultimately, the goal is to allow operators to safely explore and gather information from hazardous environments without needing to be physically present.



Analysis
The system was evaluated based on latency, reliability, and usability in real-time conditions. Testing showed that using a UDP-based approach allows for low-latency video streaming and control, making the system responsive enough for remote operation. Network performance varies depending on conditions, with Wi-Fi providing lower latency and LTE offering extended range. The failover mechanism ensures continuous operation by switching networks when needed, improving reliability. Telemetry data helps monitor system performance and adjust behavior dynamically, such as adapting video quality to maintain stability. Overall, the system meets its goal of providing a responsive and reliable remote control platform, though performance can still be affected by network instability, making optimization of streaming and compression an area for future improvement.

Future Works
Future improvements will focus on optimizing the video streaming pipeline to further reduce latency and improve quality under unstable network conditions. The system can also be extended to support additional platforms such as drones, robotic arms, or other remote-operated devices. Other areas include improving the UI with better controls and visualization, enhancing failover and network resilience, and strengthening the security implementation. Additional testing in real-world environments can be done to validate performance and reliability at scale.

Other Information
https://christianprieto243.github.io/RemotelyControlledCar/





NoCap: Article fact checking using AI



Team Leader(s)
Joshua Pechan

Team Member(s)
Joshua Pechan, Anthony Ciero, Thomas Chamberlain, Varun Doddapaneni

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



NoCap: Article fact checking using AI  File Download
Project Summary
This project focuses on streamlining the process of fact-checking online articles using AI. Users can input a URL into the platform, which analyzes the article and generates a comprehensive report including a trustworthiness score, a concise summary, and a detailed analysis. Each report is stored in a database for future access and efficiency. Additionally, a Chrome extension enables users to generate these reports directly while browsing, providing seamless, real-time fact-checking without leaving the webpage.



Manufacturing Design Methods
The system was designed as a full-stack application that utilizes AWS Amplify, AWS Bedrock, AWS DynamoDB, and React.js. The articles are scraped and preprocessed to extract relevant textual content. The cleaned text is then analyzed using an AI model, and a scoring algorithm is used to generate an overall trustworthiness score. The database is used to store previously analyzed articles and their corresponding reports, which allows for quick retrieval and reduces redundant computations.
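
The cache-first lookup described above might be sketched as follows (an illustrative Python sketch; the table name, key schema, and placeholder analysis step are assumptions rather than the team's implementation):

    import hashlib
    import boto3

    table = boto3.resource("dynamodb").Table("article_reports")  # hypothetical table name

    def analyze_article(url):
        # Placeholder for the real pipeline: scrape the page, clean the text,
        # run the AI model, and compute the trustworthiness score.
        return {"score": 0, "summary": "", "analysis": ""}

    def get_report(url):
        """Return the cached report for a URL if it exists; otherwise analyze and store it."""
        key = hashlib.sha256(url.encode()).hexdigest()
        cached = table.get_item(Key={"url_hash": key}).get("Item")
        if cached:
            return cached  # near-instant path for previously analyzed articles
        report = analyze_article(url)
        table.put_item(Item={"url_hash": key, **report})
        return report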

Specification
Input: Article URL
Output: Trustworthiness score, summary, and detailed analysis report
Platform: Web application and Chrome extension
Backend: AI/ML model for text analysis and classification
Database: Stores article reports for reuse and scalability
Performance: A few seconds for a first-time analysis; retrieving a previously analyzed article is nearly instant
Compatibility: Works across major web browsers

Analysis
The system was evaluated based on accuracy, response time, and usability. Initial testing showed that the AI model is effective at identifying potentially misleading or factually incorrect content. User tests and evaluations were conducted, along with an accuracy test using known truths.

Future Works
Future improvements could focus on enhancing model accuracy by utilizing larger and more diverse datasets, including real-time fact-checking sources. Integrating external APIs from established fact-checking organizations could improve reliability. There could also be more accuracy tests using more known truths.

Other Information
Website: https://main.d1kku51l1ickza.amplifyapp.com/

Acknowledgement
We would like to thank our faculty advisors, Marius Silaghi and Philip Chan, for their guidance and support throughout this project. Their expertise and feedback were instrumental in shaping the direction and implementation of this system.




Panther Shuttle App



Team Leader(s)
Joseph Hilte

Team Member(s)
Joseph Hilte, Tony Arrington, Jonathan Suo, Chase Monigle

Faculty Advisor
Khaled Slhoub

Secondary Faculty Advisor
Philip Chan



Panther Shuttle App  File Download
Project Summary
We developed a mobile Android application to improve the on-campus shuttle experience for students, drivers, and managers. The application supports three user roles. Students can view the live shuttle location, check the daily shuttle schedule, receive driver notifications, and save favorite stops and times. Drivers can view the route, see the next scheduled stop, estimate how many students may be waiting at a stop based on favorite-stop data, and send notifications to students. Managers can add, edit, and remove shuttle stops on the map and create or update the shuttle schedule for each day of the week. These updates are shared with the student and driver sides of the app through Firebase so that all users see the most current route information. The goal of the project is to provide a more organized shuttle system and encourage greater student use of campus transportation.


Project Objective
The objective of this project was to create an Android-based shuttle application that provides real-time and schedule-based information for students while also giving drivers and managers tools to manage communication, stops, and routes. The app is intended to improve convenience, reduce uncertainty, and make campus shuttle transportation more efficient and easier to use.

Manufacturing Design Methods
The application was designed and developed using Android Studio with Kotlin for the front-end and Firebase for backend services such as authentication, Firestore database storage, and live data synchronization. Google Maps was integrated to display shuttle location and stop markers visually. The system was divided into three main interfaces based on user role: student, driver, and manager. A modular design approach was used so that scheduling, stops, and notifications could be managed centrally and reflected across all user views. Testing was performed throughout development to verify navigation, Firebase connectivity, schedule updates, and notification behavior.
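
As a rough illustration of how a manager-side stop update propagates through Firestore (the app itself is written in Kotlin; this server-style Python sketch uses assumed collection and field names):

    from google.cloud import firestore

    db = firestore.Client()

    def upsert_stop(stop_id, name, lat, lng):
        """Write or update a shuttle stop; student and driver clients listening on
        the 'stops' collection receive the change in real time."""
        db.collection("stops").document(stop_id).set({"name": name, "lat": lat, "lng": lng})

    upsert_stop("stop_01", "Main Library", 0.0, 0.0)  # placeholder coordinates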

Specification
Platform: Android mobile devices
Programming Language: Kotlin/XML
Development Environment: Android Studio
Backend Services: Firebase Authentication and Cloud Firestore
Mapping Service: Google Maps API
User Roles: Student, Driver, Manager
Student Features: live map, daily schedule, favorite stops, notifications, estimated student count at next stop
Driver Features: live location sharing, route view, next-stop view, stop-based notifications
Manager Features: add/edit/delete stops, update daily schedules, manage route data across the app

Analysis
The project demonstrates how a simple mobile system can improve communication and visibility in the campus shuttle environment. By allowing managers to control the official stop locations and schedule, the app reduces inconsistencies across users. Firebase integration enables real-time updates, so schedule and stop changes are reflected without manually updating each device, and the use of favorite-stop data adds a predictive element by estimating student demand at upcoming stops. Overall, the design supports better decision-making for drivers and more reliable information for students.

Future Works
Future improvements could include automatic background notifications even when the app is fully closed, more advanced delay prediction using live shuttle movement, manager analytics dashboards for stop demand trends, and stronger role-based security to limit certain actions to approved driver or manager accounts only. Additional features such as route history, accessibility settings, and support for multiple shuttle routes could also be added in future versions. We also hope to make the app available on the iPhone.


Acknowledgement
We would like to acknowledge our instructor, advisor, and all others who provided feedback and support during the design and development of this project. We also acknowledge the use of Android Studio, Firebase, and Google Maps as essential tools that made this project possible.




REVU: Project Development Platform



Team Leader(s)
Chervelle Pierre

Team Member(s)
Arisa Laloo

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



REVU: Project Development Platform  File Download
Project Summary
REVU is a web-based project development and review platform designed to help instructors monitor student software repositories and evaluate both team progress and individual contributions. The system integrates Google OAuth for secure login, role-based access control for students, instructors, and administrators, and GitHub repository analysis to collect commit activity. REVU generates overall project reports as well as contributor-specific reports, giving instructors better visibility into collaboration, activity trends, and code development patterns. The platform improves transparency in team-based software projects and supports more efficient, data-informed evaluation.


Project Objective
The objective of REVU is to create a secure and user-friendly platform that allows students to register and submit project repositories while giving instructors tools to analyze repository activity. The system is intended to support contributor-level tracking, generate AI-assisted summaries and reports, and improve the fairness and efficiency of assessing team-based software development projects.

Manufacturing Design Methods
The system was developed using a modular, layered design approach to ensure scalability, maintainability, and clear separation of responsibilities. A three-tier architecture was implemented, consisting of a frontend interface, a backend API built with Flask, and a relational database for persistent storage. The backend handles core logic including authentication, data processing, and integration with external services such as the GitHub API for retrieving repository and commit data. An evaluation pipeline was designed to process this data through multiple stages, including compilation tracking, metric generation, and large language model (LLM) analysis for code evaluation. The database schema was structured to manage entities such as users, repositories, commits, and reports. This modular design allows individual components such as authentication, data retrieval, and reporting to be developed, tested, and extended independently.
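
For example, per-contributor commit activity can be retrieved from the GitHub REST API along these lines (a minimal sketch; the aggregation shown is illustrative and not the project's full evaluation pipeline):

    from collections import Counter
    import requests

    def commit_counts(owner, repo, token):
        """Count commits per GitHub author login for one repository (first page only)."""
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            headers={"Authorization": f"Bearer {token}"},
            params={"per_page": 100},
            timeout=10,
        )
        resp.raise_for_status()
        counts = Counter()
        for commit in resp.json():
            author = commit.get("author") or {}
            counts[author.get("login", "unknown")] += 1
        return counts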

Specification
The system is designed with a secure and structured workflow that supports both authentication and project evaluation. It utilizes Google OAuth for login, restricting access to approved academic email domains and requiring users to complete registration before account creation is finalized. Role-based access control is implemented to differentiate permissions for students, instructors, and administrators. Users can submit repositories and link contributors, while the system integrates with GitHub to retrieve commit histories and analyze development activity. All repository, commit, and contributor data is stored in a relational database to ensure consistency and accessibility. The platform generates both overall project reports and detailed per-contributor reports, which are accessible through an instructor dashboard designed for efficient review of project progress and individual contribution metrics.

Analysis
The project demonstrates that repository activity can be used to provide meaningful insight into both group and individual progress. By linking commits to registered contributors through GitHub usernames and email matching, REVU can distinguish team-wide activity from individual participation. The system architecture also supports future scalability by separating repository-level reporting from contributor-level reporting. Early testing showed that the platform can streamline instructor review workflows and improve visibility into collaboration patterns that would otherwise be difficult to assess manually.

Future Works
Future improvements include refining the AI scoring logic, adding more advanced visual analytics, facilitating testing of submissions, and improving support for large team projects. Additional enhancements include deeper integration with institutional systems and deployment to a school-hosted production server with stronger monitoring and logging.


Acknowledgement
We would like to thank Dr. Marius Silaghi and Dr. Philip Chan for their guidance, feedback, and support throughout the development of this project. Their input helped shape both the technical direction and the practical goals of the platform.




Search and Rescue Coordinated Intelligence Systems



Team Leader(s)
Yavanni Ensley

Team Member(s)
Younghoon Cho, Yavanni Ensley, Jaylin Ollivierre

Faculty Advisor
Dr. Thomas Eskridge

Secondary Faculty Advisor
Dr. Chan



Search and Rescue Coordinated Intelligence Systems  File Download
Project Summary
SRCIS (Search and Rescue Coordinated Intelligence Systems) is a ROS2-based system designed to improve the effectiveness of search and rescue operations through human-robot collaboration. A key feature of the system is its continuous compositional control (CCC), which allows operators to provide guidance to the agents by remotely controlling them and then transitioning the control back to the agents. With humans and agents capable of changing their behaviors based on the environment, we can significantly reduce target search and tracking time beyond fully autonomous performance. In addition, SRCIS provides a scalable architecture that enables seamless integration of new robots with various roles. The system integrates heterogeneous platforms, including UGVs (Unmanned Ground Vehicles), Quadrupeds, and UAVs (Unmanned Aerial Vehicles), and can be further expanded by incorporating additional robotic platforms. Overall, SRCIS combines autonomy and human decision-making to improve efficiency in search-and-rescue operations.


Project Objective
A human-agent teamwork system designed to improve the effectiveness of search and rescue operations by merging human decision-making with robotic platforms through autonomous agents.

Manufacturing Design Methods
SRCIS was designed with a modular architecture that leverages publish/subscribe networks to enable effective communication among agents, human operators, and physical robots. High-level communication runs over Hazelcast, which provides the ability to scale. Agents and human operators can directly command their physical shells through this layer, and the robots can provide information about the world (camera feeds, location, maps, etc.) back to the agents and humans. At the low level, all three robots leverage various ROS2 nodes to accomplish SLAM and target detection. To handle remote network communication, SRCIS uses a VPN (in our case, ZeroTier), which minimizes the need for complex routing and enables WebRTC’s peer-to-peer connections.
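
At the ROS2 level, each robot's low-level interface boils down to publish/subscribe topics; a minimal rclpy sketch of that pattern is shown below (node and topic names are assumptions, not the project's actual interface):

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist
    from std_msgs.msg import String

    class ShellBridge(Node):
        """Minimal bridge node: publishes a status heartbeat and listens for velocity commands."""

        def __init__(self):
            super().__init__("shell_bridge")
            self.status_pub = self.create_publisher(String, "srcis/status", 10)
            self.create_subscription(Twist, "cmd_vel", self.on_cmd, 10)
            self.create_timer(1.0, self.heartbeat)

        def on_cmd(self, msg):
            # A real shell would translate this into motor commands.
            self.get_logger().info(f"cmd: linear={msg.linear.x:.2f} angular={msg.angular.z:.2f}")

        def heartbeat(self):
            self.status_pub.publish(String(data="ok"))

    def main():
        rclpy.init()
        rclpy.spin(ShellBridge())

    if __name__ == "__main__":
        main()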



Future Works
The system can be expanded with specialized robots for scouting, tracking, and payload delivery. Computer vision can also be integrated for real-time target detection and recognition, while the human control interface can be refined for more intuitive control. Additionally, autonomous decision-making can be developed to enable robots to adapt to dynamic environments. Furthermore, cellular communication (e.g., LTE) can be utilized to support operation over wider and more distributed environments.






Skin Cancer Detection App



Team Leader(s)
Lawson Darrow

Team Member(s)
Lawson Darrow, Nicolas Rincon-Speranza, Nikiraj Konwar, Christian Stevens

Faculty Advisor
Dr. Zahra Nematzadeh




Skin Cancer Detection App  File Download
Project Summary
The Skin Cancer Detection app is a mobile application that allows users to capture or upload images of suspicious skin lesions and receive a real-time, AI-assisted preliminary risk assessment. Built with machine learning for on-device analysis, the app is designed to support early awareness, protect user privacy, and encourage professional medical consultation when higher-risk results are detected.












SLAIT




Team Member(s)
Maria Linkins-Nielsen, Michael Bratcher

Faculty Advisor
Marius Silaghi

Secondary Faculty Advisor
Philip Chan



SLAIT  File Download
Project Summary
The Secure Language Assembly Inspector Tool (SLAIT) is a web-based educational platform designed to help students safely explore and understand assembly-level program execution without requiring local debugger setup or reduced system security settings. SLAIT allows users to upload assembly programs through a browser, select specific instruction lines for inspection, and observe how registers and processor flags change step-by-step during execution. Programs run inside an isolated Docker sandbox environment, ensuring secure execution while protecting the user’s system. The platform then returns structured register and flag data for visualization through an interactive interface hosted on an application server. SLAIT supports instruction in computer architecture and assembly programming by lowering technical setup barriers and enabling safe experimentation with low-level code behavior. This makes it especially useful for students learning debugging concepts, instruction-level execution flow, and processor state changes in a controlled environment.


Project Objective
The objective of the Secure Language Assembly Inspector Tool (SLAIT) is to develop a secure, browser-based platform that enables users to upload assembly programs, select instruction inspection points, and visualize register and flag state changes during execution without requiring local debugger installation. The platform improves accessibility to instruction-level debugging while maintaining system security through sandboxed execution.

Manufacturing Design Methods
SLAIT was implemented as a distributed web application using a frontend interface connected to a backend execution environment hosted on an application server. Assembly programs are uploaded through the browser and executed inside isolated Docker containers to ensure safe runtime behavior. The backend assembles and executes programs using NASM and GDB, captures register and flag values at selected breakpoints, and returns structured execution data to the frontend for visualization. The frontend provides a step-level execution interface that allows users to observe processor state transitions across program instructions.
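
The assemble-and-inspect step inside the container could be sketched roughly as follows (file names, the breakpoint symbol, and the return format are illustrative; the real backend parses GDB's output into the JSON structure returned to the frontend):

    import subprocess

    def inspect_registers(asm_path, symbol="_start"):
        """Assemble with NASM, link, then run under GDB in batch mode and
        capture the register dump at the chosen breakpoint."""
        subprocess.run(["nasm", "-f", "elf64", asm_path, "-o", "prog.o"], check=True)
        subprocess.run(["ld", "prog.o", "-o", "prog"], check=True)
        gdb = subprocess.run(
            ["gdb", "-batch",
             "-ex", f"break {symbol}",
             "-ex", "run",
             "-ex", "info registers",
             "./prog"],
            capture_output=True, text=True,
        )
        return gdb.stdout  # raw register text; the backend would parse this into JSON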

Specification
Key system specifications include:
- Web-based assembly program upload interface
- User-selected breakpoint inspection support
- Register and processor flag state capture
- Step-level execution visualization
- Docker-based sandbox execution environment
- Backend automation using NASM and GDB
- JSON-formatted execution state responses
- Platform-independent browser accessibility

Analysis
Testing confirmed successful communication between the frontend interface and backend execution environment. Assembly programs were assembled, executed inside containers, and register/flag states were captured at selected instruction breakpoints. The sandboxed execution workflow ensured safe runtime isolation while maintaining accurate processor state inspection. The system demonstrates that instruction-level debugging can be performed remotely without requiring local debugging environments or elevated permissions. Initial evaluation indicates that the platform effectively supports instructional use in computer architecture and assembly programming courses by simplifying access to low-level execution visibility.

Future Works
Future development of SLAIT will include expanding support for additional assembly syntaxes such as MASM, improving the platform’s adaptability across instructional environments that use different assembler formats. Planned enhancements also include extending register inspection capabilities to support full 64-bit register tracking. Additional work will focus on completing structured user testing and usability evaluation to measure how effectively students interpret register and flag state changes using the visualization interface. Feedback from these evaluations will guide improvements to the frontend visualization workflow and overall instructional usability of the platform.

Other Information
Want to use our tool? Make sure you are connected to the Florida Tech Wi-Fi and visit http://172.16.135.14/

Acknowledgement
We would like to thank our advisors, Dr. Chan and Dr. Silaghi for guiding us through this process and supporting our project development.




SpacePNP



Team Leader(s)
Khurram Valiyev

Team Member(s)
Khurram Valiyev, Sam Warner, Jabari Sterling, Samuel Kaguima

Faculty Advisor
Dr. James Brenner

Secondary Faculty Advisor
Dr. Philip Chan



SpacePNP  File Download
Project Summary
The goal of SpacePNP is to streamline the electronic component procurement process by replacing slow, complex interfaces with a high-efficiency UI. Currently, users face significant delays and errors due to the manual effort required to navigate traditional distributor websites and verify part compatibility. SpacePNP addresses these pain points by integrating an automated compatibility checker that ensures parts meet project specifications before purchase. By reducing search time and mitigating the risk of ordering incorrect components, the system enables a more accurate and productive engineering workflow.


Project Objective
Our project aims to streamline the way electronic components are selected and integrated. It focuses on automated validation by replacing manual datasheet cross-referencing with logic-based compatibility checks. It also simplifies sourcing by reducing search fatigue through an intuitive interface and more precise search capabilities. To improve design accuracy, the project enables 3D visualization of component fit, helping prevent mechanical assembly errors. In addition, it fosters collaboration by creating a centralized knowledge base that supports community-driven design feedback.

Manufacturing Design Methods
SpacePnP is developed using a modular full-stack architecture based on Node.js, Express.js, and MongoDB for structured data management. A custom API client integrates supplier data using authenticated requests and rate-limiting to ensure reliable performance. The platform includes an interactive schematic editor built on an HTML5 Canvas, enabling users to visually configure components. Design accuracy is supported through grid-snapping, collision detection, and state management features. Security is implemented through authentication, encrypted data handling, and protected API communication, while system reliability is maintained through automated testing and validation processes.

Specification
The system shall provide an intuitive, e-commerce-style user interface that enables users to browse and search for components with ease. It shall support efficient search and filtering through a specialized fuzzy search algorithm with caching to deliver accurate and responsive results. The platform shall include a schematic-based tool that allows users to visually configure components and validate their compatibility within a design. Users shall be able to create and manage project-specific component lists, with options to export or share them publicly on the platform. Additionally, the system shall support community collaboration through an interactive forum where users can discuss components, share projects, and exchange design ideas and recommendations.
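
The fuzzy search with caching requirement could be prototyped along these lines (an illustrative Python sketch only; the production search runs in the Node.js backend, and the catalog entries here are hypothetical):

    from functools import lru_cache
    import difflib

    # Hypothetical component catalog; the real data comes from supplier APIs.
    CATALOG = ("LM7805 voltage regulator", "NE555 timer", "ATmega328P MCU",
               "BME280 environmental sensor", "LM358 op-amp")

    @lru_cache(maxsize=1024)
    def search(query, limit=5):
        """Return the closest catalog entries for a query; repeated queries hit the cache."""
        lowered = {name.lower(): name for name in CATALOG}
        hits = difflib.get_close_matches(query.lower(), list(lowered), n=limit, cutoff=0.3)
        return tuple(lowered[h] for h in hits)

    print(search("lm358 opamp"))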

Analysis
The system was evaluated based on speed, accuracy, reliability, and collaborative tool usage. Overall, it performed well across all metrics. For speed, most server responses completed within the 2-second target, indicating efficient backend performance. Accuracy was consistently high, with system outputs reliably reflecting user inputs across core features such as search, account creation, and the component builder, as confirmed through testing and bug reports. Reliability was stable, with the platform successfully completing tasks in about 9 out of 10 cases during testing. Finally, collaborative features showed active engagement, with users creating projects, using the component builder, and participating in forum discussions, demonstrating that these tools are being effectively utilized.

Future Works
Future development of the platform could focus on improving the accuracy and depth of component compatibility checking by incorporating more advanced electrical modeling and manufacturer-specific constraints. The search system could also be enhanced with smarter recommendations and AI-driven suggestions based on user behavior and project history. The schematic and 3D visualization tools could be expanded to support more complex multi-layer circuit designs and real-time simulation of electrical performance. Collaboration features could be further developed by adding version control for projects and integrating real-time co-editing between users.

Other Information
https://sites.google.com/my.fit.edu/spacepnp/home

Acknowledgement
This project would not have been possible without the guidance of Dr. James Brenner and Dr. Philip Chan.




Student Code Online Review and Evaluation 2.0




Team Member(s)
Dorothy Ammons, Shamik Bera, Rak Alsharif, Patrick Kelly

Faculty Advisor
Raghuveer Mohan

Secondary Faculty Advisor
Philip Chan



Student Code Online Review and Evaluation 2.0  File Download
Project Summary
Student Code Online Review and Evaluation (2.0) or S.C.O.R.E (2.0) is a web application for creating and submitting programming assignments. Professors are able to create classes, assignments, rubrics, test cases and rosters. Additionally, they may view grades, submissions, AI usage scores and similarity scores. Students may use the application to view and make submissions to their assignments, receiving automatic grades and feedback through their submission output.


Project Objective
The objective of this project is to streamline the submission process for programming assignments. By giving students real-time feedback on their programming assignments, we can help them improve their submissions and manage their grade expectations. At the same time, we offer professors a platform full of customizable options for creating their assignments. We want professors to automatically receive their students' grades, computed the way they want those assignments to be graded.

Manufacturing Design Methods
S.C.O.R.E (2.0) follows a client-server architecture. It uses a React framework for the frontend, a Flask backend, and Firebase for the cloud database. Authentication is handled through Google OAuth.
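
A skeletal view of the Flask side of the submission path might look like the following (the route, payload fields, and grading helper are assumptions for illustration, not the actual S.C.O.R.E (2.0) API):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def run_test_cases(code, assignment_id):
        # Placeholder: the real backend would run the submission against the
        # professor-defined test cases and rubric and persist the result.
        return {"passed": 0, "total": 0, "feedback": "sketch only"}

    @app.post("/api/assignments/<assignment_id>/submissions")
    def submit(assignment_id):
        """Accept a student submission and return automatic grades and feedback."""
        code = request.get_json()["code"]
        return jsonify(run_test_cases(code, assignment_id))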



Future Works
This project holds great potential for future improvements. We would love to see improved visuals for the integrity systems in our project, such as clustering for similarity scores or returning the exact lines of code detected as AI-generated. Additionally, moving the server to a hosted platform, perhaps offered by Florida Tech, would eliminate the need for user-side executables and allow students to simply head to the webpage to use the application.






Visualization For Formal Languages



Team Leader(s)
Chris Pinto-Font

Team Member(s)
Chris Pinto-Font, Andrew Bastien, Vincent Borrelli, Keegan McNear

Faculty Advisor
Dr. David Luginbuhl

Secondary Faculty Advisor
Dr. Philip Chan



Visualization For Formal Languages  File Download
Project Summary
Visualization for Formal Languages is an interactive Deterministic Finite Automaton (DFA) canvas that allows users to see how a DFA is constructed, helping them better understand parsing and computation. The program also includes visualization of a Nondeterministic Finite Automaton (NFA), which can be converted into a DFA. It serves as a robust computational engine for educational purposes, helping students who need extra support understanding the ins and outs of a DFA, and it can also be used by a professor to show students how to create and understand these machines. This is done by allowing users to draw state graphs, evaluate strings or regular expressions, and observe real-time animations. The seamless integration of complex mathematical logic into the custom-built, responsive graphical interface transforms abstract formal language theory into simple, readable ideas on a workspace we call the canvas. A dedicated teaching mode helps students better grasp the concepts.


Project Objective
The primary objective of our project is to bridge the gap between theoretical computer science concepts and applied computational logic by providing a visual learning environment. Instead of relying on static diagrams, users follow execution on a dynamic canvas, which keeps them engaged and reinforces their understanding of finite automata.

Manufacturing Design Methods
The program was built on a clean, modular architecture to keep the system separate from the algorithmic logic in the backend and the user interface in the frontend, ensuring scalability and ease of maintenance. Instead of relying on static visuals, our program uses the Tkinter Canvas, which supports dynamic animation, drag-and-drop, and smooth transition lines.
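
The string-evaluation logic that drives the animation reduces to a few lines of Python; a minimal DFA acceptance sketch is shown below (the example machine is hypothetical and not tied to the tool's internal data structures):

    # Hypothetical DFA accepting binary strings containing an even number of 1s.
    TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}
    START, ACCEPTING = "even", {"even"}

    def accepts(string):
        """Step the DFA one symbol at a time; each step maps to one animation frame."""
        state = START
        for symbol in string:
            state = TRANSITIONS[(state, symbol)]
        return state in ACCEPTING

    print(accepts("1011"))  # False: three 1s
    print(accepts("1001"))  # True: two 1s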

Specification
Python 3 is the only programming language used in the program, allowing for a clean, readable codebase. We also used the native Tkinter library for the graphical interface to maintain lightweight, simple execution and maximum compatibility.

Analysis
The visualizer achieves its purpose through efficient handling of complex graph logic, including non-determinism and lambda transitions, all without reducing graphical performance. By offering an interactive teaching mode rather than automatic solutions, users can be tested on their knowledge, demonstrating growth in their understanding of abstract concepts.

Future Works
Future development will focus on expanding the program's logic to include Pushdown Automata and Turing Machines, as well as upgrading the graphical user interface to handle dense graphs and improve the layout of transition lines.

Other Information
You can use the program from a direct download from our website: https://kmcnear2022.github.io/

Acknowledgement
We achieved our results in the program thanks to our faculty advisor, who helped us gain a better grasp of formal languages. We also appreciate the feedback and support from our users.




Wallee.



Team Leader(s)
Emma Bahr

Team Member(s)
Emma Bahr, Kyle Gibson, Joshua Cajuste, Matteo Caruso

Faculty Advisor
Dr. Siddartha Bhattacharyya

Secondary Faculty Advisor
Dr. Phillip Chan



Wallee.  File Download
Project Summary
We developed a mobile cross-platform personal finance application called Wallee to help users better understand and manage their money in one place. The app connects securely to users’ bank accounts through the Plaid banking API to automatically import and update transactions in real time. Users can view an overview of their finances on a home dashboard, explore spending breakdowns and trends, and track recent transactions categorized automatically. The application also includes a dynamic budgeting system that adjusts based on spending behavior, along with savings goals that allow users to set targets and monitor their progress over time. In addition, Wallee includes an AI-powered chat assistant (Wallo) that provides personalized financial insights, answers questions about spending patterns, and helps users make informed budgeting decisions. The goal of the project is to simplify personal financial management and give users clearer, more actionable insight into their financial health.


Project Objective
The objective of Wallee is to address the limitations of existing personal finance tools by providing an adaptive, intelligent, and user-centered financial management system specifically designed for variable-income users. The application aims to replace static budgeting models with an automated, paycheck-aware system that continuously recalibrates budgets based on real-time income changes. It also seeks to improve the reliability of financial guidance by using a two-layer AI architecture that combines generative AI with verified financial logic to ensure that all recommendations are consistent with the user’s actual financial data. In addition, Wallee introduces a dynamic financial health scoring system to encourage better financial habits through continuous feedback and engagement. Finally, the project focuses on delivering a clean, cognitively accessible interface that transforms complex financial data into clear, actionable insights, ultimately bridging the gap between raw transaction data and meaningful financial decision-making.

Manufacturing Design Methods
The manufacturing and design methods for Wallee follow an iterative, user-centered development approach focused on modular architecture, secure data handling, and scalable system integration. The system is designed using a layered architecture consisting of a Flutter-based cross-platform frontend, a Node.js (NestJS) backend for core application logic, and in-app advanced analytics and financial computation. Secure financial data integration is achieved through the Plaid API, which enables real-time transaction syncing via webhooks and structured ingestion into a Supabase database. On the design side, the application follows a component-driven UI approach in Flutter, emphasizing clarity, minimal cognitive load, and accessibility for variable-income users. Key features such as the dashboard, budgeting system, goals tracker, and transaction views are developed as reusable modules to ensure consistency and maintainability. The financial logic layer is separated from the UI to ensure that budgeting recalculations, income detection, and health scoring remain accurate and independently testable. The AI system is implemented as a two-layer model: Wallee Zero, which handles deterministic financial calculations and rule-based validation, and Wallo, which serves as the user-facing conversational interface that retrieves only verified insights from the underlying logic layer. Development follows an agile workflow with continuous testing and refinement based on usability feedback, ensuring that both financial accuracy and user experience are maintained throughout the system.
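
The deterministic recalibration handled by Wallee Zero might reduce to something like the following sketch (the category ratios and function names are assumptions, not the app's actual logic):

    # Hypothetical allocation ratios per spending category.
    ALLOCATION = {"essentials": 0.50, "savings": 0.20, "discretionary": 0.30}

    def recalibrate_budget(paychecks_this_period):
        """Recompute category budgets from the most recent pay-period income.

        Intended to run whenever synced transaction data shows new income, so the
        budget tracks variable income instead of a fixed monthly figure.
        """
        income = sum(paychecks_this_period)
        return {category: round(income * share, 2) for category, share in ALLOCATION.items()}

    print(recalibrate_budget([1240.00, 310.50]))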



Future Works
Future work for Wallee will focus on expanding predictive and automation capabilities to make the system more proactive and personalized for users. This includes adding forecasting features that estimate future income, spending, and savings based on historical financial patterns, as well as improving the AI system to support more advanced financial guidance such as tax estimation, debt management strategies, and long-term planning while maintaining verified, logic-based outputs. Additional improvements include smarter automation for detecting recurring bills, managing subscriptions, and refining transaction categorization over time through adaptive learning. The platform could also be expanded to support more financial institutions beyond the current integration, along with enhanced gamification of financial health scoring to increase user engagement. Finally, future development will focus on improving scalability, performance, and customization options to support a growing user base and provide a more tailored financial management experience.


Acknowledgement
We would like to acknowledge the support and contributions of everyone who helped make the Wallee project possible. We extend our gratitude to our faculty advisor for their guidance, feedback, and encouragement throughout the development process, as well as for providing valuable insight into system design and implementation. We also thank the developers and maintainers of the technologies used in this project, including Flutter, Node.js, Supabase, and the Plaid banking API, which were essential in building a secure and scalable financial platform. Finally, we appreciate the support from peers and reviewers who provided feedback during testing and helped improve the usability, functionality, and overall design of the application.