Mission

The mission of the Department of Computer Engineering and Sciences is to prepare computing, engineering, and systems students for success and leadership in the conception, design, management, implementation, and operation of solutions to complex engineering problems, and to expand knowledge and understanding of computing and engineering through research, scholarship, and service.

Computer Science and Software Engineering

Automated Drone Navigation



Team Leader(s)
Michael Gourdine

Team Member(s)
Chinedum Ajabor, Patrycja Bachleda, Richard Diaz

Faculty Advisor
Dr. Siddhartha Bhattacharyya

Secondary Faculty Advisor
Parth Ganeriwala



Project Summary
The Automated Drone Navigation project aims to mitigate the risks to public safety associated with a loss of communication between the AW609 aircraft and the command center. Through proper implementation of the SOAR cognitive architecture, we plan to develop an intelligent agent that can assume control of the AW609 during emergencies and navigate it safely back to its origin or on to its destination while avoiding densely populated areas. The project also includes a graphical user interface (GUI) for monitoring critical data such as fuel level, airspeed, elevation, and altitude in real time, as well as a map that tracks the aircraft's location and the path generated by the intelligent agent. Through these developments, the project seeks to protect the public by having the agent steer the AW609 around densely populated areas with minimized risk.


Project Objective
The objective of this project is to mitigate the potential risks of operating the AW609 by implementing an automated system that permits an intelligent agent to assume control when communication with the control tower is lost. By leveraging the SOAR cognitive architecture to design an agent capable of flying the aircraft to its intended destination or returning it to its starting point during emergencies, we can help ensure the safety of the flight.

Manufacturing Design Methods
Design: The system is built on the SOAR architecture, a cognitive architecture whose agent makes the decisions needed to pilot the aircraft in the X-Plane 11 simulator. The aircraft operated by the SOAR agent is AgustaWestland's AW609, which has the unique capability to take off and land vertically like a helicopter while flying horizontally like a conventional aircraft. The SOAR agent determines whether it must navigate and land safely following proper Federal Aviation Administration (FAA) protocol, deciding on the safest course of action to return to its starting location or continue safely to its destination while avoiding densely populated areas.

Architecture challenges: The SOAR cognitive architecture, when implemented appropriately, yields a general intelligent agent capable of performing tasks, learning, making decisions, solving problems, and planning based on the information provided to it. The platform was new to our team and took time to learn: it requires writing rule proposals that, when their conditions are met, must be paired with a follow-on execution, and these proposals and executions must be chained in sequence to make the aircraft behave as expected. This required extensive fine-tuning, because the SOAR agent will always execute a rule whose conditions are met, even when the situation does not suit that action; this forced a shift in our team's thinking toward countless system tests and incremental adjustments to the architecture. We ultimately succeeded in raising the agent's landing success rate from 10% to 90%.

Implementation challenges: Upon detecting a loss of communication, the SOAR agent assumes control of the aircraft, determines its current position relative to its initial position, and compares it with the distance computed by the pathfinding algorithm, taking distance and fuel constraints into account. The novelty of this project lies in having the SOAR cognitive architecture decide on the most appropriate course of action, and fine-tuning the algorithm proved incredibly difficult; we revised our approach three separate times, ultimately arriving at an amalgamation of Dijkstra's algorithm and the kind of onboard automation that commercial aircraft use. Previous attempts placed nodes at locations in the sparse regions of the map and boundary nodes around densely populated areas, forming irregular polygons. The difficulty was navigating as efficiently as possible without adding travel time or exceeding fuel constraints. The speed at which the agent had to choose which nodes to traverse proved especially difficult: a fully connected graph over n nodes requires on the order of n² edges, which was overwhelming, and because the aircraft was constantly moving, the SOAR agent could not reach a decision in the short time it had. In our final implementation, the autopilot heads toward the closest entry node of a polygon and runs Dijkstra's algorithm to traverse around it; by projecting the aircraft's future position forward in evenly spaced increments, the SOAR agent can determine where to exit the polygon without requiring all polygons to be connected in one graph, ultimately reducing processing time to O(n).
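As a rough illustration of the per-polygon routing step described above, here is a minimal Python sketch of Dijkstra's algorithm over the boundary nodes of a single no-fly polygon. The node names, connectivity, and edge weights are hypothetical; the real system derives them from the map data.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted graph given as {node: [(neighbor, cost), ...]}."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Boundary nodes of one no-fly polygon, connected only to their immediate
# neighbors along the perimeter (edge weights in nautical miles, assumed).
polygon_boundary = {
    "entry": [("b1", 2.0), ("b4", 2.5)],
    "b1":    [("entry", 2.0), ("b2", 1.5)],
    "b2":    [("b1", 1.5), ("exit", 2.0)],
    "b4":    [("entry", 2.5), ("exit", 3.5)],
    "exit":  [("b2", 2.0), ("b4", 3.5)],
}

cost, path = dijkstra(polygon_boundary, "entry", "exit")
print(f"Detour around polygon: {' -> '.join(path)} ({cost:.1f} nm)")
```

Because each polygon's perimeter is traversed independently, the number of edges the agent must consider at any moment stays small, which is what keeps the online decision fast.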


Analysis
Analysis: Multiple tests were conducted to evaluate the decision-making capabilities of the agent in situations where communication is lost. Specifically, we developed numerous scenarios in which the aircraft was situated in densely populated areas following the loss of communication. The SOAR agent was expected to recognize the issue, identify the most efficient and direct flight path to escape the congested area and reach the intended destination, and proceed along that path until the destination was reached. Additionally, various other testing procedures were employed, including the placement of the aircraft in random locations and environmental conditions, to verify the agent's capacity to successfully initiate takeoff, operate the aircraft safely, and land it at the intended destination.

Future Works
Future Works: Potential future developments for this project include incorporating machine learning to better predict flight paths from previous flight data, adding weather factors that may affect flight, and refining the SOAR architecture to improve the agent's decision-making process. Additionally, the algorithm for flying around dense regions can be further optimized to plot a course toward the destination more efficiently.

Other Information
However, it is important to note that all testing was done in a simulated environment and does not account for the complexities of the actual aircraft, which would require further development and implementation.





Crusty Crab



Team Leader(s)
Alex Schmith

Team Member(s)
Alex Schmith, Robbie Heine, Chandler Hake

Faculty Advisor
Dr. O'Connor




Project Summary
The Crusty Crab is a C2 (command-and-control) toolkit designed for use in attack and defense capture the flag (CTF) competitions that FITSEC, our cybersecurity club, competes and trains in. Developed using the Rust programming language, the toolkit provides a powerful and flexible set of tools for both offensive and defensive operations, and is designed to be easy to use and customize. At its core, the Crusty Crab consists of two main components: a listener and an implant. The listener serves as the central command-and-control server for the toolkit, while the implant is the agent that is installed on the target system and used to execute various commands and payloads. The Crusty Crab's command-line interface provides an intuitive and user-friendly way for attackers to interact with the listener and implant. Using this interface, attackers can issue commands, upload and download files, and execute custom Rust code via user modules. One of the key features of the Crusty Crab is its ability to function as a reverse shell between the attacker and the victim. This allows attackers to gain remote access to the target system, execute commands, and steal data without being detected. The Crusty Crab is also designed to be highly customizable and extensible. The Rust programming language provides a powerful set of tools for developers, and the Crusty Crab takes full advantage of these tools to allow users to create custom modules and payloads for use in their own CTF competitions and training exercises.












Fake News Detector



Team Leader(s)
Victor Pinto

Team Member(s)
Joseph Bigelow, Victor Pinto

Faculty Advisor
Dr. Nasheen Nur




Project Summary
The goal of our project was to integrate an NLP-based fake news detection tool into a web browser extension that can be used on social media platforms. The detection tool and the extension exchange data through a web server: the user sends text through the server to the detection tool, where the text is analyzed. Upon completion, the tool sends the resulting values back through the server to the extension, where a conclusion is reached on the authenticity of the social media text.
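To make the round trip concrete, here is a minimal sketch of the relay server using Flask; the /check route, the analyze() placeholder, and the response format are all assumptions for illustration, not the project's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def analyze(text: str) -> float:
    """Placeholder for the NLP detection tool; returns a fake-news score."""
    # The real system would run the trained model on the text here.
    return 0.5

@app.route("/check", methods=["POST"])
def check():
    # The extension POSTs the highlighted social media text as JSON.
    text = request.get_json().get("text", "")
    score = analyze(text)
    # The extension turns this score into a verdict shown to the user.
    return jsonify({"score": score, "fake": score > 0.5})

if __name__ == "__main__":
    app.run(port=5000)
```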






Future Works
We plan to make the tool usable on other platforms such as the macOS and Linux operating systems, improve the scope of the NLP tool, incorporate visual explanations for the user through features and text comparison, and submit the survey paper and a user study paper on an improved version of the tool.






Florida Tech History Tours



Team Leader(s)
Tyler Zars

Team Member(s)
Tyler Zars, Cameron Miskell, Matthew Tokarski, Grant Butler

Faculty Advisor
Fitzroy Nembhard

Secondary Faculty Advisor
Ryan Stansifer



Project Summary
Our Motivation: Although Florida Tech has over 60 years of history, most of its surviving records are stored in a single room in the library, inaccessible to the public. The goal of this project is to ensure that others can learn about the broad history of Florida Tech while experiencing the campus directly. The university's history isn't known from just the plaques attached to buildings and various landmarks. Our web application guides users to historical sites on campus and provides supplemental information.

Offered Features: Explore the beautiful campus of Florida Tech at your own pace with our walking tours, housed completely within the app! Discover the landmarks of Florida Tech without setting foot on campus; select locations on the map to display their history. Connect with the university through games that test your knowledge and help you remember tour discoveries about Florida Tech. Scroll through facts and pictures of the campus on the active timeline! Watch as the most relevant information scrolls automatically into view as you reach each point of interest on your tour!

Major Challenges: We faced two major challenges during development: the reactive content timeline and GPS precision. For the timeline, we integrated a scrollable component that holds the project's historical data, but automatic scrolling based on geofencing requires extensive testing to verify. Secondly, differences in GPS hardware precision between devices can cause two users to be routed differently, even if they are walking side by side.

Future Work: This application is a good introductory experience for campus goers to learn about the rich history of Florida Tech, providing a way for users to connect with the past while learning in the present. In the future, more facts and tours can easily be added to allow new and old members of the campus alike to celebrate our university's history and growth. The web app can be extended by future students to improve the user experience or add new interactivity; the React framework lends flexibility to future expansions of this project.



Manufacturing Design Methods
- React and NodeJS implemented for a dynamic app and high browser compatibility
- Mapbox API integration to provide map and directions using GPS coordinates
- Compiled facts and pictures from campus archives, books, and online repositories
- Chrono Timeline library integration displays historical content in cards
- Geofencing dynamically updates timeline contents with proximity to landmarks/buildings (see the sketch below)
- Trivia and Scavenger Hunt gamify campus history
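The geofencing step boils down to a distance check against each landmark. The app itself is built in React/JavaScript; the Python sketch below just illustrates the logic, with hypothetical landmark names and approximate coordinates.

```python
import math

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in feet."""
    r_ft = 20_902_231  # Earth radius in feet
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_ft * math.asin(math.sqrt(a))

# Hypothetical landmark list; the real app stores these with its timeline cards.
landmarks = [
    {"name": "Crawford Building", "lat": 28.0645, "lon": -80.6238},
    {"name": "Evans Library",     "lat": 28.0650, "lon": -80.6230},
]

def active_landmark(user_lat, user_lon, radius_ft=75):
    """Return the landmark whose geofence the user is inside, if any."""
    for lm in landmarks:
        if haversine_ft(user_lat, user_lon, lm["lat"], lm["lon"]) <= radius_ft:
            return lm["name"]   # scroll this landmark's card into view
    return None
```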


Analysis
From the work over two semesters, we have produced an interactive web application with three static tours and the option to set additional waypoints. The application consistently loads and becomes fully interactable within three seconds. We cover more than 55 facts from the first 40+ years of Florida Tech in a timeline format. The application presents users with relevant information for landmarks within 75 feet of their current location.







Jabberwocky



Team Leader(s)
Ian Orzel

Team Member(s)
Ian Orzel, Dylan McDougall

Faculty Advisor
Dr. Ryan Stansifer




Project Summary
Jabberwocky provides a command-line manager for virtual containers installed on the user's computer. These containers provide environments that students use to complete their coursework, letting students access course environments without having to install them manually.
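As a rough sketch of what such a launcher looks like, here is a minimal Python example assuming Docker as the container backend and a hypothetical image-per-course naming scheme; Jabberwocky's actual backend and interface may differ.

```python
# Minimal course-environment launcher sketch (backend and image names assumed).
import argparse
import subprocess

COURSE_IMAGES = {
    "cse1001": "jabberwocky/cse1001:latest",  # hypothetical image names
    "cse2010": "jabberwocky/cse2010:latest",
}

def main():
    parser = argparse.ArgumentParser(prog="jabberwocky")
    parser.add_argument("course", choices=COURSE_IMAGES)
    args = parser.parse_args()
    image = COURSE_IMAGES[args.course]
    # Pull the image if needed, then drop the student into an interactive shell.
    subprocess.run(["docker", "pull", image], check=True)
    subprocess.run(["docker", "run", "-it", "--rm", image, "/bin/bash"], check=True)

if __name__ == "__main__":
    main()
```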












PDF Autofiller




Team Member(s)
Bradley Felix, Skieler Kowalik, Kyle Kline, Ethan Banks

Faculty Advisor
Dr. Fitzroy Nembhard




Project Summary
Our project is a tool used alongside Power Automate to automatically fill out fillable PDFs using JSON files provided by the user. The tool takes in a blank PDF and the corresponding JSON and emails the result (using Power Automate) to whomever it needs to be sent.


Project Objective
To create an American auto-filling tool that can be used as a replacement for Plumsail.

Manufacturing Design Methods
MongoDB was used to create a database that stores PDFs, user IDs, and JSONs. Node.js was used to handle the POST and GET requests.


Analysis
For our software to work, the PDF and JSON must be formatted to be compatible with each other: PDF field names must exactly match the JSON key names, or the fields will be skipped and left blank.
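As a rough illustration of that field-matching behavior, here is a minimal sketch using the pypdf library; the project itself uses Node.js, and the file names and JSON contents here are hypothetical.

```python
import json
from pypdf import PdfReader, PdfWriter

with open("fields.json") as f:
    fields = json.load(f)   # e.g. {"FirstName": "Ada", "LastName": "Lovelace"}

reader = PdfReader("blank_form.pdf")
writer = PdfWriter(clone_from=reader)

for page in writer.pages:
    # Keys that match a PDF field name are filled; anything that does not
    # match is simply never written, which is why mismatched fields stay blank.
    writer.update_page_form_field_values(page, fields)

with open("filled_form.pdf", "wb") as f:
    writer.write(f)
```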







Polo Expense System



Team Leader(s)
Nicholas Velaquez

Team Member(s)
Matthew Brown

Faculty Advisor
Fitzroy Nembhard




Project Summary
The landscaping business Polo Velasquez Landscape Maintenance is successful in providing services but has a very poor and outdated system for managing customer billing and keeping track of completed work. There are many ways its operations could be improved with a management system that helps keep track of that work. Currently, the company does all its billing by hand, including changing each customer's bill according to the month. These changes are made only once a month, which causes disorganization among the bills that need to be prepared and makes a missed statement quite difficult to track down. Customer information is also a problem, as it is not kept in any one place. We hoped to create an expense management system that caters to all of their needs and is simple enough to be used efficiently by a business accustomed to non-technical methods.


Project Objective
We aim to create a management expense system that is simple and efficient for the client to use. We need to allow them to create, send and manage their bills, customers, and all aspects of their business that they need to keep track of.










Secure Door Lock



Team Leader(s)
Luke Bucher

Team Member(s)
Christopher Kiefer, James Pabisz, Warren Smith

Faculty Advisor
Dr. Marius Silaghi




Project Summary
Project Description: Give users access to an Internet-connected door lock that uses facial recognition technology to allow or disallow people from entering their homes. The accompanying mobile application allows users to access the functions of the door lock from anywhere in the world.

Features: The user can remotely lock and unlock the door from within the mobile application and view whether the door is currently locked or unlocked. The user can also view a live camera feed from the door lock. The application displays a list of currently adopted door locks available for the user to interact with and allows the user to add additional door locks if they are available to be adopted. The door locks a user has adopted are tied to their account, which can be accessed with a valid username and password entered through the mobile application. The user can reset the password if they have forgotten it, and the application can remember their account so they do not have to log in every time. The user can view a list of recognized visitors, i.e., visitors whom the door lock's facial recognition software has seen before. Users can add and delete recognized visitors and customize their photos; if the user does not choose a custom photo, the visitor's photo defaults to the last picture the door lock has taken of them. If a visitor is recognized, a push notification is sent to the user's smartphone, allowing them to grant or deny entry through the door. This adds an extra layer of security, as the facial recognition technology in the door lock is not always accurate.

Evaluation: Our project will be evaluated primarily on whether the requirements laid out in our requirements document were satisfied. The requirements document contains a comprehensive list of everything we would like to complete during this project; if a majority of the requirements are met, we know the project was a success. We will also utilize usability and security testing. Usability testing involves working with end-users to ensure the door lock is intuitive and meets their needs, while security testing ensures the door lock is secure and protected against cyber threats.

Major Challenges: Major challenges of this project included making the facial recognition accurate and secure and ensuring the mobile application and the door lock worked together properly. The facial recognition not only needed to detect that there was a face, it also needed to compare that face against the recognized visitors database to decide whether the visitor should be allowed through the door. It was also imperative that the software of the mobile application and the hardware of the door lock worked together seamlessly to ensure the best user experience; this turned out to be a challenge for our team, as we had to learn how to communicate most effectively with the physical hardware of the door lock.
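To illustrate the recognized-visitor check described above, here is a minimal sketch using the open-source face_recognition library; the library choice, the photo directory, and the visitor names are assumptions, not confirmed details of the team's implementation.

```python
import face_recognition

# Encode the stored photos of recognized visitors once at startup
# (hypothetical file paths and names).
known = {
    "alice": face_recognition.face_encodings(
        face_recognition.load_image_file("visitors/alice.jpg"))[0],
}

def identify(snapshot_path):
    """Return the matching visitor's name, or None if the face is unknown."""
    image = face_recognition.load_image_file(snapshot_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None   # no face detected in this frame
    for name, known_encoding in known.items():
        if face_recognition.compare_faces([known_encoding], encodings[0])[0]:
            return name   # trigger the push notification for this visitor
    return None
```

Because comparisons like this are probabilistic, the push-notification confirmation step remains the final gate on entry.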












Synthetic Data Pipeline for Satellite Pose Estimation



Team Leader(s)
William Stern

Team Member(s)
William Stern, Nathan Pichette, Stephane Baruch, Hanibal Alazar

Faculty Advisor
Dr. Ryan White




Project Summary
The goal of the Synthetic Data Pipeline for Satellite Pose Estimation project is to create a tool that can easily generate large amounts of data for satellite pose estimation. Using the 3D animation tool Blender and the Blender Python API, we created a tool that lets users easily change the satellite model, flight path, and other settings using a simple configuration file. This tool is far easier to use than existing methods, such as setting up scenes from scratch in 3D animation software, and it makes batch rendering very easy. With this tool, users can quickly create large amounts of training and testing data for satellite pose estimation.
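As a rough sketch of config-driven rendering with the Blender Python API (bpy), the following would run inside Blender 3.x; the configuration keys and file names are hypothetical, and the team's actual pipeline may structure things differently.

```python
import json
import bpy

with open("scene_config.json") as f:
    cfg = json.load(f)

# Load the satellite model named in the config (OBJ import, Blender 3.x API).
bpy.ops.import_scene.obj(filepath=cfg["satellite_model"])
satellite = bpy.context.selected_objects[0]

# Position and power the light source from the config.
light = bpy.data.objects["Light"]
light.location = cfg["light_location"]     # e.g. [4.0, 1.0, 6.0]
light.data.energy = cfg["light_strength"]  # watts

# Render one frame per point along the configured flight path.
for i, (x, y, z) in enumerate(cfg["flight_path"]):
    satellite.location = (x, y, z)
    bpy.context.scene.render.filepath = f"//frames/frame_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```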


Project Objective
The objective of our project is to create a program that will allow users to create synthetic data for pose estimation. The data must be a rendered clip that contains a satellite in motion with a customizable background and modifiable lighting conditions. The API can be adjusted to the user's needs. The user will be able to set the location and strength of a light source, the model of the satellite, as well as the flight path and rotation.


Specification
The user can control the satellite model, lighting, rotation, flight path, and background.


Future Works
It can be difficult to estimate the flight path and speed of objects without running the program. Potential improvements include creating a visualization tool to help the user create good flight paths. Currently the program requires admin access on some operating systems. In the future, researchers will utilize the project to generate data for the purpose of training machine learning models, which will enable the prediction of satellite movement and rotation.






Wifi Performance Map



Team Leader(s)
Devon Resendes

Team Member(s)
Devon Resendes, Ankur Dhadoti, Aaron Glenore

Faculty Advisor
Dr. Philip Chan




Project Summary
Give users the ability to monitor Wi-Fi performance across the Florida Tech campus without system administrator involvement, and show them locations with better Wi-Fi performance.



Manufacturing Design Methods
The primary design method used for this project was a variation of the agile method. Agile development is a software development methodology that emphasizes iterative and incremental development, frequent delivery of working software, and close collaboration between developers and stakeholders. Over the last year, we have regularly developed plans and met with our advisor to deliver working versions and implement different features each time. Between our biweekly meetings, we would have smaller tasks completed during these sprints to demonstrate steady progress. With feedback given to us from advisor meetings, discussions in class, and from beta testers, we were able to evaluate our progress and make improvements when needed.

Specification
Users can choose which performance metrics are displayed on the map, such as upload speed, download speed, and ping. Users can choose between the default campus view, which shows the user's location with respect to the entire campus, and a personal view, which keeps the user's location centered while the map moves. The application periodically uploads users' ping, upload, download, and GPS data to the central server. Because this could consume significant bandwidth and reduce overall Wi-Fi speed, we took steps to limit the load: using geolocation to make sure users are on campus, letting the server determine when devices can perform speed tests, aggregating data points, and adding a refresh button so users pull new data from the server only on demand.
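One way to picture the data-point aggregation step: bin each speed-test sample into a coarse latitude/longitude grid cell and average per cell, so the map draws one point per cell instead of one per sample. A minimal sketch, with the grid size as a tuning assumption:

```python
from collections import defaultdict
from statistics import mean

GRID = 0.0005  # roughly 50 m per cell at this latitude; an assumed tuning value

def cell(lat, lon):
    """Map a GPS point to its grid cell."""
    return (round(lat / GRID), round(lon / GRID))

def aggregate(samples):
    """samples: [{'lat': ..., 'lon': ..., 'download': ...}, ...] -> per-cell means."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[cell(s["lat"], s["lon"])].append(s["download"])
    return {c: mean(v) for c, v in buckets.items()}
```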

Analysis
The primary metric for evaluating the project's performance was user feedback. Users were given a task and assessed on completion: find a location on campus with a better upload speed. Of the eight users tested, only one was unable to complete the task, because their device would not record an accurate location and so could never collect data. Of the users who completed the task, five used the toggle buttons to isolate the desired metric.


Other Information
Demo Video: https://youtu.be/jLI62M-hgJE





Electrical and Computer Engineering

Cobot



Team Leader(s)
Ethan Vandermolen

Team Member(s)
Junfu Cheng, Ahmed Araara, Wuchang Weng, Mohammed AlRehaili, Feras Alsubhi, Matthew Glory, Alex May, Liam Roulston

Faculty Advisor
Dr. Lee Caraway




Project Summary
The initial purpose of our project was to build a working AR3 robot arm to play chess against another AR3 robot built in previous semesters. We are using Kinect and time-of-flight cameras to locate the chess pieces, the Robot Operating System (ROS) to interface with the hardware, a simple gripper to grab the intended pieces, and a custom chessboard with photoresistors to keep track of the pieces' positions. We have since decided to focus on interfacing with a single AR3 in order to demonstrate the arm performing chess moves in a live setting, theoretically against a human opponent.


Project Objective
The purpose of this project is to establish a human-assisted robot arm capable of operating with minimal user effort. Multiple sensors are used for the arm to make movements. To elicit finite movement, the arm performs a chess demonstration. The chess pieces' locations on the board are detected with photoresistors on each of the 64 squares. A chess API enables automated play.

Manufacturing Design Methods
Each subsystem in the project had its own set of requirements and specifications that were integrated into the final product. To successfully complete the project, each subsystem was incrementally designed, implemented, and tested. This approach to manufacturing and design allowed for greater flexibility in the design process: changes and modifications could be made to each individual subsystem without impacting the entire project if something went wrong. Testing each subsystem before integration ensured that, when it was time to assemble the final system, each section worked as intended.

Specification
The ARCS Calibration Software is utilized to control the AR3 robotic arm through an Xbox controller. The VEX Claw is used to grip and place objects. Time-of-flight (ToF) cameras measure the distances from the gripper to objects and are controlled by a Raspberry Pi. The control software is written in various scripts on separate boards, including Arduino, Teensy, and Raspberry Pi. The AR3 robotic arm's primary means of movement is six main stepper motors. A photoresistor array on a chessboard detects pieces, and ROS Noetic Ninjemys packages manage hardware interfaces and encoder data. The AR3/Gripper Interface & Driver sends commands to controllers for position-based joints, and MoveIt and Rviz are used as motion planning frameworks. Overall, this complex system comprises various hardware and software components working together to control the AR3 robotic arm for object manipulation.
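To illustrate how a photoresistor array can feed a chess API, here is a minimal sketch that diffs the board's 8x8 occupancy grid before and after a move; the sensor wiring and scanning code are assumed, and captures would need additional logic.

```python
FILES = "abcdefgh"

def square(row, col):
    """Convert grid indices (row 0 = rank 8) to algebraic notation."""
    return f"{FILES[col]}{8 - row}"

def detect_move(before, after):
    """before/after: 8x8 lists of 0/1 occupancy. Returns a UCI-style move string."""
    src = dst = None
    for r in range(8):
        for c in range(8):
            if before[r][c] and not after[r][c]:
                src = square(r, c)   # a piece was lifted from here
            elif after[r][c] and not before[r][c]:
                dst = square(r, c)   # a piece was set down here
    if src and dst:
        return src + dst             # e.g. "e2e4", ready to hand to a chess API
    return None                      # capture or no change; needs extra handling
```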

Analysis
The AR3 arm, designed by Annin Robotics, is the primary tool for movement and is controlled by the Robot Operating System (ROS) in combination with a chess-playing API. The system is equipped with Time of Flight (ToF) cameras and a Kinect camera for computer vision, allowing the system to detect objects, coordinate movements, and create a virtual 3D environment for display. The AR3 arm is composed of various hardware components, including stepper motors, sensors/cameras, and microcontrollers such as Arduino Mega, Arduino Uno, Raspberry Pi, and Teensy 3.5. The system utilizes the ARCS Calibration Software to control the AR3 arm using an Xbox controller, and the VEX Claw is used for gripping and placing objects. The chessboard is integrated with an array of photoresistors and chess pieces to detect movements.

The software framework of the system includes ROS Noetic Ninjemys, which is a set of packages for controller interfaces, managers, transmissions, and hardware interfaces, applied for joint position control. The AR3/Gripper Interface & Driver allows hardware interfaces to work in conjunction with drivers to command position-based joints, and MoveIt and Rviz are used as motion planning frameworks to make a virtual environment possible.

In terms of computer vision, the system uses a Kinect Camera Driver and a ROS Interface to Kinect and Img. Broadcaster to receive data from the sensor and publish topics of sensor_msgs with HD images. The system also utilizes CenterNet Keypoint Triplets for Object Detection with a CenterNet SSD model for object detection and retrieval using coordinate transform.

Overall, the project is a complex system involving various hardware and software components that work together to control the AR3 robotic arm for object manipulation and chess playing.

Future Works
This project has now been going for a number of years, with another group taking over in the coming year. Our vision for the project is multi-arm collaboration, full chess API implementation, and potentially pursuing interdisciplinary work. After the 2021-2022 Cobot group assembled their arm and we (the 2022-2023 Cobot group) have assembled ours, both arms should theoretically be capable of performing in conjunction to achieve greater tasks. This may look like playing against one another in a game of chess or another task that requires that degree of simultaneous finite motion. Although the chess API is functional at the moment, it is dependent on a photoresistor-enabled chess board. The goal is to have the computer vision, supported by an objective Kinect camera and neural net, be fully capable of recognizing objects and their location. From there, the chess API can recognize the state of the board and direct the arm(s) where and how to move the pieces. If time and inter-major collaboration allow, it would also be great to use the six-axis arms to mirror the movement of actual arms using something akin to a sensor sleeve. This would be a great extension of the project, as it displays the biomedical applications that the arm is capable of. However, it does require biomedical knowledge that we, as Electrical and Computer Engineers, do not specialize in.






Electric Vehicle



Team Leader(s)
Thomas Francis

Team Member(s)
Thomas Francis, Whitney Ellis, Arianna Issitt, Will Burk, Jonathan Kinkopf, Adrianna Agustin, Matthew Delgado, Mitchell Sales, Jiwoo Jeong, Bryce Fowler

Faculty Advisor
Lee Caraway




Project Summary
This project is tasked with the design and construction of a traction inverter for driving an electric motor. To do this, the team needs to produce multiple systems that drive, process, and monitor different components within the build. The project must create a supporting frame for the motor to rest and run on that is maneuverable and provides easy access for working on the electric motor. To drive the motor, a high- and low-voltage battery system must be designed for use by the motor, microprocessors, and MOSFETs. The microprocessors interpret data from the motor and various components to monitor temperatures, read resolver data, and produce and interpret other necessary signals. To ensure safe operating conditions, the temperatures of the MOSFETs and motor must be monitored, and a cooling system developed for each.
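As a rough sketch of the temperature-monitoring piece, the following assumes an NTC thermistor in a voltage divider read through a 10-bit ADC; the part values, wiring, and trip threshold are all assumptions, not the team's actual design.

```python
import math

# Assumed thermistor constants (beta-parameter model) and divider values.
BETA, R0, T0 = 3950.0, 10_000.0, 298.15   # beta, nominal resistance, 25 C in kelvin
R_FIXED, VREF, ADC_MAX = 10_000.0, 3.3, 1023

def adc_to_celsius(raw):
    """Convert a raw ADC reading from the divider into degrees Celsius."""
    v = raw * VREF / ADC_MAX
    r_therm = R_FIXED * v / (VREF - v)                # NTC on the low side of the divider
    inv_t = 1 / T0 + math.log(r_therm / R0) / BETA    # beta-parameter equation
    return 1 / inv_t - 273.15

def cooling_needed(raw, limit_c=70.0):
    """Signal the cooling system when the MOSFET heatsink exceeds the limit."""
    return adc_to_celsius(raw) > limit_c
```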





Analysis
The team decided to take on as many systems as possible. To do this, students were tasked with different elements in the system to research. Once adequate information was gathered, designs were conceptualized and reviewed to ensure the safety of students and components. Multiple subsystems were tested successfully, and simulations of the traction inverter circuit were developed.

Future Works
The current proposal for this project is for future students to build on the foundation set by this year’s team and to carry on development towards a self-driving electric vehicle. This includes the continued maturing of the traction inverter system and its integration into the mechanical systems of an automobile, as well as the development of self-driving software.






Laser Array



Team Leader(s)
Jillian Cantieni

Team Member(s)
Mohammed Alahmed, Omar Alsagheer, Zyad Alzarnougi, Jillian Cantieni, Laura Christie, Thomas Dunham, Tariq Fievre, Daniel Leles, Samuel Paulson

Faculty Advisor
Dr. Ed Caraway




Project Summary
The purpose of the Laser Array project was to develop a fiber-optic system with an input laser array and an output of a single laser beam of higher power than any one of the individual input lasers. The project was a continuation of one started before the Fall 2022-Spring 2023 period and is expected to be continued after it. Some of the major design challenges during the period were that the laser diodes generated large amounts of heat and that the Class 4 laser was dangerous to the naked eye due to specular and diffuse reflections. These issues were addressed by using a Peltier cooler and enforcing safety protocols, including mandatory safety glasses rated for blue lasers. The laser diode was proven capable of emitting a blue laser beam, and its I-V characteristics were found empirically. Testing confirmed that the laser diode did not generate more heat than the Peltier cooler could handle, as its temperature was maintained around 71 °F. Finally, a Raspberry Pi was configured to take a picture of the laser on the detector and process the image to measure the beam's radius at the detector. Additional research was performed on how the project could be scaled up to use more input lasers and how the input lasers would theoretically be coupled into fiber-optic cables and optically combined to form a single output laser. Overall, during the Fall 2022-Spring 2023 period, the team focused on mounting the laser diode, ensuring that the Peltier cooler was operational, testing the I-V characteristics, and processing the image of the spot size of one input laser.


Project Objective
The objective of the Laser Array team is to design a blue-light laser using diodes, GRIN lenses, and etched multimode optical fibers to focus enough power into a combined focal point to machine copper. First, research the I-V characteristics of the diodes and the Peltier cooler to ensure all components operate within reasonable limits. Then, once the machined GRIN lens and diode mounts are completed, wire everything together cohesively to achieve the objective. A Raspberry Pi is also used to perform image processing.

Manufacturing Design Methods
A Peltier cooler attached to a PI controller was used to regulate the temperature of the laser diodes, which were incorporated into a machined-copper diode mount. A GRIN lens mount was also machined in the Florida Tech machine shop to hold the GRIN lens in front of the laser diodes so the beams could be focused into fiber-optic cables for a uniform laser output. The output was captured photographically using a Raspberry Pi camera, and the image was then processed in Python for further analysis.
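As a rough illustration of that image-processing step, here is a minimal sketch using OpenCV to estimate the spot radius from a grayscale capture; the threshold fraction and the mm-per-pixel calibration are assumptions, not the team's measured values.

```python
import cv2

img = cv2.imread("beam.png", cv2.IMREAD_GRAYSCALE)

# Threshold at a fraction of the peak intensity to isolate the bright spot.
_, mask = cv2.threshold(img, int(img.max() * 0.5), 255, cv2.THRESH_BINARY)

# Fit the smallest enclosing circle around the largest bright contour.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
spot = max(contours, key=cv2.contourArea)
(cx, cy), radius_px = cv2.minEnclosingCircle(spot)

# A known scale (mm per pixel) from the camera geometry converts to beam size.
MM_PER_PX = 0.05   # assumed calibration value
print(f"Beam radius: {radius_px:.1f} px ({radius_px * MM_PER_PX:.2f} mm)")
```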

Specification
All data sheets for the laser diodes and Peltier cooler were gathered to ensure the equipment operates within nominal (and safe) limits. Understanding the I-V characteristics first provided a better understanding of the equipment at hand and how to safely operate each component in the system, as the project is considered a Class 4 laser.

Analysis
This semester the Laser Array team performed multiple analyses regarding diode characteristics, power output, image processing, and thermodynamics. At the beginning of the semester, the team obtained the actual diode to be used in the final setup for the Senior Design Showcase. Voltage and current characteristics were obtained using a multimeter, providing important information for the circuit design used in the final setup. Once the voltage and current characteristics were obtained, a new power meter was purchased and used to measure the output power of the diode to see how much power was being dissipated as heat. To ensure the diode was providing enough output power, an input voltage and current were supplied that the team anticipated would make the diode operate at a high temperature. Thermodynamic testing was then applied, using an infrared thermometer to measure the diode's temperature in real time as it ran continuously; this ensured the cooler settings were set properly to keep the diode at room temperature. Finally, a Raspberry Pi with a camera was used to capture the visual output of the diode and obtain measurements of the output rings it produces.

Future Works
As this Spring 2023 semester concludes, the existing project/materials/research will be handed off to next year's Laser Array Team for further continuation toward project completion.






Man Machine Interface



Team Leader(s)
Rauly Baggett

Team Member(s)
Talon Holton, Giulio Martini, Nathaniel Hagen, Ahmed Alqasir, Brenden Davis, Ziqi Yan, Sizhe Rong, Marisa Guerra, Caroline McTigue, Tanner Chamness

Faculty Advisor
Dr. Ed L Caraway




Project Summary
Man Machine Interface attempts to bridge the gap between EMG muscle-signal collection and interpretation on one hand and portability and ease of use on the other. This is done by converting a stationary EMG signal collection system into a portable, battery-powered, wirelessly transmitting all-in-one system. The goal of the MMI project is to deliver an easy-to-use, UI-enabled system for the consumer. The final product lets the user slip on a sleeve and turn it on, with all of the pairing, neural network processing, and data display handled automatically on the back end. This will allow users of any background to conduct research, collect data, or use the system in their personal lives for physical enhancement or as a disability aid.


Project Objective
OBJ-01. The team members shall each research the information necessary for their respective roles in developing this product. This includes research on EMG signal processing, instrumentation and differentiating op-amps, materials and fabrics, biological muscle locations, machine learning and neural networks, and various hardware implementation techniques. Rationale: This research will enable team members to decide on an efficient course of action for their respective duties.

OBJ-02. The team shall use the information learned to develop an all-in-one module that collects data using the Gravity analog EMG muscle sensor board in conjunction with a Raspberry Pi Pico, then processes the data before sending it over TCP to a receiving motorized robotic arm assembly. The Raspberry Pi Pico, equipped with WiFi and Bluetooth, will do all of the signal noise filtering, data processing, and data transmission on board. Rationale: This is the core of the project, and its completion will allow remote movement of robotic arm assemblies purely from the data collected from the user's muscle signal.

OBJ-03. The team shall develop a graphical user interface (GUI) that provides the user with easy pairing to the data module and displays live data as it is collected from the muscle and processed on the Raspberry Pi Pico. The GUI eases pairing by showing the user a list of nearby, powered-on data modules; the user selects the module they would like to observe, and pairing is done automatically.

OBJ-04. The team shall deliver a finished EMG data collection/processing/transmitting module with an included GUI application that encompasses all of the needs and goals of this project. Rationale: With this final product delivered, the MMI team will have successfully created an alternative solution for EMG analysis that can be used in many robotics and simulation applications.

Manufacturing Design Methods
A variation of the engineering method, which involved: learning the existing research from the previous group, conceptualizing our approach to the problem, planning and assigning team roles while weighing team members' strengths and weaknesses, designing the respective solutions, developing them, testing them and making alterations to solve any issues, and launching the final product.

Specification
Collect EMG sensor data and process the variable voltage signal in a neural network. The host computer handling the processing then wirelessly transmits the interpreted, smoothed data to a receiver on a robotic arm to induce movement.
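As a rough sketch of the smooth-and-transmit step, the following assumes EMG samples are already digitized and uses UDP (as mentioned in the Analysis below); the receiver address, window size, and packet format are assumptions.

```python
import json
import socket
from collections import deque

ARM_ADDR = ("192.168.1.50", 9999)   # hypothetical receiver on the robot arm
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

window = deque(maxlen=16)           # moving-average window over raw samples

def on_sample(raw_mv):
    """Smooth one raw EMG sample and forward the result over UDP."""
    window.append(raw_mv)
    smoothed = sum(window) / len(window)
    # In the full system, the neural network's interpretation would be sent;
    # here we just forward the smoothed amplitude as a stand-in.
    packet = json.dumps({"emg": smoothed}).encode()
    sock.sendto(packet, ARM_ADDR)
```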

Analysis
Upon completion of our project, we looked back on the many pitfalls we ran into throughout the development process; hindsight is 20/20. We learned a lot about working with a diverse group of individuals and got further than we expected to. We have created a product that can collect and wirelessly transmit accurate, predicted muscle-signal data over a UDP connection to any one- or two-dimensional robotic system. This product is easy to use from a consumer standpoint, and given more funding and time, we believe we would have a market-ready product within one to two years.

Future Works
Future work includes finding better and more efficient locations for the sensors on the arm to collect more accurate and specific data, as well as building on the simple interpretation model that currently exists, with hopes of expanding to multiple axes of rotation for more complex robotics in the future (e.g., prosthetics for amputees, underwater welding).






Microgravity Simulator: Random Positioning Machinery



Team Leader(s)
Ryoku Yamaguchi

Team Member(s)
Vivienne Nipar, David Handy

Faculty Advisor
Dr. Andrew G. Palmer

Secondary Faculty Advisor
Dr. Edward L. Caraway



Project Summary
A Random Positioning Machine (RPM) is a device that rotates along the x and y axes to minimize the effects of gravity, simulating weightlessness in a three-dimensional pattern. RPMs can randomize speeds between the two axes, allowing customization of gravity scaling and distribution. Gravity was identified as a major factor in plant growth and other biological systems by Julius von Sachs (late 1800s), the originator of the clinostat, or random positioning machine. The constant movement confuses gravity-sensing mechanisms in bacteria and plants, inducing responses similar to those seen in spaceflight microgravity.



Manufacturing Design Methods
The first iteration's purpose was to test microgravity on bacteria; it is a basic version of the clinostat. A single stepper driver is used and wired to the motors in parallel, as microgravity is simulated at a low speed, consistent on each axis. For the second iteration, smoother rotation and a more compact electrical system were implemented. Parts were modeled in CAD and then 3D printed. Ball bearings were added for smoother rotation and to support more weight on the side opposite the motor. Springs were also added to secure the petri dishes/plates while in motion. Because plant growth testing is the primary focus of this iteration, a structure to hold the control group was also built so that the LEDs used on the plants are consistent for both groups; the control requires a similar structure with the same light pattern affecting the test subjects. With the bacteria-focused microgravity simulator, a control group simply needs to be placed to the side. NEMA 17 motors with 400 steps per revolution were used instead of the 200-step motors from the first iteration, allowing smoother rotation. A CNC board was used to decrease the space required and make the electronics more compact. An LED strip provides light for the plants, and switches turn the motors and/or lights on and off.
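As a rough illustration of randomized two-axis stepping, here is a minimal sketch assuming Raspberry Pi GPIO step-pulse control of the stepper drivers; the pin assignments, step counts, and speed range are assumptions, not the team's actual firmware.

```python
import random
import time
import RPi.GPIO as GPIO

STEP_PINS = {"x": 17, "y": 27}   # hypothetical BCM pin assignments

GPIO.setmode(GPIO.BCM)
for pin in STEP_PINS.values():
    GPIO.setup(pin, GPIO.OUT)

def step(axis, steps, delay_s):
    """Pulse one axis's stepper driver a fixed number of steps."""
    pin = STEP_PINS[axis]
    for _ in range(steps):
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(delay_s)
        GPIO.output(pin, GPIO.LOW)
        time.sleep(delay_s)

try:
    while True:
        # Re-randomize each axis's speed on every pass so no constant
        # gravity vector emerges over time.
        for axis in ("x", "y"):
            step(axis, steps=200, delay_s=random.uniform(0.002, 0.01))
except KeyboardInterrupt:
    GPIO.cleanup()
```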



Future Works
Design different arms and plate holders to support more plate types and other styles of equipment. Also, implement a touch-screen interface to control settings such as speed, light colors, etc.

