Mission

The mission of the Department of Computer Engineering and Sciences is to prepare computing, engineering, and systems students for success and leadership in the conception, design, management, implementation, and operation of solutions to complex engineering problems, and to expand knowledge and understanding of computing and engineering through research, scholarship, and service.

Electrical and Computer Engineering

Laser Array



Team Leader(s)
Javier Kameka

Team Member(s)
Javier Kameka, Hong Vu Vi, Brody Morgan-Lewis, Dominic Scarpignato, Ignacio Serrato, Marco Gabriel Del Arca Argueta, Yiqing Dong

Faculty Advisor
Dr. Lee Caraway




Project Summary
The main goal of the project is to focus the light emitted from ten 5 W laser diodes using multimode optical fibers and GRIN lenses, while maintaining and controlling the overall amount of heat present through the use of a Peltier cooler. The focused laser output is directed into a machined copper brick, from which thermal data is collected.


Project Objective
The main objective of this project is to upgrade an existing cooler to sustain ten five-watt blue laser diodes and design a lens and coupler holder so the diodes combine into one beam.

Manufacturing Design Methods
- The team utilized 3D modeling software and a CNC machine to create the copper mount and a brass back plate for all ten laser diodes.
- The copper mount sits on top of a Peltier cooler controlled through a current feedback loop to maintain safe operating temperatures for all ten diodes.
- Each diode's aperture requires a lens to focus the beam. Using Autodesk Fusion 360, a holder was designed for the 3.2 x 3.2 mm lens and the Thorlabs coupler.
- The current feedback control loop also ensures each laser diode stays under its maximum operating current.

Specification
- The current control loop for the laser diodes is a modified version of Texas Instruments' active voltage-to-current converter design, using an OPA2991 op-amp and a Darlington transistor pair.
- The focal length of the beam after passing through the lens was determined by using an optical power meter and shifting the lens until a maximum power output was recorded.
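The idealized behavior of such an active voltage-to-current converter can be sketched as follows: the op-amp drives the Darlington pair until the voltage across a sense resistor matches the control voltage, so the regulated current is simply the control voltage divided by the sense resistance. The component values below are illustrative assumptions, not the team's actual design:

```python
# Idealized transfer function of an active voltage-to-current converter:
# the op-amp drives the Darlington pair until the voltage across the
# sense resistor equals the control voltage, so I_load ≈ V_ctrl / R_sense.

def laser_drive_current(v_ctrl: float, r_sense: float) -> float:
    """Return the regulated diode current (A) for a control voltage (V)."""
    return v_ctrl / r_sense

# Illustrative (assumed) values: a 0.5-ohm sense resistor and a 1.5 V
# control voltage give a 3 A drive current.
if __name__ == "__main__":
    print(laser_drive_current(1.5, 0.5))  # 3.0
```

Because the current is set by the sense resistor rather than the load, this topology keeps each diode under its current limit regardless of the diode's forward-voltage variation.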


Future Works
The project can be further improved in two areas: refining the current control loop and improving the design of the laser diode holder.

Other Information
At the current time there is no other information that can be provided.





COBOT



Team Leader(s)
Zach Champion

Team Member(s)
Zach Champion, Logan Boehm, Matthew McGuckian, Saad Aloraymah, Josh Rodriguez, Meshari Almutairi, Liam Progulske, Zixin Zhou, Qingying Hu

Faculty Advisor
Lee Caraway




Project Summary
This project aims to develop a system that allows a collaborative robot (COBOT) to play chess with humans seamlessly and safely. The system integrates various components, including the COBOT (a modified AR3 robot arm from Annin Robotics), the Robot Operating System (ROS2), computer vision (using an Xbox Kinect and integrated ArduCam depth cameras), a custom chess clock, and a web-based dashboard. The COBOT is designed to interact with humans during a game of chess while considering human factors to ensure predictable and safe behavior.


Project Objective
The primary objective of this project is to demonstrate how a robotic system can be designed with human factors in mind, enabling seamless and safe interaction between humans and robots. By integrating various technologies, such as computer vision, familiar interfaces (e.g. a chess clock), and motion planning, the project aims to create a robotic system that can play chess with humans while adhering to established rules and conventions.

Manufacturing Design Methods
We designed and 3D printed numerous components, including:
- A custom 3-fingered gripper that houses a depth camera and uses helical gears driven by a central servo motor to move the fingers.
- A chess clock that communicates with the computer over serial.
- Mounts and enclosures for magnetic rotary encoders.



Future Works
We identify two main areas for future work:
- Hardware issues: Backlash in the robot's joints makes precise motion difficult. While workarounds were implemented, a hardware fix is necessary to achieve true consistency.
- Second COBOT: The project originally planned for two COBOTs to play against each other, but the second COBOT had to be cannibalized to fix issues with the main one. Rebuilding the second COBOT is necessary to enable COBOT-vs-COBOT games.






Electric Vehicle



Team Leader(s)
Sean Fabre

Team Member(s)
Alyssa Campos, Shirui Da, Stephen Merritt, Andres Sperandio, Justin Wirzman, and Zi Lin Zhang

Faculty Advisor
Lee Caraway




Project Summary
Design and implement a three-phase traction inverter and drive a CNC servo motor for an electric vehicle. Account for battery protection and heat dissipation throughout the ancillary systems. Ensure an effective method of connecting the Texas Instruments evaluation module to the traction inverter.
- MOSFETs: Selection of a Nexperia N-channel MOSFET for optimized load switching, hot-swap applications, and battery protection. The design accounts for heat dissipation with the drain on the backside and the ability to withstand a high current load.
- Microcontroller: Power a Texas Instruments evaluation module to support the three-phase inverter and motor.
- CNC Servo Motor: Enable encoders for the FANUC AC servo motor. Proper wire gauges and soldering ensure compatibility with the system.






Future Works
Complete production of the full traction inverter and attach it to the Texas Instruments setup. Secure servo motor to the system and initiate rotation. Collaborate with Mechanical Engineering department for the necessary mechanics of the electric vehicle.






Computer Science and Software Engineering

Ambient AI



Team Leader(s)
Patrick Menendez-Rosado

Team Member(s)
Patrick Menendez-Rosado, Nicholas Chen, Matthew Fitzgerald

Faculty Advisor
Dr. Thomas Eskridge




Project Summary
Ambient AI is a system that uses real-time facial detection software to produce engagement scores for uploaded materials, and it achieves this without violating privacy. Because it uses facial detection rather than facial recognition, scores are calculated from overall engagement rather than by recognizing individuals and building profiles. This non-invasive way of building engagement-metric datasets is more respectful of privacy while still gathering valuable engagement data for the uploaded materials. Because the system is designed to be administered locally rather than as a service, companies can maintain their own systems and datasets, a significant advantage for in-house operations that removes any intermediaries.


Project Objective
To provide a system that delivers actionable engagement metrics to system administrators without violating privacy.


Specification
System administrators have access to file management services as well as engagement metrics, and target audiences are shown uploaded images as managed by system administrators.








Panther Lounge Database System



Team Leader(s)
David Walston

Team Member(s)
Mia Dattis, Joseph Robson

Faculty Advisor
Fitzroy D. Nembhard, Dept. of Computer Science, Florida Institute of Technology




Project Summary
The Panther Lounge is a student-run library located on campus within the Evans Hall Student Center. The library is co-run by two student organizations, FITSSFF and the Anime Club, which collectively hold over $250,000 worth of items collected since the 1960s. Currently, the library's catalog is tracked through spreadsheet programs such as Google Sheets, and the checkout and return system is handled through online forms. However, through multiple executive board changes, downsizings, and moves to new office spaces, the catalog has become extremely disorganized and complex, with items scattered across multiple spreadsheets in the clubs' cloud storage. The library has recently moved to a larger space, and the officers of both clubs are attempting to reconstruct the catalog to make it easier to use. This project was created to assist them by building a centralized database system that can store any item and track checked-out items.


Project Objective
The goal of the Panther Lounge Database System is to create a secure database, along with an integrated web application to allow the students running the Panther Lounge to easily catalog and track the library's items. The application will also provide members with a streamlined approach to search for and check out items within the library.

Manufacturing Design Methods
The database was designed using MySQL. The backend is built with Spring Boot, a Java application framework, which allows the use of Java's standard database API, JDBC, to connect the backend to the database. The front end was designed using HTML, CSS, and JavaScript, and connected to the backend using Thymeleaf.

Specification
The database was designed to hold the following types of items:
- Novels
- Comic books
- Movies/DVDs
- Board games
- Video games
- Trading cards
- Miniatures
- Consoles
- Hardware
In addition, the database needed to hold users' login and contact information so that they can check out items and be notified when an item they are holding is about to be, or is currently, past due, as well as the list of currently checked-out items and who is holding them.
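A minimal sketch of such a schema is shown below, using SQLite in place of MySQL for brevity; the table and column names are illustrative assumptions, not the project's actual schema:

```python
import sqlite3

# Illustrative schema: one table for items of all supported categories,
# one for users, and one for active checkouts. All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id       INTEGER PRIMARY KEY,
    email         TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL
);
CREATE TABLE items (
    item_id  INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    category TEXT NOT NULL CHECK (category IN
        ('novel','comic','movie','board_game','video_game',
         'trading_card','miniature','console','hardware'))
);
CREATE TABLE checkouts (
    item_id  INTEGER REFERENCES items(item_id),
    user_id  INTEGER REFERENCES users(user_id),
    due_date TEXT NOT NULL,
    PRIMARY KEY (item_id)  -- an item can be held by only one user at a time
);
""")

# A checkout report joins the three tables: who holds what, and when it's due.
conn.execute("INSERT INTO users VALUES (1, 'student@fit.edu', 'x')")
conn.execute("INSERT INTO items VALUES (1, 'Dune', 'novel')")
conn.execute("INSERT INTO checkouts VALUES (1, 1, '2024-05-01')")
row = conn.execute("""
    SELECT u.email, i.title, c.due_date
    FROM checkouts c JOIN users u USING (user_id)
                     JOIN items i USING (item_id)
""").fetchone()
print(row)  # ('student@fit.edu', 'Dune', '2024-05-01')
```

The primary key on `checkouts.item_id` enforces single-holder checkout at the database level, and the stored `due_date` supports the past-due notifications described above.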








TutorFIT



Team Leader(s)
Sidney Nedd

Team Member(s)
Samaher Damanhori, Eleanor Barry

Faculty Advisor
Dr. Khaled Slhoub




Project Summary
The TutorFIT project at the Florida Institute of Technology is a forward-thinking initiative designed to enhance the tutoring experience by simplifying the connection between students and tutors, particularly for advanced courses. The mobile application addresses existing gaps in the tutoring system with features such as simplified registration and flexible scheduling. By streamlining the process of finding and scheduling tutoring sessions, TutorFIT makes educational support more accessible and efficient for both students and tutors, thus enriching the educational environment at FIT. The development approach for TutorFIT involved several strategic solution methods. These included enhancing server-side automation using Node.js, integrating with key external services like Microsoft App Center, Firebase, and OneSignal to manage mobile app functionality and user engagement effectively, and utilizing Google Sheets for real-time data management. This multifaceted strategy ensured that the app was not only functional but also user-friendly, meeting high standards of performance and reliability.


Project Objective
The primary objective of our project is to streamline the process of connecting students with accessible tutoring resources at the Florida Institute of Technology (FIT), making it easier for students to find and connect with tutors. We aim to increase the flexibility and number of tutoring options available, especially for the challenging junior and senior level courses where there is a noticeable deficit in available tutors. By encouraging academically successful students to offer tutoring services, we intend to expand the pool of tutors and enhance coverage for advanced courses, thereby improving the overall learning experience. Our approach involves creating a dynamic and user-friendly platform that efficiently matches students with tutors, tailored to meet the diverse and demanding educational needs of the university. This comprehensive solution is designed to address the critical need for accessible tutoring resources, ensuring that the educational outcomes for all students at FIT are enhanced.




Future Works
We're enhancing our app with pivotal features to improve the tutoring experience:
- In-App Messaging System: Recognizing the pivotal role of effective communication in tutoring success, our roadmap includes the introduction of an advanced in-app messaging system. This system will leverage cutting-edge real-time messaging SDKs to ensure seamless interaction between students and tutors. With the flexibility to choose between in-app chat and email communications, our platform aims to cater to the preferences of all users, ensuring that every interaction is as convenient and effective as possible.
- Reviews and Ratings: A new system will allow students to rate tutors and provide feedback on their sessions, promoting transparency and informed choices within our community. This feature not only aids students in selecting tutors but also fosters a culture of excellence and constructive feedback among tutors.
- Multi-Language Support: Recognizing the diversity of our users, we plan to introduce multi-language support, making our app accessible in users' preferred languages, thereby enhancing inclusivity and user-friendliness.






Cognitive-Driven UAS



Team Leader(s)
Justin Swanson

Team Member(s)
Justin Swanson, Christopher Norton

Faculty Advisor
Dr. Siddhartha Bhattacharyya

Secondary Faculty Advisor
Parth Ganeriwala



Project Summary
This project was conceived to mitigate the risks of autonomous flight operations in densely populated environments. Specifically, protocols need to be established that allow an autonomous agent to take control of the flight and safely route the aircraft to its destination should the connection between the aircraft and the controlling ground station be lost. Our work further enhanced the project by reworking the pathfinding code to more accurately measure the risks of operating in the environment, allowing a modified graph-based pathfinding approach to be applied to find the safest and shortest path to the destination. There was no shortage of challenges to overcome in this project, from finding the best solutions for spatial analysis to understanding the terminology and methods used in the GIS space. Ultimately, we deliver a system capable of safe navigation to the destination while mitigating potential harm to those on the ground.


Project Objective
The objective of this project is to provide enhanced safety when operating a remotely-piloted AW609 in highly-populated areas as it navigates to its destination upon connection loss with the ground station. This is done by further developing advanced autonomous navigation to more accurately assess operational risk.

Manufacturing Design Methods
The pathfinding system was rewritten to accommodate the integration of a population database and weather API calls. The information from these data sources is used to calculate the risk, measured in expected casualties per flight hour, of traveling through areas of the search space. To calculate offline risk, the component of the risk calculation that does not change during operation of the AW609, a data pipeline was constructed that takes a sparse population database, creates a tessellated grid suitable for pathfinding, and extracts values from the population database onto the grid.
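The graph-based approach described above can be sketched with Dijkstra's algorithm over a small grid whose cell weights stand in for per-cell risk; the grid values and the cost model (each step pays the risk of the cell entered) are illustrative assumptions, not the project's actual risk calculation:

```python
import heapq

def safest_path_cost(risk, start, goal):
    """Dijkstra over a 2D grid; each step costs the risk of the entered cell."""
    rows, cols = len(risk), len(risk[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Illustrative risk grid: the high-risk center cell (9.0) is routed around,
# so the cheapest path pays four low-risk cells instead of cutting through.
grid = [[1.0, 1.0, 1.0],
        [1.0, 9.0, 1.0],
        [1.0, 1.0, 1.0]]
print(safest_path_cost(grid, (0, 0), (2, 2)))  # 4.0
```

Weighting edges by risk rather than distance is what lets the same shortest-path machinery trade path length against expected casualties.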



Future Works
Future works for the project include implementation of various pathfinding strategies to determine the most appropriate ones for this project. Additional work in the future could instead focus on implementing reinforcement learning to choose which factors are most important for a safe flight.






Heap Heap Hooray



Team Leader(s)
Trevor Schiff

Team Member(s)
Trevor Schiff, Tyler Gutowski

Faculty Advisor
Ryan Stansifer




Project Summary
The MiniJava compiler was a project for the Compiler Theory course that followed Andrew Appel's Modern Compiler Implementation in Java. This compiler, which we've named "MJC" ("MiniJava Compiler"), compiles MiniJava source code and produces assembly code for the SPARC architecture. MJC includes most of the common phases of code compilation: lexing/parsing, type-checking (semantics), translation (intermediate representation), assembly code generation, register allocation, and lastly the output step. The assembly output can then be linked alongside our compiler runtime (CRT for short) to produce an executable file.

For the purposes of the Compiler Theory course, MiniJava is a sufficient language for a simple compiler. It is a subset of the Java programming language, and while it does contain powerful features such as class inheritance, it lacks other Java features such as the null keyword, functions can only return integer values, objects cannot escape functions, etc.

For research purposes, MiniJava does not provide a great environment. As with Java, there is no way to manually free memory. However, MiniJava does not contain a garbage collector out-of-the-box as Java does, so all memory allocations are leaked and will never be freed.

As part of this project, we have implemented garbage collection in the CRT using the C programming language. We chose to implement a few unique methods of garbage collection to allow us to test configurations and determine which methods perform best for MiniJava source code, based on which data structures and algorithms are exhibited by the source code. We hope that our findings here can be applied to other programming languages. Our garbage collector supports: reference counting, mark-and-sweep, copying, and generational garbage collection algorithms.


Project Objective
Heap Heap Hooray's goal was to demystify garbage collection in order to make it easier for the average user to understand.

Manufacturing Design Methods
We began by implementing a garbage collection algorithm and running several test cases with it. Originally we were attempting to pass the test cases, but we found it more entertaining to try to make them fail. When we found a fault in an algorithm, we would research, then implement something that fixed the previous problem. When reference counting had the drawback of failing to detect cyclic references, we implemented mark-and-sweep. When mark-and-sweep failed to work with a heavily fragmented heap, we implemented copying. When copying's collection cycles took too long, we implemented generational collection.
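The mark-and-sweep step that fixed reference counting's cycle problem can be sketched in a few lines; this toy heap is an illustration of the algorithm, not the team's C implementation:

```python
# Toy mark-and-sweep: objects are dicts with a list of references.
# Anything unreachable from the root set is swept, including cycles
# that reference counting would never free.

def mark(obj, marked):
    """Depth-first mark of every object reachable from obj."""
    if id(obj) in marked:
        return
    marked.add(id(obj))
    for ref in obj["refs"]:
        mark(ref, marked)

def sweep(heap, roots):
    """Return the surviving objects after a full mark-and-sweep cycle."""
    marked = set()
    for root in roots:
        mark(root, marked)
    return [obj for obj in heap if id(obj) in marked]

# Build a cyclic pair (a <-> b) that is unreachable, plus one live object.
a = {"name": "a", "refs": []}
b = {"name": "b", "refs": [a]}
a["refs"].append(b)                  # cycle: refcounts never reach zero
live = {"name": "live", "refs": []}
heap = [a, b, live]
survivors = sweep(heap, roots=[live])
print([o["name"] for o in survivors])  # ['live']
```

Because reachability is computed from the roots rather than from per-object counts, the unreachable cycle is collected even though each of its members is still referenced by the other.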

Specification
Our software is built on the MiniJava Compiler, which was written for the SPARC architecture. QEMU is used as our SPARC emulator, and Jabberwocky is used as the containerization software to keep all necessary tools together.

Analysis
Given that the MJC garbage collector was primarily designed for educational purposes, it's most aptly compared to itself. We explored the merits and drawbacks of each algorithm, highlighting how specific ones address shortcomings in others.

Future Works
As this project was built on top of the MiniJava compiler, it would be nice to add more functionality to the compiler. Currently, the compiler does not support two-dimensional arrays or null values, and complex algorithms are hard to implement without either of these.






VehID



Team Leader(s)
Spencer Hirsch

Team Member(s)
Remington Greko, Spencer Hirsch, Thomas Johnson, Alexis Nagle

Faculty Advisor
Dr. Marius Silaghi




Project Summary
VehID is a vehicle recognition software driven by the idea of linking our preexisting camera network along our roadways to assist in AMBER Alerts and other vehicle-related crimes. Our software accepts a pre-recorded video, identifies all vehicles in each frame, extracts each vehicle, and processes it based on various characteristics. Once a vehicle is processed, all identifying information is stored in our database, which can be accessed through our web application.


Project Objective
The objective of our project is to identify vehicles in a video feed to aid in widespread vehicle searches for law enforcement purposes. The current system heavily relies on human interaction or license plate readers. Our system adds innovation to the previous design by not only identifying license plate information but also vehicle identification information.


Specification
Our system consists of six individual neural networks, a database, and a web application. Firstly, a video can be captured and read utilizing the Python library OpenCV to process the video by frame. Each frame is then passed through our variety of neural networks of differing architectures, each responsible for extracting specific information about the vehicle, such as color, make, body type, and license plate information. Once a frame has been processed and the data is extracted, it is written into a JSON file that is read into our database. The database can be interacted with through our web application, where a user can search for vehicles or see all vehicle entries that were identified by our neural network subsystem.
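The per-frame flow above can be sketched as follows. The stub functions stand in for the detection and classification networks, and the record field names are illustrative assumptions; a real run would read frames with OpenCV's `cv2.VideoCapture` rather than passing `None`:

```python
import json

# Stub classifiers standing in for the trained neural networks; each
# would normally take a cropped vehicle image and return its prediction.
def detect_vehicles(frame):     return [{"bbox": (10, 20, 90, 80)}]
def classify_color(crop):       return "red"
def classify_make(crop):        return "Toyota"
def classify_body_type(crop):   return "sedan"
def read_license_plate(crop):   return "ABC123"

def process_frame(frame, frame_no):
    """Run every detected vehicle through each classifier stub."""
    records = []
    for det in detect_vehicles(frame):
        crop = det["bbox"]  # a real system would crop the frame here
        records.append({
            "frame": frame_no,
            "color": classify_color(crop),
            "make": classify_make(crop),
            "body_type": classify_body_type(crop),
            "plate": read_license_plate(crop),
        })
    return records

# Serialize the extracted data as JSON, ready to be read into the database.
out = json.dumps(process_frame(frame=None, frame_no=0))
print(out)
```

Keeping each attribute behind its own function mirrors the described architecture of independent networks, so any one model can be retrained or swapped without touching the rest of the pipeline.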


Future Works
Potential future developments for this project would be live-feed integration and retraining some of our models. In addition to these changes, revisiting our original plan of implementing vehicle model detection would be something that would significantly improve the system.






Collaborative Control of Autonomous Cars



Team Leader(s)
Brennan Pike

Team Member(s)
Brennan Pike, Isaya Danice, John Vitali

Faculty Advisor
Thomas Eskridge




Project Summary
The past decade has seen staggering progress in the development of autonomous cars. The technology is yet to be perfected, however, and human intervention is still necessary in many situations where the autonomous agent is unable to decide what it should do. However, the systems for handing off control to a human are all but nonexistent, making these handoffs rough and requiring more of the user's attention, rather than less. Collaborative Control of Autonomous Cars seeks to improve the systems for passing control from the autonomous agent to the user, then back again. This is useful in situations where the agent cannot make a choice, but also in situations where the user wishes to make a different choice from the agent. In these cases, control can be briefly handed to the user before being returned to the agent without causing disturbances on the road or stress for the user.












Project Rotten Apples



Team Leader(s)
Angel Star

Team Member(s)
Hussam Alghamdi, Connor Pommer, Angel Star

Faculty Advisor
Dr. Marius Silaghi




Project Summary
The "Rotten Apple Analyzer" is an innovative AI-based software developed to enhance the detection and classification of rotten apples. Utilizing the advanced Segment Anything Model (SAM) developed by META Research, this project aims to automate the assessment process of apple quality in supermarkets and farms. By leveraging AI, the system provides a rapid and accurate evaluation of the fruit, distinguishing healthy apples from those that are bruised or rotten.


Project Objective
Our objective is to provide a reliable, efficient, and automated solution for apple quality assessment. By implementing AI and machine learning, the Rotten Apple Analyzer aims to reduce dependency on manual labor, enhance the accuracy of apple quality determination, and minimize food waste by accurately categorizing produce.

Manufacturing Design Methods
The project employs a software-centric design approach, using Python and a PyQt interface for the GUI. The system integrates the Segment Anything Model (SAM) for precise image analysis. Apple images are segmented into masks, which are then analyzed by the AI to rate the quality of each apple. The design process involved iterative testing and refinement to ensure the system's functionality across different stages of apple degradation.
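A per-mask rating of the kind described can be sketched as a simple ratio of "rotten-looking" pixels inside the mask; the threshold, labels, and boolean-pixel representation are illustrative assumptions, not the project's trained model:

```python
def rate_mask(pixels, mask, rotten_threshold=0.2):
    """Rate one segmented apple by the fraction of masked pixels flagged rotten.

    pixels: 2D grid of booleans (True = pixel looks rotten/brown).
    mask:   2D grid of booleans (True = pixel belongs to this apple).
    """
    masked = [p for row_p, row_m in zip(pixels, mask)
                for p, m in zip(row_p, row_m) if m]
    fraction = sum(masked) / len(masked)
    return ("bad" if fraction > rotten_threshold else "good", fraction)

# Illustrative 2x3 image: 2 of the 4 masked pixels look rotten (50%),
# which exceeds the assumed 20% threshold, so the apple is rated bad.
pixels = [[True,  False, True],
          [False, True,  False]]
mask   = [[True,  True,  False],
          [True,  True,  False]]
print(rate_mask(pixels, mask))  # ('bad', 0.5)
```

Restricting the count to the SAM mask is what makes the rating per-apple: background pixels and neighboring apples never dilute or inflate the fraction.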

Specification
Software Requirements:
- Python 3.11+
- PyQt5
- TensorFlow and PyTorch (latest compatible versions)

Hardware Requirements:
- Compatible with Windows 10+ and macOS
- Standard PC with 8 GB RAM or more
- Integrated camera preferred; compatible with standard external cameras

Performance Metrics:
- Image processing time: ~15 seconds per image, faster in batch processing
- Accuracy rate: ~90% for detecting rotten areas

Analysis
The system was trained and validated on a dataset of green apples monitored over three months to capture various stages of rot. Statistical analysis of the model's performance showed a significant reduction in error rate compared to traditional methods. The GUI provides users with visual and numerical feedback, including color distribution graphs and a categorical good/bad rating.

Future Works
Future enhancements will focus on expanding the model to include different types of apples and other fruits, improving the GUI for user interaction, and integrating more dynamic elements such as a multi-masking feature that allows for detailed per-mask analysis. Long-term goals include adapting the system for use in other agricultural quality control applications.






Code Visualization



Team Leader(s)
Curtice Gough

Team Member(s)
Curtice Gough, Joshua Hartzfeld, Catherine DiResta

Faculty Advisor
Ryan Stansifer




Project Summary
The "Code Visualization" project aims to address the challenge of debugging complex code by providing an interactive visualization tool. By integrating dynamic code analysis and user intervention features, the project seeks to enhance debugging efficiency and accuracy.


Project Objective
The primary objective of this project is to achieve code visualization at a medium/high level (as opposed to algorithm visualization). Code tracing is often a complex task. Visualizing structures and the movement of data will aid users with debugging tasks without requiring them to spend too much time tracing execution by hand.










TEC-V



Team Leader(s)
Michael Dowling

Team Member(s)
Michael Dowling, Zealand Brennan, Stephen Coster, Gabor Papp, Henry Hill

Faculty Advisor
Dr. Marius Silaghi, Dept. of Electrical Engineering and Computer Science, Florida Institute of Technology




Project Summary
The Topographic Exploration Cave Vehicle (TEC-V) project, spearheaded by a team at the Florida Institute of Technology, is an initiative in the realm of underwater exploration technology. Initially conceived as a thesis project two years ago and then expanded upon by the current project lead, TEC-V has undergone significant evolution through three design iterations. The core vessel was initially developed through a collaborative effort involving two members from the Ocean Engineering department and one from the Mechanical Engineering department. Upon the involvement of the Computer Science team, the project saw new technological integration, including the implementation of sonar systems. The Computer Science team focused on harnessing Python for data acquisition and employed JavaScript for dynamic 3D visualization, creating a tool for real-time interaction with underwater environments. This integration has allowed for the generation of interactive 3D models essential for both research and explorative missions in submerged cave systems. Ongoing enhancements to TEC-V focus on autonomous underwater navigation and mapping, emphasizing precision, efficiency, and user engagement.


Project Objective
The primary objective of the TEC-V project is to advance underwater exploration capabilities through the development of a Remotely Operated Vehicle (ROV) equipped with sonar technology and a user-friendly software interface. This system aims to facilitate three-dimensional mapping of submerged cave systems, enhancing both the understanding and accessibility of these challenging environments.

Manufacturing Design Methods
- Sonar Technology: Utilization of the Ping 360 sonar and Omniscan 450 FS sonar for environmental scanning and data accuracy.
- Software Development: Implementation of Python for backend data handling and JavaScript (using the three.js library) for frontend 3D data representation.
- Cloud Plot Webpage: A central feature of the project, this webpage facilitates interaction with the collected data by allowing users to visualize, manipulate, and analyze 3D models of the scanned environments. Features include rotational adjustments, multi-file integration, and enhanced data accuracy through algorithmic adjustments.
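Converting a planar sonar return (bearing, range) plus vehicle depth into the 3D points plotted by a tool like the Cloud Plot webpage can be sketched as below; the coordinate convention and sample sweep are illustrative assumptions, not the project's actual data format:

```python
import math

def sonar_to_point(bearing_deg, range_m, depth_m):
    """Convert a planar sonar return plus vehicle depth to an (x, y, z) point."""
    theta = math.radians(bearing_deg)
    return (range_m * math.cos(theta),   # x: range projected onto bearing
            range_m * math.sin(theta),   # y: range perpendicular component
            -depth_m)                    # z: downward is negative (assumed)

# Illustrative sweep: three returns taken at the same vehicle depth.
returns = [(0.0, 2.0), (90.0, 3.0), (180.0, 1.5)]
cloud = [sonar_to_point(b, r, depth_m=4.0) for b, r in returns]
for p in cloud:
    print(tuple(round(v, 3) for v in p))
```

Accumulating these points across many sweeps and depths yields the point cloud that the frontend (e.g. three.js) can render and rotate as a 3D model of the cave wall.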



Future Works
- Autonomous Navigation: Refining algorithms for improved AUV autonomy to enhance navigational capabilities in complex underwater environments.
- Sonar Integration: Expanding the compatibility of the Cloud Plot Webpage to support various sonar data formats and integrating more advanced data processing algorithms to ensure higher accuracy and reliability.
- User Interface Improvements: Continual enhancement of the Cloud Plot Webpage to improve user experience and data interaction, specifically focusing on ease of use, accessibility, and functionality across different devices and platforms.
- Comprehensive Testing: Conducting extensive field tests in various underwater environments to validate and refine the system's functionality and accuracy.

