HCR Lab Robotics Research
This blog post covers my time at the Human Centered Robotics (HCR) Lab at the Colorado School of Mines from February 2021 to September 2021.
Background
I started my undergraduate studies at the Colorado School of Mines in the Fall 2018 semester, majoring in Computer Science with a focus on Robotics & Intelligent Systems, and I graduated in Spring 2022.
I was lucky to find my passion early in life. During high school, I spent a good amount of time figuring out what I liked and what I could be good at. After some trial and error, I figured out that my passion was computer science, and it was during this time that I discovered an overwhelming love for building things through code.
At Mines, I got the opportunity to work in Mines' Human Centered Robotics (HCR) Lab under Dr. Hao Zhang. I first met Dr. Zhang in Spring 2020 through his class "Human Centered Robotics" (CSCI473), and after the chaos of COVID and classwork, I got to work in his lab in early Spring 2021.
Human Centered Robotics (CSCI473) Class
Mines’ Human Centered Robotics (CSCI473) was one of only a few classes from my college experience that had a profound impact on me. The class was taught by Dr. Hao Zhang. Our entire grade for the class was made up of just three projects, each of which presented a challenging problem that introduced core concepts of robotics. These projects consisted of:
- Learning Robot Operating System (ROS)
- Reinforcement Learning for Robot Wall Following
- Robot Understanding of Human Behaviors Using Skeleton-Based Representations
🧩 Learning Robot Operating System (ROS)
This was the first project we were assigned. The project consisted of three tasks:
- Setup Development Environment
- Understand Gazebo Simulator
- Write a ROS “Hello World”
For tasks 1 and 2, we just had to set up our development environment and follow an introduction to Gazebo tutorial. This included:
- Setting up ROS Melodic, which I ran on my 2011 HP laptop (it was good enough for the job)
- Installing and configuring ROS and Gazebo
- Going through gazebosim’s tutorial and e-manual’s tutorial.
Task 3, on the other hand, was a real challenge. The task was to use turtlesim and have the turtle draw the Mines’ “M” logo:
Though it sounded simple, this task was more difficult than it looked, and it eventually introduced me to the concepts of open-loop and closed-loop systems. You can learn more about this project and my solution on the ROS Move Turtle project page.
🧩 Reinforcement Learning for Robot Wall Following
This was the second project we were assigned, and it was one of the hardest projects I ever worked on in college. The project description was as follows:
In this project, students will design and implement reinforcement learning algorithms to teach an autonomous mobile robot to follow a wall and avoid running into obstacles. Students will be using the Gazebo simulation in ROS Melodic to simulate an omni-directional mobile robot named Triton, and using an environment map that is provided to you. Students will be using a laser range scanner on the robot to perform sensing and learning, where the robot is controlled using steering and velocity commands. Students are required to program this project using C++ or Python in ROS Melodic running on Ubuntu 18.04 LTS (i.e., the same development environment used in Project 1). Also, students are required to write a report following the format of standard IEEE robotics conferences using LATEX.
For the reinforcement learning algorithm, we were instructed to use Q-Learning. We also used the Stingray Gazebo simulation environment provided by the class. Stingray consisted of the Triton model and physics logic. We were also provided a maze for the robot to follow. All in all, the environment looked like this:
For the full project description, check out csci473-p2.pdf. I never published my solution to GitHub or the web because it was heavily flawed, and getting the code running in the right environment is quite difficult and annoying. However, I do have a demo video that I submitted to the class showing my solution. You can view it here:
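Although I never published the code, the heart of the approach was the standard tabular Q-learning update. Here's a minimal, illustrative sketch of that update rule; the state and action names, gains, and epsilon-greedy policy are placeholders, since the actual discretization of the laser scans and the reward function were part of the assignment:

```python
import random

# Hypothetical discretization: states are coarse "distance to wall" bins,
# actions are simple steering commands. These names are illustrative only.
STATES = ["too_close", "good_range", "too_far"]
ACTIONS = ["turn_left", "go_straight", "turn_right"]

ALPHA = 0.2    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

# Q-table initialized to zero
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```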
🧩 Robot Understanding of Human Behaviors Using Skeleton-Based Representations
For the third project, the project description was as follows:
In this project, students will implement several skeleton based representations (Deliverable 1) and use Support Vector Machines (SVMs) (Deliverable 2) to classify human behaviors using a public activity dataset collected from a Kinect V1 sensor. Additionally, students are required to write a report following the format of standard IEEE robotics conferences using LATEX in Deliverable 3.
This project was challenging but not as difficult as the second project. The main goal was to use Kinect V1 sensor data, from the MSR Daily Activity 3D Dataset, and Support Vector Machines to classify certain human actions/behaviors. You can learn more about this project and my solution on the Predict Human Actions Using LIBSVM project page.
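To give a flavor of the classification step (Deliverable 2), here's a minimal sketch using scikit-learn's SVC as a stand-in for LIBSVM, with random placeholder features; in the real project the features came from the skeleton-based representations built in Deliverable 1:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: in the real project, each row would be a skeleton-based
# feature vector extracted from a Kinect sequence, and each label an activity
# class from the MSR Daily Activity 3D dataset (16 activities).
X = np.random.rand(200, 60)        # 200 samples, 60-dim feature vectors (illustrative)
y = np.random.randint(0, 16, 200)  # 16 activity classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# RBF-kernel SVM; LIBSVM's C and gamma parameters map directly to these arguments
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```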
CSCI473 Conclusion
CSCI473 was one of the best classes, if not the best class, I took during my undergraduate studies at Mines. These projects taught me a lot and gave me a cool catalog of work to reflect on and refer to on my resume. It was also the first class where I felt like I was in my element: I was never a good test taker, but I excelled at completing projects. And it was through this class that I met Dr. Hao Zhang, who eventually helped me secure a position as a research assistant in Mines' Human-Centered Robotics (HCR) Lab.
Starting At The HCR Lab
After completing CSCI473, my CS field session in the summer of 2020, and my Fall 2020 semester, I decided to pursue robotics research. My experience in CSCI473 had been so positive that I wanted to do that research in the HCR Lab. Since I had met Dr. Zhang the year prior, I emailed him in January 2021 to ask about any opportunities the lab might have. Within about two weeks, Dr. Zhang expressed interest, presented me with research options, and offered me a role in the lab. I started working there in February 2021.
Introduction Video
Here’s my introduction video that I recorded a few months into my time in the HCR Lab. It was recorded in May 2021 and covers the research I would focus on in the HCR Lab during the summer of 2021:
My Project
Throughout my time in the HCR Lab, I mainly focused on the Triton project. The Triton is a mobile robot developed by the Human Centered Robotics Lab at the Colorado School of Mines: a triangular omni-wheel ground robot powered by an NVIDIA Jetson Nano.
The Triton, in a simple overview, consisted of the following parts:
- NVIDIA Jetson Nano
- Seeed Studio A205 carrier board
- Arduino Mega
- 64 GB Micro SD Card
- Custom 3D printed body
- 3 mecanum wheels
- 1 AR Battery
- Custom circuits for optimized power distribution and wiring
- Intel RealSense D435 camera
- Some LEDs
It was designed, built, and manufactured around 2018-2020 as a robot for educational purposes. By the time I joined, the Triton was pretty established, and the lab was considering making a new version of it. The main issue with the Triton, however, was its software. The Triton could move, charge, and function in a basic sense, but it did not do anything intelligent; it even lacked the ability to perform more advanced movements.
To start addressing this, the lab set up an area where we could keep track of the Tritons: a 2-meter by 2-meter space surrounded by eight Optitrack Flex infrared cameras mounted in a square arrangement about 6-7 feet above the floor.
Along with this area, each Triton had three reflective gray spheres attached to the top of its body.
With this setup, we had effectively built our own small-scale GPS: the Optitrack cameras tracked each Triton's triangular marker pattern and gave us its exact position, in meters, within the area of interest. This allowed us to apply a closed-loop system for much more accurate movement.
The Optitrack system provided position and orientation data at about 120Hz with sub-millimeter accuracy when properly calibrated. Each Triton’s three reflective markers formed a unique triangular pattern that the system could track as a rigid body. The coordinate system was calibrated so that (0,0) was at the center of the tracking area, with X and Y axes aligned to the room’s geometry. But despite this precise positioning data, the Triton still struggled with movement.
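For context, the controllers described below only need each Triton's planar pose. Here's a minimal sketch of how a tracked rigid-body pose might be reduced to (x, y, yaw); the exact message format from the lab's Optitrack bridge differed, so treat the field layout as an assumption:

```python
import math

def pose_to_xy_yaw(position, orientation):
    """Convert a tracked rigid-body pose into the (x, y, yaw) used by the
    controllers. `position` is (x, y, z) in meters and `orientation` is a
    quaternion (qx, qy, qz, qw), which is how most motion-capture bridges
    report rigid bodies. Field names here are illustrative, not the lab's
    exact message definition."""
    x, y, _ = position
    qx, qy, qz, qw = orientation
    # Yaw (rotation about the vertical axis) extracted from the quaternion
    yaw = math.atan2(2.0 * (qw * qz + qx * qy),
                     1.0 - 2.0 * (qy * qy + qz * qz))
    return x, y, yaw
```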
With this setup, one core feature we wanted to give the Triton was the ability to move to a specific coordinate. The user, or their software, could provide an (x, y) coordinate within the area of interest, and the robot would move to that coordinate as quickly, accurately, and smoothly as possible. When I joined, this feature existed, but it wasn't working very well. Here is a simple animation showing how the original movement logic worked:
I never recorded the original solution running, so I created this simple animation to show the old movement logic. So what are the issues with this method?
- It’s really slow
- It makes the robot take up a lot of space just to reach a specific point, which made it difficult to use this solution when multiple Tritons were moving around.
So why was this behavior happening? The issue was that the Triton first rotated in place, adjusting its heading until it pointed toward the target within a specific margin of error. Then it would sprint forward, and once its heading drifted off the target by a certain amount, it would stop and start turning again until the heading was back within the acceptable range. Then it would sprint again, repeating this until it reached the point. On top of that, as it got closer to the goal, the turning and sprinting speeds were reduced to make sure it didn't overshoot. The result was unnatural movement that took forever to reach a target and required a lot of area to do so. Given these issues, and how essential this feature was to the Triton project, my first task when I started at the HCR Lab was to develop a more effective way for the Triton to navigate to a goal point.
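To make the problem concrete, here's a rough reconstruction of that old turn-then-sprint logic; the thresholds and speeds are made up for illustration, but the structure (never turning and driving at the same time) is the important part:

```python
import math

# Rough reconstruction of the old "turn, then sprint" logic (illustrative only).
ANGLE_TOLERANCE = 0.05  # rad: how far the heading may drift before re-turning
GOAL_TOLERANCE = 0.05   # m

def old_step(x, y, theta, gx, gy):
    """Return (linear_velocity, angular_velocity) for one control step."""
    distance = math.hypot(gx - x, gy - y)
    if distance < GOAL_TOLERANCE:
        return 0.0, 0.0  # arrived
    heading_error = math.atan2(gy - y, gx - x) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    if abs(heading_error) > ANGLE_TOLERANCE:
        return 0.0, 0.4 * (1 if heading_error > 0 else -1)  # stop and turn in place
    return min(0.3, distance), 0.0                          # sprint straight ahead
```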
I spent a lot of time researching the best way to address this problem. Coincidentally, I was taking Introduction to Feedback Control Systems (EENG307) at Mines at the time, where we had just learned about open-loop and closed-loop controllers. After some discussion with the professor of that class and my very smart roommate, it became clear that getting the Triton to a goal point was a closed-loop control problem.
Now, after extensive testing and research, I developed two distinct controller approaches for the Tritons:
Method 1: Distance-Theta Controller
This approach used two separate proportional controllers running simultaneously:
- Distance Controller: Calculated the Euclidean distance to the target and applied a proportional gain to determine forward/backward velocity
- Theta Controller: Calculated the angular error between the robot’s current heading and the desired heading to the target, applying a separate proportional gain for rotational velocity
Running both controllers simultaneously meant the Triton naturally turned toward the goal while moving forward, producing smooth, curved paths. The key advantage was that the robot always kept its front face oriented toward the destination, which was crucial for camera-based applications.
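Here's a minimal sketch of what such a distance-theta proportional controller looks like; the gains and velocity limits are illustrative, not the values actually tuned on the Tritons:

```python
import math

K_DIST = 0.8   # proportional gain on distance error (illustrative)
K_THETA = 2.0  # proportional gain on heading error (illustrative)
MAX_LIN = 0.5  # m/s
MAX_ANG = 1.5  # rad/s

def distance_theta_step(x, y, theta, gx, gy):
    """Return (linear_velocity, angular_velocity) toward the goal (gx, gy)."""
    distance = math.hypot(gx - x, gy - y)
    desired_heading = math.atan2(gy - y, gx - x)
    # Wrap the heading error to [-pi, pi]
    heading_error = math.atan2(math.sin(desired_heading - theta),
                               math.cos(desired_heading - theta))
    linear = max(-MAX_LIN, min(MAX_LIN, K_DIST * distance))
    angular = max(-MAX_ANG, min(MAX_ANG, K_THETA * heading_error))
    return linear, angular
```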
Method 2: X-Y Coordinate Controller
This approach treated the robot like a 2D plotter, with independent control of X and Y movement:
- X Controller: Directly controlled east-west movement based on X-coordinate error
- Y Controller: Directly controlled north-south movement based on Y-coordinate error
The implementation calculated X and Y coordinate errors independently, applied separate proportional gains, and then transformed these global velocity components into the robot’s local coordinate frame using rotation matrices. This transformation was necessary because the robot’s omni-wheel drivetrain required velocities in its own reference frame, not in global coordinates. This method produced the most direct paths to targets and was significantly faster, but the robot’s heading would drift since there was no explicit orientation control.
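A minimal sketch of this X-Y controller, including the global-to-local rotation, might look like this (again, the gains are placeholders):

```python
import math

K_X = 0.8  # proportional gain on X error (illustrative)
K_Y = 0.8  # proportional gain on Y error (illustrative)

def xy_step(x, y, theta, gx, gy):
    """Return velocities (vx_local, vy_local) in the robot's own frame."""
    # Proportional control on each axis independently, in the global frame
    vx_global = K_X * (gx - x)
    vy_global = K_Y * (gy - y)
    # Rotate the global velocity vector into the robot's local frame,
    # since the omni-wheel drivetrain expects body-frame velocities
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    vx_local = cos_t * vx_global + sin_t * vy_global
    vy_local = -sin_t * vx_global + cos_t * vy_global
    return vx_local, vy_local
```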
I go into full detail on method #1 in my Move Turtle (TurtleSim) blog post, which I highly recommend reading for the specifics of how PID controllers work in general and how this method works in particular. I developed method #1 in ROS's TurtleSim first, then transferred that code to the Triton and adapted it for a more real-world environment.
Method #2 took a quite different but equally effective approach. Instead of reasoning about the robot's orientation and distance to the goal, it treated movement as a coordinate-plane problem. The controller continuously calculated the error in the X and Y directions separately. For example, if the robot needed to move from (0,0) to (2,3), it saw this as a 2-meter error in X and a 3-meter error in Y. Two proportional controllers worked simultaneously: one adjusted the robot's velocity in the X direction based on the X error, while the other handled Y-direction movement based on the Y error. This created a more direct path to the goal, similar to how a 3D printer head moves, and allowed for smooth diagonal motion. The robot never needed to explicitly turn to face its target, which made this method particularly effective in tight spaces or when precise positioning was required.
Both methods proved to be significantly faster and more reliable than the original approach. To see these new methods in action, check out the Tritons in Action Playlist, which shows all the Tritons in action with the new methods.
What once took 30-45 seconds for a simple point-to-point movement now took around 8-12 seconds. More importantly, the Triton could now navigate more efficiently in tight spaces, which became useful for our multi-robot scenarios.
Development Challenges and Debugging
Implementing these controllers wasn’t straightforward and involved several significant debugging challenges:
Coordinate System Transformations: One of the trickiest aspects was getting the coordinate transformations right. The Optitrack system provided data in its own coordinate frame, the robot had its local coordinate frame, and I needed to convert between them accurately. Early implementations had robots moving in the wrong directions because I had mixed up rotation matrix calculations.
Real-World vs. Ideal Behavior: The biggest challenge was accounting for real-world factors that don’t appear in textbook control theory. The robot’s wheels had different friction characteristics, the motors didn’t respond identically, and there was always some latency in the communication chain from Optitrack to the control software to the robot’s Arduino. I spent weeks tuning proportional gains and adding deadband filters to account for these physical realities.
Oscillation and Stability Issues: My first implementations suffered from oscillation problems where robots would overshoot their targets and wobble back and forth. This taught me about the importance of derivative terms in PID controllers and the need for proper gain tuning. I eventually settled on predominantly proportional control with carefully tuned gains rather than full PID, as the system’s inherent damping was sufficient for most applications.
Multi-Robot Interference: When multiple robots operated simultaneously, I discovered unexpected interference patterns. Robots would sometimes “fight” over the same space or create deadlock situations where they’d block each other indefinitely. This led me to implement coordination mechanisms and collision avoidance algorithms.
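To illustrate the deadband filtering and proportional-only control mentioned above, here's a tiny sketch of the idea; the numbers are placeholders rather than the gains used on the real robots:

```python
def p_with_deadband(error, gain, deadband, max_output):
    """Proportional control with a deadband: ignore tiny errors so the robot
    doesn't chatter around the target, and clamp the output. Values are
    placeholders, not the gains actually tuned on the Tritons."""
    if abs(error) < deadband:
        return 0.0
    output = gain * error
    return max(-max_output, min(max_output, output))

# Example: a 2 cm deadband on position error suppresses micro-oscillations
velocity = p_with_deadband(error=0.015, gain=0.8, deadband=0.02, max_output=0.5)
assert velocity == 0.0  # within the deadband, command no motion
```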
Multi-Triton Control System
Once I had solved the single Triton movement problem, the lab’s next challenge was getting multiple Tritons to work together simultaneously. This became one of my main focus areas and ended up being a significant contribution to the project.
The original system could only control one Triton at a time, which severely limited the research possibilities. The lab wanted to simulate scenarios where multiple autonomous vehicles needed to coordinate their movements, like self-driving cars communicating with each other to optimize traffic flow and create better SLAM (Simultaneous Localization and Mapping) maps.
To solve this, I implemented a multi-processing approach using Python’s multiprocessing library. Each Triton got its own dedicated process that could run independently while still being coordinated by a central control system. This allowed multiple Tritons to move simultaneously without interfering with each other’s control loops.
Multi-Robot Architecture Design
The system architecture I developed consisted of several key components:
Main Controller Process: This served as the central coordinator, handling user interface interactions, path planning, and high-level coordination between robots. It maintained the global state and distributed commands to individual robot processes.
Individual Robot Processes: Each Triton had its own dedicated Python process that handled:
- Real-time PID control calculations at ~50Hz
- Communication with the robot’s hardware (Arduino/Jetson)
- Local path execution and obstacle avoidance
- Status reporting back to the main controller
Shared Memory Communication: I used Python’s multiprocessing.shared_memory and Queue objects to enable efficient communication between processes. This allowed for real-time coordination without the overhead of network communication.
Synchronization Mechanisms: To prevent conflicts when multiple robots needed to coordinate (like avoiding collisions), I implemented semaphores and locks that allowed robots to request exclusive access to certain areas of the workspace.
The challenge was ensuring all robots could operate their control loops independently while still maintaining global coordination. Each robot process ran its own PID calculations and sent motor commands directly to the hardware, while the main process handled higher-level coordination like collision avoidance and path planning.
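Here's a stripped-down sketch of that process-per-robot pattern using Python's multiprocessing queues; the real system also used shared memory and semaphores, and the hardware I/O is stubbed out here:

```python
import multiprocessing as mp
import queue
import time

def robot_process(robot_id, command_queue, status_queue):
    """One process per Triton: runs its own control loop (~50 Hz) and reports
    status back to the main controller. Hardware I/O is stubbed out."""
    target = None
    while True:
        try:
            msg = command_queue.get_nowait()  # non-blocking: new goal or shutdown
            if msg == "stop":
                break
            target = msg                      # e.g., an (x, y) goal
        except queue.Empty:
            pass
        if target is not None:
            # ... read pose, run a PID step, send motor commands here ...
            status_queue.put((robot_id, "moving", target))
        time.sleep(0.02)                      # ~50 Hz loop

if __name__ == "__main__":
    status_queue = mp.Queue()
    robots = {}
    for robot_id in range(3):                 # e.g., three Tritons
        cmd_q = mp.Queue()
        proc = mp.Process(target=robot_process, args=(robot_id, cmd_q, status_queue))
        proc.start()
        robots[robot_id] = (proc, cmd_q)

    robots[0][1].put((1.0, 0.5))              # send robot 0 to (1.0, 0.5)
    time.sleep(1)
    for proc, cmd_q in robots.values():
        cmd_q.put("stop")
        proc.join()
```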
The multi-Triton system opened up entirely new research possibilities. We could now simulate:
- Vehicle-to-vehicle communication scenarios
- Coordinated path planning with obstacle avoidance
- Swarm robotics behaviors
- Multi-agent SLAM mapping
- Formation control and following behaviors
Here’s what the lab setup looked like with multiple Tritons running simultaneously:
I also developed a user-friendly interface that allowed researchers to visually define paths for each Triton. You could literally draw the path you wanted each robot to follow, and the robots would execute those paths in a coordinated fashion. This was incredibly useful for setting up complex experiments without having to manually code every movement.
The system could handle up to 5 Tritons simultaneously, each running its own PID controllers while being coordinated through the central control system. The performance was impressive, with all robots maintaining their individual accuracy while working together as a team.
Here’s a playlist showing the Tritons in action, from single-robot control to multi-robot coordination: Tritons in Action Playlist
Depth Sensor Integration and Coordinate Correction
Another major advancement I worked on involved utilizing the Intel RealSense D435 depth cameras mounted on each Triton. While the Optitrack system gave us incredibly precise positioning data, I wanted to explore how the robots could use their onboard sensors to improve their spatial awareness and correct for coordinate errors.
The idea was that Tritons could use their depth sensors to detect other Tritons in their vicinity and cross-reference their positions. This would serve multiple purposes:
- Error Correction: If the Optitrack system had any calibration drift or temporary occlusion, robots could use visual confirmation of each other's positions to maintain accurate coordinate systems.
- Enhanced SLAM: By having multiple robots with depth sensors working together, we could create much richer maps of the environment with redundant data points.
- Collision Avoidance: Real-time depth sensing would allow robots to detect and avoid each other even if the central control system had communication delays.
I began experimenting with algorithms that would allow Tritons to:
- Detect other Tritons using their distinctive triangular shape and reflective sphere markers
- Calculate relative positions and orientations using depth data
- Compare these measurements with Optitrack data to identify discrepancies
- Potentially adjust their coordinate system in real-time to maintain accuracy
Computer Vision Experiments
I spent considerable time experimenting with a computer vision pipeline that worked in several stages:
Depth Data Processing: The Intel RealSense D435 provided both RGB and depth data streams. I primarily worked with the depth data, which came as a 640x480 array of distance measurements at 30Hz. The first challenge was filtering this noisy depth data to extract meaningful geometric information.
Object Detection Attempts: I experimented with multi-stage detection algorithms. I had some success segmenting the depth image to identify objects at floor level (filtering out walls, ceiling, etc.) and looking for objects with the right size characteristics, roughly a 0.3 m × 0.3 m footprint. I tried using edge detection and geometric analysis to identify the distinctive Triton profile, with mixed results.
Marker Recognition Experiments: The three reflective spheres on each Triton seemed like the most promising detection feature. I experimented with blob detection algorithms to identify the characteristic triangular pattern of three bright spots in the depth image. I had some promising results in controlled lighting conditions, though it wasn’t consistently reliable.
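As an illustration of the blob-detection idea, here's a minimal sketch using OpenCV's SimpleBlobDetector on a normalized depth frame; all thresholds are made up, and the real pipeline needed per-room tuning:

```python
import numpy as np
import cv2

def find_marker_triangle(depth_frame_m, max_range_m=3.0):
    """Look for three bright (i.e., near) blobs in a depth frame and check that
    they form a small cluster, roughly how I approached marker detection.
    Thresholds are illustrative only."""
    # Normalize depth (meters) into an 8-bit image, nearer = brighter
    clipped = np.clip(depth_frame_m, 0, max_range_m)
    img = (255 * (1.0 - clipped / max_range_m)).astype(np.uint8)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255          # look for bright blobs
    params.filterByArea = True
    params.minArea, params.maxArea = 20, 500
    params.filterByCircularity = True
    params.minCircularity = 0.6
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(img)

    if len(keypoints) < 3:
        return None
    pts = np.array([kp.pt for kp in keypoints[:3]])
    # Crude sanity check: the three points should sit close together in the image
    if np.max(np.linalg.norm(pts - pts.mean(axis=0), axis=1)) > 80:
        return None
    return pts  # pixel coordinates of the candidate marker triangle
```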
Coordinate Fusion Research: I researched approaches for fusing vision-based position estimates with the Optitrack data, including basic Kalman filter implementations. The concept was to give more weight to Optitrack data when available but fall back to vision when needed, though I didn’t get this fully working before my time at the lab ended.
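Stripped of the Kalman machinery I never finished, the fusion concept amounted to a weighted blend with a fallback, something like this sketch (the fixed weighting here is an assumption, not the lab's implementation):

```python
def fuse_position(optitrack_xy, vision_xy, optitrack_available, w_vision=0.2):
    """Concept sketch: trust Optitrack when it's available, blend in (or fall
    back to) the vision estimate otherwise. A real Kalman filter would weight
    by estimated covariances instead of a fixed w_vision."""
    if optitrack_xy is not None and optitrack_available:
        if vision_xy is None:
            return optitrack_xy
        ox, oy = optitrack_xy
        vx, vy = vision_xy
        return ((1 - w_vision) * ox + w_vision * vx,
                (1 - w_vision) * oy + w_vision * vy)
    return vision_xy  # fall back to vision when Optitrack drops out
```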
Performance Challenges: Getting all of this processing to run in real-time alongside the robot’s control loops proved challenging. I experimented with optimization approaches to run the algorithms at around 10-15Hz without overwhelming the Jetson Nano’s processing capabilities.
Unfortunately, I had to leave the lab before I could fully complete this computer vision work. While I had some promising early results and learned a lot about depth sensor processing, I didn’t get the system to a fully reliable state. It remained an interesting research direction that others could potentially build upon.
Here’s a video of me testing the computer vision algorithms:
Here’s what the depth sensor view looked like during my experiments:
While I didn’t complete the depth sensor integration work, the concept showed promise for applications like simulating self-driving car scenarios, where vehicles need to be aware of each other without relying solely on external infrastructure. The research direction I started exploring could potentially contribute to future work in the lab.
Documentation and Knowledge Preservation
One of my most important contributions to the HCR Lab, and perhaps the one I’m most proud of, was organizing and preserving all the project documentation. When I joined the lab, the Triton project’s knowledge was scattered across multiple platforms and formats. Critical information was spread across:
- Various Google Drive accounts belonging to different students who had graduated
- Old emails buried in inboxes
- Random Dropbox folders
- Multiple GitHub repositories
- GitLab repositories with inconsistent organization
- Handwritten notes that only specific people could interpret
This fragmented documentation was a huge problem. New students spent weeks just trying to figure out how to get started, and valuable knowledge was constantly being lost when people graduated or left the lab.
I took it upon myself to solve this problem systematically. I spent countless hours tracking down every piece of documentation, code, video, and note related to the Triton project. I then organized everything into a centralized GitLab repository with a clear, logical structure.
The centralized documentation included:
- Build Guides: Step-by-step instructions for assembling Tritons from scratch
- Software Setup: Complete guides for setting up the development environment
- Code Documentation: Well-commented code with clear explanations
- Hardware Specifications: Detailed parts lists, wiring diagrams, and PCB designs
- Troubleshooting Guides: Common problems and their solutions
- Video Tutorials: I created and uploaded instructional videos to YouTube, including detailed Optitrack calibration tutorials:
I also established documentation standards to ensure future contributions would be organized and accessible. The repository structure I created became the foundation for all subsequent work in the lab.
Beyond just organizing existing documentation, I created several original guides and tutorials that filled critical gaps in the knowledge base. These included detailed setup instructions for new lab members, comprehensive troubleshooting guides, and video walkthroughs of complex procedures.
The impact was immediate and lasting. New students could get up to speed in days instead of weeks. The documentation repository I created is still being used by the lab today, years after I left. It became the single source of truth for the Triton project and saved countless hours/days for future researchers.
Mentoring and Knowledge Transfer
One of the most rewarding aspects of my time in the HCR Lab was the opportunity to mentor others and share the knowledge I had gained. As my work progressed and I became more experienced with the Triton systems, I took on increasing responsibility for training new team members.
Mentoring Lab Successors
As I was preparing to eventually leave the lab to focus on finishing my degree and my work at eBay, I made sure to thoroughly train two people who would take over the Triton project after my departure. This wasn't just about showing them how things worked; it was about making sure they truly understood the underlying principles so they could continue innovating.
I spent weeks working closely with them, going through:
- The mathematical foundations of the PID control systems
- The multi-processing architecture for coordinating multiple robots
- The depth sensor integration and computer vision algorithms
- The documentation system and how to maintain it
- Debugging techniques and common failure modes
The knowledge transfer was incredibly thorough. We went through real debugging sessions together, I had them modify and extend the existing code, and I made sure they could independently set up new Tritons from scratch.
High School Mentorship Program
Perhaps even more rewarding was my experience mentoring a high school student through the lab’s outreach program. This was a great opportunity to introduce someone to robotics, computer science, and research at a formative stage in their education.
I designed a comprehensive curriculum that covered:
Computer Science Fundamentals:
- Programming concepts using Python as the primary language
- Introduction to object-oriented programming
- Understanding of algorithms and data structures
Robotics Concepts:
- How sensors work and how to interface with them
- Actuator control and motor systems
- The basics of autonomous systems and feedback control
ROS (Robot Operating System):
- Understanding the publish/subscribe messaging system
- Creating nodes and services
- Working with launch files and parameter servers
Hands-on Project Work:
- We collaborated on creating a ROS service that controlled the LED system on the Triton’s head
- She learned to write clean, documented code that integrated with our existing systems
- The LED control service she created became a permanent part of the Triton codebase
What made this mentorship particularly special was watching her progression from knowing virtually nothing about programming to contributing meaningful code to an active research project. She went from asking “What is a variable?” to independently debugging ROS communication issues and writing her own service implementations.
The LED control system she developed allowed researchers to easily change the colors and patterns of the Triton’s head LEDs through simple ROS commands. This might sound simple, but it required understanding ROS architecture, hardware interfacing, and proper software design patterns. Her contribution is still being used in the lab today.
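For a sense of what such a service node looks like, here's a sketch of an LED-control service in ROS1 (rospy); the `triton_msgs/SetLEDColor` service type is hypothetical, not the actual definition from the lab's codebase:

```python
#!/usr/bin/env python
# Sketch of an LED-control service node for ROS1 (Melodic/rospy).
# `triton_msgs/SetLEDColor` is a hypothetical service type, e.g.:
#
#   # SetLEDColor.srv
#   uint8 r
#   uint8 g
#   uint8 b
#   string pattern   # e.g., "solid", "blink"
#   ---
#   bool success
import rospy
from triton_msgs.srv import SetLEDColor, SetLEDColorResponse  # hypothetical package

def handle_set_led_color(req):
    rospy.loginfo("Setting LEDs to (%d, %d, %d), pattern=%s",
                  req.r, req.g, req.b, req.pattern)
    # ... forward the command to the Arduino over serial here ...
    return SetLEDColorResponse(success=True)

if __name__ == "__main__":
    rospy.init_node("led_control_server")
    rospy.Service("set_led_color", SetLEDColor, handle_set_led_color)
    rospy.spin()
```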
The mentorship was as educational for me as it was for her. It forced me to break down complex concepts into digestible pieces and really think about the fundamentals of what we were doing. Teaching someone else made me a better engineer and researcher.
Collaboration with PhD Research
One of the most professionally rewarding aspects of my time in the lab was working closely with Peng, a PhD student whose research focused on self-driving car algorithms. The software improvements I had made to the Triton system helped support his doctoral research.
Peng’s research required precise, reliable multi-robot coordination to simulate self-driving car scenarios. Before my improvements to the movement control and multi-robot systems, these experiments were much more difficult to conduct. The robots were slower, less accurate, and couldn’t work together as effectively.
My contributions helped Peng’s research in several areas:
Intersection Management Studies: With the improved PID controllers and multi-robot coordination, Peng could simulate intersection scenarios where multiple “vehicles” (Tritons) needed to coordinate their movements. The better timing and positioning helped make these studies more feasible.
Vehicle-to-Vehicle Communication: The multi-processing framework I developed allowed Peng to implement and test communication protocols between simulated vehicles. Each Triton could make decisions while still coordinating with others, similar to how self-driving cars might need to operate.
SLAM and Mapping Research: The depth sensor integration work provided Peng with additional data for his simultaneous localization and mapping research. Having multiple robots with coordinated sensing capabilities allowed for more comprehensive mapping experiments.
What made our collaboration particularly valuable was that it wasn't just me helping his research; it was a genuine partnership. Peng's understanding of the theoretical aspects of autonomous vehicles helped inform my practical implementations, and his feedback and requirements pushed me to make the systems more robust and capable.
We spent many hours in the lab together, debugging scenarios, discussing different control strategies, and exploring what the Triton platform could accomplish. Peng became both a colleague and a friend, and working with him taught me a lot about how academic research actually works.
The systems I built became a useful part of Peng’s dissertation work. Seeing my practical engineering contributions support research in autonomous vehicle technology was really fulfilling. It reinforced my interest in how solid engineering and research can work together to create useful outcomes.
Even after I left the lab, Peng and I stayed in touch. Knowing that my work continued to contribute to important research even after my departure was extremely rewarding.
Perspective: The Pre-LLM Era of Development
It's worth noting that all of this work was done in the pre-LLM era of software development, between 2020 and 2021 (mainly 2021), before ChatGPT, Claude, Perplexity, or AI-powered development tools like Cursor existed.
Every line of code was written from scratch, every algorithm was researched through academic papers and textbooks, and every debugging session involved traditional methods like print statements, debuggers, and methodical testing. When I got stuck on a coordinate transformation or PID tuning problem, I couldn’t just ask an AI assistant to explain the concept or help debug the issue.
This made the development process significantly more challenging but also more educational. I had to:
Research Everything Manually: Understanding PID control theory meant reading textbooks and academic papers. Figuring out coordinate transformations required working through the math by hand. Every concept had to be fully understood before implementation.
Debug Without AI Assistance: When robots moved in unexpected directions or oscillated around targets, I had to methodically trace through the logic, add debug outputs, and test hypotheses one by one. There was no AI to suggest potential issues or help interpret error patterns.
Learn from First Principles: Without the ability to quickly ask “how do I implement multi-processing in Python for robotics?” I had to understand the underlying concepts deeply. This forced me to build a solid foundation in concurrent programming, control systems, and computer vision.
Documentation Was Critical: Since I couldn’t rely on AI to explain code later, I had to write extremely clear documentation and comments. This discipline proved invaluable when transferring knowledge to others.
Looking back, while modern AI tools would have accelerated many aspects of the development, working without them forced me to develop deeper problem-solving skills and a more thorough understanding of the underlying systems. It’s fascinating to think how different this project might have been with today’s development tools available.
The Difficult Decision to Leave
As much as I loved working in the HCR Lab, by late 2021 I faced a difficult decision that many students encounter: balancing multiple opportunities and responsibilities. I was simultaneously working full-time as a software engineer at eBay, finishing my computer science degree at Mines, and contributing to research at the HCR Lab.
The eBay opportunity was significant: it was my first major software engineering role, it gave me invaluable industry experience, and it provided a solid income. However, trying to maintain full-time work, complete my degree, and contribute meaningfully to research was simply unsustainable. Something had to give.
When I approached Dr. Zhang about potentially reducing my course load to focus more on the lab work, he strongly advised against it. His reasoning was sound: completing my degree should be the priority, and the industry experience at eBay would be valuable for my career development. He felt that dropping classes to focus on research, while tempting, might not be the best long-term decision.
So in September 2021, after about 8 months of intensive work in the lab, I made the difficult decision to step back from my research assistant role to focus on completing my degree and my work at eBay. It was one of the harder professional decisions I had made up to that point.
Even after officially leaving the lab, I continued to provide support whenever anyone needed help with the systems I had built. I updated documentation as needed, answered questions about debugging, and helped troubleshoot issues remotely. The connections I had made and the investment I had in the project’s success didn’t just disappear because I was no longer officially part of the team.
Reflections and Looking Back
Now, in 2025, four years later, I find myself reflecting on that time with complex emotions. My career path took me deep into web development and AI/ML engineering, areas that have been incredibly rewarding and provided tremendous opportunities for growth and impact.
Yet there’s a part of me that wonders “what if.” Robotics was, and honestly still is, my true passion. There’s something about working with physical systems, seeing your code translate into real-world movement and behavior, that web development and even AI work can’t quite replicate.
I sometimes wonder what might have happened if I had taken a different path. What if I had found a way to stay in robotics research? What if I had pursued graduate school immediately after finishing my undergraduate degree? What if I had chosen to prioritize the lab work over the industry experience?
But I also recognize that every path has its trade-offs. The skills I developed in web development and AI have been incredibly valuable. The industry experience taught me about software engineering at scale, user experience design, and the practical challenges of building products that millions of people use. These experiences have made me a better engineer overall.
The work I did in the HCR Lab continues to influence how I approach problems today. The systematic thinking required for PID control systems shows up in how I design feedback loops in software systems. The documentation and knowledge preservation skills I developed have been invaluable in every role since. The experience of mentoring and teaching has shaped how I work with junior developers and contribute to team knowledge sharing.
Most importantly, the experience taught me that I thrive when working on challenging technical problems that have real-world impact. Whether that’s optimizing robot movement algorithms or building AI systems that help users accomplish their goals, the satisfaction comes from solving hard problems that matter.
The Lasting Impact
Looking back at the HCR Lab experience, I’m struck by how much I accomplished in a relatively short time. The systems I built fundamentally changed how the Triton platform operated, and many of those improvements are still being used today. The documentation repository I created became the knowledge base for the entire project. The mentorship relationships I formed had lasting impact on the people I worked with.
But perhaps most significantly, the experience showed me what I’m capable of when working on problems I’m truly passionate about. In those eight months, I:
- Improved the robot movement control system that had been limiting the platform
- Built a multi-robot coordination system from scratch
- Integrated computer vision and sensor fusion capabilities
- Created a comprehensive documentation and knowledge management system
- Mentored several people and helped with knowledge transfer
- Supported PhD-level research in autonomous vehicles
This wasn’t just about the technical achievements, though those were meaningful to me. It was about learning that with persistence and systematic thinking, you can make useful contributions even as an undergraduate student.
The Future and Robotics
While my career has taken me in other directions, my passion for robotics hasn’t diminished. I still follow developments in the field, I’m excited about advances in robot learning and autonomous systems, and I occasionally work on personal robotics projects in my spare time.
Who knows what the future holds? The skills I’m developing in AI and machine learning are increasingly relevant to robotics. The industry experience I’ve gained has taught me how to build robust, scalable systems. Perhaps there’s a future where these different threads of my experience come together in unexpected ways.
For now, I’m grateful for the time I spent in the HCR Lab and the experiences it provided. It was a formative period that shaped both my technical skills and my understanding of what kinds of work I find most fulfilling. Even though I sometimes miss it, I know that the lessons I learned and the approaches I developed continue to influence everything I do.
The robots are still there, still serving researchers, still enabling important work. And that’s pretty wonderful.