Uses
Updated 2025-05-15
Workstation
- OS: Ubuntu 22.04 LTS — the robotics stack (ROS2, drivers, CUDA) effectively assumes Linux
- Machine: [TODO: add hardware spec] — running ROS2, Docker, and CUDA workloads
- Editor: VS Code with C/C++, Python, and ROS extensions
Development
- Languages: C++17 for inference and robotics; Python for prototyping and data pipelines; TypeScript for tooling
- Build: CMake + colcon (ROS2 workspaces)
- Containerisation: Docker — all deployment targets use containers; dev environments use Dev Containers
- Version control: Git, GitHub
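A colcon workspace builds ordinary CMake packages, so the per-package setup is a plain CMakeLists.txt. A minimal sketch for a hypothetical ROS2 C++ node (package, target, and source names are placeholders, not a real package of mine):

```cmake
cmake_minimum_required(VERSION 3.16)
project(example_node)            # hypothetical package name

set(CMAKE_CXX_STANDARD 17)       # C++17, matching the toolchain above

find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)

add_executable(example_node src/main.cpp)
ament_target_dependencies(example_node rclcpp)

install(TARGETS example_node DESTINATION lib/${PROJECT_NAME})
ament_package()
```

From the workspace root this builds with `colcon build` and is sourced with `source install/setup.bash`.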
Robotics stack
- Middleware: ROS2 (Humble / Iron)
- Inference runtime: ONNX Runtime — runs the same model on x86 dev and ARM robot hardware
- Computer vision: OpenCV 4.x
- Point cloud: PCL (Point Cloud Library)
- Simulation: Gazebo (both classic and the newer Gazebo Sim, formerly Ignition)
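The first thing PCL does to most LiDAR clouds is voxel-grid downsampling. The idea is simple enough to sketch in plain Python (illustrative only, not the PCL API): bucket points into cubic voxels and keep one centroid per occupied voxel.

```python
from collections import defaultdict

def voxel_downsample(points, leaf_size):
    """Thin a point cloud: all points falling into the same cubic voxel
    of edge length `leaf_size` are replaced by their centroid."""
    buckets = defaultdict(list)
    for x, y, z in points:
        # Floor division gives a stable voxel index, including for negatives
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        buckets[key].append((x, y, z))
    centroids = []
    for pts in buckets.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

# Three points, two occupied 10 cm voxels -> two centroids
cloud = [(0.01, 0.02, 0.0), (0.04, 0.01, 0.0), (1.2, 0.0, 0.0)]
print(len(voxel_downsample(cloud, leaf_size=0.1)))  # → 2
```

The real `pcl::VoxelGrid` does the same thing with a flat hash and SIMD-friendly layout; the leaf size is the knob that trades density for speed.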
ML / AI
- Training: PyTorch — export to ONNX for deployment
- Quantisation: ONNX Runtime quantisation tools (INT8, FP16)
- Experiment tracking: [TODO: MLflow / W&B — confirm]
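The arithmetic behind asymmetric INT8 quantisation is worth having in your head when the tooling misbehaves. A pure-Python sketch of the scale/zero-point scheme (illustrative, not the ONNX Runtime API): map a float range [rmin, rmax] onto uint8 [0, 255].

```python
def quant_params(rmin, rmax):
    """Scale and zero-point mapping [rmin, rmax] onto uint8 [0, 255].
    The range is widened to include 0 so zero is exactly representable."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / 255.0
    zero_point = round(-rmin / scale)
    return scale, zero_point

def quantize(x, scale, zp):
    # Round to the nearest level, then clamp into uint8 range
    return max(0, min(255, round(x / scale + zp)))

def dequantize(q, scale, zp):
    return (q - zp) * scale

scale, zp = quant_params(-1.0, 1.0)
q = quantize(0.5, scale, zp)
# Round-trip error is bounded by half a quantisation step
assert abs(dequantize(q, scale, zp) - 0.5) <= scale / 2
```

ONNX Runtime's static quantisation computes rmin/rmax per tensor from calibration data; the FP16 path is a plain dtype conversion and needs no calibration.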
Hardware I work with
- AGILOX AMR hardware (proprietary)
- LiDAR sensors: [TODO: specific models]
- RGB-D cameras: [TODO: specific models]
- Edge compute boards: [TODO: Jetson / other]
Terminal / Shell
- Shell: zsh + Oh My Zsh
- Terminal: [TODO: add terminal emulator]
- Multiplexer: tmux — essential for multi-pane ROS2 development
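The multi-pane layout is scriptable, which beats rebuilding it by hand every morning. A sketch of a session launcher (session name and the commands sent to each pane are illustrative):

```shell
# Hypothetical layout: build pane on the left, two runtime panes on the right
tmux new-session -d -s ros2dev -n work
tmux split-window -h -t ros2dev:work       # left/right split
tmux split-window -v -t ros2dev:work.1     # split the right pane top/bottom
tmux send-keys -t ros2dev:work.0 'colcon build' C-m
tmux send-keys -t ros2dev:work.1 'ros2 topic list' C-m
tmux attach -t ros2dev
```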