ROAR-internal
Internal R.O.A.R. student org; see aug-cog for production-ready code and the officially supported lab GitHub.
United States of America
Pinned Repositories
bngseg
Lane segmentation via BeamNG, generated from an existing map
cvat
Annotate better with CVAT, the industry-leading data engine for machine learning. Used and trusted by teams at any scale, for data of any scale.
IAC_dataset_maker
A simple repository with a pipeline to identify and extract important camera data to be labelled.
IAC_db3_to_mcap_converter_with_topic_extractor
Converts ROS 2 .db3 bags to MCAP, extracting only the desired topics (a sketch of such a conversion is included below the pinned list)
ROS_onboarding
Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation (a key-frame segmentation sketch is included below the pinned list).
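For IAC_db3_to_mcap_converter_with_topic_extractor, the following is a minimal sketch of how a .db3-to-MCAP conversion with topic filtering might look, assuming the rosbag2_py Python API (ROS 2 Humble or newer, with the MCAP storage plugin installed). The bag paths and topic names are hypothetical, and the repository's actual implementation may differ.

```python
# Minimal sketch, assuming rosbag2_py is available; paths/topics are hypothetical.
import rosbag2_py

def extract_topics_to_mcap(input_bag, output_bag, keep_topics):
    """Copy only the desired topics from a .db3 bag into an MCAP bag."""
    converter_opts = rosbag2_py.ConverterOptions(
        input_serialization_format="cdr",
        output_serialization_format="cdr",
    )

    reader = rosbag2_py.SequentialReader()
    reader.open(
        rosbag2_py.StorageOptions(uri=input_bag, storage_id="sqlite3"),
        converter_opts,
    )

    writer = rosbag2_py.SequentialWriter()
    writer.open(
        rosbag2_py.StorageOptions(uri=output_bag, storage_id="mcap"),
        converter_opts,
    )

    # Register only the topics we want to keep in the output bag.
    for topic in reader.get_all_topics_and_types():
        if topic.name in keep_topics:
            writer.create_topic(topic)

    # Stream messages across, dropping everything not in keep_topics.
    while reader.has_next():
        topic_name, data, timestamp = reader.read_next()
        if topic_name in keep_topics:
            writer.write(topic_name, data, timestamp)

# Hypothetical usage: keep only the front camera topic.
# extract_topics_to_mcap("run1", "run1_mcap", {"/camera/front/image_raw"})
```

Streaming messages through a SequentialReader/SequentialWriter pair keeps memory use flat even for long recording sessions, since only one message is held at a time.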
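For Segment-and-Track-Anything, the following is a minimal sketch of the SAM key-frame segmentation step only, assuming Meta's segment-anything package; the checkpoint and frame paths are hypothetical, and the AOT-based tracking/propagation stage is handled by the repository's own tracker, which is not reproduced here.

```python
# Minimal sketch of key-frame segmentation with SAM; paths are hypothetical.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a SAM backbone (vit_h) from a local checkpoint.
sam = sam_model_registry["vit_h"](checkpoint="checkpoints/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Segment a key frame; each entry holds a binary mask plus metadata
# such as area and predicted IoU.
frame = cv2.cvtColor(cv2.imread("frames/000000.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(frame)

# These key-frame masks would then seed the AOT tracker, which propagates
# object identities through the remaining frames of the video.
print(f"Found {len(masks)} object masks on the key frame")
```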