Precise environmental perception is critical for autonomous vehicle (AV) safety, especially when handling unseen conditions. In this episode of DRIVE Labs, we discuss SegFormer, a Vision Transformer model that generates robust semantic segmentation while maintaining high efficiency. This video introduces the mechanisms behind SegFormer that enable its robustness and efficiency.
00:00:00 - Robust Perception with SegFormer
00:00:05 - Why accuracy and robustness are important for developing autonomous vehicles
00:00:15 - What is SegFormer?
00:00:28 - The difference between CNN and Transformer Models
00:01:23 - Testing semantic segmentation results on the Cityscapes dataset
00:02:09 - The impact of JPEG compression on SegFormer
00:02:27 - How SegFormer understands unseen conditions
00:02:41 - Learn more about segmentation for autonomous vehicle use cases
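For viewers who want to try SegFormer's semantic segmentation themselves, below is a minimal sketch, assuming the Hugging Face `transformers` port of SegFormer and the publicly available "nvidia/segformer-b0-finetuned-cityscapes-1024-1024" checkpoint; the input file name is hypothetical. It is an illustration of running inference, not the pipeline used in the video.

```python
# Minimal sketch: run SegFormer semantic segmentation on a street image.
# Assumes the Hugging Face `transformers` port of SegFormer and the public
# "nvidia/segformer-b0-finetuned-cityscapes-1024-1024" checkpoint.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-cityscapes-1024-1024"  # B0 = smallest variant
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) map of Cityscapes class IDs
print(segmentation.shape, segmentation.unique())
```

Larger variants (B1 through B5) trade additional compute for higher accuracy; swapping the checkpoint name is the only change needed.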
GitHub: [ Link ]
Read more: [ Link ]
Watch the full series here: [ Link ]
Learn more about DRIVE Labs: [ Link ]
Follow us on social:
Twitter: [ Link ]
LinkedIn: [ Link ]
#NVIDIADRIVE