CV

  • SAM4D: Segment Anything in Camera and LiDAR Streams

    SAM4D introduces a 4D foundation model for promptable segmentation across camera and LiDAR streams, addressing the limitations of frame-centric and modality-isolated approaches in autonomous driving. Key Highlights: Promptable Multi-modal Segmentation (PMS) – Enables interactive segmentation across sequences from both modalities using diverse prompts (points, boxes, masks), allowing cross-modal propagation and long-term object tracking. Unified Multi-modal Positional…

    Read more →

  • Vector Embeddings Explained

    You’ve just finished listening to your favorite high-energy workout song on Spotify, and the next track that automatically plays is one you’ve never heard, but it’s a perfect fit for your playlist. Is it magic? Not quite. It’s a clever AI concept called vector embeddings, and it’s the secret sauce behind much of the smart…

    Read more →

  • VideoGameBench: Can Vision-Language Models Complete Popular Video Games?

    VideoGameBench is a rigorous benchmark that evaluates VLMs’ real-time decision-making, perception, memory, and planning by challenging them to complete 1990s-era video games with only raw visual inputs and minimal control instructions. Key Highlights Real-Time, Visually Rich Environments – Evaluates VLMs on 23 popular Game Boy and MS-DOS games, including 3 secret test games to assess generalization…

    Read more →

  • Announcing The Winners of the First Perception Challenge for Bin-Picking (BPC)

    OpenCV and sponsors at Intrinsic, BOP, and University of Hawaiʻi at Mānoa are excited to announce the prize winners of the first Perception Challenge for Bin-Picking, first revealed at CVPR during the Perception for Industrial Robotics workshop. Beginning in February 2025, this challenge had over $60,000 at stake and over 450 teams vying for a…

    Read more →

  • LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain

    LeGO-LOAM introduces a cutting-edge lidar odometry and mapping framework designed to deliver real-time, accurate 6-DOF pose estimation for ground vehicles, optimized for challenging, variable terrain environments. It significantly reduces computational overhead while maintaining high accuracy, making it ideal for embedded systems. Key Highlights Ground-Optimized Approach – Segments lidar point clouds by leveraging ground plane information, filtering…

    Read more →

  • Applications of Vision Language Models – Real World Use Cases with PaliGemma2 Mix

    Imagine machines that don’t just capture pixels but truly understand them, recognizing objects, reading text, interpreting scenes, and even “speaking” about images as fluently as a human. VLMs merge computer vision’s “sight” with language’s “speech,” letting AI both describe and converse about any picture it sees. From generating captions and answering questions to counting objects,…

    Read more →

  • Introduction to Vision Language Models

    Imagine an expert sommelier. They don’t just identify a wine; they experience it through multiple senses. They see its deep ruby color, inhale its bouquet of black cherry and oak, and taste its complex notes on their palate. They then translate this rich, sensory experience into evocative language, describing it as a “bold Cabernet Sauvignon…

    Read more →

  • Reliable-loc: Robust Sequential LiDAR Global Localization in Large-Scale Street Scenes Based on Verifiable Cues

    Reliable-loc introduces a resilient LiDAR-based global localization system for wearable mapping devices in complex, GNSS-denied street environments with sparse features and incomplete prior maps. Key Highlights: Dual-Stage Observation Model for MCL: Fuses global and local features into Monte Carlo Localization (MCL), using spectral matching and pose error metrics to refine particle weights in feature-poor scenes.…

    Read more →

  • Claude 4: The Next Generation of AI Assistants

    Ever heard of an AI cracking a coding bug that stumped a 30-year C++ FAANG veteran for four years and 200 hours of debugging? That just happened. The hero? Anthropic’s newly unveiled Claude 4. This isn’t just a cool story; it’s a preview of the serious firepower Anthropic is unleashing today with Claude Opus 4…

    Read more →

  • Microsoft Phi-4: A New Era of AI Efficiency

    In the ever-evolving world of artificial intelligence, breakthroughs don’t always mean bigger models; they often mean smarter, more efficient architectures. Microsoft’s Phi-4 series is a perfect illustration of this principle. By harnessing advanced training techniques and high-quality curated data, Microsoft has engineered a family of small language models that excel at complex reasoning tasks, yet…

    Read more →