Thursday on OpenCV Live! we’ve got author and data scientist Kristen Kehrer, who will tell us how her interest in computer vision led to writing a children’s book about using CV to let her kids know when the school bus was coming down their street. She holds a Master of Science degree in Applied…
The OpenCV Community Survey for 2025 is open, and we’re asking for your participation! It’s a short, focused online survey open to the entire OpenCV community that will take just a few minutes to complete. Your answers to the survey questions are important to the future of OpenCV. Please help the OpenCV project and take…
Earlier this year OpenCV was selected to be part of the GitHub Secure Open Source Fund, which provides OSS maintainers with financial support to participate in a three-week program educating them on the latest tooling and methods for ensuring the safety of Open Source Software projects. We are honored to be part of the 71…
In the complex world of modern medicine, two forms of data reign supreme: the visual and the textual. On one side, a deluge of images: X-rays, MRIs, and pathology slides. On the other, an ocean of text: clinical notes, patient histories, and research papers. For centuries, the bridge between these two worlds existed only within…
In the fast-paced world of artificial intelligence, a new model is making waves for its innovative approach and impressive performance: MOLMO (Multimodal Open Language Model), developed by the Allen Institute for AI (Ai2). Unlike many of today’s powerful vision-language models, which are often proprietary and closed-source, MOLMO stands out as a shining example of open-source…
Hello! Let me show you an image. Can you describe what you see? Perfect! You nailed it: a bird sitting peacefully on a railing. Now, let’s flip it: I’ll describe something, and you imagine how it might appear: “A puppy sitting on a railway track.” Nice! Something like this might be popped right into your…
SimLingo unifies autonomous driving, vision-language understanding, and action reasoning, all from camera input only. It introduces Action Dreaming to test how well models follow instructions, and outperforms all prior methods on CARLA Leaderboard 2.0 and Bench2Drive. Key Highlights: Unified Model – Combines driving, VQA, and instruction-following using a single Vision-Language Model (InternVL2-1B + Qwen2-0.5B). State-of-the-Art Driving – Ranks #1 on…
OpenCV’s summer update for 2025 is now available in all your favorite flavors on the Releases page. It includes a big list of changes to Core, Imgproc, Calib3d, DNN, Objdetect, Photo, VideoIO, Imgcodecs, Highgui, G-API, Video, and HAL modules, the Python, Java and JavaScript bindings and even more. Highlights include: GIF decode and encode for imgcodecs,…
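For the new GIF support called out in the highlights, the existing multi-image helpers are the natural entry point. A minimal sketch, assuming the release wires GIF decoding and encoding into cv2.imreadmulti / cv2.imwritemulti and that a local "animation.gif" exists (both are assumptions, not release notes):

```python
import cv2

# Decode: read all frames of a GIF (assumes the new imgcodecs GIF decoder
# is reachable through the multi-image reader; "animation.gif" is a placeholder).
ok, frames = cv2.imreadmulti("animation.gif")
if ok:
    print(f"Decoded {len(frames)} frames; first frame shape: {frames[0].shape}")

    # Encode: write the frames back out. Whether a .gif target is accepted
    # here depends on how the new encoder is exposed (assumption).
    cv2.imwritemulti("roundtrip.gif", frames)
```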
What is RISC-V and RVV 1.0? RISC-V (pronounced “risk-five”) is an open standard instruction set architecture (ISA) based on the principles of reduced instruction set computing (RISC). Unlike proprietary ISAs such as Intel’s x86 or ARM’s architecture, RISC-V is free to use and modify, enabling companies and researchers to design custom processors without licensing fees…
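For readers curious whether their own OpenCV build picks up RVV optimizations, here is a minimal sketch that inspects the build report; the exact labels of the CPU feature lines (including an "RVV" entry) are assumptions and vary between builds and platforms:

```python
import cv2

# Print the CPU/vector-extension lines from OpenCV's build report.
# On a RISC-V build with RVV 1.0 enabled, an RVV entry is expected among
# the listed optimizations (assumption about the exact label).
for line in cv2.getBuildInformation().splitlines():
    if "CPU" in line or "RVV" in line:
        print(line.strip())
```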
SAM4D introduces a 4D foundation model for promptable segmentation across camera and LiDAR streams, addressing the limitations of frame-centric and modality-isolated approaches in autonomous driving. Key Highlights: Promptable Multi-modal Segmentation (PMS) – Enables interactive segmentation across sequences from both modalities using diverse prompts (points, boxes, masks), allowing cross-modal propagation and long-term object tracking. Unified Multi-modal Positional…