In recent years, large language models (LLMs) have become a cornerstone of AI, powering chatbots, virtual assistants, and a variety of complex applications. Despite their success, a significant problem has emerged: the plateauing of the scaling laws that have historically driven model advancements. Simply put, building larger models is no longer providing the significant leaps…
As language models improve, their adoption is growing in more complex tasks such as free-form question answering and summarization. However, the more demanding the task, the higher the risk of LLM hallucinations. In this article, you’ll find: what the problem with hallucination is, which techniques we use to reduce…
Today, CLIP is one of the most important multimodal foundation models. It combines visual and textual signals into a shared feature space using a simple contrastive loss on large-scale image-text pairs. As a retriever, CLIP supports many tasks, including zero-shot classification, detection, segmentation, and image-text retrieval. Also, as a feature extractor, it…
Embodied artificial intelligence (AI) involves creating agents that function within physical or simulated environments, executing tasks autonomously based on pre-defined objectives. Often used in robotics and complex simulations, these agents leverage extensive datasets and sophisticated models to optimize behavior and decision-making. In contrast to more straightforward applications, embodied AI requires models capable of managing vast…
Devvret Rishi is the CEO and Cofounder of Predibase. Prior to that, he was an ML product leader at Google, working across products like Firebase, Google Research, and the Google Assistant, as well as Vertex AI. While there, Dev was also the first product lead for Kaggle – a data science and machine learning community with over…
By learning from complex data formats, deep learning has transformed various domains, including finance, healthcare, and e-commerce. However, applying deep learning models to tabular data, characterized by rows and columns, poses unique challenges. While deep learning has excelled in image and text analysis, classic machine learning techniques such as gradient-boosted decision trees still dominate tabular data…
A central challenge in advancing deep learning-based classification and retrieval tasks is achieving robust representations without the need for extensive retraining or labeled data. Numerous applications depend on extensive, pre-trained models functioning as feature extractors; however, these pre-trained embeddings often fail to encapsulate the specific details required for optimal performance in the absence of fine-tuning.…
Large-scale neural language models (LMs) excel at performing tasks similar to their training data and basic variations of those tasks. However, it remains unclear whether LMs can solve new problems involving non-trivial reasoning, planning, or string manipulation that differ from their pre-training data. This question is central to understanding current AI systems’ novel…
Image captioning has seen remarkable progress, but significant challenges remain, especially in creating captions that are both descriptive and factually accurate. Traditional image caption datasets, such as those relying purely on synthetic captions generated by vision-language models (VLMs) or web-scraped alt-text, often fall short in either rich descriptive detail or factual grounding. This shortcoming limits…
Data modeling and data analysis are two fundamental concepts in contemporary data science that frequently overlap yet differ significantly. Although both are crucial in turning raw data into insightful knowledge, they are distinct processes with distinct roles in a data-driven setting. Anyone who works with data, whether…