Language-based agentic systems represent a breakthrough in artificial intelligence, enabling the automation of tasks such as question-answering, programming, and advanced problem-solving. These systems, heavily reliant on Large Language Models (LLMs), communicate using natural language. This design reduces the engineering complexity of individual components and enables seamless interaction between them, paving the way for…
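To make the natural-language interface between components concrete, here is a minimal sketch of an agentic loop in which each component reads a shared transcript and replies with a plain-text message. The `run_dialogue` helper and the stub `planner`/`coder` agents are illustrative assumptions, not any particular system's API.

```python
from typing import Callable, List

Message = str
Agent = Callable[[List[Message]], Message]  # reads the transcript, replies in natural language

def run_dialogue(agents: List[Agent], task: Message, rounds: int = 2) -> List[Message]:
    """Minimal agentic loop: components interact purely by appending
    natural-language messages to a shared transcript."""
    transcript: List[Message] = [task]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent(transcript))
    return transcript

# Hypothetical stand-ins for LLM-backed components.
def planner(transcript: List[Message]) -> Message:
    return f"Plan for: {transcript[-1]}"

def coder(transcript: List[Message]) -> Message:
    return f"Code addressing: {transcript[-1]}"

print(run_dialogue([planner, coder], "Sort a list of numbers."))
```

Because every component speaks the same medium, swapping one agent for another requires no interface changes, which is the engineering simplification the paragraph above describes.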
The o1 model’s impressive performance on complex reasoning highlights the potential of test-time compute scaling, which enhances System-2 thinking by allocating greater computational effort during inference. While deep learning’s scaling effects have driven advances in AI, particularly in LLMs like GPT, further scaling during training faces limitations due to data scarcity and computational constraints. Additionally,…
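As a concrete illustration of the idea, the sketch below implements best-of-N sampling, one of the simplest forms of test-time compute scaling: draw several candidate answers and keep the highest-scoring one. The `generate` and `score` callables are hypothetical stand-ins for an LLM sampler and a verifier; this is not the o1 method itself.

```python
import random
from typing import Callable, List

def best_of_n(
    generate: Callable[[str], str],      # hypothetical sampler: prompt -> candidate answer
    score: Callable[[str, str], float],  # hypothetical verifier: (prompt, answer) -> score
    prompt: str,
    n: int = 16,
) -> str:
    """Spend extra inference compute by sampling n candidates and keeping the best.

    Increasing n trades more test-time compute for a higher chance that at
    least one sample is strong -- the simplest form of test-time scaling.
    """
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

# Toy usage with stubs standing in for an LLM and a verifier.
answers = ["wrong", "almost", "correct"]
gen = lambda prompt: random.choice(answers)
val = lambda prompt, ans: {"wrong": 0.0, "almost": 0.5, "correct": 1.0}[ans]
print(best_of_n(gen, val, "2 + 2 = ?", n=8))
```

The key property is that quality improves with `n` without touching the trained model, shifting the scaling axis from training to inference.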
The pre-training of language models (LMs) is crucial to their ability to understand and generate text. However, a significant challenge lies in effectively leveraging the diversity of training corpora, which often include data from varied sources such as Wikipedia, blogs, and social media. Models typically treat all input data equivalently, disregarding contextual…
Complex domains like social media, molecular biology, and recommendation systems produce graph-structured data consisting of nodes, edges, and their associated features. Because this data lacks the regular structure of grids or sequences, graph neural networks (GNNs) are essential for processing it. However, GNNs rely on labeled data, which is difficult and expensive to obtain.…
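For readers unfamiliar with GNNs, the following sketch shows a single message-passing layer over a toy graph: each node averages its neighbours' features and applies a shared linear map. It is a generic graph-convolution step written in NumPy for illustration, not the specific architecture of the work described above.

```python
import numpy as np

def gnn_layer(X: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One message-passing step: each node averages its neighbours' features,
    then applies a shared linear map followed by a ReLU.

    X: (num_nodes, in_dim)   node features
    A: (num_nodes, num_nodes) binary adjacency matrix
    W: (in_dim, out_dim)      learned weights
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees
    H = (A_hat / deg) @ X                    # mean aggregation over neighbours
    return np.maximum(H @ W, 0.0)            # linear transform + ReLU

# Toy 3-node path graph: edges 0-1 and 1-2.
X = np.random.randn(3, 4)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
W = np.random.randn(4, 8)
print(gnn_layer(X, A, W).shape)  # (3, 8)
```

Training `W` with supervision over node labels is what makes the labeled-data requirement mentioned above so central.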
Large language models (LLMs) have revolutionized natural language processing, enabling applications that range from automated writing to complex decision support. However, ensuring these models produce factually accurate responses remains a significant challenge. At times, LLMs generate outputs that appear credible but are factually incorrect, a phenomenon often referred to as “hallucination.” This issue becomes particularly…
Advancements in neural networks have brought significant changes across domains like natural language processing, computer vision, and scientific computing. Despite these successes, the computational cost of training such models remains a key challenge. Neural networks often employ higher-order tensor weights to capture complex relationships, but this introduces memory inefficiencies during training. Particularly in scientific computing,…
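To make the memory argument concrete, the sketch below compares a dense order-3 weight tensor with a rank-r CP factorization of the same shape: the factorized form stores r·(d1+d2+d3) parameters instead of d1·d2·d3. The shapes and the choice of CP decomposition are illustrative assumptions, not the specific method discussed here.

```python
import numpy as np

# A dense order-3 weight tensor of shape (d1, d2, d3) stores d1*d2*d3 numbers.
# A rank-r CP factorization stores only r*(d1 + d2 + d3) numbers instead.
d1, d2, d3, r = 64, 64, 64, 8

A = np.random.randn(d1, r)
B = np.random.randn(d2, r)
C = np.random.randn(d3, r)

# Reconstruct the full tensor: W[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
W = np.einsum('ir,jr,kr->ijk', A, B, C)

dense_params = d1 * d2 * d3        # 262,144 parameters
cp_params = r * (d1 + d2 + d3)     # 1,536 parameters
print(f"dense: {dense_params}, CP rank-{r}: {cp_params}")
```

Keeping only the factors in memory during training is what yields the savings, at the cost of an approximation error controlled by the rank r.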
Video-Language Representation Learning is a crucial subfield of multi-modal representation learning that focuses on the relationship between videos and their associated textual descriptions. Its applications span numerous areas, from question answering and text retrieval to summarization. In this regard, contrastive learning has emerged as a powerful technique that elevates video-language learning by enabling…
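As background, the standard contrastive objective in this setting is a symmetric InfoNCE loss over a batch of paired video and text embeddings: matched pairs sit on the diagonal of the similarity matrix and are pulled together, while mismatched pairs are pushed apart. The NumPy sketch below assumes precomputed embeddings; names, shapes, and the temperature value are illustrative.

```python
import numpy as np

def info_nce(video_emb: np.ndarray, text_emb: np.ndarray, tau: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired (video, text) embeddings.

    Row i of each matrix is one pair, so the targets lie on the diagonal
    of the similarity matrix.
    """
    # L2-normalise so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / tau  # (batch, batch) similarity matrix

    def cross_entropy(l: np.ndarray) -> float:
        # Target for row i is column i (its own pair).
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs.diagonal().mean()

    # Average the video->text and text->video directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

v = np.random.randn(4, 128)
t = np.random.randn(4, 128)
print(info_nce(v, t))
```

Minimizing this loss aligns the two modalities in a shared embedding space, which is what downstream retrieval and question-answering tasks then exploit.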
Multimodal foundation models are becoming increasingly relevant in artificial intelligence, enabling systems to process and integrate multiple forms of data—such as images, text, and audio—to address diverse tasks. However, these systems face significant challenges. Existing models often struggle to generalize across a wide variety of modalities and tasks due to their reliance on limited datasets…