Natural language processing (NLP) continues to evolve with methods like in-context learning (ICL), which offers new ways to enhance large language models (LLMs). ICL conditions a model on a handful of example demonstrations without modifying the model's parameters, making it especially valuable for adapting LLMs to new tasks quickly, without retraining. However, ICL can be highly…
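To make the idea concrete, here is a minimal sketch of how ICL adapts a model through demonstrations placed in the prompt rather than through weight updates. The sentiment task and the demonstrations are hypothetical, and any real LLM client would be slotted in where indicated.

```python
# Minimal sketch of in-context learning (ICL): the model is adapted to a task
# purely through demonstrations placed in the prompt; no parameters change.
# The task and demonstrations below are hypothetical examples.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]

def build_icl_prompt(demos, query):
    """Concatenate labeled demonstrations followed by the unlabeled query."""
    lines = []
    for text, label in demos:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_icl_prompt(demonstrations, "A tedious, overlong mess.")
print(prompt)
# The assembled prompt would be sent to any LLM completion endpoint;
# the model infers the task format from the demonstrations alone.
```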
Biomolecular dynamics simulations are crucial for the life sciences, offering insights into molecular interactions. While classical molecular dynamics (MD) simulations are computationally efficient, they lack chemical precision. Methods like density functional theory (DFT) achieve high accuracy but are too computationally intensive for large biomolecules. MD simulations allow direct observation of molecular behavior, with classical MD using interatomic potentials…
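As an illustration of what "classical MD using interatomic potentials" means in practice, below is a minimal sketch of velocity-Verlet integration under a Lennard-Jones pair potential. The reduced units, parameters, and two-particle system are simplifying assumptions, not a production setup.

```python
import numpy as np

# Minimal sketch of classical MD: forces come from an empirical interatomic
# potential (Lennard-Jones here), integrated with velocity Verlet.
# Reduced units and a two-particle system are simplifying assumptions.

EPS, SIGMA, DT, MASS = 1.0, 1.0, 1e-3, 1.0

def lj_force(pos):
    """Pairwise Lennard-Jones forces for an (N, 3) position array."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r = np.linalg.norm(r_vec)
            # -dU/dr of 4*eps*((sigma/r)^12 - (sigma/r)^6), as a magnitude
            mag = 24 * EPS * (2 * (SIGMA / r) ** 12 - (SIGMA / r) ** 6) / r
            forces[i] += mag * r_vec / r
            forces[j] -= mag * r_vec / r
    return forces

def velocity_verlet_step(pos, vel, forces):
    """One integration step; returns updated positions, velocities, forces."""
    vel_half = vel + 0.5 * DT * forces / MASS
    pos_new = pos + DT * vel_half
    forces_new = lj_force(pos_new)
    vel_new = vel_half + 0.5 * DT * forces_new / MASS
    return pos_new, vel_new, forces_new

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
f = lj_force(pos)
for _ in range(1000):
    pos, vel, f = velocity_verlet_step(pos, vel, f)
print("final separation:", np.linalg.norm(pos[0] - pos[1]))
```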
Large language models (LLMs) have shown exceptional capabilities in comprehending human language, reasoning, and knowledge acquisition, suggesting their potential to serve as autonomous agents. However, training high-performance web agents based on open LLMs within online environments, such as WebArena, faces several critical challenges. The first challenge is the lack of sufficient predefined training tasks in online benchmarks. The…
Large language models (LLMs) have revolutionized natural language processing, making strides in text generation, summarization, and translation. Yet even though they excel at language tasks, they struggle with complex, multi-step reasoning problems that require careful progression through each step. Researchers have been exploring structured frameworks that enhance these models' reasoning abilities, moving beyond conventional…
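One common shape such frameworks take is to decompose a problem into ordered steps, query the model once per step, and feed earlier answers forward as context. The sketch below shows that scaffolding; `call_llm` is a hypothetical placeholder for any real model API, not a specific framework's method.

```python
# Minimal sketch of a structured multi-step reasoning scaffold: the problem is
# decomposed into ordered steps, each step is answered in turn, and earlier
# answers are carried forward as context. `call_llm` is a hypothetical
# stand-in for any real model API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes the prompt tail for demo."""
    return f"<answer to: {prompt.splitlines()[-1]}>"

def solve_stepwise(problem: str, steps: list[str]) -> list[tuple[str, str]]:
    """Answer each sub-step with all prior (step, answer) pairs in context."""
    history: list[tuple[str, str]] = []
    for step in steps:
        context = "\n".join(f"{s}: {a}" for s, a in history)
        prompt = f"Problem: {problem}\n{context}\nNext step: {step}"
        history.append((step, call_llm(prompt)))
    return history

trace = solve_stepwise(
    "How many hours are in two weeks?",
    ["Find days in two weeks", "Multiply days by 24", "State the final answer"],
)
for step, answer in trace:
    print(step, "->", answer)
```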
Machine learning research has advanced toward models that can autonomously design and discover data structures tailored to specific computational tasks, such as nearest neighbor (NN) search. This shift in methodology allows models to learn not only the structure of data but also how to optimize query responses, minimizing storage needs and computation time. Machine learning…
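To ground the idea of a structure that trades storage and computation against query quality, here is a minimal sketch of partition-based nearest neighbor search: k-means centroids learned from the data route each query to a small candidate set. This is an illustrative baseline standing in for the learned structures such research targets, not a reproduction of any specific method.

```python
import numpy as np

# Minimal sketch of a data structure for NN search: k-means centroids act as
# a model of the data, and each query probes only its closest partition,
# trading a little accuracy for much less computation per query.
# Data sizes and parameters are illustrative assumptions.

rng = np.random.default_rng(0)
data = rng.normal(size=(5_000, 32)).astype(np.float32)

def kmeans(points, k=64, iters=10):
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = points[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, assign

centroids, assign = kmeans(data)
buckets = {c: np.flatnonzero(assign == c) for c in range(len(centroids))}

def query_nn(q):
    """Search only the partition whose centroid is closest to the query."""
    c = int(np.argmin(((centroids - q) ** 2).sum(-1)))
    ids = buckets[c]
    if len(ids) == 0:          # empty partition: fall back to a full scan
        ids = np.arange(len(data))
    best = ids[np.argmin(((data[ids] - q) ** 2).sum(-1))]
    return int(best)

q = rng.normal(size=32).astype(np.float32)
print("approx NN id:", query_nn(q))
```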
In the rapidly evolving field of household robotics, a significant challenge has emerged in executing personalized organizational tasks, such as arranging groceries in a refrigerator. These tasks require robots to balance user preferences with physical constraints while avoiding collisions and maintaining stability. While Large Language Models (LLMs) enable natural language communication of user preferences, this…
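The trade-off between hard physical constraints and soft user preferences can be made concrete with a small sketch: candidate placements are filtered by feasibility checks (collision, shelf bounds) and the survivors ranked by a preference score. All shelves, item shapes, and preference rules below are hypothetical simplifications.

```python
from dataclasses import dataclass

# Minimal sketch of preference-vs-constraint placement: hard checks prune
# infeasible candidates, a soft score ranks the rest. All values hypothetical.

@dataclass
class Placement:
    item: str
    shelf: str
    x: float          # position along the shelf, in meters
    width: float      # footprint of the item, in meters

SHELF_WIDTH = 0.6
occupied: list[Placement] = [Placement("milk", "door", 0.0, 0.1)]

def collides(p: Placement) -> bool:
    """Hard constraint: footprints on the same shelf must not overlap."""
    return any(o.shelf == p.shelf and abs(o.x - p.x) < (o.width + p.width) / 2
               for o in occupied)

def preference_score(p: Placement, prefs: dict[str, str]) -> float:
    """Soft constraint: reward matching the user's stated shelf preference."""
    return 1.0 if prefs.get(p.item) == p.shelf else 0.0

def choose(candidates: list[Placement], prefs: dict[str, str]) -> Placement:
    feasible = [c for c in candidates
                if not collides(c) and c.x + c.width / 2 <= SHELF_WIDTH]
    return max(feasible, key=lambda c: preference_score(c, prefs))

prefs = {"juice": "door"}
options = [Placement("juice", "door", 0.05, 0.1),   # collides with milk
           Placement("juice", "door", 0.3, 0.1),    # feasible, preferred
           Placement("juice", "middle", 0.3, 0.1)]  # feasible, not preferred
print(choose(options, prefs))
```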
AI’s rapid rise has been driven by powerful language models, transforming industries from customer service to content creation. However, many languages, particularly those from smaller linguistic communities, lack access to cutting-edge AI tools. Vietnamese, spoken by over 90 million people, is one such underserved language. With most AI advancements focusing on major global languages, reliable…
Natural language processing (NLP) has made incredible strides in recent years, particularly through the use of large language models (LLMs). However, one of the primary issues with these LLMs is that they have largely focused on data-rich languages such as English, leaving behind many underrepresented languages and dialects. Moroccan Arabic, also known as Darija, is…
The routing mechanism of Mixture-of-Experts (MoE) models raises a serious privacy challenge. It optimizes large language model (LLM) performance by selectively activating only a fraction of the total parameters, but in doing so it makes the model susceptible to adversarial data extraction through routing-dependent interactions. This risk, most evident with the Expert-Choice Routing (ECR) mechanism, could let an attacker siphon out user…
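To see why routing creates an input-dependent side channel, here is a minimal sketch of top-k token routing in an MoE layer: which experts fire is a function of the input, so the pattern of expert assignments carries information about it. The dimensions and random weights are illustrative assumptions, and this generic top-k gate stands in for routing schemes like ECR rather than implementing any particular one.

```python
import numpy as np

# Minimal sketch of MoE top-k routing: a gate scores experts per token and
# only the top-k experts run, so the *pattern* of expert assignments depends
# on the input itself -- the property routing-based extraction attacks probe.
# Dimensions and random weights are illustrative assumptions.

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 8, 2
gate_w = rng.normal(size=(D, N_EXPERTS))
expert_w = rng.normal(size=(N_EXPERTS, D, D))

def moe_layer(tokens):
    """Route each token to its top-k experts; return outputs and assignments."""
    logits = tokens @ gate_w                      # (T, N_EXPERTS)
    top = np.argsort(logits, axis=1)[:, -TOP_K:]  # chosen experts per token
    sel = np.take_along_axis(logits, top, axis=1) # softmax over selected only
    weights = np.exp(sel - sel.max(1, keepdims=True))
    weights /= weights.sum(1, keepdims=True)
    out = np.zeros_like(tokens)
    for t in range(len(tokens)):
        for slot in range(TOP_K):
            e = top[t, slot]
            out[t] += weights[t, slot] * (tokens[t] @ expert_w[e])
    return out, top

tokens = rng.normal(size=(4, D))
_, assignments = moe_layer(tokens)
print("expert assignments per token:", assignments)
# Different inputs generally produce different assignment patterns, which is
# the routing-dependent signal an adversary can observe and exploit.
```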
Collaborative Filtering (CF) is widely used in recommender systems to match user preferences with items but often struggles with complex relationships and adapting to evolving user interactions. Recently, researchers have explored using LLMs to enhance recommendations by leveraging their reasoning abilities. LLMs have been integrated into various stages, from knowledge generation to candidate ranking. While…
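As a reference point for what classical CF does before any LLM is involved, here is a minimal item-based collaborative filtering sketch over a toy ratings matrix. The data and the cosine-similarity choice are illustrative assumptions, not a production recommender.

```python
import numpy as np

# Minimal item-based collaborative filtering sketch: unrated items are scored
# for a user via cosine similarity between item rating columns. The toy
# matrix and similarity choice are illustrative assumptions.

# rows = users, cols = items; 0 means "unrated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 0, 4, 1],
    [1, 1, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

n_items = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n_items)] for i in range(n_items)])

def recommend(user, k=1):
    """Score each unrated item by similarity-weighted ratings the user gave."""
    scores = {}
    for item in range(n_items):
        if ratings[user, item] == 0:
            rated = [j for j in range(n_items) if ratings[user, j] > 0]
            num = sum(sim[item, j] * ratings[user, j] for j in rated)
            den = sum(abs(sim[item, j]) for j in rated)
            scores[item] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:k]

print("recommendations for user 0:", recommend(0))
```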