Achieving expert-level performance in complex reasoning tasks is a significant challenge in artificial intelligence (AI). Models like OpenAI’s o1 demonstrate advanced reasoning capabilities akin to those of highly trained experts. However, reproducing such models involves addressing complex hurdles, including managing the vast action space during training, designing effective reward signals, and scaling search and learning… →
Large Language Models (LLMs) have become an integral part of modern AI applications, powering tools like chatbots and code generators. However, the increased reliance on these models has revealed critical inefficiencies in inference processes. Attention mechanisms, such as FlashAttention and SparseAttention, often struggle with diverse workloads, dynamic input patterns, and GPU resource limitations. These challenges,… →
Large Language Models (LLMs) face significant scalability limitations in improving their reasoning capabilities through data-driven imitation, as better performance demands exponentially more high-quality training examples. Exploration-based methods, particularly reinforcement learning (RL), offer a promising alternative to overcome these limitations. The transition from data-driven to exploration-based approaches presents two key challenges: developing efficient methods to generate… →
Artificial intelligence (AI) has made significant strides in developing language models capable of solving complex problems. However, applying these models to real-world scientific challenges remains difficult. Many AI agents struggle with tasks requiring multiple cycles of observation, reasoning, and action. Moreover, existing models often lack the ability to integrate tools effectively or maintain consistency in… →
Software engineering agents have become essential for managing complex coding tasks, particularly in large repositories. These agents employ advanced language models to interpret natural language descriptions, analyze codebases, and implement modifications. Their applications include debugging, feature development, and optimization. The effectiveness of these systems relies on their ability to handle real-world challenges, such as interacting… →
Appropriateness refers to the context-specific standards that guide behavior, speech, and actions in various social settings. Humans naturally navigate these norms, acting differently depending on whether they are among friends, with family, or in a professional environment. Similarly, AI systems must adapt their behavior to fit the context, as the standards for a comedy-writing assistant differ from… →
Large Language Models (LLMs) have revolutionized text generation capabilities, but they face the critical challenge of hallucination, generating factually incorrect information, particularly in long-form content. Researchers have developed Retrieval-Augmented Generation (RAG) to address this issue, which enhances factual accuracy by incorporating relevant documents from reliable sources into the input prompt. While RAG has shown promise,… →
Designing GUI agents that perform human-like tasks on graphical user interfaces faces a critical obstacle: collecting high-quality trajectory data for training. Existing methods depend on expensive and time-consuming human supervision or on generating synthetic data, which rarely reflects the diversity and dynamics of the real world. These constraints significantly limit the GUI agents’ scalability… →
Power distribution systems are often conceptualized as optimization models. While optimizing agents to perform tasks works well for systems with limited checkpoints, things quickly get out of hand when heuristics must handle multiple tasks and agents. Scaling dramatically increases the complexity of the resulting assignment problems, which are often NP-hard and nonlinear. Optimization methods become the white elephants in… →
The development of large language models (LLMs) has significantly advanced artificial intelligence (AI) across various fields. Among these advancements, mobile GUI agents—designed to perform tasks autonomously on smartphones—show considerable potential. However, evaluating these agents poses notable challenges. Current datasets and benchmarks often rely on static frame evaluations, which provide snapshots of app interfaces for agents… →