The dynamics of protein structures are crucial for understanding their functions and developing targeted drug treatments, particularly for cryptic binding sites. However, existing methods for generating conformational ensembles are either inefficient or fail to generalize beyond the systems they were trained on. Molecular dynamics (MD) simulations, the current standard for exploring protein…
Artificial intelligence (AI) and machine learning (ML) revolve around building models capable of learning from data to perform tasks like language processing, image recognition, and making predictions. A significant aspect of AI research focuses on neural networks, particularly transformers. These models use attention mechanisms to process data sequences more effectively. By allowing the model to…
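As a rough illustration of the attention mechanism mentioned above, the sketch below computes scaled dot-product attention over a toy sequence with NumPy. The array sizes and the `scaled_dot_product_attention` helper are illustrative assumptions, not taken from any specific model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # weighted sum of value vectors

# A toy "sequence" of 4 tokens with 8-dimensional embeddings (made-up data).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)          # self-attention: Q = K = V = x
print(out.shape)  # (4, 8): each token becomes a context-aware mixture of the others
```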
Artificial intelligence is advancing rapidly, but enterprises face many obstacles when trying to leverage AI effectively. Organizations require models that are adaptable, secure, and capable of understanding domain-specific contexts while also maintaining compliance and privacy standards. Traditional AI models often struggle with delivering such tailored performance, requiring businesses to make a trade-off between customization and…
Model Predictive Control (MPC), or receding horizon control, aims to maximize an objective function over a planning horizon by leveraging a dynamics model and a planner to select actions. The flexibility of MPC allows it to adapt to novel reward functions at test time, unlike policy learning methods that focus on a fixed reward. Diffusion…
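To make the receding-horizon loop concrete, here is a minimal random-shooting sketch in Python: an assumed toy linear dynamics model, an assumed quadratic reward, and a planner that samples action sequences over a short horizon and executes only the first action of the best one. The dynamics, reward, horizon, and sample count are illustrative placeholders, not the formulation of any particular method.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(state, action):
    """Assumed toy linear forward model; MPC only needs some predictive model here."""
    return 0.95 * state + action

def reward(state, action):
    """Assumed quadratic objective: stay near the origin while using small actions."""
    return -(state ** 2).sum() - 0.1 * (action ** 2).sum()

def plan(state, horizon=10, n_samples=256):
    """Random-shooting planner: sample action sequences, roll them out with the
    dynamics model, and return the first action of the highest-return sequence."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, state.shape[0]))
    best_return, best_action = -np.inf, None
    for seq in candidates:
        s, total = state.copy(), 0.0
        for a in seq:
            total += reward(s, a)
            s = dynamics(s, a)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

# Receding-horizon loop: re-plan at every step and execute only the first action.
state = np.array([2.0, -1.5])
for _ in range(20):
    action = plan(state)
    state = dynamics(state, action)
print(state)  # the state should be driven roughly toward the origin
```

Because the reward function is only queried inside the planner, swapping in a different objective at test time requires no retraining, which is the flexibility the paragraph above refers to.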
Large language models (LLMs) have revolutionized various domains, including code completion, where artificial intelligence predicts and suggests code based on a developer’s previous inputs. This technology significantly enhances productivity, enabling developers to write code faster and with fewer errors. Despite the promise of LLMs, many models struggle with balancing speed and accuracy. Larger models often…
Large language models (LLMs) have revolutionized the field of artificial intelligence by performing a wide range of tasks across different domains. These models are expected to work seamlessly in multiple languages, solving complex problems while ensuring safety. However, the challenge lies in maintaining safety without compromising performance, especially in multilingual settings. As AI technologies become…
Vision-Language-Action Models (VLA) for robotics are trained by combining large language models with vision encoders and then fine-tuning them on various robot datasets; this allows generalization to new instructions, unseen objects, and distribution shifts. However, collecting real-world robot datasets mostly requires human control, which makes scaling difficult. On the other hand, Internet video data offers…
A primary feature of sophisticated language models is In-Context Learning (ICL), which allows the model to produce answers based on input instances without being specifically instructed on how to complete the task. In ICL, a few examples that demonstrate the intended behavior or pattern are shown to the model, which then applies this knowledge to…
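The sketch below illustrates the few-shot setup described above: a handful of input-output demonstrations are concatenated into a single prompt ahead of the new query, and the model is expected to continue the pattern. The sentiment task, the example reviews, and the `build_icl_prompt` helper are hypothetical; no specific model or API is assumed.

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate a few labeled demonstrations followed by the new query,
    leaving the final label blank for the model to complete."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

# Hypothetical demonstrations for a toy sentiment task.
demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A charming, quietly moving film.")
print(prompt)
# This prompt would be sent to an LLM, which infers the task from the
# demonstrations alone and is expected to continue with "positive".
```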
The discovery of new materials is crucial to addressing pressing global challenges such as climate change and advancements in next-generation computing. However, existing computational and experimental approaches face significant limitations in efficiently exploring the vast chemical space. While AI has emerged as a powerful tool for materials discovery, the lack of publicly available data and…
The growing reliance on large language models for coding support poses a significant problem: how best to assess their real-world impact on programmer productivity? Current approaches, such as static benchmarking on datasets like HumanEval, measure the correctness of generated code but cannot capture the dynamic, human-in-the-loop interaction of real programming activity. With LLMs increasingly…
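As a rough sketch of what such static, correctness-based benchmarking looks like, the snippet below executes a model-generated function against a small set of unit tests and reports only pass or fail. The candidate solution and tests are made up for illustration and do not come from HumanEval itself.

```python
# Hypothetical model-generated candidate for a HumanEval-style task.
candidate_code = """
def add_evens(xs):
    return sum(x for x in xs if x % 2 == 0)
"""

# Static benchmarking: run the candidate against hidden unit tests and
# record only whether every assertion passes.
test_cases = [
    ("add_evens([1, 2, 3, 4])", 6),
    ("add_evens([])", 0),
    ("add_evens([7, 9])", 0),
]

namespace = {}
exec(candidate_code, namespace)  # real harnesses sandbox this step

passed = all(eval(expr, namespace) == expected for expr, expected in test_cases)
print("pass" if passed else "fail")
```

Nothing in this loop observes how a developer would actually edit, accept, or reject suggestions, which is exactly the human-in-the-loop gap the paragraph above points to.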