Despite the advances AI has made in drug discovery, a vast amount of potential remains untapped. Therapeutic nanobodies in particular have seen relatively few breakthroughs, as their development requires complex interdisciplinary knowledge. The COVID-19 pandemic spurred the development of therapeutic nanobodies that exhibit high binding affinity and stability against SARS-CoV-2 in a…
Parallel computing continues to advance, addressing the demands of high-performance tasks such as deep learning, scientific simulations, and data-intensive computations. A fundamental operation within this domain is matrix multiplication, which underpins many computational workflows. Recent hardware innovations, like Tensor Core Units (TCUs), offer efficient processing by optimizing constant-size matrix multiplications. These units are now being…
Geometry representations play a crucial role in solving complex 3D vision problems. The rapid evolution of deep learning has sparked significant interest in developing neural network-compatible geometric data representations. Recent technological advances, particularly those centered on coordinate networks, have demonstrated promising capabilities in modeling 3D geometry across diverse applications. These coordinate networks offer a functional…
In recent years, the evolution of artificial intelligence has brought forth increasingly sophisticated large language models (LLMs). However, training these models remains a complex challenge due to their immense computational requirements. Traditionally, training such models has been possible only in centralized environments with high-bandwidth interconnects, typically within large data centers controlled by a few tech…
Deep learning techniques are increasingly applied to neuroimaging analysis, with 3D CNNs offering superior performance for volumetric imaging. However, their reliance on large datasets poses a challenge, given the high cost and effort of medical data collection and annotation. As an alternative, 2D CNNs utilize 2D projections of 3D images, which often limits volumetric…
The rapid development of artificial intelligence (AI) has produced models with powerful capabilities, such as language understanding and vision processing. However, deploying these models on edge devices remains challenging due to limitations in computational power, memory, and energy efficiency. The need for lightweight models that can run effectively on edge devices, while still delivering competitive…
Machine learning is advancing rapidly, particularly in areas requiring extensive data processing, such as natural language understanding and generative AI. Researchers are constantly striving to design algorithms that maximize computational efficiency while improving the accuracy and performance of large-scale models. These efforts are critical for building systems capable of managing the complexities of language representation,…
Generative AI (Gen AI) is transforming the landscape of artificial intelligence, opening up new opportunities for creativity, problem-solving, and automation. Despite its potential, several challenges arise for developers and businesses when implementing Gen AI solutions. One of the most prominent issues is the lack of interoperability between different large language models (LLMs) from multiple providers.…
Large language models (LLMs) with long-context processing capabilities have revolutionized technological applications across multiple domains. Recent advancements have enabled sophisticated use cases including repository-level coding assistance, multi-document analysis, and autonomous agent development. These models demonstrate remarkable potential in handling extensive contextual information, which in turn requires advanced mechanisms to retrieve and integrate dispersed details effectively. However, the current…
Large Language Models (LLMs) have revolutionized data analysis by introducing novel approaches to regression tasks. Traditional regression techniques have long relied on handcrafted features and domain-specific expertise to model relationships between metrics and selected features. However, these methods often struggle with complex, nuanced datasets that require semantic understanding beyond numerical representations. LLMs provide a groundbreaking…