Cardiotocography (CTG) is a non-invasive method used to monitor fetal heart rate and uterine contractions during pregnancy. These recordings can help identify potential complications early, such as fetal distress, preeclampsia, or preterm labor. However, interpreting CTG recordings is subjective and error-prone, which can lead to misdiagnosis and delayed intervention. It can be…
Retrieval Augmented Generation (RAG) is an AI framework that improves the output of a Large Language Model (LLM) by referencing a credible knowledge base outside its training data. RAG combines the generative capabilities of LLMs with the strengths of traditional information retrieval systems, such as databases, to help AI produce more accurate and relevant text…
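As a concrete illustration of this pattern, here is a minimal sketch: retrieve the most relevant passages by embedding similarity, then prepend them to the prompt before generation. The `embed` and `llm_generate` callables are hypothetical stand-ins for whatever embedding model and LLM are actually in use.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages, then
# condition the LLM on them. `embed` and `llm_generate` are hypothetical
# stand-ins, not a specific library's API.
import numpy as np

def retrieve(query_vec: np.ndarray, doc_vecs: np.ndarray,
             docs: list[str], k: int = 3) -> list[str]:
    # Cosine similarity between the query and every document vector.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

def rag_answer(question: str, docs: list[str], embed, llm_generate) -> str:
    doc_vecs = np.stack([embed(d) for d in docs])
    context = retrieve(embed(question), doc_vecs, docs)
    prompt = ("Answer using only the context below.\n\n"
              + "\n---\n".join(context) + f"\n\nQuestion: {question}")
    return llm_generate(prompt)
```

Production systems typically swap the brute-force similarity scan for an approximate nearest-neighbor index, but the retrieve-then-generate structure is the same.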
Cardinality estimation (CE) is essential to many database-related tasks, such as query generation, cost estimation, and query optimization. Accurate CE is necessary to ensure optimal query planning and execution within a database system. Adopting machine learning (ML) techniques has introduced new possibilities for CE, allowing researchers to leverage ML models’ robust learning and representation capabilities…
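To make the idea concrete, a toy learned estimator might regress from simple predicate features to log-cardinality. Real systems use far richer featurizations of queries and data; the column values and range predicates below are synthetic, so treat this purely as a sketch of the learning setup.

```python
# Toy ML-based cardinality estimation: featurize range predicates and
# regress on log-cardinality, evaluated with the standard q-error metric.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
col = rng.normal(50, 15, size=100_000)           # one synthetic table column

def true_card(lo: float, hi: float) -> int:
    # Ground truth: rows satisfying the predicate lo <= col <= hi.
    return int(np.count_nonzero((col >= lo) & (col <= hi)))

# Training set: random range predicates mapped to log(cardinality).
lows = rng.uniform(0, 100, 2_000)
highs = lows + rng.uniform(0, 50, 2_000)
X = np.column_stack([lows, highs, highs - lows])
y = np.log1p([true_card(l, h) for l, h in zip(lows, highs)])

model = GradientBoostingRegressor().fit(X, y)

est = float(np.expm1(model.predict([[40.0, 60.0, 20.0]]))[0])
tru = true_card(40.0, 60.0)
print(f"estimate={est:.0f} true={tru} "
      f"q-error={max(est, tru) / max(min(est, tru), 1):.2f}")
```

The q-error (the larger of estimate/true and true/estimate) is the usual accuracy measure in CE work, since over- and under-estimation both mislead the optimizer.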
Large language models (LLMs) have gained significant attention in machine learning, shifting the focus from optimizing generalization on small datasets to reducing approximation error on massive text corpora. This paradigm shift presents researchers with new challenges in model development and training methodologies. The primary objective has evolved from preventing overfitting through regularization techniques to effectively…
The field of artificial intelligence (AI) is evolving rapidly, particularly in multimodal learning. Multimodal models aim to combine visual and textual information, enabling machines to understand and generate content that draws on both sources. This capability is vital for tasks such as image captioning, visual question answering, and content creation, where more than a single data…
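One minimal way to see the mechanics is late fusion: project image and text features into a shared embedding space and score their agreement, the basic idea behind CLIP-style models. The projection matrices and feature vectors below are random placeholders rather than trained weights, so this only shows the shape of the computation.

```python
# Late-fusion sketch: map both modalities into one shared space and
# measure cosine agreement. In a trained model W_img and W_txt are
# learned; here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_txt, d_shared = 512, 768, 256
W_img = rng.normal(size=(d_img, d_shared))
W_txt = rng.normal(size=(d_txt, d_shared))

def similarity(img_feat: np.ndarray, txt_feat: np.ndarray) -> float:
    zi, zt = img_feat @ W_img, txt_feat @ W_txt
    return float(zi @ zt / (np.linalg.norm(zi) * np.linalg.norm(zt)))

print(similarity(rng.normal(size=d_img), rng.normal(size=d_txt)))
```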
Language models have become a cornerstone of modern NLP, enabling significant advancements in various applications, including text generation, machine translation, and question-answering systems. Recent research has focused on scaling these models in terms of the amount of training data and the number of parameters. The resulting scaling laws demonstrate that increasing data and model parameters…
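A common parametric form for such laws writes expected loss as a power law in parameter count N and token count D. The constants below roughly match the fit reported by Hoffmann et al. (2022) for the Chinchilla analysis; treat them as illustrative assumptions rather than universal values.

```python
# Parametric scaling law: loss falls as a power law in parameters N and
# training tokens D. Constants are illustrative (roughly the Chinchilla
# fit), not ground truth for any particular model family.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(N: float, D: float) -> float:
    """Predicted pretraining loss for N parameters and D training tokens."""
    return E + A / N**alpha + B / D**beta

# Doubling either axis alone yields diminishing returns:
print(predicted_loss(70e9, 1.4e12))   # a Chinchilla-scale configuration
print(predicted_loss(140e9, 1.4e12))  # 2x parameters, same data
print(predicted_loss(70e9, 2.8e12))   # same parameters, 2x data
```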
Large language models (LLMs) are designed to understand and manage complex language tasks by capturing context and long-term dependencies. A critical factor for their performance is the ability to handle long-context inputs, which allows for a deeper understanding of content over extensive text sequences. However, this advantage comes with the drawback of increased memory usage,…
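Much of that memory cost comes from the attention KV cache, which grows linearly with context length. A back-of-the-envelope calculation for a hypothetical 7B-class transformer (32 layers, 32 heads, head dimension 128, fp16 weights) shows how quickly it dominates; plug in your own model's numbers.

```python
# Back-of-the-envelope KV-cache footprint for long-context inference.
# The default config is a hypothetical 7B-class transformer.
def kv_cache_bytes(seq_len: int, n_layers: int = 32, n_heads: int = 32,
                   head_dim: int = 128, bytes_per_elem: int = 2,
                   batch: int = 1) -> int:
    # Factor of 2: one cached tensor for keys and one for values per layer.
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_elem * batch

for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB")
```

Under these assumptions the cache alone grows from about 2 GiB at 4K tokens to about 64 GiB at 128K tokens, which is why long-context inference is memory-bound.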
Weight decay and ℓ2 regularization are crucial in machine learning, especially in limiting network capacity and reducing irrelevant weight components. These techniques align with Occam’s razor principles and are central to discussions on generalization bounds. However, recent studies have questioned the correlation between norm-based measures and generalization in deep networks. Although weight decay is widely…
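The distinction the literature draws is between adding an ℓ2 penalty to the loss, which enters the update through the gradient, and decaying the weights directly in the update rule. The two coincide for vanilla SGD but differ under adaptive optimizers such as AdamW, which is why "decoupled" weight decay is treated separately. A minimal sketch, with an assumed regularization strength `lam`:

```python
# The two updates contrasted above, on scalar weights for clarity.
lr, lam = 0.1, 0.01  # learning rate and regularization strength (assumed)

def sgd_l2(w: float, grad: float) -> float:
    # l2 regularization: the penalty (lam/2) * w^2 enters via its gradient.
    return w - lr * (grad + lam * w)

def sgd_decoupled_wd(w: float, grad: float) -> float:
    # Decoupled weight decay: shrink w directly, independent of the gradient.
    return w * (1 - lr * lam) - lr * grad

w, g = 1.0, 0.5
print(sgd_l2(w, g), sgd_decoupled_wd(w, g))  # identical for plain SGD
```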
Reinforcement learning (RL) is a domain within artificial intelligence that trains agents to make sequential decisions through trial and error in an environment. This approach enables the agent to learn by interacting with its surroundings, receiving rewards or penalties based on its actions. However, training agents to perform optimally in complex tasks requires access to…
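A minimal example of this trial-and-error loop is tabular Q-learning, where the agent updates value estimates from the rewards it receives. The `env` object here is a hypothetical environment with `reset() -> state` and `step(action) -> (state, reward, done)`; states and actions are small integers.

```python
# Tabular Q-learning: epsilon-greedy exploration plus a temporal-difference
# update toward reward + discounted next-state value.
import random

def q_learning(env, n_states: int, n_actions: int, episodes: int = 500,
               alpha: float = 0.1, gamma: float = 0.99, eps: float = 0.1):
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: mostly exploit, sometimes explore.
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r, done = env.step(a)
            # TD update; the bootstrapped term is zeroed on terminal states.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
            s = s2
    return Q
```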
Large Language Models (LLMs) are vulnerable to jailbreak attacks, which can elicit offensive, immoral, or otherwise improper content. These attacks exploit flaws in LLMs to circumvent the safety precautions meant to prevent offensive or hazardous outputs from being generated. Evaluating jailbreak attacks is difficult, and existing benchmarks and evaluation methods…