Artificial Intelligence (AI) and Machine Learning (ML) have been transformative in numerous fields, but a significant challenge remains in the reproducibility of experiments. Researchers frequently rely on previously published work to validate or extend their findings. This process often involves running complex code from research repositories. However, setting up these repositories, configuring the environment, and…
Generative Large Language Models (LLMs) are capable of in-context learning (ICL), the process of learning a task from examples given within a prompt, without any update to the model's weights. However, research on the precise principles underlying these models’ ICL performance is still underway. Inconsistent experimental results are one of the main obstacles, making it challenging to provide a clear explanation…
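To make the definition concrete, here is a minimal sketch of how a few-shot ICL prompt is typically assembled; the helper name and the demonstrations are illustrative, not taken from any of the studies discussed here:

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt. The model 'learns' the task purely
    from the demonstrations embedded in its context window; no
    parameters are updated."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{demos}\nInput: {query}\nOutput:"

# Hypothetical usage: two demonstrations of a toy addition task,
# followed by the query the model is asked to complete.
prompt = build_icl_prompt([("2+2", "4"), ("3+5", "8")], "1+6")
```

Because the demonstrations are ordinary text, seemingly minor choices (example order, formatting, label wording) change the prompt the model sees, which is one reason experimental ICL results can be inconsistent across setups.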
Large language models (LLMs) like GPT-4 have become a significant focus in artificial intelligence due to their ability to handle various tasks, from generating text to solving complex mathematical problems. These models have demonstrated capabilities far beyond their original design objective of predicting the next word in a sequence. While their utility spans numerous industries,…
Predicting battery lifespan is difficult due to the nonlinear nature of capacity degradation and the uncertainty of operating conditions. As battery lifespan prediction is vital for the reliability and safety of systems like electric vehicles and energy storage, there is a growing need for advanced methods to provide precise estimations of both current cycle life…
ML models are increasingly used in weather forecasting, offering accurate predictions and reduced computational costs compared to traditional numerical weather prediction (NWP) models. However, current ML models often have limitations such as coarse temporal resolution (usually 6 hours) and a narrow range of meteorological variables, which can limit their practical use. Accurate forecasting is crucial…
A significant challenge in text-to-speech (TTS) systems is the computational inefficiency of the Monotonic Alignment Search (MAS) algorithm, which is responsible for estimating alignments between text and speech sequences. MAS faces high time complexity, particularly when dealing with large inputs. The complexity is O(T×S), where T is the text length and S is the speech…
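As a rough illustration of where the O(T×S) cost comes from, the following is a minimal sketch of the dynamic program commonly used for monotonic alignment search; the function name and toy values are hypothetical, not drawn from any specific TTS codebase:

```python
import numpy as np

def monotonic_alignment_search(log_p):
    """Toy MAS sketch. log_p is a (T, S) matrix of per-(token, frame)
    log-likelihoods, with T = text length and S = speech length.
    Filling the DP table Q visits every (t, s) cell once, which is
    exactly the O(T x S) time complexity discussed above."""
    T, S = log_p.shape
    Q = np.full((T, S), -np.inf)
    Q[0, 0] = log_p[0, 0]
    for s in range(1, S):                  # each speech frame
        for t in range(T):                 # each text token
            stay = Q[t, s - 1]             # frame s stays on token t
            move = Q[t - 1, s - 1] if t > 0 else -np.inf  # advance token
            Q[t, s] = log_p[t, s] + max(stay, move)
    # Backtrack from the last token/frame to recover the alignment path.
    align = np.zeros((T, S), dtype=np.int64)
    t = T - 1
    for s in range(S - 1, -1, -1):
        align[t, s] = 1
        if s > 0 and t > 0 and Q[t - 1, s - 1] > Q[t, s - 1]:
            t -= 1
    return align
```

For valid inputs (S ≥ T), every speech frame is assigned to exactly one text token and the assignment only moves forward through the text, which is the monotonicity constraint MAS enforces.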
Prior research on Large Language Models (LLMs) has demonstrated significant advancements in fluency and accuracy across various tasks, influencing sectors like healthcare and education. This progress sparked investigations into LLMs’ language understanding capabilities and associated risks. Hallucinations, defined as plausible-sounding but incorrect information generated by models, emerged as a central concern. Studies explored whether these errors…
AI assistants are often rigid: pre-programmed for specific tasks and lacking flexibility. The limited utility of these systems stems from their inability to learn and adapt as they are used. Some AI frameworks also include hidden features and processes that are difficult for users to access or modify. This lack…
Cognitive neuroscience studies how the brain processes complex information, particularly language. Researchers are interested in understanding how the brain transforms low-level stimuli, like sounds or words, into higher-order concepts and ideas. One important area of this research is comparing the brain’s language processing mechanisms to those of artificial neural networks, especially large language models (LLMs).…
Artificial Intelligence (AI) has long been focused on developing systems that can store and manage vast amounts of information and update that knowledge efficiently. Traditionally, symbolic systems such as Knowledge Graphs (KGs) have been used for knowledge representation, offering accuracy and clarity. These graphs map entities and their relationships in a structured form, which is…
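As a toy illustration of the structured representation described above (not any specific KG system), a knowledge graph can be thought of as a set of (subject, relation, object) triples that can be queried exactly:

```python
# Hypothetical miniature knowledge graph: each fact is one triple.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def subjects_of(relation, obj):
    """Return all subjects linked to `obj` via `relation`.
    Because facts are explicit triples, answers are exact and
    auditable, which is the accuracy and clarity KGs offer."""
    return {s for (s, r, o) in triples if r == relation and o == obj}
```

Updating the knowledge base is just adding or removing a triple, which is why symbolic systems make knowledge easy to inspect and edit.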