Generative Large Language Models (LLMs) are capable of in-context learning (ICL), the process of learning from examples given within a prompt. However, research on the precise principles underlying these models’ ICL performance is still underway. Inconsistent experimental results are one of the main obstacles, making it challenging to provide a clear explanation…
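For concreteness, here is a minimal sketch of what an ICL prompt looks like; the sentiment-classification task and labels below are invented purely for illustration and are not taken from the work being summarized:

```python
# A minimal few-shot (in-context learning) prompt: the "training" examples
# are supplied entirely inside the prompt, and the model is expected to
# infer the pattern and complete the final, unlabeled example.
icl_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: Positive

Review: It stopped working after a week and support never replied.
Sentiment: Negative

Review: Setup took five minutes and everything just worked.
Sentiment:"""

# Sending icl_prompt to a generative LLM and reading off its next tokens
# (ideally "Positive") is the in-context "prediction"; no weights are updated.
```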
Large language models (LLMs) like GPT-4 have become a significant focus in artificial intelligence due to their ability to handle various tasks, from generating text to solving complex mathematical problems. These models have demonstrated capabilities far beyond their original design purpose, which was mainly to predict the next word in a sequence. While their utility spans numerous industries,…
Predicting battery lifespan is difficult due to the nonlinear nature of capacity degradation and the uncertainty of operating conditions. As battery lifespan prediction is vital for the reliability and safety of systems like electric vehicles and energy storage, there is a growing need for advanced methods to provide precise estimations of both current cycle life…
ML models are increasingly used in weather forecasting, offering accurate predictions and reduced computational costs compared to traditional numerical weather prediction (NWP) models. However, current ML models often have limitations such as coarse temporal resolution (usually 6 hours) and a narrow range of meteorological variables, which can restrict their practical use. Accurate forecasting is crucial…
A significant challenge in text-to-speech (TTS) systems is the computational inefficiency of the Monotonic Alignment Search (MAS) algorithm, which is responsible for estimating alignments between text and speech sequences. MAS suffers from high time complexity, particularly when dealing with large inputs: the complexity is O(T×S), where T is the text length and S is the speech…
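To make the O(T×S) cost concrete, below is a minimal NumPy sketch of the dynamic-programming recurrence commonly used for Viterbi-style monotonic alignment search; the function name, the `log_likelihood` matrix, and the assumption that S ≥ T are illustrative choices, not details taken from the summary above:

```python
import numpy as np

def monotonic_alignment_search(log_likelihood: np.ndarray) -> np.ndarray:
    """Illustrative MAS-style dynamic program.

    log_likelihood: (T, S) matrix whose entry (t, s) scores aligning text
    token t with speech frame s. Returns, for each speech frame, the index
    of the text token it is aligned to (monotonic, no skipped tokens).
    Assumes S >= T so a complete monotonic alignment exists.
    """
    T, S = log_likelihood.shape
    neg_inf = -np.inf

    # Q[t, s] = best cumulative score of a monotonic alignment that assigns
    # speech frame s to text token t. Filling this table is the O(T*S) part.
    Q = np.full((T, S), neg_inf)
    Q[0, 0] = log_likelihood[0, 0]
    for s in range(1, S):
        for t in range(min(s + 1, T)):
            stay = Q[t, s - 1]                                   # same text token
            advance = Q[t - 1, s - 1] if t > 0 else neg_inf      # next text token
            Q[t, s] = max(stay, advance) + log_likelihood[t, s]

    # Backtrack from the last cell to recover the alignment path.
    alignment = np.zeros(S, dtype=np.int64)
    t = T - 1
    for s in range(S - 1, -1, -1):
        alignment[s] = t
        if s > 0 and t > 0 and Q[t - 1, s - 1] >= Q[t, s - 1]:
            t -= 1
    return alignment
```

The two nested loops over text tokens and speech frames are exactly where the O(T×S) term comes from; vectorizing the inner loop reduces constant factors but not the asymptotic cost, which is why large inputs remain expensive.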
Prior research on Large Language Models (LLMs) demonstrated significant advancements in fluency and accuracy across various tasks, influencing sectors like healthcare and education. This progress sparked investigations into LLMs’ language understanding capabilities and associated risks. Hallucinations, defined as plausible but incorrect information generated by models, emerged as a central concern. Studies explored whether these errors…
AI assistants have the drawback of being rigid: pre-programmed for specific tasks and lacking flexibility. The limited utility of these systems stems from their inability to learn and adapt as they are used. Some AI frameworks include hidden features and processes that are difficult for users to access or modify. This lack…