Deep generative models learn continuous data representations from a limited set of training samples, and global metrics such as the Fréchet Inception Distance (FID) are often used to evaluate their performance. However, these models may behave inconsistently across different regions of the learned manifold, especially in foundation models like Stable Diffusion, where generation quality can vary based on…
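For reference, FID compares the Gaussian statistics of Inception-network features computed on real and generated samples:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of Inception embeddings of real and generated images. Because FID is a single global score, it cannot localize where on the learned manifold quality degrades, which is the gap this line of work addresses.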
Data visualizations (DVs) have become commonplace in the big data era, used by various applications and institutions to convey insights from massive raw data. However, creating suitable DVs remains challenging, even for experts, since it requires visual-analysis expertise and familiarity with the domain data. Users must also master complex declarative…
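To illustrate the kind of declarative specification involved, here is a minimal sketch; a Vega-Lite-style grammar is assumed as a representative example, since the snippet does not name a specific language:

```python
import json

# A minimal Vega-Lite-style bar-chart spec, built as a plain Python dict.
# Even this tiny example requires knowing marks, encodings, and field types,
# which is part of why authoring DVs is hard for non-experts.
spec = {
    "data": {"values": [{"city": "A", "sales": 28}, {"city": "B", "sales": 55}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "city", "type": "nominal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}
print(json.dumps(spec, indent=2))  # ready to hand to a Vega-Lite renderer
```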
Automated design in artificial intelligence (AI) is an emerging field focused on developing systems capable of independently generating and optimizing their own components. This approach is built on the premise that machine learning can surpass the limitations of manual design, enabling the creation of more efficient, adaptable, and powerful AI systems. The aim is to allow…
Data centers are poised to be among the world’s largest electricity consumers. Absent meaningful change, they will consume between 10% and 20% of the electricity used in the U.S. by 2030. This explosive energy demand is driven by rising computational demand, especially from new generative AI applications. Growth at this rate…
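For scale, a back-of-the-envelope estimate, assuming annual U.S. electricity consumption of roughly 4,000 TWh (a figure not given in the snippet):

```latex
% 10--20% of roughly 4{,}000 TWh/yr of U.S. electricity use
0.10 \times 4000~\mathrm{TWh} = 400~\mathrm{TWh},
\qquad
0.20 \times 4000~\mathrm{TWh} = 800~\mathrm{TWh}
```

That is, the projection implies data centers alone drawing on the order of 400 to 800 TWh per year.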
Language models (LMs), while powerful at generating human-like text, often produce unstructured and inconsistent outputs. This lack of structure poses challenges in real-world applications, especially for long-form responses: it becomes difficult to extract specific information, integrate with systems that expect structured data, and present information in formats like tables or lists that…
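As a concrete illustration of the integration problem, here is a minimal sketch that parses and validates an LM response assumed to have been prompted for JSON; the raw response and field names are hypothetical, not from the snippet:

```python
import json

# Hypothetical raw completion; assumes the model was prompted to reply in JSON.
raw = '{"title": "Q3 summary", "bullets": ["revenue up 4%", "costs flat"]}'

REQUIRED = {"title": str, "bullets": list}

def parse_structured(text: str) -> dict:
    """Parse an LM response and check that the expected fields are present."""
    data = json.loads(text)  # raises json.JSONDecodeError on free-form prose
    for key, expected_type in REQUIRED.items():
        if not isinstance(data.get(key), expected_type):
            raise ValueError(f"missing or mistyped field: {key!r}")
    return data

print(parse_structured(raw))
```

An unstructured, free-form answer would fail at `json.loads`, which is exactly the brittleness the passage describes.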
Quantum computing has shown great potential to transform specific algorithms and applications and is expected to work alongside traditional High-Performance Computing (HPC) environments. Noisy Intermediate-Scale Quantum (NISQ) devices have emerged as promising computational platforms, but they face challenges such as limited qubit coherence times and high error rates. Due to the complexity…
Computational social science (CSS) leverages advanced computational techniques to analyze and interpret vast amounts of social data. This field increasingly relies on natural language processing (NLP) methods to handle unstructured text data. However, while large language models (LLMs) have revolutionized CSS by enabling rapid and sophisticated text analysis, their integration into practical applications remains a…
Building Information Modeling (BIM) is an all-encompassing method of representing built assets with geometric and semantic data. This data can be used throughout a building’s lifetime and shared among project stakeholders in dedicated formats. Current BIM authoring software accommodates a wide variety of design needs. Because of this unified strategy, the software now includes many…
Mental health profoundly impacts individuals’ quality of life, yet accessing mental health services can be challenging due to stigma, workforce shortages, and fragmented care systems. Natural language processing (NLP) has demonstrated its potential in this area, with models developed to detect symptoms and assess depression from clinical texts. Language models like BERT have also been adapted for classifying…
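As a sketch of the adaptation the snippet alludes to, here is a minimal classification setup; the Hugging Face transformers and torch packages, the bert-base-uncased checkpoint, and the binary label head are all illustrative assumptions, not specifics from the snippet:

```python
# Minimal sketch: BERT with a sequence-classification head. The head is
# randomly initialized here, so outputs are meaningless until fine-tuned
# on labeled clinical or screening text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tok("I have been feeling very low lately.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (random until fine-tuned)
```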
Professionals and enthusiasts in the finance industry need dependable tools for accessing and analyzing large amounts of data to track macroeconomic trends, cryptocurrencies, equity markets, and forex. A comprehensive platform that gathers all of this data in one place is essential. Many existing platforms, however, are expensive or restrict data access and user…