Large language models (LLMs) are seeing rapidly expanding use, which poses new cybersecurity risks. These risks emerge from several of their core traits: growing capability in code generation, increasing deployment for real-time code generation, automated execution within code interpreters, and integration into applications that handle untrusted data. Together, these traits create the need for a robust mechanism for cybersecurity evaluation. Prior works…
The advent of generative artificial intelligence (AI) marks a significant technological leap, enabling the creation of new text, images, videos, and other media by learning from vast datasets. However, this capability raises substantial copyright concerns, as these models may utilize and repurpose the creative works of original authors without consent. This research addresses the…
Charts have become indispensable tools for visualizing data in information dissemination, business decision-making, and academic research. As the volume of multimodal data grows, a critical need arises for automated chart comprehension, which has garnered increasing attention from the research community. Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities in comprehending images…
Large Language Models (LLMs) represent a revolutionary leap across numerous application domains, enabling impressive accomplishments on diverse tasks. Yet their immense size incurs substantial computational expense: with billions of parameters, these models demand extensive resources to operate. Adapting them to specific downstream tasks is particularly challenging due to their vast scale and computational requirements,…
Contrastive learning has emerged as a transformative method for learning effective visual representations through the alignment of image and text embeddings. However, computing pairwise similarities between all image and text pairs in the contrastive loss poses computational challenges. This paper presents a novel weakly supervised pre-training method for vision models on web-scale image-text data. The proposed method reframes…
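For context on the pairwise-similarity cost mentioned above, a minimal sketch of a standard CLIP-style symmetric contrastive (InfoNCE) loss follows; this illustrates the baseline being reframed, not the paper's proposed method, and the function name and temperature value are illustrative:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over all image-text pairs in a batch.

    Builds an N x N similarity matrix between every image and every
    text, which is the quadratic-in-batch-size cost such methods incur.
    """
    # L2-normalize embeddings so the dot product is cosine similarity.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) pairwise similarities
    n = logits.shape[0]

    def cross_entropy(l):
        # Numerically stable log-softmax; the matched pair (i, i)
        # is the positive target for row i.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average of image-to-text and text-to-image directions.
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

The loss grows as the diagonal (matched) pairs become less similar relative to the off-diagonal ones, which is why every off-diagonal similarity must be materialized.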
Sleep staging is a clinically important task for diagnosing various sleep disorders but remains challenging to deploy at scale because, among other reasons, it requires clinical expertise. Deep learning models can perform the task but require large labeled datasets, which are infeasible to procure at scale. While self-supervised learning (SSL) can mitigate…
On-device machine learning (ML) moves computation from the cloud to personal devices, protecting user privacy and enabling intelligent user experiences. However, fitting models on devices with limited resources presents a major technical challenge: practitioners need to optimize models and balance hardware metrics such as model size, latency, and power. To help practitioners create efficient ML…
Neural knowledge-to-text generation models often struggle to faithfully generate descriptions for the input facts: they may produce hallucinations that contradict the given facts, or describe facts not present in the input. To reduce hallucinations, we propose a novel decoding method, TWEAK (Think While Effectively Articulating Knowledge). TWEAK treats the generated sequences at each decoding step…
The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each…
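The layer-wise scaling strategy mentioned above allocates parameters non-uniformly across transformer layers, giving early layers fewer attention heads and narrower feed-forward blocks and later layers more. A rough sketch of the idea follows; the function name and the interpolation ranges are illustrative placeholders, not OpenELM's published configuration:

```python
def layer_wise_dims(num_layers, d_model, head_dim,
                    alpha=(0.5, 1.0), beta=(0.5, 4.0)):
    """Allocate per-layer attention heads and FFN widths non-uniformly.

    Scales each layer's head count and FFN dimension by factors that
    interpolate linearly from the first layer to the last. The
    (alpha, beta) ranges here are illustrative, not OpenELM's values.
    """
    dims = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)          # 0.0 at first layer, 1.0 at last
        a = alpha[0] + (alpha[1] - alpha[0]) * t  # head-count scale factor
        b = beta[0] + (beta[1] - beta[0]) * t     # FFN-width scale factor
        n_heads = max(1, int(a * d_model / head_dim))
        ffn_dim = int(b * d_model)
        dims.append({"layer": i, "n_heads": n_heads, "ffn_dim": ffn_dim})
    return dims
```

Under this scheme a 4-layer model with `d_model=512` and `head_dim=64` would go from 4 heads and a 256-wide FFN in layer 0 up to 8 heads and a 2048-wide FFN in layer 3, spending more of the parameter budget where later layers can use it.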
Extracting information quickly and efficiently from websites and digital documents is crucial for businesses, researchers, and developers. These stakeholders require specific data from various online sources to analyze trends, monitor competitors, or gather insights for strategic decisions. Collecting this data can be time-consuming and prone to errors, presenting a significant challenge in data-driven industries. Traditionally, web…