Natural language processing (NLP) focuses on enabling computers to understand and generate human language, making interactions more intuitive and efficient. Recent developments in this field have significantly impacted machine translation, chatbots, and automated text analysis. The need for machines to comprehend large amounts of text and provide accurate responses has led to the development of…
Contrastive learning typically matches pairs of related views among a number of unrelated negative views. Views can be generated (e.g., by augmentations) or observed. We investigate matching when there are more than two related views, which we call poly-view tasks, and derive new representation learning objectives using information maximization and sufficient statistics. We show…
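The excerpt describes only the standard two-view setup; as a hedged illustration of that baseline (not the poly-view objectives the work derives), a minimal InfoNCE-style loss that matches each positive pair against the in-batch negatives might look like this:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Standard two-view contrastive (InfoNCE) loss.

    z1, z2: [batch, dim] embeddings of two related views of the same items.
    Row i of z1 is matched to row i of z2 (the positive); every other row in
    the batch serves as an unrelated negative.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                      # [batch, batch] similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Usage with placeholder embeddings (in practice, outputs of a shared encoder)
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```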
Artificial Intelligence (AI) is changing the world quickly, and several nations and international organizations have adopted frameworks to direct the development, application, and governance of AI. Numerous initiatives are shaping the ethical use of AI to prioritize human rights and innovation. Here are some of the top AI governance laws and frameworks. 1. EU AI…
Large Language Models (LLMs) often provide confident answers, raising concerns about their reliability, especially for factual questions. Despite widespread hallucination in LLM-generated content, no established method exists to assess response trustworthiness. Users lack a “trustworthiness score” that would let them gauge response reliability without further research or verification. The aim is for LLMs to yield predominantly high trust…
Multi-layer perceptrons (MLPs), or fully-connected feedforward neural networks, are fundamental in deep learning, serving as default models for approximating nonlinear functions. Despite their importance, affirmed by the universal approximation theorem, they have notable drawbacks. In applications like transformers, MLPs account for most of the parameters and lack the interpretability of attention layers. While exploring alternatives, such as the Kolmogorov-Arnold…
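For reference, the fully-connected architecture described here is just a stack of linear layers with elementwise nonlinearities; a minimal sketch (layer widths are purely illustrative, not tied to any model in the article) is:

```python
import torch.nn as nn

# Minimal fully-connected feedforward network (MLP): alternating linear layers
# and elementwise nonlinearities, the default choice for approximating
# nonlinear functions. Widths below are illustrative only.
mlp = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
```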
Iterative preference optimization methods have shown efficacy in general instruction tuning tasks but yield limited improvements in reasoning tasks. These methods use preference optimization to align language models with human requirements more effectively than supervised fine-tuning alone. Offline techniques like DPO are gaining popularity due to their simplicity and efficiency. Recent advancements advocate the iterative application…
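For context on what an offline method like DPO optimizes (a sketch of the published DPO objective, not the iterative procedure this excerpt refers to), the per-pair loss compares policy and reference log-probabilities of a preferred versus a rejected response:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Direct Preference Optimization (DPO) loss for a batch of preference pairs.

    Each argument is a tensor of summed log-probabilities log p(y | x) for the
    chosen (preferred) or rejected response, under either the policy being
    trained or the frozen reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```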
This study falls within artificial intelligence (AI) and machine learning, focusing specifically on neural networks that can understand binary code. The aim is to automate reverse-engineering processes by training AI to understand binaries and produce English descriptions of them. This is important because binaries can be challenging to comprehend due to their complexity and lack…
Econometric modeling and hypothesis testing have recently seen a paradigm shift toward integrating machine learning techniques. While strides have been made in estimating econometric models of human behavior, more research is needed on how to effectively generate and rigorously test these models. Researchers from MIT and Harvard introduce a novel approach to…
In the age of digital transformation, data is the new gold. Businesses are increasingly reliant on data for strategic decision-making, but this dependency brings significant challenges, particularly when collaborating with external partners. Traditional methods of sharing data often entail transferring sensitive information to third parties, greatly increasing the risk of security…
The development of natural language processing has been significantly propelled by advances in large language models (LLMs). These models have shown remarkable performance on tasks like translation, question answering, and text summarization, demonstrating their ability to generate high-quality text. However, one major limitation remains: their slow inference speed, which hinders their…