With the development of language models showing no signs of letting up, Meta AI has made its own contribution to the AI world with Llama 2, the second iteration of its groundbreaking open-source language model. It marks a significant step in the field of natural language processing (and artificial intelligence…
GenAI, understood as a class of models capable of generating human-like, high-dimensional outputs such as text, images, or sound, is experiencing great success and explosive growth [1, 2, 3]. However, this has also quietly given rise to a critical problem that permeates applied GenAI in its entirety – what we call Evaluation Derangement Syndrome (EDS)…
We are delighted to announce that deepsense.ai has been recognized as one of the top 50 AI Providers in Central and Eastern Europe (CEE) according to the prestigious “TOP AI Driven Companies” report. Prepared meticulously by MCI Capital, Bain & Company, and Art Of Networking, the report highlights organizations, from start-ups to well-established enterprises, that…
In this blog post we walk you through our journey creating an LLM-based code writing agent from scratch – fine-tuned for your needs and processes – and we share our experience of how to improve it iteratively. This article is the second part in our series on Coding Agents. The first part provides an…
We are thrilled to announce that deepsense.ai has become one of the four companies to partner with the creators of LangChain, an innovative framework known for simplifying and accelerating Large Language Model (LLM) application development. Building LLM applications might require various specialized models, complicating integration and increasing development complexity. This is where…
In artificial intelligence, one common challenge is ensuring that language models can process information quickly and efficiently. Imagine you’re trying to use a language model to generate text or answer questions on your device, but it’s taking too long to respond. This delay can be frustrating and impractical, especially in real-time applications like chatbots or…
In the ever-evolving field of machine learning, developing models that can both make predictions and explain their reasoning is becoming increasingly crucial. As these models grow in complexity, they often become less transparent, resembling “black boxes” where the decision-making process is obscured. This opacity is problematic, particularly in sectors like healthcare and finance, where understanding the basis of…
Long-context large language models (LLMs) have garnered attention, with extended context windows enabling them to process extensive input. However, recent studies highlight a challenge: these LLMs struggle to make effective use of information in the middle of that input, a problem termed the lost-in-the-middle challenge. While the LLM can comprehend the information at the beginning and end of the long context, it often overlooks the…
In-context learning (ICL) in large language models (LLMs) utilizes input-output examples to adapt to new tasks without altering the underlying model architecture. This method has transformed how models handle various tasks by learning from direct examples provided during inference. The problem at hand is the limitation of few-shot ICL in handling intricate tasks. These…
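As a rough illustration of the idea in the excerpt above, the sketch below shows how a few-shot ICL prompt can be assembled from input-output demonstrations; the sentiment-classification examples and the commented-out `your_llm_client.complete` call are hypothetical placeholders, not the method discussed in the post itself.

```python
# Minimal sketch of few-shot in-context learning: the task is specified purely
# through input-output examples placed in the prompt; model weights stay fixed.

examples = [
    ("The movie was fantastic!", "positive"),
    ("I wasted two hours of my life.", "negative"),
]

def build_icl_prompt(demonstrations, query):
    # Each (input, output) pair becomes one demonstration line in the prompt.
    demos = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in demonstrations)
    return f"{demos}\nReview: {query}\nSentiment:"

prompt = build_icl_prompt(examples, "Surprisingly good, would watch again.")
print(prompt)
# response = your_llm_client.complete(prompt)  # hypothetical call to whichever LLM client you use
```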
We’re adding new features to help developers have more control over fine-tuning and announcing new ways to build custom models with OpenAI.