LongLLaMA can potentially handle 64 times more text than ChatGPT. The LLM, developed by researchers from Poland, is based on OpenLLaMA, an open-source reproduction of the LLaMA model created by Meta, the owner of Facebook.
It was developed by Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, and Piotr Miłoś, all researchers affiliated with IDEAS NCBR, the University of Warsaw, and the Polish Academy of Sciences, together with Yuhuai Wu, one of the co-founders of xAI, Elon Musk’s startup, and Henryk Michalewski, affiliated with the University of Warsaw and Google DeepMind. By publishing their results in recent weeks, the researchers have caused a stir in the scientific community. The publication devoted to LongLLaMA, “Focused Transformer: Contrastive Training for Context Scaling”, has been accepted to the prestigious NeurIPS 2023 conference in New Orleans.
“LongLLaMA is an LLM available to everyone on the Internet,” said Prof. Piotr Miłoś, leader of the research team at IDEAS NCBR. “Our model can handle 8,000 tokens at a time, which is approximately 30-50 pages of text, and for some tasks much more, even 256,000 tokens, although this is only a technical result.”
The first large open-source language models have been available since March 2023. They allow scientists to do advanced work, because creating an LLM from scratch is currently beyond the reach of most research teams.
“When OpenLLaMA, an open reproduction of Meta’s LLaMA, was released, scientists from all over the world, including our team, got to work on it and modified it,” explains Piotr Miłoś. “Our LongLLaMA can process a much larger context than was previously possible, i.e., it can ‘eat’ much more text in one piece.”
Powerful and extremely accurate LLM
LongLLaMA’s advantage over other models is that it can process long inputs, generating more consistent and accurate answers. As passkey retrieval tests show, LongLLaMA can handle very long contexts without truncating them or padding them out.
The researchers checked whether LongLLaMA would be able to recall a passkey given at the beginning of a very long prompt. LongLLaMA maintains 94.5% accuracy after receiving a 100,000-token prompt and 73% accuracy after receiving 256,000 tokens. OpenLLaMA could only handle a 2,000-token prompt.
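A passkey test of this kind can be sketched as follows. The filler sentence and prompt wording below are illustrative assumptions, not the exact setup used by the researchers:

```python
def build_passkey_prompt(passkey: str, n_filler: int) -> str:
    """Hide a passkey near the start of a long prompt made of
    repeated filler text, then ask for it back at the end (sketch)."""
    filler = "The grass is green. The sky is blue. The sun is bright. "
    return (
        f"Remember this passkey: {passkey}. It is important.\n"
        + filler * n_filler
        + "\nWhat was the passkey mentioned at the beginning?"
    )

prompt = build_passkey_prompt("71492", n_filler=1000)
# A model passes the test if its answer to `prompt` contains "71492".
```

The model’s accuracy is then simply the fraction of such prompts for which its answer contains the hidden passkey.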
Moreover, this model can now produce coherent texts with a length of 8,000 tokens, and potentially even 256,000 tokens, which would significantly surpass ChatGPT. Importantly, it consumes relatively little power – a single processor is enough to run LongLLaMA – and works very fast. It can be used for all tasks in which chatbots already help us, including text generation, text editing, conversation with the user, summarization, translation, etc.
What is the difference between LongLLaMA and ChatGPT?
LongLLaMA, unlike ChatGPT, does not have a web interface. But anyone can download the model from the Hugging Face website and run it on their own computer, and, as open-source software, everybody can modify it as well. This distinguishes it from ChatGPT, whose software has not been made available to the public, although ChatGPT is known to be based on the Transformer architecture too.
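Running the downloaded model locally might look like the sketch below. It assumes the `transformers` and `torch` packages are installed; the model id `syzymon/long_llama_3b` is taken from the project’s Hugging Face page and should be checked there before use:

```python
# Assumed model id from the project's Hugging Face page - verify before use.
MODEL_ID = "syzymon/long_llama_3b"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Download LongLLaMA from the Hugging Face Hub and generate a
    completion (sketch; requires `transformers` and `torch`)."""
    # Imported inside the function so the module loads without the packages.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # The repository ships custom modeling code, hence trust_remote_code=True.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float32, trust_remote_code=True
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("My favourite animal is"))
```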
The Transformer is a type of neural network architecture that analyzes text across multiple layers to capture complex connections between words, learning these patterns from vast amounts of data. This technology has revolutionized natural language processing, enabling chatbots to generate text, translate, converse with the user, and perform many other tasks at a level previously unavailable to artificial intelligence.
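The core mechanism by which a Transformer relates words to each other is attention. A minimal, pure-Python sketch of scaled dot-product attention for a single head (real models use learned projections and many heads in parallel):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention, one head, pure Python.
    Each argument is a list of equal-length vectors (lists of floats)."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns scores into attention weights that sum to 1.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output is the weight-averaged mix of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

A query attends most strongly to the keys it is most similar to, which is how the network picks out the relevant words in a long context.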
When we ask a question to a chatbot based on the Transformer, it first converts the text into tokens: pieces of text, usually between one character and one word long. By dividing text into tokens, artificial intelligence can process information effectively.
However, the number of tokens a chatbot can accept is limited. ChatGPT 3.5’s token limit is 4,096; OpenLLaMA’s is 2,000; and Google Bard’s is about 1,000. Therefore, when we ask a chatbot a long question or provide a lot of information, some fragments may have to be cut or omitted. Most chatbots cannot analyze an entire book, a long conversation, or a long article.
“The full potential of LLMs is often limited by how much context a given model can take,” said Piotr Miłoś. “That’s why we introduced Focused Transformer (FoT), a technique that uses a training process inspired by contrastive learning. This novel approach allows fine-tuning of already available LLMs so that they can take on a greater context.”
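The contrastive idea FoT draws on can be illustrated with an InfoNCE-style loss: the model is rewarded for rating the relevant (positive) item as more similar to the query than the distracting (negative) ones. This is a generic sketch of contrastive learning, not the exact objective from the paper:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: low when the query is closer
    to the positive key than to every negative key (sketch)."""
    logits = [dot(query, positive) / temperature]
    logits += [dot(query, n) / temperature for n in negatives]
    # Numerically stable cross-entropy with the positive at index 0.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

Training on such a signal pushes representations of relevant context together and distractors apart, which is the intuition behind teaching a model to find the right information in a very long input.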
“ChatGPT is a commercial product. It has been optimized for pleasant service,” explains Piotr Miłoś. “Models like LongLLaMA issue rather raw information on which you can build something, such as analyzing text or producing code. LongLLaMA is a great achievement. It shows that LLMs can overcome the limitations associated with the length of prompts and produce long texts that will be useful for humans.”
How to run LongLLaMA?
1. Go to https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb.
2. Click “Środowisko wykonawcze” (“Runtime”) in the menu and then “Uruchom wszystko” (“Run all”).
3. After a while, the code will launch, and at the bottom of the page an input field will appear after the word “USER:”, where you can enter prompts.