AI Researchers Claim They Can Double the Efficiency of Chatbots

Abacus AI claims to have found a way to fine-tune LLMs, making them capable of processing 200% of their original context token capacity.

Tokens are the basic units of text or code that an LLM uses to process and generate language, and a model's context window caps how many tokens it can handle at once. This restricts how much background information the model can draw on when formulating replies. According to its GitHub page, Abacus AI's scaling method drastically increases the number of tokens that a model can handle.
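To see what "doubling context capacity" means in practice, here is a minimal sketch using a toy whitespace tokenizer and hypothetical context sizes. This is purely illustrative: real LLM tokenizers (e.g. BPE) split text differently, and Abacus AI's actual scaling technique is not shown here.

```python
# Illustrative only: a toy whitespace "tokenizer" and a context-window cap.
# The limits below are hypothetical, not Abacus AI's actual figures.

def tokenize(text: str) -> list[str]:
    """Naive stand-in for an LLM tokenizer: split on whitespace."""
    return text.split()

def fit_to_context(tokens: list[str], context_limit: int) -> list[str]:
    """Keep only the most recent tokens that fit in the context window."""
    return tokens[-context_limit:]

prompt = "word " * 5000          # a long prompt of 5000 toy tokens
tokens = tokenize(prompt)

original_limit = 2048            # hypothetical base context size
scaled_limit = original_limit * 2  # the claimed 200% capacity

print(len(fit_to_context(tokens, original_limit)))  # 2048 tokens fit
print(len(fit_to_context(tokens, scaled_limit)))    # 4096 tokens fit
```

With the doubled limit, twice as much of the prompt survives truncation, which is why a larger context window lets a chatbot draw on more background material per reply.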

Via: decrypt.co