Ever had Merlin stop short of generating enough text, or start hallucinating details after hitting a context limit?
We've now enabled Large Context models on Merlin.
How to use it?
Simply open the model selector on Merlin and choose the 'Large Context' filter in the top bar.
Now you can use models like:
  • Claude 3 Haiku
  • Gemini 1.5 Flash
  • GPT-4o Mini
... to chat with HUGE documents, websites, text, and attachments.
The limit on these models is 100K tokens. That is the equivalent of attaching or generating:
  • A full-length novel, making it perfect for literary analysis.
  • More than 150 code iterations at 100 lines each, great for developers iterating over large codebases in a single chat session.
  • 250 pages of a Word document, ideal for reviewing or summarizing long research papers, reports, or books.
  • 8,000 to 20,000 lines of source code, depending on formatting, letting engineers debug, review, or refactor extensive sections of code.
  • Prompts 100-200 times larger than a typical prompt, enabling more complex queries and interactions.
  • Two full-length Master's theses, or 200 to 250 pages of academic work.
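If you're curious where numbers like these come from, here's a back-of-envelope sketch using common rules of thumb (roughly 0.75 words per token and 300 words per page; these are heuristics, not any model's exact tokenizer):

```python
# Rough estimate of how much text fits in a 100K-token context window.
# Assumes ~0.75 words per token and ~300 words per page -- heuristics
# only, not the exact tokenizer any particular model uses.
TOKEN_LIMIT = 100_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

words = int(TOKEN_LIMIT * WORDS_PER_TOKEN)   # ~75,000 words
pages = words // WORDS_PER_PAGE              # ~250 pages

print(f"~{words:,} words, ~{pages:,} pages")
```

At ~75,000 words, that's right in the range of a full novel or about 250 document pages, which is how the list above was derived.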
We'll soon be rolling it out to Projects too.
Thank you for creating value with Merlin every day, and for being a part of this journey! <3