Plans For Unlocking Models' Full Potential
under review
Renji人機
Comparing with GPT-4.1 on other platforms, including ChatGPT itself, the output from the model Merlin provides seems to have a limited context window and cannot produce a smooth answer (I use it for novel writing and translation in my work). This is especially noticeable since GPT-4.1 supports a larger context output than 4o, which further raises my suspicion that the models we are currently using in Merlin have had their context length limited.
Vaidant Agrawal
under review
Angelo Aviles
There are limitations on the models: even when I uploaded several images and asked various models in the app only to transcribe the content from the images, I got errors. I tried the free version of Poe and it transcribed everything. It's time for improvements in this area, please. I believe this isn't the first time people have requested the removal of these limits.
Jürgen Höfs
I am also surprised that even Claude 3.7 Sonnet (Thinking) shows a message saying you should choose a "large context" model. And that at a cost of 60 credits!
Gabriele Monni
I've also noticed this: compared to other services, most models seem very reluctant to give long answers, even when directly asked.