Context management across chat has become terrible
theunmindful
Model: GPT-5 (high). Usage: Chat.
It's impossible to converse (even without attachments; attachments are another area of poor implementation) when the context is large. Not even the second response stays aligned with the context. For example, in chat:
Q1: I shared a long context and a query.
R1: (say) Part 1 of 5, with the details of Part 1 and a plan for Part 2.
Q2: Continue with the next part.
R2: Part 2 is completely different from what was outlined in R1. The 'thinking tokens' show the model struggling to work out what Part 2's context should be (due to missing data), so it invents a new Part 2.
Now, either I don't know how to prompt correctly to get a useful chat on Merlin, or something is terribly broken in the context management.