"Agentic thinking" feature - multiple "thinking" steps
under review
theunmindful
Use an LLM (with or without search) to analyze in a chain-of-thought-like way (multiple "thinking" steps). As a starting point, think of the "reasoning" option in Perplexity.
- Base (what Perplexity "reasoning" option already has)
- Add-on 1: Follow a set of instructions one by one, treating each line item as an individual thought and then summarizing it so that context is not lost for the next line item. This way, the user can first create a "plan" and the model can then execute the line items step by step (while stitching the results together and building a deep understanding of the subject).
- Add-on 2: The ability to read a long context. Say the model can read only 50k of input context. If a user gives 500k of input, the model reads the first 50k and summarizes it into 10k, then reads the next 50k and summarizes it into 10k, and so on, so that a large document can be systematically synthesized.
- Add-on 3 (extension of Add-on 2): If the material is complex, the model can break the document into overlapping sections and use more "thinking" iterations. Say a text is 1000 words and the model can read only 50 words at a time: the 1st thinking-and-summarization pass covers words 1 to 50, the 2nd covers words 30 to 60, and so on (more "thinking" steps).
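To make the idea concrete, here is a rough sketch of how add-ons 2 and 3 could work: read the document in fixed-size windows, advance by a stride smaller than the window (which gives the overlapping sections), summarize each chunk, and carry the running summary into the next step. `summarize` here is a hypothetical placeholder for a real LLM call; it just truncates, purely so the control flow is runnable.

```python
def summarize(text: str, target_len: int) -> str:
    # Placeholder for a real LLM summarization call (hypothetical).
    # In practice this would prompt the model to compress `text`
    # down to roughly `target_len`.
    return text[:target_len]

def chunked_synthesize(document: str, window: int, stride: int,
                       target_len: int) -> str:
    """Read `window` characters at a time, advancing by `stride`.
    A stride smaller than the window gives overlapping sections
    (add-on 3). Each "thinking" step summarizes the running summary
    plus the next chunk, so earlier context is not lost (add-on 2)."""
    running_summary = ""
    for start in range(0, len(document), stride):
        chunk = document[start:start + window]
        # Each step sees the summary so far plus the next chunk.
        running_summary = summarize(running_summary + " " + chunk,
                                    target_len)
        if start + window >= len(document):
            break  # the last window already covered the document's end
    return running_summary
```

With a 500k-character document, `window=50_000`, `stride=40_000`, and `target_len=10_000`, this mirrors the 50k-read / 10k-summary loop described above, with a 10k overlap between consecutive reads.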
endu
Hi theunmindful! 😊 That’s very interesting—thanks for the detailed suggestions! We’ll definitely work on this, but it’ll be after the implementation of the credits system. Appreciate your input! 🙌
theunmindful
endu thanks
t
theunmindful
endu I hope your credits system is sensibly designed. Credit systems usually create a 'scarcity' mindset in the user (a negative experience) and may not work in the long run.
OpenAI went down that path but then ended Sora's credit system (of course, context matters).