Improve Web Search: Remove Rigid Source Limits
under review
Samuel Jackson
Right now, the web search function is far too limited: it pulls information from only about 10 websites, which often isn't enough for detailed or nuanced prompts.
Could you expand this capability and give the model more freedom to decide how many sources to consult based on the complexity of the query? Instead of a fixed cap, let the model determine what’s necessary to produce a high-quality, well-supported response. That would make the research tools far more powerful and flexible.
Yume
Samuel Jackson Seriously: use Google AI; they don't limit research searches or daily usage. I've been using it far more, and it actually goes across the web with AI search. It's free, with NO daily limits. I now use that and only "sometimes" Merlin, since they've become so restrictive on daily usage... Sadly, I had to find a work-around.
For example, in one of my queries Google AI sourced 392 websites. Merlin's restrictions, coupled with its daily limits, have dulled the tool (IMHO).
Vijay Bharadwaj marked this as under review
Vijay Bharadwaj
Hi Samuel, thanks for bringing this up. I understand the problem you're describing, but the solution isn't really the number of search results; it's more about the number of searches.
If you ask the reasoning models to "search more", or set custom instructions in Settings > Personalisation telling Merlin to run multiple searches before drawing a conclusion (for example, "For research questions, run several searches from different angles before answering"), it will gladly do that.
That being said, let me think about how we can make this easy to control in the application's interface.
Thanks again!
Samuel Jackson
Vijay Bharadwaj Hi Vijay,
I appreciate your reply and I do see the value in multiple, iterative searches. They can certainly build depth. But here’s why I think broadening the pull in a single search is still worth considering alongside that approach. When you rely entirely on iterative passes, you introduce latency and extra steps for the user, and each pass risks drifting from the original intent. If I can pull from a wide, diverse set of domains in one go, I get a faster, more stable synthesis without having to “babysit” the process.
There are also use‑cases where the breadth has to be front‑loaded: policy surveys across jurisdictions, quick multi‑industry scans, verifying scientific claims across both academic and informal sources. If those first ten sites don't capture the dissenting or long‑tail perspectives, the synthesis can skew before iteration even starts.
That’s why I’m not arguing against the multiple‑search model; I think it's smart. I’m suggesting a hybrid: give us the option to control breadth per query, while still allowing iterative expansion when needed. That way, people who just want to fire off one well‑armed query and get a comprehensive, balanced draft can do so, while those who prefer incremental digging can still go that route.
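To make the hybrid concrete, here's a rough sketch of the control flow I have in mind. To be clear, this is purely illustrative: the names (ResearchRequest, source_budget, max_rounds, plan_rounds) and the default values are my own assumptions, not anything in Merlin's actual API.

```python
# Purely hypothetical sketch: none of these names exist in Merlin's API.
# It illustrates the hybrid idea: a per-query source budget, with
# iterative follow-up searches kept as a fallback for gap-filling.

from dataclasses import dataclass


@dataclass
class ResearchRequest:
    query: str
    source_budget: int | None = None  # None = let the model decide breadth
    max_rounds: int = 3               # iterative passes remain available


def plan_rounds(req: ResearchRequest) -> list[int]:
    """Return a per-round source budget: one wide first pass, then
    smaller follow-up rounds so iteration can still fill gaps."""
    first = req.source_budget if req.source_budget is not None else 30
    return [first] + [max(first // 4, 5)] * (req.max_rounds - 1)


# A broad, front-loaded survey vs. an everyday query with defaults:
print(plan_rounds(ResearchRequest("AI policy across jurisdictions", source_budget=60)))
# -> [60, 15, 15]
print(plan_rounds(ResearchRequest("what changed in the latest release?")))
# -> [30, 7, 7]
```

The point is just that a single breadth knob and the existing iterative behaviour aren't mutually exclusive: a wide first pass plus smaller gap-filling rounds would give both camps what they want.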