Tips for getting better answers from Anara

Practical techniques to improve the quality and relevance of Anara's AI responses: from scoping with @mentions to adjusting response length and adding text selections to chat.

Anara searches your actual documents and cites every claim. The quality of your answers depends on how clearly you guide it. These techniques make a noticeable difference.

Be specific in your questions

Vague questions produce vague answers. The more specific you are, the more targeted the response:

  • Instead of "Tell me about this paper," ask "What statistical methods did the authors use and what were their limitations?"

  • Instead of "Summarize my research," ask "What are the three most common themes across the papers in my climate folder?"

Scope with @mentions

Use @mentions to tell the AI exactly where to look. Type @ in the chat input to choose:

  • Sources like @Library or @Web to control which data sources are searched.

  • Files to scope the AI to one or more specific documents.

  • Tools to request a specific action like @Create note or @Plan.

Use source filter buttons

Below the chat input, toggle buttons let you disable library search or web search entirely. Disabling web search keeps answers strictly grounded in your documents, which is useful during literature reviews or whenever you want to stay within a defined body of work.

Add text selections from documents

When reading a PDF, select a specific passage and click Chat from the floating toolbar. This sends the exact text to the chat input, letting you ask precise questions about specific claims, sentences, or data points.

Tip: Selecting a passage before asking is more effective than "what does page 7 say?" It gives the AI the exact context and eliminates any ambiguity about what you're asking.

Use folder chat for cross-document questions

If your question spans multiple papers, use folder chat rather than asking about each document individually. The AI searches across all documents in the folder at once and synthesizes findings into a single coherent response with citations from each source.

Try different models

Different models have different strengths. Fast models handle quick lookups well; more capable models handle complex synthesis and multi-document reasoning better. Switch models using the selector next to the chat input; you can change it mid-conversation without starting over. See Choosing and changing AI models.

Note: more powerful models such as Claude Sonnet 4.6, Claude Opus 4.6, GPT 5.4, and Gemini 3.1 Pro require a Pro or Max plan.

Control response length

If responses are consistently too long or too short, adjust the setting in Settings > Preferences under Chat. Response length is a preference setting, not a per-message option.

Use @Plan for complex tasks

For multi-step work, such as a systematic literature review or synthesizing data from many papers, type @Plan in the chat input to ask the AI to outline its approach before starting. This helps you catch misunderstandings early and gives you control over the plan before the AI executes it.
