Abstract
A growing number of tools use Large Language Models (LLMs) to support developers’ code understanding. However, developers still face several barriers to using such tools, including difficulty describing their intent in natural language, interpreting the tool’s output, and refining prompts to obtain useful information. In this study, we designed an LLM-based conversational assistant that provides personalized interaction based on the user’s inferred mental state (e.g., background knowledge and experience). We evaluated the approach in a within-subjects study with fourteen novices to capture their perceptions and preferences. Our results offer insights for researchers and tool builders who want to create or improve LLM-based conversational assistants that support novices in code understanding.