QuantAgent decomposes the trading analysis problem into four focused agents, each responsible for a distinct analytical layer. A LangGraph `StateGraph` wires them into a sequential pipeline that runs from market data ingestion through to a final LONG/SHORT order.
## Graph structure
The compiled graph follows a strict linear topology:

**START → Indicator Agent**
Raw OHLCV data enters the graph. The Indicator Agent computes momentum and oscillator values using five TA-Lib tools.

**Indicator Agent → Pattern Agent**
Indicator results and an `indicator_report` are written into the shared state. The Pattern Agent reads this state and generates a candlestick chart for visual pattern recognition.

**Pattern Agent → Trend Agent**
A `pattern_report` is appended to state. The Trend Agent generates a trendline-annotated chart and uses a vision LLM to interpret support and resistance dynamics.

**Trend Agent → Decision Maker**
A `trend_report` is appended to state. The Decision Maker reads all three reports simultaneously and issues a structured JSON trade decision.

The graph is assembled by `SetGraph.set_graph()` in `graph_setup.py`.
## Shared state
All four agents communicate exclusively through a single `IndicatorAgentState` TypedDict defined in `agent_state.py`. No agent calls another directly — every result is written to state and read by the next node.
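This write-then-read contract can be sketched without LangGraph. The agent bodies below are hypothetical placeholders, not the real implementations, but the flow of a shared state dict through a fixed node order mirrors the pipeline described above:

```python
# Framework-free sketch of the sequential state flow. Field names match
# the IndicatorAgentState keys documented below; the agent bodies and
# the example input values are placeholders, not the project's code.

def indicator_agent(state: dict) -> dict:
    # Reads kline_data, writes indicator values plus a text report.
    state["indicator_report"] = "momentum summary (placeholder)"
    return state

def pattern_agent(state: dict) -> dict:
    # Reads prior state, writes a chart-based pattern report.
    state["pattern_report"] = "pattern summary (placeholder)"
    return state

def trend_agent(state: dict) -> dict:
    state["trend_report"] = "trend summary (placeholder)"
    return state

def decision_maker(state: dict) -> dict:
    # Reads all three reports; only here is the final decision written.
    assert all(k in state for k in ("indicator_report", "pattern_report", "trend_report"))
    state["final_trade_decision"] = {"decision": "LONG"}  # placeholder
    return state

def run_pipeline(state: dict) -> dict:
    # Strict linear topology: each node sees only the accumulated state.
    for node in (indicator_agent, pattern_agent, trend_agent, decision_maker):
        state = node(state)
    return state

result = run_pipeline({"kline_data": [], "time_frame": "1h", "stock_name": "BTC/USDT"})
```

Because every node takes the whole state and returns it, inserting or reordering analytical layers only requires editing the node sequence, not any agent.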
**Input fields**
`kline_data`, `time_frame`, `stock_name` — provided by the caller before graph invocation.

**Indicator Agent writes**
`rsi`, `macd`, `macd_signal`, `macd_hist`, `stoch_k`, `stoch_d`, `roc`, `willr`, `indicator_report`

**Pattern Agent writes**
`pattern_image`, `pattern_image_filename`, `pattern_image_description`, `pattern_report`

**Trend Agent writes**
`trend_image`, `trend_image_filename`, `trend_image_description`, `trend_report`

**Decision Maker reads**
`indicator_report`, `pattern_report`, and `trend_report`, and writes `final_trade_decision`.
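A plausible shape for the shared state, assembled from the fields listed above. The value types are assumptions; the actual definition in `agent_state.py` may differ:

```python
from typing import Any, TypedDict

class IndicatorAgentState(TypedDict, total=False):
    # total=False: fields are filled in progressively as each node runs,
    # so a partially populated state is still a valid instance.
    # Input fields (provided by the caller)
    kline_data: Any               # raw OHLCV rows; exact type is an assumption
    time_frame: str
    stock_name: str
    # Indicator Agent outputs
    rsi: float
    macd: float
    macd_signal: float
    macd_hist: float
    stoch_k: float
    stoch_d: float
    roc: float
    willr: float
    indicator_report: str
    # Pattern Agent outputs
    pattern_image: str            # base64-encoded PNG (assumed)
    pattern_image_filename: str
    pattern_image_description: str
    pattern_report: str
    # Trend Agent outputs
    trend_image: str
    trend_image_filename: str
    trend_image_description: str
    trend_report: str
    # Decision Maker output
    final_trade_decision: dict

# Example: the caller provides only the input fields up front.
state: IndicatorAgentState = {"time_frame": "4h", "stock_name": "AAPL"}
state["indicator_report"] = "RSI neutral, MACD crossing up"  # illustrative value
```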
## Two LLM roles

QuantAgent uses two distinct LLM instances with different capability requirements:

| Role | Config key | Default model | Responsibilities |
|---|---|---|---|
| `graph_llm` | `graph_llm_model` | `gpt-4o` | Primary LLM — used by the Indicator Agent for tool-calling and report generation, by the Pattern and Trend agents for vision-based chart analysis, and by the Decision Agent for final synthesis. Must be vision-capable. |
| `agent_llm` | `agent_llm_model` | `gpt-4o-mini` | Tool-dispatch LLM — used only in the Pattern and Trend agents to call the image-generation tools (`generate_kline_image`, `generate_trend_image`). A lighter model is sufficient here. |
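A minimal config illustrating the split. The key names come from the table above and the provider keys are documented under "Supported LLM providers" below; the dict-literal form itself is an assumption about how the config is expressed:

```python
# Sketch of the two-role LLM configuration (key names from the docs;
# the surrounding dict structure is an assumption).
config = {
    "graph_llm_provider": "openai",
    "graph_llm_model": "gpt-4o",       # primary LLM, must be vision-capable
    "agent_llm_provider": "openai",
    "agent_llm_model": "gpt-4o-mini",  # tool-dispatch only, lighter is fine
}
```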
The Indicator Agent uses `graph_llm` exclusively. The Pattern and Trend agents use `agent_llm` for the tool-dispatch step (generating charts) and `graph_llm` for the vision analysis step (interpreting charts). The Decision Agent uses `graph_llm` for final synthesis.

## Why vision-capable LLMs are required
The Pattern Agent and Trend Agent both encode candlestick charts as base64 PNG images and pass them directly to the LLM using the `image_url` content type.
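That encoding step can be sketched as follows, using the OpenAI-style `image_url` content block. The chart bytes and the prompt text here are illustrative, not the project's actual code:

```python
import base64

def to_image_url_block(png_bytes: bytes) -> dict:
    # Encode the rendered chart as a data URL the vision LLM can consume.
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:image/png;base64,{b64}"},
    }

# A multimodal message pairs the instruction text with the chart image.
message_content = [
    {"type": "text", "text": "Identify candlestick patterns in this chart."},
    to_image_url_block(b"\x89PNG\r\n\x1a\n"),  # placeholder bytes, not a real chart
]
```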
`graph_llm` must support multimodal (vision) inputs. OpenAI `gpt-4o`, Anthropic Claude 3+ models, and Qwen VL models all satisfy this requirement.
## Supported LLM providers
`TradingGraph` supports three providers, configured via the `agent_llm_provider` and `graph_llm_provider` config keys:

- `openai` — `ChatOpenAI` (e.g., `gpt-4o`, `gpt-4o-mini`)
- `anthropic` — `ChatAnthropic` (e.g., `claude-3-5-sonnet-20241022`)
- `qwen` — `ChatQwen` (e.g., `qwen-vl-max-latest`)
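One way the provider switch might look. This is a hypothetical helper: the `langchain_*` module names are assumptions about where each chat class lives (the Qwen one in particular), and the import is deferred so only the provider actually in use needs to be installed:

```python
import importlib

# Hypothetical mapping from provider key to (module, class); the module
# names are assumptions, not confirmed by the QuantAgent source.
PROVIDERS = {
    "openai": ("langchain_openai", "ChatOpenAI"),
    "anthropic": ("langchain_anthropic", "ChatAnthropic"),
    "qwen": ("langchain_qwq", "ChatQwen"),
}

def make_llm(provider: str, model: str):
    # Resolve the chat class lazily so unused providers need not be installed.
    if provider not in PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    module_name, class_name = PROVIDERS[provider]
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(model=model)
```

For example, `make_llm("openai", "gpt-4o")` would build the vision-capable `graph_llm`, while an unknown provider key fails fast with a `ValueError`.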