
Database Node

The Database node lets your workflow talk directly to relational databases. By providing connection details and a SQL statement, you can pull live business data, build reports, or write AI-generated content back into existing systems without leaving FlowAI.

When to Use

  • Fetch live records: Query CRM, order, or inventory data for downstream analysis or prompts.
  • Write workflow output: Store summaries, decisions, or generated content back in your database.
  • Automated checks: Schedule health checks or statistics queries and pair them with alert nodes.
  • Variable-driven queries: Reference upstream outputs inside SQL to build dynamic filters or statements.

Node Configuration

[Image: Database node configuration]

Connection Basics

  1. Node name

    • Choose a descriptive label, such as “Fetch Orders” or “Store Report”.
    • Downstream nodes can access the result via $NodeName.result.
  2. Database type

    • Currently supports MySQL and PostgreSQL.
    • FlowAI selects the proper connection routine based on this field.
  3. Host and port

    • host accepts an internal hostname or public IP.
    • port is optional; leave blank to use the database default (3306 for MySQL, 5432 for PostgreSQL).
  4. Database and credentials

    • database sets the target schema.
    • username and password must have the required permissions.
    • Prefer scoped read-only or least-privilege accounts for safety.
  5. Variable interpolation

    • Any input field supports FlowAI variables, such as $Start.data.host.
    • Values are resolved before connecting, enabling fully dynamic setups (see the sketch after this list).
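
Put together, a connection whose host comes from the start node might look like this (a minimal sketch; the variable paths and credential values are illustrative placeholders, not required names):

type: postgresql
host: {{ $Start.data.host }}
port: 5432
database: analytics
username: readonly_reporter
password: {{ $Start.data.db_password }}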

SQL and Output Format

  1. SQL statement

    • Provide the command in the sql field.
    • The node automatically classifies statements as query or non-query.
    • Query statements (for example SELECT, SHOW, WITH) return a result set; other statements return execution metadata.
  2. Result format

    • Markdown tables are the default, great for direct display or notifications.
    • Set format to json to receive a structured payload containing columns, rows, and truncated, which is ideal for further automation (see the sample payload after this list).
  3. Row limit

    • max_rows caps the number of returned rows (default 200, maximum 5000).
    • When the limit is exceeded, the node truncates the result and marks it as truncated.
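
In JSON mode, the structured payload takes roughly this shape (an illustrative sketch built from the fields named above; the exact serialization may differ):

{
  "columns": ["id", "customer", "total_amount"],
  "rows": [
    [1042, "Acme Corp", 1999.00],
    [1041, "Globex", 540.50]
  ],
  "truncated": false
}

When max_rows cuts the result short, truncated is set so downstream nodes can detect the partial result.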

Timeout and Advanced Options

  1. Execution timeout

    • timeout_seconds defines the maximum runtime for a single execution (30 seconds by default).
    • If execution exceeds this limit, the node cancels the task and reports an error, preventing stalled workflows.
  2. SSL and extra parameters

    • PostgreSQL connections can set ssl_mode (for instance disable, require).
    • Use the options map to append extra connection parameters, e.g. { "timezone": "UTC" } (see the sketch after this list).
  3. Connection reliability

    • The node verifies connectivity before running the statement and immediately reports failures.
    • Pair it with loops or condition nodes if you need custom retry policies.
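
For example, a PostgreSQL connection that enforces SSL, extends the timeout, and pins the session timezone could combine these fields (a sketch; only ssl_mode, timeout_seconds, and options are taken from the descriptions above):

type: postgresql
ssl_mode: require
timeout_seconds: 60
options: { "timezone": "UTC" }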

Node Outputs

  • $NodeName.result: Returned when the statement succeeds.
    • Query statements output a Markdown table by default, or { columns, rows, truncated } when using JSON mode.
    • Write/update statements provide metadata such as rows_affected and, when available, last_insert_id.
  • $NodeName.error: Populated with the error message on failure and cleared on success, making it easy to branch with condition nodes.

Use the result downstream like this:

Latest orders:
{{$FetchOrders.result}}

With JSON output, downstream nodes should read $FetchOrders.result and perform any needed parsing internally.
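
Error handling follows the same pattern. A minimal sketch, assuming a condition node that tests whether the error output is empty (node names and branch wording here are illustrative):

Condition: {{ $FetchOrders.error }} is empty
  If true: continue with {{ $FetchOrders.result }}
  If false: send a notification such as "Query failed: {{ $FetchOrders.error }}"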

Examples

Example: Read the latest orders

  1. Configure the Database node with:

    host: db.internal.local
    database: sales
    type: mysql
    username: report_reader
    password: ******
    sql: SELECT id, customer, total_amount FROM orders ORDER BY created_at DESC LIMIT 10;
    format: markdown table
  2. Name the node “Fetch Orders” and insert $FetchOrders.result in a notification or LLM node to show the latest records.

  3. To filter by date, inject a variable from an upstream node, such as ... WHERE created_at >= '{{ $TimeNode.result }}' (the full statement is sketched below).
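
Put together, the date-filtered statement could read as follows (assuming the upstream $TimeNode.result resolves to a date string):

sql: SELECT id, customer, total_amount
     FROM orders
     WHERE created_at >= '{{ $TimeNode.result }}'
     ORDER BY created_at DESC
     LIMIT 10;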

Example: Write workflow execution logs

  1. Store the generated summary from an upstream LLM node as $Summary.result.

  2. Configure the Database node with:

    sql: INSERT INTO workflow_logs (workflow_id, summary, created_at)
    VALUES ({{ $WorkflowInfo.id }}, '{{ $Summary.result }}', NOW());
    format: json
  3. After execution, review $WriteLog.result to confirm the insert and trigger follow-up steps; a sample payload is sketched below.
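
With format set to json, a successful insert might return metadata along these lines (an illustrative shape using the fields listed under Node Outputs; exact keys can vary by database):

{
  "rows_affected": 1,
  "last_insert_id": 8731
}

Note that {{ $Summary.result }} is interpolated as plain text, so escape any single quotes upstream to keep free-form summaries from breaking the statement.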

Best Practices

  • Scoped accounts: Use accounts limited to the specific schemas and operations you need.
  • Manage result size: Combine max_rows with SQL LIMIT clauses to stay efficient (see the sketch below).
  • Monitor errors: Route $NodeName.error into condition or notification nodes for faster troubleshooting.
  • Cache where helpful: Pair the Database node with variable or JSON-processing nodes to reuse the results of repeated queries.
  • Review timeouts: Frequent timeouts generally indicate a query or indexing issue that should be optimized.
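
For instance, to keep both the node and the database from over-fetching (a sketch; the table and columns are illustrative):

max_rows: 500
sql: SELECT id, status FROM orders ORDER BY created_at DESC LIMIT 500;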