Workflow JSON Builder Skill
Workflow generation instructions for online UI LLM

LLM Intelligent Workflow Assistant
Understand · Analyze · Build

The online UI LLM can drive the full workflow feature set through read-only APIs: analyze existing node connections, understand template structures, switch LLM models, and generate importable workflow JSON from natural-language requirements.

No coding required, no backend login needed. Query the node catalog, template library, and model list to analyze, validate, explain, modify templates, or build complete workflows from scratch.

Templates

Browse workflow template summaries to find a starting point close to user needs, then read the full definition for modification.

Node Schemas

Get the node catalog and detailed schemas to understand each node's input/output ports and ensure valid connections.

LLM Models

Query the available model list to automatically match suitable chat or embedding models for AI nodes.

Available APIs

The LLM uses these endpoints to fetch templates, node schemas, and model lists. The base URL is configured by the UI/runtime; append /api/v1 to it.

GET

/api/v1/workflows/templates

Returns all template summaries: name, description, node count, edge count, tags, visibility. The LLM uses it to recommend templates, pick one by user intent, and explain a template's purpose. To get the full nodes and edges, call GET /api/v1/workflows/templates/{id}, which returns definition.nodes and definition.edges.

GET

/api/v1/workflows/templates/{id}

Returns a full template definition (including nodes and edges). Prefer modifying a template over building from scratch.

GET

/api/v1/nodes/types and /api/v1/nodes/schema/{nodeType}

Returns node types, categories, descriptions, input JSON Schema, output JSON Schema, and i18n keys. The LLM uses these to understand which nodes are available, how to configure them, what input/output ports exist, which nodes can connect to which, and to generate node config JSON. POST /api/v1/nodes/validate can validate a single node config, but it is a protected write route.

GET

/api/v1/llm/models

Returns models grouped by provider, including chat and embedding types. Used to select models for ai/llm, ai/llm_config, and embedding/RAG nodes, or to fill in model names when generating JSON. Note that this endpoint currently doesn't read user settings, so enabled defaults to false; complete user-level provider/credential info lives behind the protected GET /api/v1/llm/providers.

What LLM Can Do With These APIs

Five core capabilities + clear boundaries.

1. Help User Analyze Node Connections: Read the workflow's nodes + edges, then use /nodes/schema/{nodeType} to get each node's input/output schema. Check whether each edge's sourcePort exists in the source node's outputs and its targetPort in the target node's inputs, whether data types are roughly compatible, and whether there are orphan nodes, nodes with no entry, cycle risks, or missing critical configs.
2. Create Workflow JSON from Natural Language: Example: the user says "I want to upload a file, convert it to text, then let an LLM summarize it". The LLM calls /nodes/types to find file-upload, document-to-text, LLM, and output nodes; calls /llm/models to select available models; generates CreateWorkflowRequest.definition (the structure matches workflow_dto.rs line 189) with nodes + edges; and returns importable JSON.
3. Configure or Switch LLM Models: For an existing workflow, use GET /api/v1/workflows/{id}/llm-nodes to find the LLM nodes, then PUT /api/v1/workflows/{id}/llm-nodes/{node_id} to update model/provider/config (code in workflow/mod.rs line 1510). Note: these are protected write routes requiring authentication.
4. Generate from Templates: The LLM lists templates, reads a full template definition, then modifies it per user needs: replace the model, add/remove nodes, change prompts/configs, adjust wiring, and generate new workflow JSON or create a copy directly.
5. Explain Workflows and Node Capabilities: Because node schemas include description, category, and input/output schemas, the LLM can translate complex workflows into plain language: what each step does, how data flows, where credentials are needed, and where failures may occur.
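The edge checks in capability 1 can be sketched as a small validator. This is a minimal sketch, not the product's code: the helper name `validate_edges` and the shape of the `schemas` mapping (nodeType to lists of input/output port names, i.e. the keys of `inputSchema.properties` / `outputSchema.properties`) are assumptions for illustration.

```python
from typing import Dict, List

def validate_edges(nodes: List[dict], edges: List[dict],
                   schemas: Dict[str, dict]) -> List[str]:
    """Return human-readable wiring problems: dangling node references,
    unknown ports, and orphan nodes with no edges at all."""
    problems = []
    by_id = {n["nodeId"]: n for n in nodes}
    connected = set()
    for e in edges:
        src, dst = by_id.get(e["sourceNode"]), by_id.get(e["targetNode"])
        if src is None or dst is None:
            problems.append(f"edge {e['edgeId']} references a missing node")
            continue
        connected.update((src["nodeId"], dst["nodeId"]))
        if e["sourcePort"] not in schemas[src["nodeType"]]["outputs"]:
            problems.append(f"{src['name']}: unknown output port {e['sourcePort']}")
        if e["targetPort"] not in schemas[dst["nodeType"]]["inputs"]:
            problems.append(f"{dst['name']}: unknown input port {e['targetPort']}")
    for n in nodes:
        if len(nodes) > 1 and n["nodeId"] not in connected:
            problems.append(f"{n['name']}: orphan node (no edges)")
    return problems
```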

Boundary Description

All APIs listed above are read-only discovery endpoints. The LLM can analyze, recommend, and generate draft JSON, but cannot save anything by itself.

Persisting a new workflow requires POST /api/v1/workflows; updating an existing workflow requires PUT/PATCH /api/v1/workflows/{id}; validating a node config uses POST /api/v1/nodes/validate. These protected routes require authentication.

Conversation Flow

LLM progresses through this flow when interacting with users, asking only questions that affect structure.

Understand User Goal: Use 1-2 turns to confirm the input source, processing steps, output target, model preference, schedule, storage, approval, etc. Don't ask for technical node names unless the user already thinks in nodes.
Load Discovery Context: First call /nodes/types; when the goal sounds like a common pattern, call /workflows/templates; when LLM/RAG/summarization/classification/extraction is involved, call /llm/models.
Choose Construction Strategy: Prefer adapting a matching template; if no template fits, compose nodes from schemas; use the smallest workflow that satisfies the goal.
Draft the Workflow: Pick node types by typeId; configure from inputSchema; wire source outputSchema.properties keys to target inputSchema.properties keys; generate UUID v4 for all IDs; use node names in the user's language.
Review Draft: Briefly explain the nodes and data flow, and point out assumptions, missing credentials, and fields to fill after import. When the user requests changes, revise the draft and regenerate the JSON.
Output Final JSON: Output one complete, importable workflow JSON object in a fenced json code block, with no comments inside.

JSON Structure and Rules

Generate export/import-style workflow JSON. nodeCount / edgeCount must match the actual lengths of definition.nodes / definition.edges.

{
  "workflowId": "uuid-v4",
  "name": "Workflow name",
  "description": "What this workflow does",
  "state": "draft",
  "nodeCount": 0,
  "edgeCount": 0,
  "createdAt": "2026-04-26T00:00:00Z",
  "updatedAt": "2026-04-26T00:00:00Z",
  "version": 1,
  "isFavorite": false,
  "isTemplate": false,
  "isDraft": true,
  "tags": [],
  "visibility": "private",
  "definition": {
    "nodes": [],
    "edges": []
  }
}
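The safest way to satisfy the count rule is to derive nodeCount / edgeCount from the definition instead of writing them by hand. A minimal sketch of building the envelope above; the helper name `make_workflow` is invented for illustration, and the field names follow the JSON shown:

```python
import json
import uuid
from datetime import datetime, timezone

def make_workflow(name: str, description: str, nodes: list, edges: list) -> dict:
    """Build the import-style envelope, deriving the counts from the
    definition so they can never drift out of sync with it."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "workflowId": str(uuid.uuid4()),
        "name": name,
        "description": description,
        "state": "draft",
        "nodeCount": len(nodes),
        "edgeCount": len(edges),
        "createdAt": now,
        "updatedAt": now,
        "version": 1,
        "isFavorite": False,
        "isTemplate": False,
        "isDraft": True,
        "tags": [],
        "visibility": "private",
        "definition": {"nodes": nodes, "edges": edges},
    }
```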

Node Format

nodeType uses the typeId returned by the API. Space nodes roughly 320px apart left to right to keep the layout readable.

{
  "nodeId": "uuid-v4",
  "nodeType": "ai/llm",
  "name": "LLM",
  "position": { "x": 0, "y": 0 },
  "config": {},
  "inputs": {},
  "outputs": {}
}
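The 320px spacing rule can be applied mechanically when generating nodes. A small sketch; the helper name `layout_nodes` is hypothetical:

```python
def layout_nodes(nodes: list, spacing: int = 320) -> list:
    """Assign left-to-right positions roughly `spacing` px apart,
    mutating each node's "position" in place."""
    for i, node in enumerate(nodes):
        node["position"] = {"x": i * spacing, "y": 0}
    return nodes
```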

Edge Format

Port names come from the keys in schema properties.

{
  "edgeId": "uuid-v4",
  "sourceNode": "source-node-uuid",
  "sourcePort": "output",
  "targetNode": "target-node-uuid",
  "targetPort": "input"
}

Wiring Rules

Use schema properties as port names for connections.

sourcePort: Must be a key from the source node's outputSchema.properties.
targetPort: Must be a key from the target node's inputSchema.properties.
Type Matching: string → string, object → object, array → array, boolean → boolean, number/integer → number/integer.
Semantic Priority: If schemas have richer metadata, prefer exact semantic matches over type-only matches.
Avoid Multicast: Unless the editor/runtime explicitly supports it, avoid connecting one source output to multiple downstream inputs.
Type Mismatch: When conversion is necessary, insert a transform, formatter, smart-variable, or LLM node that explicitly converts the value.

LLM Model Selection

Select model types by scenario when calling /llm/models.

Chat Models: Used for generation, classification, extraction, routing, summarization, and Agent steps.
Embedding Models: Used for indexing, semantic search, vector retrieval, and RAG ingestion.
User Specified: If the user names a provider/model, use it directly if it exists in the list.
Not Specified: Pick a reasonable chat model from the returned list and mark it as an assumption in the response.
Credentials: Don't invent provider credentials. When a node needs credentials, leave a clear placeholder in config, or tell the user to bind them after import.
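The selection rules above can be sketched as below. The response shape here (provider name mapped to a list of {name, type} entries) is an assumption based on the description of /llm/models; check the real payload before relying on it.

```python
from typing import Optional, Tuple

def pick_chat_model(models_by_provider: dict,
                    preferred: Optional[str] = None) -> Optional[Tuple[str, str]]:
    """Pick a chat model from a /llm/models-style response.

    If the user named a model, honor it only when it actually appears
    in the list; otherwise fall back to the first available chat model.
    Returns (provider, model_name) or None (caller marks a placeholder).
    """
    for provider, models in models_by_provider.items():
        for model in models:
            if model.get("type") != "chat":
                continue
            if preferred is None or model["name"] == preferred:
                return provider, model["name"]
    return None
```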

Template Strategy

Modifying templates is more efficient than building from scratch.

List Templates: Fetch summaries from /workflows/templates.
Select Template: Pick the best match by name, description, tags, and rough node pattern.
Read Full Definition: Get the template's complete nodes + edges.
Preserve Skeleton: Keep useful wiring; replace only the parts the user needs changed.
Generate New IDs: Unless the import flow explicitly allows keeping template IDs, generate all-new UUIDs.
Update Metadata: Update the workflow name, description, tags, positions, counts, model, and config fields together.
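The "Generate New IDs" step is easy to get wrong by hand, because every edge must be remapped to the renamed nodes. A sketch of doing it safely; `clone_definition` is a hypothetical helper, and the node/edge field names follow the formats shown earlier:

```python
import uuid

def clone_definition(definition: dict) -> dict:
    """Copy a template definition with all-new UUIDs, remapping edge
    sourceNode/targetNode references so the wiring stays intact."""
    id_map = {n["nodeId"]: str(uuid.uuid4()) for n in definition["nodes"]}
    nodes = [{**n, "nodeId": id_map[n["nodeId"]]} for n in definition["nodes"]]
    edges = [
        {
            **e,
            "edgeId": str(uuid.uuid4()),
            "sourceNode": id_map[e["sourceNode"]],
            "targetNode": id_map[e["targetNode"]],
        }
        for e in definition["edges"]
    ]
    return {"nodes": nodes, "edges": edges}
```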

Response Style

Be concise and practical during design; provide a summary before the final output.

During Design: Be concise and practical. Translate user goals into workflow steps, not API implementation details. Only ask questions when the answer affects the generated JSON.
Before Final Output: Provide a brief summary first, for example:
"I will generate this workflow: Input -> Processing -> Output. You will need to fill in after import: API Key / Knowledge Base ID / Webhook URL."
Final Answer Must Include: 1) a brief description of what the workflow does; 2) one fenced json code block containing the complete importable JSON; 3) a list of fields the user may need to fill after import (if any).

Quality Checklist

Verify each item before returning JSON.

Unique IDs: All nodeId, edgeId, and workflowId values are unique UUID v4 strings with no repeats.
Node References: Each edge's sourceNode and targetNode correspond to nodeIds that exist in definition.nodes.
Ports Exist: Each edge's sourcePort / targetPort exists in the corresponding node's schema properties.
Counts Correct: nodeCount and edgeCount match the actual lengths of definition.nodes / definition.edges.
Model Sources: LLM model/provider values come from /llm/models, or are clearly marked as placeholders.
Valid JSON: No comments, no trailing commas; the output parses as valid JSON.
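Most of these checklist items are mechanically checkable before returning the JSON. A sketch, assuming the workflow structure shown earlier in this document (the helper name `checklist` is invented; port checks would additionally need the node schemas):

```python
import json

def checklist(wf: dict) -> list:
    """Run the mechanical pre-return checks from the quality checklist:
    unique IDs, valid node references, correct counts, serializable JSON."""
    issues = []
    nodes = wf["definition"]["nodes"]
    edges = wf["definition"]["edges"]
    ids = ([wf["workflowId"]]
           + [n["nodeId"] for n in nodes]
           + [e["edgeId"] for e in edges])
    if len(ids) != len(set(ids)):
        issues.append("duplicate IDs")
    node_ids = {n["nodeId"] for n in nodes}
    for e in edges:
        if e["sourceNode"] not in node_ids or e["targetNode"] not in node_ids:
            issues.append(f"edge {e['edgeId']} references a missing node")
    if wf["nodeCount"] != len(nodes) or wf["edgeCount"] != len(edges):
        issues.append("nodeCount/edgeCount out of sync")
    try:
        json.dumps(wf)
    except (TypeError, ValueError):
        issues.append("not serializable as JSON")
    return issues
```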

Simplified Instructions for LLM to Copy

The copy button copies the full skill.md; below is a quick preview.

Base URL: https://chengos.mysisshu.link

Read-only discovery APIs:
GET /api/v1/workflows/templates
GET /api/v1/workflows/templates/{id}
GET /api/v1/nodes/types
GET /api/v1/nodes/schema/{nodeType}
GET /api/v1/llm/models

Use these APIs to:
1) Analyze node wiring (check ports, types, orphans, cycles)
2) Create workflow JSON from natural language
3) Select LLM/embedding models from /llm/models
4) Generate from templates (list, read, modify, output)
5) Explain workflows and node capabilities in plain language

Generate one importable workflow JSON object for the user.
Note: saving to DB requires POST/PUT authenticated routes.