
AI & LLM

AI model integration, text generation, embeddings, and autonomous agents.

18 modules

| Module | Description |
|---|---|
| Autonomous Agent | Self-directed AI agent with memory and goal-oriented behavior |
| Chain Agent | Sequential AI processing chain with multiple steps |
| Tool Use Agent | AI Agent that can call tools/functions |
| Text Embeddings | Generate vector embeddings from text using AI models |
| AI Extract | Extract structured data from text using AI |
| Local Ollama Chat | Chat with local LLM via Ollama (completely offline) |
| AI Memory | Conversation memory for AI Agent |
| Entity Memory | Extract and track entities (people, places, concepts) from conversations |
| Redis Memory | Persistent conversation memory using Redis storage |
| Vector Memory | Semantic memory using vector embeddings for relevant context retrieval |
| AI Model | LLM model configuration for AI Agent |
| AI Tool | Expose a module as a tool for AI Agent |
| Vision Analyze | Analyze images using AI vision models |
| Claude Chat | Send a chat message to Anthropic Claude AI and get a response |
| Google Gemini Chat | Send a chat message to Google Gemini AI and get a response |
| OpenAI Chat | Send a chat message to OpenAI GPT models |
| DALL-E Image Generation | Generate images using DALL-E |
| AI Agent | Autonomous AI agent with multi-port connections (model, memory, tools) |

Modules

Autonomous Agent

agent.autonomous

Self-directed AI agent with memory and goal-oriented behavior

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| goal | string | Yes | - | The goal for the agent to achieve |
| context | string | No | - | Additional context or constraints |
| max_iterations | number | No | 5 | Maximum reasoning steps |
| llm_provider | select (openai, ollama) | No | openai | LLM provider to use |
| model | string | No | gpt-4-turbo-preview | Model name (e.g., gpt-4, llama2, mistral) |
| ollama_url | string | No | http://localhost:11434 | Ollama server URL (only for ollama provider) |
| temperature | number | No | 0.7 | Creativity level (0-2) |

Output:

| Field | Type | Description |
|---|---|---|
| result | string | The operation result |
| thoughts | array | Agent reasoning steps |
| iterations | number | Number of iterations performed |
| goal_achieved | boolean | Whether the goal was achieved |

Example: Research task

```yaml
goal: Research the latest trends in AI and summarize the top 3
max_iterations: 5
model: gpt-4
```

Example: Problem solving

```yaml
goal: Find the best approach to optimize database queries
context: PostgreSQL database with 10M records
max_iterations: 10
```

Chain Agent

agent.chain

Sequential AI processing chain with multiple steps

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| input | string | Yes | - | Initial input for the chain |
| chain_steps | array | Yes | - | Array of processing steps (each is a prompt template) |
| llm_provider | select (openai, ollama) | No | openai | LLM provider to use |
| model | string | No | gpt-4-turbo-preview | Model name (e.g., gpt-4, llama2, mistral) |
| ollama_url | string | No | http://localhost:11434 | Ollama server URL (only for ollama provider) |
| temperature | number | No | 0.7 | Creativity level (0-2) |

Output:

| Field | Type | Description |
|---|---|---|
| result | string | Final output of the chain |
| intermediate_results | array | Output of each chain step |
| steps_completed | number | Number of steps completed |

Example: Content pipeline

```yaml
input: AI and machine learning trends
chain_steps: ["Generate 5 blog post ideas about: {input}", "Take the first idea and write a detailed outline: {previous}", "Write an introduction paragraph based on: {previous}"]
model: gpt-4
```

Example: Data analysis chain

```yaml
input: User behavior data shows 60% bounce rate
chain_steps: ["Analyze what might cause this issue: {input}", "Suggest 3 solutions based on: {previous}", "Create an action plan from: {previous}"]
```
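The `{input}`/`{previous}` templating above can be pictured as plain string substitution before each LLM call. The sketch below is illustrative only — `render_step` is a hypothetical helper, not the module's actual code:

```python
# Hypothetical sketch of chain-step templating: {input} is the chain's
# initial input, {previous} is the previous step's LLM output.

def render_step(template, initial_input, previous_output=None):
    """Fill the {input} and {previous} placeholders in one chain step."""
    prompt = template.replace("{input}", initial_input)
    if previous_output is not None:
        prompt = prompt.replace("{previous}", previous_output)
    return prompt

steps = [
    "Analyze what might cause this issue: {input}",
    "Suggest 3 solutions based on: {previous}",
]
# First step sees the raw input; later steps see the prior step's output.
first = render_step(steps[0], "60% bounce rate")
```

Each rendered prompt is sent to the model, and its response becomes `{previous}` for the next step.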

Tool Use Agent

agent.tool_use

AI Agent that can call tools/functions

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | string | Yes | - | The goal or task for the agent |
| tools | array | Yes | - | List of tool definitions [{name, description, parameters}] |
| provider | select (openai, anthropic) | No | openai | LLM provider for the agent |
| model | string | No | gpt-4o | Model to use |
| api_key | string | No | - | API key (falls back to environment variable) |
| max_iterations | number | No | 10 | Maximum number of tool call rounds |
| system_prompt | string | No | - | Optional system prompt to guide the agent |

Output:

| Field | Type | Description |
|---|---|---|
| result | string | The agent's final response |
| tool_calls | array | All tool calls made during execution |
| iterations | number | Number of iterations completed |
| model | string | Model used |

Example: File Processing Agent

```yaml
prompt: Read the config file and update the version number
tools: [{"name": "read_file", "description": "Read contents of a file", "parameters": {"type": "object", "properties": {"path": {"type": "string", "description": "File path"}}, "required": ["path"]}}, {"name": "write_file", "description": "Write contents to a file", "parameters": {"type": "object", "properties": {"path": {"type": "string", "description": "File path"}, "content": {"type": "string", "description": "File content"}}, "required": ["path", "content"]}}]
provider: openai
model: gpt-4o
max_iterations: 5
```
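Conceptually, a tool-use agent alternates between the model proposing a tool call and the runtime executing it, until the model answers directly or `max_iterations` is reached. This is a simplified sketch of that dispatch loop (not the module's real implementation; the `plan` input stands in for model turns):

```python
# Illustrative tool-dispatch loop. Each "turn" is either a tool call
# {"tool": name, "args": {...}} or a final string answer from the model.

def run_agent(plan, tools, max_iterations=5):
    """Dispatch tool calls until a final answer or the iteration cap."""
    transcript = []
    for i, turn in enumerate(plan):
        if i >= max_iterations:
            break
        if isinstance(turn, str):  # model answered directly: stop
            return turn, transcript
        result = tools[turn["tool"]](**turn["args"])  # execute the tool
        transcript.append({"call": turn, "result": result})
    return None, transcript

# Toy tool registry standing in for real file operations.
tools = {"read_file": lambda path: f"contents of {path}"}
answer, calls = run_agent(
    [{"tool": "read_file", "args": {"path": "config.yaml"}}, "version bumped"],
    tools,
)
```

In the real module, the turns come from the LLM's function-calling API and each tool result is appended to the conversation before the next model call.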

Text Embeddings

ai.embed

Generate vector embeddings from text using AI models

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| text | string | Yes | - | Text to embed |
| provider | select (openai, local) | No | openai | AI provider for embeddings |
| model | string | No | text-embedding-3-small | Embedding model to use |
| api_key | string | No | - | API key (falls back to environment variable) |
| dimensions | number | No | - | Embedding dimensions (for models that support it) |

Output:

| Field | Type | Description |
|---|---|---|
| embeddings | array | Vector embedding array |
| model | string | Model used for embedding |
| dimensions | number | Number of dimensions in the embedding vector |
| token_count | number | Number of tokens processed |

Example: Single Text Embedding

```yaml
text: The quick brown fox jumps over the lazy dog
provider: openai
model: text-embedding-3-small
```

Example: Reduced Dimensions

```yaml
text: Semantic search query
provider: openai
model: text-embedding-3-small
dimensions: 256
```
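Embeddings are typically compared with cosine similarity: vectors pointing the same way score near 1, unrelated ones near 0. A minimal sketch of the math on toy 3-dimensional vectors (real models return hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical vectors score 1.0; orthogonal vectors score 0.0.
sim = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])
```

This is the comparison that downstream modules such as Vector Memory rely on for retrieval.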

AI Extract

ai.extract

Extract structured data from text using AI

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| text | string | Yes | - | Text to extract data from |
| schema | object | Yes | - | JSON schema defining the fields to extract |
| instructions | string | No | - | Additional extraction instructions |
| provider | select (openai, anthropic) | No | openai | AI provider to use |
| model | string | No | gpt-4o-mini | Model to use for extraction |
| api_key | string | No | - | API key (falls back to environment variable) |
| temperature | number | No | 0 | Sampling temperature (0-2) |

Output:

| Field | Type | Description |
|---|---|---|
| extracted | object | Extracted structured data |
| model | string | Model used for extraction |
| raw_response | string | Raw model response |

Example: Extract Contact Info

```yaml
text: John Smith is a senior engineer at Acme Corp. Email: john@acme.com
schema: {"type": "object", "properties": {"name": {"type": "string"}, "title": {"type": "string"}, "company": {"type": "string"}, "email": {"type": "string"}}}
provider: openai
model: gpt-4o-mini
```

Example: Extract Invoice Data

```yaml
text: Invoice #1234 from Acme Corp. Total: $500.00. Due: 2024-03-01
schema: {"type": "object", "properties": {"invoice_number": {"type": "string"}, "vendor": {"type": "string"}, "total": {"type": "number"}, "due_date": {"type": "string"}}}
instructions: Extract all invoice fields. Parse amounts as numbers.
```
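Because the model's output is only guided by the schema, it is good practice to type-check the extracted object before trusting it. A sketch under the invoice example's field names (`matches_schema` is a hypothetical helper, not part of the module):

```python
# Minimal type check of an extracted object against the JSON schema's
# declared property types (covers only flat string/number/boolean fields).

TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool}

def matches_schema(extracted, schema):
    """True if every extracted field is declared and has the right type."""
    props = schema.get("properties", {})
    return all(
        key in props and isinstance(value, TYPE_MAP[props[key]["type"]])
        for key, value in extracted.items()
    )

schema = {"type": "object", "properties": {
    "invoice_number": {"type": "string"}, "total": {"type": "number"}}}
ok = matches_schema({"invoice_number": "1234", "total": 500.0}, schema)
```

A failed check is a signal to retry the extraction or tighten the `instructions` (e.g., "Parse amounts as numbers").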

Local Ollama Chat

ai.local_ollama.chat

Chat with local LLM via Ollama (completely offline)

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | string | Yes | - | The message to send to the local LLM |
| model | select (llama2, llama2:13b, llama2:70b, mistral, mixtral, codellama, codellama:13b, phi, neural-chat, starling-lm) | No | llama2 | Local model to use |
| temperature | number | No | 0.7 | Sampling temperature (0-2) |
| system_message | string | No | - | System role message (optional) |
| ollama_url | string | No | http://localhost:11434 | Ollama server URL |
| max_tokens | number | No | - | Maximum tokens in response (optional, depends on model) |

Output:

| Field | Type | Description |
|---|---|---|
| response | string | Response from the operation |
| model | string | Model name or identifier |
| context | array | Conversation context for follow-up requests |
| total_duration | number | Total processing duration |
| load_duration | number | Model loading duration |
| prompt_eval_count | number | Number of prompt tokens evaluated |
| eval_count | number | Number of response tokens generated |

Example: Simple local chat

```yaml
prompt: Explain quantum computing in 3 sentences
model: llama2
```

Example: Code generation with local model

```yaml
prompt: Write a Python function to calculate fibonacci numbers
model: codellama
temperature: 0.2
system_message: You are a Python programming expert. Write clean, efficient code.
```

Example: Local reasoning task

```yaml
prompt: What are the pros and cons of microservices architecture?
model: mistral
temperature: 0.7
```

AI Memory

ai.memory

Conversation memory for AI Agent

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| memory_type | select (buffer, window, summary) | Yes | buffer | Type of memory storage |
| window_size | number | No | 10 | Number of recent messages to keep (for window memory) |
| session_id | string | No | - | Unique identifier for this conversation session |
| initial_messages | array | No | [] | Pre-loaded conversation history |

Output:

| Field | Type | Description |
|---|---|---|
| memory_type | string | Type of memory |
| session_id | string | Session identifier |
| messages | array | Stored conversation messages |
| config | object | Memory configuration |

Example: Simple Buffer Memory

```yaml
memory_type: buffer
```

Example: Window Memory (last 5 messages)

```yaml
memory_type: window
window_size: 5
```
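The difference between buffer and window memory is simply whether old messages are ever dropped. A sketch of the two behaviors (illustrative classes, not the module's storage code):

```python
from collections import deque

class BufferMemory:
    """Keeps the entire conversation history."""
    def __init__(self):
        self.messages = []
    def add(self, msg):
        self.messages.append(msg)

class WindowMemory:
    """Keeps only the last window_size messages; older ones fall off."""
    def __init__(self, window_size=10):
        self.messages = deque(maxlen=window_size)
    def add(self, msg):
        self.messages.append(msg)

win = WindowMemory(window_size=5)
for i in range(8):
    win.add(f"msg {i}")
# Only the 5 most recent messages remain in the window.
```

Summary memory (the third option) instead condenses older messages into a running summary, which neither class above models.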

Entity Memory

ai.memory.entity

Extract and track entities (people, places, concepts) from conversations

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| entity_types | multiselect | No | ["person", "organization", "location"] | Types of entities to extract and track |
| extraction_model | select (llm, spacy, regex) | Yes | llm | Model for entity extraction |
| session_id | string | No | - | Unique identifier for this memory session |
| track_relationships | boolean | No | true | Track relationships between entities |
| max_entities | number | No | 100 | Maximum number of entities to remember |

Output:

| Field | Type | Description |
|---|---|---|
| memory_type | string | Type of memory (entity) |
| session_id | string | Session identifier |
| entities | object | Tracked entities by type |
| relationships | array | Tracked relationships between entities |
| config | object | Memory configuration |

Example: People & Organizations

```yaml
entity_types: ["person", "organization"]
extraction_model: llm
```

Example: Full Entity Tracking

```yaml
entity_types: ["person", "organization", "location", "concept"]
track_relationships: true
max_entities: 200
```

Redis Memory

ai.memory.redis

Persistent conversation memory using Redis storage

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| redis_url | string | Yes | redis://localhost:6379 | Redis connection URL |
| key_prefix | string | No | flyto:memory: | Prefix for all Redis keys |
| session_id | string | Yes | - | Unique identifier for this memory session |
| ttl_seconds | number | No | 86400 | Time-to-live for memory entries (0 = no expiry) |
| max_messages | number | No | 100 | Maximum messages to store per session |
| load_on_start | boolean | No | true | Load existing messages from Redis on initialization |

Output:

| Field | Type | Description |
|---|---|---|
| memory_type | string | Type of memory (redis) |
| session_id | string | Session identifier |
| messages | array | Loaded message history |
| connected | boolean | Whether the Redis connection succeeded |
| config | object | Memory configuration |

Example: Local Redis

```yaml
redis_url: redis://localhost:6379
session_id: my-session
ttl_seconds: 3600
```

Example: Cloud Redis with Auth

```yaml
redis_url: redis://:password@redis-cloud.example.com:6379
session_id: user-session
ttl_seconds: 86400
max_messages: 500
```

Vector Memory

ai.memory.vector

Semantic memory using vector embeddings for relevant context retrieval

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| embedding_model | select (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002, local) | Yes | text-embedding-3-small | Model to use for generating embeddings |
| top_k | number | No | 5 | Number of most relevant memories to retrieve |
| similarity_threshold | number | No | 0.7 | Minimum similarity score (0-1) for retrieval |
| session_id | string | No | - | Unique identifier for this memory session |
| include_metadata | boolean | No | true | Include timestamp and other metadata with memories |

Output:

| Field | Type | Description |
|---|---|---|
| memory_type | string | Type of memory (vector) |
| session_id | string | Session identifier |
| embedding_model | string | Embedding model in use |
| config | object | Memory configuration |

Example: Default Vector Memory

```yaml
embedding_model: text-embedding-3-small
top_k: 5
```

Example: High Precision Memory

```yaml
embedding_model: text-embedding-3-large
top_k: 10
similarity_threshold: 0.85
```
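How `top_k` and `similarity_threshold` interact can be sketched in a few lines: score every stored memory against the query embedding, discard anything below the threshold, and return the `top_k` best matches. This is an illustrative retrieval loop on toy 2-dimensional vectors, not the module's implementation:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, memories, top_k=5, similarity_threshold=0.7):
    """memories: list of (text, vector) pairs; returns best-matching texts."""
    scored = [(cosine(query_vec, vec), text) for text, vec in memories]
    scored = [(s, t) for s, t in scored if s >= similarity_threshold]
    scored.sort(reverse=True)          # highest similarity first
    return [text for _, text in scored[:top_k]]

mems = [("apples", [1.0, 0.0]), ("rockets", [0.0, 1.0]), ("pears", [0.9, 0.1])]
hits = retrieve([1.0, 0.0], mems, top_k=2, similarity_threshold=0.7)
```

Raising `similarity_threshold` (as in the high-precision example) trades recall for precision: fewer but more relevant memories reach the agent's context.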

AI Model

ai.model

LLM model configuration for AI Agent

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| provider | select (openai, anthropic, ollama) | No | openai | AI model provider |
| model | string | No | gpt-4o | Specific model to use |
| temperature | number | No | 0.7 | Creativity level (0=deterministic, 1=creative) |
| api_key | string | No | - | API key (defaults to provider env var) |
| base_url | string | No | - | Custom API base URL (for Ollama or proxies) |
| max_tokens | number | No | 4096 | Maximum tokens in response |

Output:

| Field | Type | Description |
|---|---|---|
| provider | string | LLM provider name |
| model | string | Model name or identifier |
| config | object | Model configuration object |

Example: OpenAI GPT-4

```yaml
provider: openai
model: gpt-4o
temperature: 0.7
```

Example: Anthropic Claude

```yaml
provider: anthropic
model: claude-3-5-sonnet-20241022
temperature: 0.5
```

AI Tool

ai.tool

Expose a module as a tool for AI Agent

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| module_id | string | Yes | - | Module ID to expose as tool (e.g. http.request, data.json_parse) |
| tool_description | string | No | - | Custom description for the agent (overrides module default) |

Output:

| Field | Type | Description |
|---|---|---|
| module_id | string | Module ID exposed as tool |

Example: HTTP Request Tool

```yaml
module_id: http.request
```

Example: JSON Parse Tool

```yaml
module_id: data.json_parse
```

Vision Analyze

ai.vision.analyze

Analyze images using AI vision models

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| image_path | string | No | - | Local path to image file |
| image_url | string | No | - | URL of the image to analyze |
| prompt | string | No | Describe this image in detail | What to analyze or ask about the image |
| provider | select (openai, anthropic) | No | openai | AI provider for vision analysis |
| model | string | No | gpt-4o | Vision model to use |
| api_key | string | No | - | API key (falls back to environment variable) |
| max_tokens | number | No | 1000 | Maximum tokens in response |
| detail | select (low, high, auto) | No | auto | Image detail level (low/high/auto) |

Output:

| Field | Type | Description |
|---|---|---|
| analysis | string | AI analysis of the image |
| model | string | Model used for analysis |
| provider | string | Provider used for analysis |
| tokens_used | number | Number of tokens used |

Example: Analyze Screenshot

```yaml
image_path: /tmp/screenshot.png
prompt: Describe what you see in this UI screenshot
provider: openai
model: gpt-4o
```

Example: Analyze from URL

```yaml
image_url: https://example.com/photo.jpg
prompt: What objects are in this image?
provider: anthropic
model: claude-sonnet-4-20250514
```

Claude Chat

api.anthropic.chat

Send a chat message to Anthropic Claude AI and get a response

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| api_key | string | No | - | Anthropic API key (defaults to env.ANTHROPIC_API_KEY) |
| model | string | No | claude-3-5-sonnet-20241022 | Claude model to use |
| messages | array | Yes | - | Array of message objects with role and content |
| max_tokens | number | No | 1024 | Maximum tokens in response |
| temperature | number | No | 1.0 | Sampling temperature (0-1). Higher values make output more random |
| system | string | No | - | System prompt to guide Claude behavior |

Output:

| Field | Type | Description |
|---|---|---|
| content | string | Claude response text |
| model | string | Model used for response |
| stop_reason | string | Why the model stopped generating (end_turn, max_tokens, etc.) |
| usage | object | Token usage statistics |

Example: Simple question

```yaml
messages: [{"role": "user", "content": "What is the capital of France?"}]
max_tokens: 100
```

Example: Text summarization

```yaml
system: You are a helpful assistant that summarizes text concisely.
messages: [{"role": "user", "content": "Summarize this article: ${article_text}"}]
max_tokens: 500
```

Google Gemini Chat

api.google_gemini.chat

Send a chat message to Google Gemini AI and get a response

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| api_key | string | No | - | Google AI API key (defaults to env.GOOGLE_AI_API_KEY) |
| model | string | No | gemini-1.5-pro | Gemini model to use |
| prompt | string | Yes | - | The text prompt to send to Gemini |
| temperature | number | No | 1.0 | Controls randomness (0-2). Higher values make output more random |
| max_output_tokens | number | No | 2048 | Maximum number of tokens in response |

Output:

| Field | Type | Description |
|---|---|---|
| text | string | Generated text response from Gemini |
| model | string | Model used for generation |
| candidates | array | All candidate responses |

Example: Simple question

```yaml
prompt: Explain quantum computing in simple terms
```

Example: Content generation

```yaml
prompt: Write a professional email about ${topic}
temperature: 0.7
max_output_tokens: 500
```

OpenAI Chat

api.openai.chat

Send a chat message to OpenAI GPT models

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | string | Yes | - | The message to send to GPT |
| model | select (gpt-4-turbo-preview, gpt-4, gpt-3.5-turbo) | No | gpt-4-turbo-preview | GPT model to use |
| temperature | number | No | 0.7 | Sampling temperature (0-2) |
| max_tokens | number | No | 1000 | Maximum tokens in response |
| system_message | string | No | - | System role message (optional) |

Output:

| Field | Type | Description |
|---|---|---|
| response | string | Response from the operation |
| model | string | Model used for the response |
| usage | object | Token usage statistics |

Example: Simple chat

```yaml
prompt: Explain quantum computing in 3 sentences
model: gpt-3.5-turbo
```

Example: Code generation

```yaml
prompt: Write a Python function to calculate fibonacci numbers
model: gpt-4
temperature: 0.2
system_message: You are a Python programming expert
```

DALL-E Image Generation

api.openai.image

Generate images using DALL-E

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | string | Yes | - | Description of the image to generate |
| size | select (256x256, 512x512, 1024x1024, 1792x1024, 1024x1792) | No | 1024x1024 | Image size |
| model | select (dall-e-3, dall-e-2) | No | dall-e-3 | DALL-E model version |
| quality | select (standard, hd) | No | standard | Image quality (DALL-E 3 only) |
| n | number | No | 1 | Number of images to generate (1-10) |

Output:

| Field | Type | Description |
|---|---|---|
| images | array | List of generated images |
| model | string | Model name or identifier |

Example: Generate artwork

```yaml
prompt: A serene mountain landscape at sunset, digital art
size: 1024x1024
model: dall-e-3
quality: hd
```

Example: Create logo

```yaml
prompt: Modern tech startup logo with blue and green colors
size: 512x512
model: dall-e-2
n: 3
```

AI Agent

llm.agent

Autonomous AI agent with multi-port connections (model, memory, tools)

Parameters:

| Name | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt_source | select (manual, auto) | No | manual | Where to get the task prompt from |
| task | string | No | - | The task for the agent to complete. Use {{input}} to reference upstream data. |
| prompt_path | string | No | {{input}} | Path to extract prompt from input (e.g., {{input.message}}) |
| join_strategy | select (first, newline, separator, json) | No | first | How to handle array inputs |
| join_separator | string | No | `\n` | Separator for joining array items |
| max_input_size | number | No | 10000 | Maximum characters for prompt (prevents overflow) |
| system_prompt | string | No | You are a helpful AI agent. Use the available tools to complete the task. Think step by step. | Instructions for the agent behavior |
| tools | array | No | [] | List of module IDs (alternative to connecting tool nodes) |
| context | object | No | {} | Additional context data for the agent |
| max_iterations | number | No | 10 | Maximum number of tool call rounds |
| provider | select (openai, anthropic, ollama) | No | openai | AI model provider |
| model | string | No | gpt-4o | Specific model to use |
| temperature | number | No | 0.3 | Creativity level (0=deterministic, 1=creative) |
| api_key | string | No | - | API key (defaults to provider env var) |
| base_url | string | No | - | Custom API base URL (for Ollama or proxies) |
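The four `join_strategy` options determine how an array arriving from upstream is turned into a single prompt string. A sketch of the documented behavior (illustrative, not the module's source):

```python
import json

def join_input(items, strategy="first", separator="\n"):
    """Collapse an array input into one prompt string per join_strategy."""
    if strategy == "first":
        return str(items[0])                  # take only the first element
    if strategy == "newline":
        return "\n".join(map(str, items))     # one item per line
    if strategy == "separator":
        return separator.join(map(str, items))  # custom join_separator
    if strategy == "json":
        return json.dumps(items)              # serialize the whole array
    raise ValueError(f"unknown join_strategy: {strategy}")

joined = join_input(["a", "b"], strategy="separator", separator=", ")
```

Whatever the strategy produces is then truncated to `max_input_size` characters before being placed into the task prompt.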

Output:

| Field | Type | Description |
|---|---|---|
| ok | boolean | Whether the agent completed successfully |
| result | string | The final result from the agent |
| steps | array | List of steps the agent took |
| tool_calls | number | Number of tool calls made |
| tokens_used | number | Total tokens used |

Example: Web Research Agent

```yaml
task: Search for the latest news about AI and summarize the top 3 stories
tools: ["http.request", "data.json_parse"]
model: gpt-4o
```

Example: Data Processing Agent

```yaml
task: Read the CSV file, filter rows where status is "active", and count them
tools: ["file.read", "data.csv_parse", "array.filter"]
model: gpt-4o
```

Released under the Apache 2.0 License.