This document defines the fundamental concepts in AStack that form the building blocks of all AI applications built with the framework. These concepts are: Components, Pipelines, Ports, Agents, Tools, and Model Providers. Understanding these concepts is essential for working with AStack.
For information about the philosophical principles behind these concepts, see Framework Philosophy. For architectural implementation details, see Core Architecture.
A Component is the fundamental unit of computation in AStack. Everything in the framework—from simple data transformers to complex AI agents—is implemented as a component. This unified abstraction enables consistent composition and reuse across the entire system.
Components implement the HLang Node interface and expose two key methods for execution:
| Method | Purpose | Usage Mode |
|---|---|---|
| `_transform($i, $o)` | Port-based reactive processing | Pipeline/reactive mode |
| `run(input)` | Direct execution returning a promise | Standalone mode |
The `_transform` method provides access to the input ports (`$i`) and output ports (`$o`), enabling reactive data-flow processing. The `run` method accepts direct input and returns a promise, supporting standalone execution without pipeline infrastructure.
Component Lifecycle Diagram
Components can operate independently or be orchestrated through pipelines. In standalone mode, components execute immediately when run() is called. In pipeline mode, components are registered, connected via ports, and executed by the pipeline orchestrator.
Sources: README.md:29-30, examples/text-splitter/index.ts:52-76
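A minimal sketch of the standalone `run(input)` contract described above. The `RunnableComponent` interface and `UpperCaser` class here are illustrative stand-ins, not part of the actual framework API:

```typescript
// Hypothetical shape of a standalone component: anything with an async
// run(input) method. Names are assumptions for illustration only.
interface RunnableComponent<I, O> {
  run(input: I): Promise<O>;
}

// A simple data transformer: uppercases text.
class UpperCaser implements RunnableComponent<string, string> {
  async run(input: string): Promise<string> {
    return input.toUpperCase();
  }
}

// Standalone mode: execute immediately, no pipeline infrastructure needed.
const caser = new UpperCaser();
caser.run("hello astack").then((out) => console.log(out)); // logs "HELLO ASTACK"
```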
A Pipeline is the orchestration engine that manages component execution and data flow. Pipelines coordinate multiple components, establish connections between them, and control execution order.
The Pipeline class provides the following core methods:
| Method | Signature | Purpose |
|---|---|---|
| `addComponent()` | `addComponent(name: string, instance: Node)` | Register a component with a unique identifier |
| `connect()` | `connect(sourceName: string, sinkName: string)` | Connect component ports using dot notation |
| `run()` | `run<T>(triggerName: string, params: unknown): Promise<T>` | Execute the pipeline from a specified entry point |
| `getComponent()` | `getComponent(name: string): Node \| undefined` | Retrieve a registered component by name |
Connections use dot notation to specify component ports: `componentName.portName`. For example:

- `textSplitter.text` refers to the `text` input port of the `textSplitter` component
- `agent.out` refers to the `out` output port of the `agent` component

Pipeline Orchestration Flow
Sources: packages/core/src/pipeline/index.ts:8-189, examples/text-splitter/index.ts:59-76
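The registration and wiring flow can be illustrated with a self-contained toy. The `ToyPipeline` below mirrors the method names from the table, but it is a simplified linear stand-in; the real orchestrator routes data reactively through HLang ports rather than walking a chain:

```typescript
// Toy stand-in for the pipeline API shape: addComponent/connect/run.
// Not the real AStack implementation -- a sketch of the usage pattern only.
type Node = { run(input: unknown): Promise<unknown> };

class ToyPipeline {
  private components = new Map<string, Node>();
  private edges = new Map<string, string>(); // "comp.port" -> "comp.port"

  addComponent(name: string, instance: Node): void {
    this.components.set(name, instance);
  }

  // Dot notation: "componentName.portName"
  connect(sourceName: string, sinkName: string): void {
    this.edges.set(sourceName, sinkName);
  }

  // Walk the chain from the trigger, piping each output into the next input.
  async run<T>(triggerName: string, params: unknown): Promise<T> {
    let [compName] = triggerName.split(".");
    let data = params;
    while (compName) {
      const comp = this.components.get(compName);
      if (!comp) throw new Error(`Unknown component: ${compName}`);
      data = await comp.run(data);
      // Follow the outgoing edge from this component, if any.
      const next = [...this.edges.entries()].find(([src]) =>
        src.startsWith(`${compName}.`)
      );
      compName = next ? next[1].split(".")[0] : "";
    }
    return data as T;
  }
}

// Usage, mirroring the connect/run calls described in the text:
const splitter: Node = { run: async (text) => String(text).split(/\s+/) };
const counter: Node = { run: async (words) => (words as string[]).length };

const pipeline = new ToyPipeline();
pipeline.addComponent("textSplitter", splitter);
pipeline.addComponent("counter", counter);
pipeline.connect("textSplitter.out", "counter.in");
pipeline.run<number>("textSplitter.text", "one two three").then(
  (n) => console.log(n) // logs 3
);
```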
The pipeline system automatically creates two hidden components for execution management:
| Component | Class | Port | Purpose |
|---|---|---|---|
| `__BASE_PRODUCER__` | `BaseProducer` | `out` | Injects input data at the pipeline entry point |
| `__BASE_CONSUMER__` | `BaseConsumer` | `in` | Collects output data at the pipeline endpoint |
These internal components are created automatically during the first run() call and connect to the user-specified trigger and sink points.
Sources: packages/core/src/pipeline/index.ts:100-115
Ports are the communication endpoints that enable data flow between components. The port system implements AStack's zero-adaptation layer design, allowing components to connect directly without intermediate adapters.
Components define two types of ports:
| Port Type | Access Method | Direction | Purpose |
|---|---|---|---|
| Input Port | `Component.Port.I(name)` | Incoming | Receives data from upstream components |
| Output Port | `Component.Port.O(name)` | Outgoing | Sends data to downstream components |
Zero-Adaptation Port Connection
The connection is established through the HLang runtime's port system. When `pipeline.connect('A.out', 'B.in')` is called, the pipeline retrieves the output port from component A and the input port from component B, then creates a direct connection through the HLang flow graph.
Sources: packages/core/src/pipeline/index.ts:23-58, packages/core/src/pipeline/index.ts:84-90
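A toy illustration of what "zero-adaptation" means in practice: component A's output port feeds component B's input handler directly, with no adapter object between them. The `OutPort` class and handler names below are illustrative assumptions, not the HLang runtime:

```typescript
// Minimal sketch of a direct port-to-port connection (assumed shapes,
// not the real HLang port classes).
type PortHandler = (data: unknown) => void;

class OutPort {
  private sinks: PortHandler[] = [];
  // Direct connection: the sink handler is attached with no adapter layer.
  pipe(sink: PortHandler): void {
    this.sinks.push(sink);
  }
  emit(data: unknown): void {
    this.sinks.forEach((s) => s(data));
  }
}

// Component A: produces data on its "out" port.
const aOut = new OutPort();

// Component B: consumes data on its "in" port.
const received: unknown[] = [];
const bIn: PortHandler = (data) => received.push(data);

// Conceptually, connect('A.out', 'B.in') boils down to wiring the ports:
aOut.pipe(bIn);
aOut.emit("payload");
console.log(received); // logs ["payload"]
```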
The pipeline resolves port references using the following logic:
1. Parse the connection string into its parts (`componentName.portName`)
2. Look up the component by name (`this.components.get(componentName)`)
3. Retrieve the requested port (`component.I(portName)` or `component.O(portName)`)

This resolution happens in the `from()` and `to()` private methods of the `Pipeline` class.
Sources: packages/core/src/pipeline/index.ts:23-58
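The resolution steps can be sketched as a small helper. The function name, the `PortedComponent` shape, and the returned labels are hypothetical; this is not the actual `from()`/`to()` code:

```typescript
// Sketch of the three port-resolution steps (names are assumptions).
type PortedComponent = {
  I(portName: string): string;
  O(portName: string): string;
};

function resolvePort(
  ref: string,
  components: Map<string, PortedComponent>,
  direction: "I" | "O"
): string {
  const [componentName, portName] = ref.split(".");   // 1. parse the reference
  const component = components.get(componentName);    // 2. look up the component
  if (!component) throw new Error(`Unknown component: ${componentName}`);
  return direction === "I"                            // 3. fetch the port
    ? component.I(portName)
    : component.O(portName);
}

// Toy component whose port accessors just return labels:
const registry = new Map<string, PortedComponent>([
  ["agent", { I: (p) => `in:${p}`, O: (p) => `out:${p}` }],
]);
console.log(resolvePort("agent.out", registry, "O")); // logs "out:out"
```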
An Agent is a specialized component that integrates LLM reasoning with tool execution capabilities. Agents can understand user requests, plan multi-step actions, invoke tools, and maintain conversation context through memory.
Agent Execution Flow
Agents implement a reasoning loop that alternates between LLM inference and tool invocation until the model produces a final response. AStack provides two agent types:
| Agent Type | Class | Capability | Use Case |
|---|---|---|---|
| Base Agent | Agent | Standard LLM reasoning with tools | General purpose agent tasks |
| Streaming Agent | StreamingAgent | Real-time output streaming | Interactive chat applications with live updates |
For detailed information on agent implementations, see Agent System.
Sources: README.md:151-175, README.md:230-260
Tools extend agent capabilities by providing interfaces to external systems, APIs, and operations. Tools follow a standardized interface that enables agents to discover and invoke them without custom integration code.
Tools are defined using the `createTool` factory function with the following structure:
| Property | Type | Purpose |
|---|---|---|
| `name` | `string` | Unique identifier for the tool |
| `description` | `string` | Natural-language description for LLM understanding |
| `parameters` | `object` | JSON Schema defining the expected input parameters |
| `execute` | `function` | Async function implementing the tool logic |
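A sketch of a tool definition following the property table above. The `createTool` factory shown here is a minimal stand-in (the real one likely adds validation and registration), and the weather tool is a stubbed example:

```typescript
// Assumed tool shape, mirroring the table: name, description,
// JSON Schema parameters, and an async execute function.
interface ToolSpec {
  name: string;
  description: string;
  parameters: object;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
}

// Stand-in factory -- not the real createTool implementation.
function createTool(spec: ToolSpec): ToolSpec {
  return spec;
}

const weatherTool = createTool({
  name: "get_weather",
  description: "Look up the current weather for a city.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  // Stubbed tool logic: a real tool would call an external API here.
  execute: async ({ city }) => ({ city, tempC: 21 }),
});

weatherTool.execute({ city: "Berlin" }).then((r) => console.log(r));
```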
Multi-Round Tool Execution
Tools are provided to agents during initialization. When the model provider returns a tool call, the agent looks up the tool by name, validates parameters, executes it, and feeds the result back to the model. This cycle continues until the model generates a final response without tool calls.
Sources: README.md:236-243, README.md:154-177
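The multi-round cycle can be sketched as a bounded loop: call the model, execute any requested tool, feed the result back, and stop when the model answers without a tool call. Everything below (types, the scripted stub model) is an illustrative assumption, not the agent's actual implementation:

```typescript
// Toy version of the tool-call loop described above.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelReply = { content?: string; toolCall?: ToolCall };
type Tool = (args: Record<string, unknown>) => Promise<unknown>;

async function agentLoop(
  model: (history: string[]) => Promise<ModelReply>,
  tools: Map<string, Tool>,
  userInput: string
): Promise<string> {
  const history = [userInput];
  for (let round = 0; round < 10; round++) {         // bounded for safety
    const reply = await model(history);
    if (!reply.toolCall) return reply.content ?? ""; // final answer: stop
    const tool = tools.get(reply.toolCall.name);     // look up tool by name
    if (!tool) throw new Error(`Unknown tool: ${reply.toolCall.name}`);
    const result = await tool(reply.toolCall.args);  // execute it
    history.push(JSON.stringify(result));            // feed result back
  }
  throw new Error("Too many tool rounds");
}

// Scripted stub model: first requests a tool, then gives a final answer.
let calls = 0;
const stubModel = async (history: string[]): Promise<ModelReply> =>
  calls++ === 0
    ? { toolCall: { name: "echo", args: { text: history[0] } } }
    : { content: "final answer" };

const tools = new Map<string, Tool>([["echo", async (args) => args.text]]);
agentLoop(stubModel, tools, "hello").then((a) => console.log(a)); // logs "final answer"
```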
For tool implementation details, see Tool Integration and Tools Package.
Model Providers abstract the integration with LLM services, providing a unified interface for different model backends. This abstraction enables applications to switch between model providers without changing application code.
Model providers implement two core methods:
| Method | Signature | Purpose |
|---|---|---|
| `chatCompletion()` | `chatCompletion(messages, options?): Promise<ChatCompletionResult>` | Non-streaming chat completion |
| `streamChatCompletion()` | `streamChatCompletion(messages, options?): AsyncIterableIterator<ChatCompletionChunk>` | Streaming chat completion with real-time tokens |
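The interface shape can be sketched as follows. Method names match the table, but the result/chunk types and the `EchoProvider` stub are assumptions, not a real integration:

```typescript
// Assumed provider abstraction, mirroring the two methods above.
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}
interface ChatCompletionResult { content: string; }
interface ChatCompletionChunk { delta: string; }

interface ModelProvider {
  chatCompletion(
    messages: ChatMessage[],
    options?: object
  ): Promise<ChatCompletionResult>;
  streamChatCompletion(
    messages: ChatMessage[],
    options?: object
  ): AsyncIterableIterator<ChatCompletionChunk>;
}

// Stub provider that echoes the last message -- swap in a real backend
// (e.g. Deepseek or OpenAI) without changing calling code.
class EchoProvider implements ModelProvider {
  async chatCompletion(messages: ChatMessage[]): Promise<ChatCompletionResult> {
    const last = messages[messages.length - 1];
    return { content: `echo: ${last.content}` };
  }
  async *streamChatCompletion(
    messages: ChatMessage[]
  ): AsyncIterableIterator<ChatCompletionChunk> {
    const { content } = await this.chatCompletion(messages);
    for (const word of content.split(" ")) yield { delta: word };
  }
}
```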
Model providers consume and produce messages in a standardized format:
| Field | Type | Description |
|---|---|---|
| `role` | `'system' \| 'user' \| 'assistant' \| 'tool'` | Message sender role |
| `content` | `string` | Message text content |
| `tool_calls` | `ToolCall[]` | Tool invocations requested by the model |
| `tool_call_id` | `string` | Identifier linking a tool result to its request |
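An example conversation in this format, covering one tool round trip. The field names follow the table; the internal shape of a `ToolCall` entry is an assumption for illustration:

```typescript
// Example message sequence using the standardized fields above.
// The exact ToolCall object shape (id/name/args) is assumed, not documented.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "What's the weather in Berlin?" },
  {
    role: "assistant",
    content: "",
    tool_calls: [{ id: "call_1", name: "get_weather", args: { city: "Berlin" } }],
  },
  // The tool result links back to the request via tool_call_id.
  { role: "tool", content: '{"tempC": 21}', tool_call_id: "call_1" },
];
console.log(messages.length); // logs 4
```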
Model Provider Abstraction Layer
The provider abstraction allows applications to work with any LLM service that implements the ModelProvider interface. Current implementations include Deepseek and OpenAI integrations.
Sources: README.md:246-249
For detailed provider implementation, see Model Provider Interface and Integrations Package.
The following diagram illustrates how these fundamental concepts relate to each other in a complete AStack application:
AStack Concept Architecture
This architecture demonstrates AStack's layered design.
Sources: README.md:21-52, README.md:92-122
Components and pipelines support two distinct execution modes:
| Mode | Invocation | Use Case | Data Flow |
|---|---|---|---|
| Standalone | `component.run(input)` | Simple transformations, testing | Direct input → output |
| Pipeline | `pipeline.run(trigger, params)` | Complex workflows, multiple components | Multi-stage data flow through ports |
In standalone mode, components execute independently with direct input/output. In pipeline mode, components are orchestrated through the HLang flow engine with reactive port-based data flow.
Sources: examples/text-splitter/index.ts:65-76, packages/core/src/pipeline/index.ts:179-185
For detailed information on execution patterns and workflows, see Computation Patterns and Pipeline System.