Scope: This guide focuses specifically on configuring StepFun’s `step-3.5-flash-2603` or `step-3.5-flash` model in OpenClaw. For other OpenClaw features such as Skills, Channels, and Multi-Agent setups, refer to the official OpenClaw documentation.
Step 1: Subscribe to Step Plan and Get Your API Key
Before configuring OpenClaw, you’ll need to complete two things: subscribe to Step Plan and obtain an API key.

1.1 Subscribe to Step Plan
Step Plan is StepFun’s dedicated service tier for AI coding agent workflows. You must have an active subscription before you can use `step-3.5-flash-2603` or `step-3.5-flash` through the Step Plan API endpoint.
Head to the subscription page and follow the instructions to get started:
Step Plan subscription
1.2 Get Your API Key
- Go to the StepFun platform at platform.stepfun.ai.
- Sign up and log in (skip this if you’ve already created an account during Step Plan subscription).
- Navigate to the API Keys section of your console and create a new key.
- Copy the key and store it somewhere safe — you’ll need it shortly.
Step 2: Install OpenClaw
If you don’t have OpenClaw installed yet, follow the steps below for your operating system.

System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Node.js | v22+ | v24 |
| RAM | 2 GB | 4 GB+ |
| Disk Space | 1 GB | 5 GB+ |
| OS | macOS / Linux / Windows (WSL2) | — |
macOS / Linux
Open a terminal and run OpenClaw’s one-line installer.

Windows
OpenClaw on Windows requires WSL2 (Windows Subsystem for Linux). Running it natively on Windows is not recommended due to path differences and toolchain incompatibilities.

1. Install WSL2 (skip if already installed). Open PowerShell as Administrator and run `wsl --install`, then restart your machine if prompted.
2. Enable systemd in your WSL distribution by setting `systemd=true` under `[boot]` in `/etc/wsl.conf`, then run `wsl --shutdown` from PowerShell and reopen your Ubuntu terminal.
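Enabling systemd is standard WSL2 configuration rather than anything OpenClaw-specific; a minimal `/etc/wsl.conf` that turns it on looks like this:

```ini
# /etc/wsl.conf -- enable systemd inside the WSL2 distribution
[boot]
systemd=true
```

After saving the file, run `wsl --shutdown` from PowerShell and reopen the terminal so the change takes effect.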
Verify that systemd is working:
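One quick check, assuming a standard Ubuntu-on-WSL2 setup, is to look at what is running as PID 1:

```shell
# If systemd is active, PID 1 is systemd rather than the legacy WSL init
cat /proc/1/comm
```

If this prints `systemd`, you’re set; if it prints `init`, systemd is not enabled yet and you should enable it before continuing.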
Tips for Windows users:
- Install Node.js inside WSL, not on Windows itself. Mixing the two environments causes issues.
- Keep OpenClaw’s working directory within the WSL file system (e.g., `~/`) rather than under `/mnt/c/`. Cross-filesystem I/O is significantly slower.
Step 3: Configure <model_id>
This is the core of the guide. OpenClaw supports custom model providers through the `models.providers` section of its configuration file. There are two ways to set things up: the CLI onboarding wizard or editing the config file directly.
Note: In the examples below, replace `<model_id>` with either `step-3.5-flash-2603` or `step-3.5-flash`.

Important: Whichever path you take, you’ll end up editing the config file. The CLI wizard handles the basics (provider, API key, etc.), but it assigns default model parameters, notably a `contextWindow` of only 16K, that are below the recommended `256000` used in the examples here. You’ll need to fix this manually.
Option A: CLI Onboarding + Manual Fix
If this is your first time installing OpenClaw or you’d like to re-run the onboarding flow:

- Select “Custom Provider” as the model provider.
- Choose `openai-completions` as the API compatibility mode (StepFun’s API is OpenAI-compatible).
- Enter the Base URL: `https://api.stepfun.ai/step_plan/v1`
- Paste in the API key you obtained in Step 1.
- Enter the Model ID: `<model_id>`

After onboarding completes, open `~/.openclaw/openclaw.json`, find the stepfun provider block the CLI generated, and update the models array to:
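Using the field names and recommended values from the Option B table, the updated array would look roughly like this (the `name` value is illustrative):

```json
"models": [
  {
    "id": "<model_id>",
    "name": "Step 3.5 Flash",
    "reasoning": true,
    "contextWindow": 256000,
    "maxTokens": 8192
  }
]
```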
Option B: Edit the Config File Directly (Recommended)
The OpenClaw configuration file is located at `~/.openclaw/openclaw.json`.
Open it in your editor of choice and add or modify the following:
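Putting the fields from the table below together, a working provider block would look roughly like this (the nesting is inferred from the field paths; the display name is illustrative):

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "stepfun/<model_id>"
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "stepfun": {
        "baseUrl": "https://api.stepfun.ai/step_plan/v1",
        "apiKey": "${STEP_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "<model_id>",
            "name": "Step 3.5 Flash",
            "reasoning": true,
            "contextWindow": 256000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```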
| Field | Purpose |
|---|---|
| `agents.defaults.model.primary` | Sets `stepfun/<model_id>` as the default model. Format: `provider/model` |
| `models.mode` | `"merge"` means this custom provider is added alongside built-in providers, not replacing them |
| `models.providers.stepfun` | The provider identifier. You can name it anything, but something meaningful is best |
| `baseUrl` | The Step Plan API endpoint: `https://api.stepfun.ai/step_plan/v1` |
| `apiKey` | Your API key. Using `${STEP_API_KEY}` references an environment variable (recommended). You can also paste the key directly |
| `api` | Set to `"openai-completions"` since StepFun uses the OpenAI Chat Completions format |
| `models[].id` | The model ID expected by the StepFun API: `<model_id>` |
| `models[].name` | A display name for the model, shown in OpenClaw’s UI only |
| `models[].reasoning` | Set to `true` to enable chain-of-thought / reasoning mode |
| `models[].contextWindow` | Maximum input tokens. The examples here use `256000`; keep it aligned with the capabilities of the selected `<model_id>` |
| `models[].maxTokens` | Maximum output tokens per response. `8192` is a sensible default |
On defaults: If you omit `reasoning`, `contextWindow`, or `maxTokens`, OpenClaw falls back to:

- `reasoning`: `false`
- `contextWindow`: `200000`
- `maxTokens`: `8192`

It’s best to set these explicitly so OpenClaw knows exactly what the model can do.
Step 4: Set Up the Environment Variable (Recommended)
If you referenced `${STEP_API_KEY}` in the config file (the recommended approach, since it keeps secrets out of your config), you’ll need to export the variable in your shell.
macOS / Linux (Bash / Zsh)
Add the following line to your shell config file:

- Zsh (default on macOS): edit `~/.zshrc`
- Bash: edit `~/.bashrc` or `~/.bash_profile`
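The line itself follows the usual shell convention (replace the placeholder with the key you obtained in Step 1):

```shell
# Make the Step Plan API key available to OpenClaw
export STEP_API_KEY="your-api-key-here"
```

Then reload the file with `source ~/.zshrc` (or `source ~/.bashrc`), or open a new terminal.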
Windows (WSL2)
Same as Linux. Edit `~/.bashrc` inside your WSL Ubuntu terminal:
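For example, appending the export line and reloading the file in one go (placeholder key, assuming the default Bash setup):

```shell
# Persist the key in ~/.bashrc and load it into the current shell
echo 'export STEP_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc
```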
Security note: Never commit your API key to version control. If your openclaw.json is tracked by Git, always use the environment variable approach.
Step 5: Verify the Setup
Once everything is configured, run through these checks to make sure it’s working.

Check the Gateway status
List available models
Confirm that `stepfun/<model_id>` appears in the output.
Switch to <model_id>
If you didn’t set it as the default model, you can switch on the fly with the `/model` command.
Send a test message
Send any message in the OpenClaw chat interface and verify the model responds. If something goes wrong, check:

- Is the API key correct?
- Is the environment variable loaded? (Run `echo $STEP_API_KEY` to verify.)
- Can your machine reach `api.stepfun.ai`?
FAQ
“Model is not allowed” after configuration
If you have an `agents.defaults.models` allowlist configured, `stepfun/<model_id>` needs to be included there. Either add it explicitly:
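A sketch of the relevant fragment (assuming the allowlist is a plain array of `provider/model` strings):

```json
{
  "agents": {
    "defaults": {
      "models": [
        "stepfun/<model_id>"
      ]
    }
  }
}
```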
Alternatively, remove the `agents.defaults.models` key entirely to disable the allowlist.
Why `openai-completions` for the `api` field?
StepFun’s API is fully compatible with the OpenAI Chat Completions request format. When configuring a custom provider in OpenClaw, there are two API types:
- `openai-completions`: for providers compatible with OpenAI’s format (the vast majority of third-party providers)
- `anthropic-messages`: for providers compatible with Anthropic’s Messages format
Do I have to use WSL2 on Windows?
Effectively, yes. OpenClaw relies on Linux toolchains (`make`, `g++`, etc.) and systemd for service management, and running it natively on Windows leads to a variety of compatibility issues. WSL2 is the officially supported path for Windows.