Scope: This guide focuses specifically on configuring StepFun’s step-3.5-flash-2603 or step-3.5-flash model in OpenClaw. For other OpenClaw features such as Skills, Channels, and Multi-Agent setups, refer to the official OpenClaw documentation.

Step 1: Subscribe to Step Plan and Get Your API Key

Before configuring OpenClaw, you’ll need to complete two things: subscribe to Step Plan and obtain an API key.

1.1 Subscribe to Step Plan

Step Plan is StepFun’s dedicated service tier for AI coding agent workflows. You must have an active subscription before you can use step-3.5-flash-2603 or step-3.5-flash through the Step Plan API endpoint. Head to the subscription page and follow the instructions to get started: Step Plan subscription

1.2 Get Your API Key

  1. Go to the StepFun platform at platform.stepfun.ai.
  2. Sign up and log in (skip this if you’ve already created an account during Step Plan subscription).
  3. Navigate to the API Keys section of your console and create a new key.
  4. Copy the key and store it somewhere safe — you’ll need it shortly.

Step 2: Install OpenClaw

If you don’t have OpenClaw installed yet, follow the steps below for your operating system.

System Requirements

| Requirement | Minimum | Recommended |
|---|---|---|
| Node.js | v22+ | v24 |
| RAM | 2 GB | 4 GB+ |
| Disk Space | 1 GB | 5 GB+ |
| OS | macOS / Linux / Windows (WSL2) | |

macOS / Linux

Open a terminal and run the one-line installer:
curl -fsSL https://openclaw.ai/install.sh | bash
This script detects your environment, installs dependencies (including Node.js if needed), and launches the onboarding wizard. You can also install manually via npm:
npm install -g openclaw@latest
openclaw onboard --install-daemon
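The one-line installer checks for Node.js itself, but if you install manually via npm it can help to verify the v22+ requirement first. A small illustrative check (the `node_major_ok` helper is hypothetical, not part of OpenClaw):

```shell
# Returns success if the major version passed in meets the v22+ requirement.
node_major_ok() {
  [ "${1:-0}" -ge 22 ]
}

# Extract the major version from `node --version` (e.g., v24.1.0 -> 24).
major="$(node --version 2>/dev/null | sed 's/^v//' | cut -d. -f1)"

if node_major_ok "${major:-0}"; then
  echo "Node.js v$major: OK"
else
  echo "Node.js v22+ required (found: ${major:-none})"
fi
```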

Windows

OpenClaw on Windows requires WSL2 (Windows Subsystem for Linux). Running it natively on Windows is not recommended due to path differences and toolchain incompatibilities.

1. Install WSL2 (skip if already installed)

Open PowerShell as Administrator and run:
wsl --install
This enables WSL2 and installs the default Ubuntu distribution. Restart your computer when prompted. To install a specific version (e.g., Ubuntu 24.04):
wsl --install -d Ubuntu-24.04
2. Enable systemd (required by the OpenClaw Gateway)

In your WSL terminal, run:
sudo tee /etc/wsl.conf >/dev/null <<'EOF'
[boot]
systemd=true
EOF
Then run wsl --shutdown from PowerShell and reopen your Ubuntu terminal. Verify that systemd is working:
systemctl --user status
3. Install OpenClaw inside WSL

From here, the process is identical to macOS/Linux. Run the installer script in your WSL Ubuntu terminal:
curl -fsSL https://openclaw.ai/install.sh | bash
Tips for Windows users:
  • Install Node.js inside WSL, not on Windows itself. Mixing the two environments causes issues.
  • Keep OpenClaw’s working directory within the WSL file system (e.g., ~/) rather than under /mnt/c/. Cross-filesystem I/O is significantly slower.
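As an illustration of the second tip, a small guard you could drop into your own scripts to catch work happening under the Windows filesystem (the `check_workdir` helper is hypothetical, not part of OpenClaw):

```shell
# Warn when the given directory lives under /mnt (the Windows filesystem),
# where cross-filesystem I/O from WSL is significantly slower.
check_workdir() {
  case "$1" in
    /mnt/*) echo "warning: Windows filesystem (slow I/O)" ;;
    *)      echo "ok: WSL filesystem" ;;
  esac
}

check_workdir "$PWD"
```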

Step 3: Configure <model_id>

This is the core of the guide. OpenClaw supports custom model providers through the models.providers section of its configuration file. There are two ways to set things up: the CLI onboarding wizard or editing the config file directly.
Note: In the examples below, replace <model_id> with either step-3.5-flash-2603 or step-3.5-flash.

Important: Whichever path you take, you’ll end up editing the config file. The CLI wizard handles the basics (provider, API key, etc.), but it assigns default model parameters, notably a contextWindow of only 16K, which is below the recommended 256000 used in the examples here. You’ll need to fix this manually.

Option A: CLI Onboarding + Manual Fix

If this is your first time installing OpenClaw or you’d like to re-run the onboarding flow:
openclaw onboard
When prompted:
  1. Select “Custom Provider” as the model provider.
  2. Choose openai-completions as the API compatibility mode (StepFun’s API is OpenAI-compatible).
  3. Enter the Base URL: https://api.stepfun.ai/step_plan/v1
  4. Paste in the API key you obtained in Step 1.
  5. Enter the Model ID: <model_id>
After the wizard finishes, you must correct the model parameters. Open ~/.openclaw/openclaw.json, find the stepfun provider block the CLI generated, and update the models array to:
"models": [
  {
    "id": "<model_id>",  // Replace with step-3.5-flash-2603 or step-3.5-flash
    "name": "<model_id>",
    "reasoning": true,
    "contextWindow": 256000,  // The CLI defaults to 16384 — change this to 256000
    "maxTokens": 8192
  }
]
Without this fix, OpenClaw will truncate your input at a 16K context window, which means the model won’t be able to handle longer code files or conversation histories.

Option B: Edit the Config File Directly

The OpenClaw configuration file is located at ~/.openclaw/openclaw.json. Open it in your editor of choice and add or modify the following:
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "stepfun/<model_id>" // Replace <model_id> with step-3.5-flash-2603 or step-3.5-flash
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "stepfun": {
        "baseUrl": "https://api.stepfun.ai/step_plan/v1",
        "apiKey": "${STEP_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "<model_id>",
            "name": "<model_id>",
            "reasoning": true,
            "contextWindow": 256000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
Field reference:

| Field | Purpose |
|---|---|
| agents.defaults.model.primary | Sets stepfun/<model_id> as the default model. Format: provider/model |
| models.mode | "merge" adds this custom provider alongside the built-in providers rather than replacing them |
| models.providers.stepfun | The provider identifier. You can name it anything, but something meaningful is best |
| baseUrl | The Step Plan API endpoint: https://api.stepfun.ai/step_plan/v1 |
| apiKey | Your API key. Using ${STEP_API_KEY} references an environment variable (recommended); you can also paste the key directly |
| api | Set to "openai-completions" since StepFun uses the OpenAI Chat Completions format |
| models[].id | The model ID expected by the StepFun API: <model_id> |
| models[].name | A display name for the model, shown only in OpenClaw’s UI |
| models[].reasoning | Set to true to enable chain-of-thought / reasoning mode |
| models[].contextWindow | Maximum input tokens. The examples here use 256000; keep it aligned with the capabilities of the selected <model_id> |
| models[].maxTokens | Maximum output tokens per response. 8192 is a sensible default |
On defaults: If you omit reasoning, contextWindow, or maxTokens, OpenClaw falls back to:
  • reasoning: false
  • contextWindow: 200000
  • maxTokens: 8192
It’s best to set these explicitly so OpenClaw knows exactly what the model can do.
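After editing, you can read the stepfun model entry back to confirm the values OpenClaw will see. A sketch, assuming python3 is available, the config lives at the default path, and the file is strict JSON (no comments left in from the annotated examples above); the `show_stepfun_model` helper is hypothetical:

```shell
# Print the id, contextWindow, and maxTokens of the first stepfun model entry.
show_stepfun_model() {
  python3 -c '
import json, sys
m = json.load(open(sys.argv[1]))["models"]["providers"]["stepfun"]["models"][0]
print(m["id"], m["contextWindow"], m["maxTokens"])
' "$1"
}

cfg="$HOME/.openclaw/openclaw.json"
if [ -f "$cfg" ]; then
  show_stepfun_model "$cfg"
else
  echo "config not found: $cfg"
fi
```

If the output shows 16384 for the context window, the CLI default is still in place.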
Step 4: Set the API Key Environment Variable

If you referenced ${STEP_API_KEY} in the config file (the recommended approach, since it keeps secrets out of your config), you’ll need to export the variable in your shell.

macOS / Linux (Bash / Zsh)

Add the following line to your shell config file:
  • Zsh (default on macOS): edit ~/.zshrc
  • Bash: edit ~/.bashrc or ~/.bash_profile
export STEP_API_KEY="your-api-key-here"
Then reload:
# Zsh
source ~/.zshrc

# Bash
source ~/.bashrc

Windows (WSL2)

Same as Linux. Edit ~/.bashrc inside your WSL Ubuntu terminal:
echo 'export STEP_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc
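To confirm the variable is actually visible to new shells, a quick check you can run after reloading (the `check_step_key` helper is hypothetical, shown for illustration):

```shell
# Report whether STEP_API_KEY is set, without printing the secret itself.
check_step_key() {
  if [ -n "${STEP_API_KEY:-}" ]; then
    echo "STEP_API_KEY is set (${#STEP_API_KEY} characters)"
  else
    echo "STEP_API_KEY is NOT set -- export it and reload your shell config"
  fi
}

check_step_key
```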
Security note: Never commit your API key to version control. If your openclaw.json is tracked by Git, always use the environment variable approach.

Step 5: Verify the Setup

Once everything is configured, run through these checks to make sure it’s working.

Check the Gateway status

openclaw gateway status
Make sure the Gateway is running (it listens on port 18789 by default).

List available models

openclaw models list
Confirm that stepfun/<model_id> appears in the output.

Switch to <model_id>

If you didn’t set it as the default model, you can switch on the fly:
openclaw models set stepfun/<model_id>
You can also switch models inside an OpenClaw chat session using the /model command.

Send a test message

Send any message in the OpenClaw chat interface and verify the model responds. If something goes wrong, check:
  1. Is the API key correct?
  2. Is the environment variable loaded? (Run echo $STEP_API_KEY to verify.)
  3. Can your machine reach api.stepfun.ai?
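The first and third checks can be bundled into a small preflight script. A sketch (the `preflight` helper is hypothetical; the DNS check assumes `getent` is available, as it is on most Linux/WSL systems):

```shell
# Run basic preflight checks before debugging deeper.
preflight() {
  # 1. Is the API key exported in this shell?
  [ -n "${STEP_API_KEY:-}" ] && echo "key: set" || echo "key: missing"

  # 2. Does the StepFun API hostname resolve from this machine?
  if command -v getent >/dev/null 2>&1 && getent hosts api.stepfun.ai >/dev/null 2>&1; then
    echo "dns: ok"
  else
    echo "dns: could not resolve api.stepfun.ai (check network/proxy)"
  fi
}

preflight
```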

FAQ

“Model is not allowed” after configuration

If you have an agents.defaults.models allowlist configured, stepfun/<model_id> needs to be included there. Either add it explicitly:
{
  "agents": {
    "defaults": {
      "models": {
        "stepfun/<model_id>": {
          "alias": "Step Model"
        }
      }
    }
  }
}
Or remove the agents.defaults.models key entirely to disable the allowlist.

Why openai-completions for the api field?

StepFun’s API is fully compatible with the OpenAI Chat Completions request format. When configuring a custom provider in OpenClaw, there are two API types:
  • openai-completions — for providers compatible with OpenAI’s format (the vast majority of third-party providers)
  • anthropic-messages — for providers compatible with Anthropic’s Messages format
StepFun uses the former.

Do I have to use WSL2 on Windows?

In practice, yes. OpenClaw relies on Linux toolchains (make, g++, etc.) and systemd for service management. Running it natively on Windows leads to a variety of compatibility issues. WSL2 is the officially supported path for Windows.
