Configuration

NEX can be configured by editing the file located at:

~/.nex/config

This file uses a standard env-style format and accepts key-value pairs in the form:

VARIABLE=value

Changes are picked up automatically the next time NEX is restarted.
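To illustrate the key-value format, here is a minimal sketch of how such an env-style file can be parsed. The exact parsing rules NEX uses are not documented here, so this sketch assumes the common conventions: one KEY=value pair per line, blank lines and #-comments ignored, and optional surrounding double quotes on values.

```python
# Sketch: parse an env-style config (like ~/.nex/config) into a dict.
# Assumed rules: one KEY=value per line, '#' comments and blank lines
# ignored, optional double quotes around values.
def parse_env_config(text: str) -> dict[str, str]:
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")  # split on the first '='
        config[key.strip()] = value.strip().strip('"')
    return config

example = 'LLM_PROVIDER=openai\nLLM_DISPLAY_NAME="GPT 5.2"\n'
print(parse_env_config(example))
```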

Configure License

To activate your NEX license, add the following line to your config file:

LICENSE_KEY=your-license-key-here

On the next startup, NEX detects the license automatically and uses the default AI API gateway.


Configure a Custom LLM Service

NEX allows you to connect your own Large Language Model (LLM) provider instead of using the default gateway.

All providers share these common fields:

Variable              Description
LLM_PROVIDER          The backend provider name (e.g. openai, google, bedrock, ollama).
LLM_API_KEY           Your API key for the provider. For local models, any placeholder value (e.g. any) works.
LLM_MODEL             The model identifier or name.
LLM_DISPLAY_NAME      A friendly name shown in the NEX UI.
LLM_BASE_URL          Optional. Custom API base URL, mainly for local or self-hosted providers.
LLM_PROVIDER_REGION   Optional. Cloud region for providers such as AWS Bedrock.

Info

Only one provider can be active at a time, so define exactly one LLM_PROVIDER block in your config.
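For example, a complete config that combines a license key with a single active provider block might look like this (all values are placeholders):

```
LICENSE_KEY=your-license-key-here

LLM_PROVIDER=openai
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt5.2
LLM_DISPLAY_NAME="GPT 5.2"
```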

Example Configurations

Below are ready-to-use examples for the most common providers.

Google (Gemini)

LLM_PROVIDER=google
LLM_API_KEY=your-api-key-here
LLM_MODEL=gemini-3-pro-preview
LLM_DISPLAY_NAME="Gemini 3 Pro"

OpenAI

LLM_PROVIDER=openai
LLM_API_KEY=your-api-key-here
LLM_MODEL=gpt5.2
LLM_DISPLAY_NAME="GPT 5.2"

AWS Bedrock (Anthropic Claude)

Warning

Make sure your AWS credentials and Bedrock permissions are properly configured in your environment before starting NEX.

LLM_PROVIDER=bedrock
LLM_API_KEY=your-api-key-here
LLM_MODEL=arn:aws:bedrock:us-east-1:456510618647:inference-profile/global.anthropic.claude-sonnet-4-5-20250929-v1:0
LLM_PROVIDER_REGION=us-east-1
LLM_DISPLAY_NAME="Claude Sonnet 4.5"

Ollama and Other Local LLM Providers

Tip

Ensure your local LLM server is running before starting NEX.

LLM_PROVIDER=ollama
LLM_MODEL=gpt-oss:latest
LLM_BASE_URL=http://localhost:11434/v1
LLM_DISPLAY_NAME="GPT-OSS"
LLM_API_KEY=any