The problem:
Right now RTILA X only supports local LLMs (GGUF) and OpenRouter for cloud models. There’s no way to connect a custom OpenAI-compatible API endpoint. Services like the GLM Coding Plan, Minimax, and the Alibaba AI Coding Plan offer high-quality models at very competitive prices, but users can’t plug them in.
This was raised by a community member here: Using GLM/Minimax Coding Plan
The solution:
Add a “Custom Provider” option in the AI settings where users can enter a base URL, API key, and model name for any OpenAI-compatible endpoint. This way anyone can bring their own provider without waiting for official integrations.
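To make the shape concrete, here is a minimal sketch (TypeScript; the config type and field names are hypothetical, not RTILA X internals) of what those three settings would drive: a standard chat-completions request against any OpenAI-compatible base URL.

```ts
// Hypothetical shape of a "Custom Provider" entry (field names are illustrative).
interface CustomProviderConfig {
  baseUrl: string; // e.g. "https://api.example-provider.com/v1"
  apiKey: string;
  model: string;   // whatever model name the provider documents
}

// Minimal chat-completion call against any OpenAI-compatible endpoint.
async function chatCompletion(cfg: CustomProviderConfig, prompt: string): Promise<string> {
  const res = await fetch(`${cfg.baseUrl.replace(/\/$/, "")}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${cfg.apiKey}`,
    },
    body: JSON.stringify({
      model: cfg.model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Provider returned ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because every provider listed above speaks this same protocol, one generic setting covers all of them without per-provider integration work.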
Context:
GLM, Minimax, and Alibaba offer frontier models for as little as $3/month. Many power users already have these subscriptions and would love to use them inside RTILA X.
Current workaround:
You can run GGUF versions of Qwen, DeepSeek, or Llama locally through our built-in engine at zero cost. We also have our own fine-tuned model for RTILA X workflows: rtila-corporation/rtila-assistant-lite-1.5 · Hugging Face
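Purely as an illustration of the same pattern (the built-in engine handles all of this for you), the sketch below assumes you serve a GGUF file yourself with llama.cpp's llama-server, which exposes an OpenAI-compatible endpoint on localhost. It reuses the CustomProviderConfig type and chatCompletion() function from the earlier sketch; the filename and model name are placeholders.

```ts
// Illustration only: RTILA X's built-in engine loads GGUF models directly, but the same
// OpenAI-compatible pattern works if you serve a GGUF model yourself, e.g. with llama.cpp:
//
//   llama-server -m rtila-assistant-lite-1.5.gguf --port 8080
//
// (Use whichever GGUF file you actually downloaded from Hugging Face.)
const localProvider: CustomProviderConfig = {
  baseUrl: "http://localhost:8080/v1", // llama-server serves an OpenAI-compatible API here
  apiKey: "not-needed-locally",        // local servers typically ignore the key
  model: "rtila-assistant-lite-1.5",   // model name is informational for most local servers
};

// Reuses chatCompletion() from the earlier sketch.
chatCompletion(localProvider, "Summarize this page for me.")
  .then(console.log)
  .catch(console.error);
```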