Using GLM/Minimax Coding Plan

Hey there, is it possible to use a coding plan in RTILA X? I think it'd be awesome to integrate those as a brain.

Hey!

Right now RTILA X lets you either run local models offline (Qwen, DeepSeek, Llama via GGUF) or use cloud models through OpenRouter. We don't support custom API endpoints yet, though, so there's no way to plug in a coding plan directly.

Totally fair ask; we'll note it as a feature request. In the meantime you can grab the GGUF versions of Qwen or DeepSeek and run them locally through our engine, at zero cost and fully private. Also worth checking out our own fine-tuned model built specifically for RTILA X workflows: rtila-corporation/rtila-assistant-lite-1.5 · Hugging Face

Thanks for the suggestion!