I’m dealing with a highly aggressive anti-bot site. They are randomizing the DOM tree, using canvas elements for text, and injecting fake invisible buttons to trap scrapers. CSS and XPath are completely useless here.
Can the AI features in RTILA X help with this?
Yes! This is exactly where our AI Vision Agent shines. When the DOM is a hostile environment, we stop relying on code selectors and start looking at the page the way a human does.
When you trigger the Vision Agent, RTILA X injects a visual annotation layer over the page. It calculates the bounding boxes of all visible, interactable elements (ignoring hidden traps) and draws numbered boxes over them. It then takes a screenshot and sends it to the LLM.
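The annotation step can be sketched roughly like this. This is a hypothetical illustration, not RTILA X's actual implementation: the element data is mocked, and the visibility rules (non-zero size, not hidden, non-zero opacity) are assumptions about how hidden trap elements would be filtered before the numbered boxes are drawn.

```python
# Hypothetical sketch of the annotation layer: skip hidden "trap" elements,
# then assign a numbered bounding box to each visible, interactable element.
# In the real tool these properties would come from the live DOM.

def annotate(elements):
    """Return {number: (x, y, width, height)} for visible, interactable elements."""
    boxes = {}
    n = 1
    for el in elements:
        visible = (
            el["width"] > 0 and el["height"] > 0
            and not el.get("hidden", False)
            and el.get("opacity", 1.0) > 0
        )
        if visible and el.get("interactable", False):
            boxes[n] = (el["x"], el["y"], el["width"], el["height"])
            n += 1
    return boxes

elements = [
    {"x": 10, "y": 20, "width": 120, "height": 40, "interactable": True},               # real button
    {"x": 0,  "y": 0,  "width": 0,   "height": 0,  "interactable": True},               # zero-size trap
    {"x": 50, "y": 80, "width": 200, "height": 30, "interactable": True, "opacity": 0}, # invisible trap
]

print(annotate(elements))  # only the real button receives a numbered box
```

Only the first element survives the filter, so the screenshot sent to the LLM would show a single numbered box over the genuine button.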
You can just prompt the AI: “Click the checkout button.” The AI looks at the screenshot, identifies the numbered box over the visual checkout button, and tells the engine to click those exact coordinates. It completely bypasses the need for CSS selectors.
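The final hop, from the box number the LLM picks back to a click, can be sketched as clicking the center of that box's bounding rectangle. Again, the function name and the center-point choice are assumptions for illustration:

```python
def click_point(boxes, number):
    """Hypothetical helper: center of the numbered box, i.e. the
    coordinates the automation engine would click."""
    x, y, w, h = boxes[number]
    return (x + w / 2, y + h / 2)

# Suppose the LLM answered "box 3" for the visual "Checkout" button:
boxes = {3: (400, 600, 160, 48)}
print(click_point(boxes, 3))  # (480.0, 624.0)
```

Because the click targets screen coordinates rather than a DOM node, randomized class names and canvas-rendered text never enter the picture.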