Time to print some money with social media.
The Goal: $3,000/mo selling LinkedIn growth packages to founders.
The Angle: I need to know exactly what content goes viral in specific niches (e.g., “B2B SaaS Sales”). I’m going to scrape LinkedIn feeds, sort by engagement (Likes/Comments), and use AI to rewrite the winning frameworks for my clients.
The Stack: RTILA X + OpenRouter (Claude 3.5) + LinkedIn.
The Plan:
I’m using RTILA to log into a burner LinkedIn account, search a hashtag, and scroll the feed to extract the post text, author, and like count.
Update 1: Virtualized DOM is kicking my ass
LinkedIn uses a virtualized list. As the bot scrolls down, the older posts at the top are removed from the DOM to save memory. If I wait until the end of the scroll to call extractData, I only get the last 5 posts.
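A toy model makes the failure mode concrete (pure illustration, no LinkedIn or RTILA APIs involved): if the virtualized feed only keeps the last few posts mounted, extracting once at the end can only ever see the tail.

```javascript
// Toy model of a virtualized feed: after scrolling through `totalPosts`,
// only the most recent `windowSize` nodes are still in the DOM.
function visibleAfterScroll(totalPosts, windowSize = 5) {
  const ids = Array.from({ length: totalPosts }, (_, i) => i + 1);
  return ids.slice(-windowSize); // older nodes were unmounted during the scroll
}

// Scrolling past 50 posts and extracting at the end yields only posts 46-50.
```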
How do I capture everything while scrolling?
Ah, the classic virtualized DOM! You can’t wait until the end. You need to extract while you scroll.
Instead of using the standard autoScroll helper, you need a custom while loop in your run_script:
- Scroll down a bit.
- Call await helpers.extractData('posts', { saveToApi: false }).
- Push the results into a Set or a Map (keyed by the post URL or author) to deduplicate them in memory.
- Repeat until you hit your limit.
- Finally, call await helpers.saveData('posts', Array.from(yourMap.values())).
This captures the nodes before they disappear from the DOM!
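The steps above can be sketched as a run_script. Assumptions flagged: the helpers.extractData / helpers.saveData signatures are the ones named in this thread (RTILA's actual API may differ), and the 800px scroll step, 1.5s settle delay, and five-stale-round stop condition are illustrative choices, not RTILA defaults.

```javascript
// Merge one scroll batch into a Map keyed by post URL (falling back to
// author + text prefix), so re-extracted posts are deduplicated in memory.
function mergeBatch(seen, batch) {
  for (const post of batch) {
    const key = post.url || `${post.author}::${post.text.slice(0, 40)}`;
    if (!seen.has(key)) seen.set(key, post);
  }
  return seen;
}

// The scroll-and-extract loop itself (runs inside RTILA's run_script context,
// where `window` and `helpers` are assumed to be available).
async function scrapeFeed(helpers, limit = 500) {
  const seen = new Map();
  let stale = 0;
  while (seen.size < limit && stale < 5) {
    window.scrollBy(0, 800);                      // scroll down a bit
    await new Promise(r => setTimeout(r, 1500));  // let the feed render new posts
    const before = seen.size;
    mergeBatch(seen, await helpers.extractData('posts', { saveToApi: false }));
    stale = seen.size === before ? stale + 1 : 0; // bail out if the feed stops yielding
  }
  await helpers.saveData('posts', Array.from(seen.values()));
}
```

The stale-round counter matters: without it, a feed that runs dry before you hit your limit would loop forever.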
Update 2: The Viral Frameworks are flowing 
The custom scroll loop worked perfectly. I scraped 500 posts under #SaaS, sorted them by likes in my database, and fed the top 10 into Claude to extract the copywriting frameworks.
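The sort-and-feed step can be sketched like this. The request shape and model slug follow OpenRouter's documented chat-completions API; the post fields (text, likes) match what the scraper saved, and the prompt wording is just an example, not the one used here.

```javascript
// Sort scraped posts by like count, descending, and keep the top N.
function topPosts(posts, n = 10) {
  return [...posts].sort((a, b) => b.likes - a.likes).slice(0, n);
}

// Build one prompt containing all the winning posts.
function buildPrompt(posts) {
  const samples = posts
    .map((p, i) => `Post ${i + 1} (${p.likes} likes):\n${p.text}`)
    .join('\n\n');
  return `Extract the reusable copywriting frameworks (hook, structure, CTA) from these viral LinkedIn posts:\n\n${samples}`;
}

// Send the top posts to Claude via OpenRouter and return its answer.
async function extractFrameworks(posts, apiKey) {
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'anthropic/claude-3.5-sonnet',
      messages: [{ role: 'user', content: buildPrompt(topPosts(posts)) }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```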
I pitched a founder today using one of the generated frameworks as a sample. He loved it and signed a $1k/mo retainer. The RTILA → AI pipeline is lethal.