from the `free` output we can surmise that each roSGNode was eating on average over 200KB (62MB / 210 nodes ≈ 300KB). lovely, just lovely.
it's not going to work the way you imagined - proactively creating nodes for every possible channel. instead, keep a stable of nodes only for the content visible on-screen (wholly or partially) - when a tile goes off-screen, either release its node or re-purpose it for the next channel.
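that recycle-instead-of-create idea is a plain object pool, capped at the number of tiles that can be on screen at once. here is a minimal sketch - in Java rather than BrightScript, so it can actually be run outside a Roku; `NodePool` and its names are mine, for illustration only:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Hypothetical fixed-size recycling pool: at most `capacity` live objects,
// matching how many tiles can be on screen at the same time.
final class NodePool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final int capacity;
    private int live = 0;

    NodePool(int capacity, Supplier<T> factory) {
        this.capacity = capacity;
        this.factory = factory;
    }

    // Reuse a released node if one exists; create a new one only while
    // under capacity - never one object per possible channel.
    T acquire() {
        if (!free.isEmpty()) return free.pop();
        if (live >= capacity)
            throw new IllegalStateException("more tiles requested than fit on screen");
        live++;
        return factory.get();
    }

    // Called when a tile scrolls off screen; its node goes back for re-use.
    void release(T node) { free.push(node); }

    int liveCount() { return live; }
}
```

with a 5x4 grid you'd size the pool at ~20-30 (visible plus a row of look-ahead), and total node count stays flat no matter how many channels exist.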
PS. i am actually reminded of a case many years ago, when Java was young and still considered a good choice for GUIs. A team in my then company had built an event viewer for an SNA stack on an exotic platform. It was advanced^, it was ambitious... but it turned out to be a usability disaster. They took it to a trade show and it got universally panned by both the target clients and the other corp. branches. The issue was that it was unbearably slow: opening a log would take tens of minutes - if indeed it opened at all and the JVM did not run out of memory first. Not to mention that scrolling the list of events was glacially slow.
We did a post-mortem on the project, all-hands (we were a small branch). The team leader was describing the untenable situation: a log file may contain millions of networking events (something they did not test with), each event consists of fields, and because different types of events had different formats, that resulted in a flurry of sub-classes... and then each field inside was either a primitive type - or, more often, another class. Everything was OOP'd "by the book". "Wait," i said, "are you telling me that you create tens of millions of Java objects while parsing a log file? And that the scrollable grid view of events is actually an in-memory table with a million rows?" "Yah!" he said, "is there any other way?!"

It was a face-palm moment for me - to me it was obvious that digesting the compact binary file format into gazillions of objects was not going to work in this universe (i am a bit foggy, but i think this was back when PCs had 64MB of RAM) - even if it may have seemed the straightforward approach. I recommended a couple of things: don't grind the log files into such fine powder - instead use accessor methods that fetch the info needed based on offsets, retained or calculated; and don't put into the table view orders of magnitude more lines than are visible on screen - instead add and remove entries as it scrolls (yes, you had to re-implement the scroll bar behavior, but that's a small price to pay, all things considered).

A couple of weeks later the project manager came to thank me for the ideas - after they did that, the memory issues vanished (no more GC and swap-file coffee breaks) and the speed jumped: now it was opening logs "instantly" and one could start browsing events "immediately". The updated version received a thumbs-up from the SNA branch and went on to replace the legacy viewer.
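the offset-accessor idea can be sketched in a few lines of Java. everything here is my invention for illustration - `LogIndex`, the 2-byte length-prefixed record format, and the use of a ByteBuffer to stand in for a memory-mapped log file; the point is that the index pass stores one long per record instead of one object graph per event, and fields are decoded only when somebody actually asks:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: scan the log once to record where each event starts,
// then decode an event on demand - no per-event object is kept around.
final class LogIndex {
    private final ByteBuffer log;   // stands in for a memory-mapped file
    private final long[] offsets;   // one long per record, not one object

    LogIndex(ByteBuffer log) {
        this.log = log;
        List<Long> offs = new ArrayList<>();
        int pos = 0;
        while (pos < log.limit()) {
            offs.add((long) pos);
            int len = log.getShort(pos) & 0xFFFF;  // 2-byte length prefix
            pos += 2 + len;                        // skip to next record
        }
        offsets = offs.stream().mapToLong(Long::longValue).toArray();
    }

    int size() { return offsets.length; }

    // Accessor: decode the i-th record's payload only when asked for.
    // A grid view calls this just for the handful of rows on screen.
    String payload(int i) {
        int pos = (int) offsets[i];
        int len = log.getShort(pos) & 0xFFFF;
        byte[] buf = new byte[len];
        ByteBuffer view = log.duplicate();
        view.position(pos + 2);
        view.get(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }
}
```

a million events cost ~8MB of offsets this way, versus hundreds of MB of object graphs - and the table view only ever holds the rows currently scrolled into sight.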
(^) it had fancy filters like in today's Wireshark, but with a visual AND/OR/NOT builder... yours truly had a design finger in that pie