"joetesta" wrote:
We definitely don't need that much data; I just don't have control over the API, so I have to trim down what's there.
I see. Ha! This is interesting. Can you try using this right after parseJSON():
function deflate(json, deduper = invalid)
    ' the deduper AA doubles as a string pool; case-sensitive mode
    ' keeps distinct values like "ID" and "id" from being merged
    if deduper = invalid then deduper = {} : deduper.setModeCaseSensitive()
    if getInterface(json, "ifAssociativeArray") <> invalid then
        for each k in json
            json[k] = deflate(json[k], deduper)
        end for
    else if getInterface(json, "ifArray") <> invalid then
        for i = 0 to json.count() - 1
            json[i] = deflate(json[i], deduper)
        end for
    else if getInterface(json, "ifString") <> invalid then
        ' replace this copy of the string with the pooled copy of the same value
        s = deduper[json]
        if s = invalid then deduper[json] = json : s = deduper[json]
        json = s
    end if
    return json
end function
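As a quick sanity check that deflate() leaves the content intact (it only collapses equal strings into one shared copy), here's a toy round trip - the JSON literal is just made-up sample data:

json = parseJson("{""a"": ""dup"", ""b"": ""dup"", ""c"": [""dup"", ""dup""]}")
json = deflate(json)
print formatJson(json)   ' prints the same structure and values as before deflating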
To see what effect it may (or may not) have, measure like so: parse, print GC info, deflate, print GC info again - something like:
BrightScript Debugger> foo = parseJson(bar)
BrightScript Debugger> ? runGarbageCollector(): foo = deflate(foo): ? runGarbageCollector()
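Or, if it's easier to run repeatedly, the same measurement wrapped in a function - just a sketch, with "pkg:/big.json" standing in for wherever your payload actually comes from:

sub measureDeflate()
    foo = parseJson(readAsciiFile("pkg:/big.json"))   ' placeholder path
    before = runGarbageCollector()    ' returns an AA with COUNT / ORPHANED / ROOT fields
    foo = deflate(foo)
    after = runGarbageCollector()
    print "objects before:"; before["COUNT"]; " after:"; after["COUNT"]
end sub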
Can you try - and share the results?
PS. I just tested it on a 6MB proper JSON, which sired about 103,000 instances. After deflating, that number went down to 64,000 - so 39,000 strings were eliminated. Per `bscs`, the remaining objects are 16,000 arrays, 6,800 dictionaries and 42,000 strings. In other words, 48% of the strings are gone as duplicates.