ShinobiCharts memory problem


Hi, I really hope you can help, as this is causing a major memory problem in my app's graphs!

I have up to 7 small graphs on an iPad screen, but for testing I am using just 3, and each of these graphs has 3 line series streaming data points from sensors.

The graphs are updated every 100ms in the way shown in the sample accelerometer streaming app from this site.

When I run the profiler I watch the memory usage continuously grow at an alarming rate, climbing to 130 MB and beyond before crashing.

I have tracked the memory usage down to the adding of data to each graph.  Recording the data into a singleton array does not affect memory usage at all: when I commented out the call that updates the graphs, memory creep was near non-existent.  As soon as I uncomment the code that adds the new data points, memory usage goes up, up and away…

When I run the profiler on the sample app it behaves almost the same, just much more slowly, probably because it has only one graph.

I notice an interesting snippet of code in the sample app:

if (_data[i].count > 500) {
    // when we hit 500, remove the first point in the series
    [_data[i] removeObjectsInRange:NSMakeRange(0, 1)];
    [_chart removeNumberOfDataPoints:1 fromStartOfSeriesAtIndex:i];
}

Is this in some way an attempt to manage this problem?  

I tried changing the removal from 1 data point to 10 but do not see much effect, and even with the original code in place there is a slow creep in memory usage.

Does this mean it is a requirement to restrict the number of data points in the graph so as to avoid memory issues?

If so…how can I record more than 500 data points?

I have even tried placing various @autoreleasepool blocks, to no avail.

This has basically nobbled my app and I’m stuck as to the way forward, so I would really appreciate some help…



Hi Paul,

I definitely expect you to be able to record more than 500 data points before experiencing memory pressure!

Have you tried profiling your application’s memory, using generation markers (on the left-hand side when the Allocations instrument is focused) at regular intervals to inspect exactly which new objects are being created? That would definitely help us understand why the memory is growing so rapidly.

We’ve recently fixed some memory issues that were causing a noticeable increase in memory usage for users who were rendering often / streaming. We expect these fixes will be released early next week; however, at this point in time I couldn’t possibly confirm whether the problems you are having are a symptom of the same issue.

As a side note, I’d say that redrawing every chart in your application every 100ms is going to give you performance issues: you’ll constantly have your charts rendering and hogging the main thread. Would your users really be able to take in the meaning of the extra data ten times a second? Your best bet on that front would be to buffer your incoming data and reload your charts at a slower rate. (I know you didn’t ask about this, but I figured I’d chuck the tip out there - take it or leave it!  :laughing:)
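To make that buffering idea concrete, here’s a rough sketch of the shape I mean. Heavy caveats: `_pendingData`, `_fullHistory`, the flush interval, and the sensor callback are all illustrative names I’ve made up for this example, not part of the ShinobiCharts API; the only SDK calls used are the `removeNumberOfDataPoints:fromStartOfSeriesAtIndex:` method from the sample, plus the append/redraw calls, which you should check against the version of the framework you’re on.

```objc
// Hypothetical sketch: collect incoming samples in a buffer and flush them to
// the chart on a slower timer, keeping the full history in a separate array
// while capping how many points the chart itself renders.

- (void)startBufferedUpdates {
    // Reload the chart twice a second instead of on every 100ms sample.
    [NSTimer scheduledTimerWithTimeInterval:0.5
                                     target:self
                                   selector:@selector(flushBufferToChart)
                                   userInfo:nil
                                    repeats:YES];
}

// Illustrative sensor callback - adapt to however your data arrives.
- (void)sensorDidProduceDataPoint:(id)point forSeries:(NSUInteger)i {
    [_fullHistory[i] addObject:point];  // record everything, uncapped
    [_pendingData[i] addObject:point];  // queue for the next chart flush
}

- (void)flushBufferToChart {
    for (NSUInteger i = 0; i < _data.count; i++) {
        NSUInteger newPoints = [_pendingData[i] count];
        if (newPoints == 0) continue;

        [_data[i] addObjectsFromArray:_pendingData[i]];
        [_pendingData[i] removeAllObjects];
        [_chart appendNumberOfDataPoints:newPoints toEndOfSeriesAtIndex:i];

        // Trim only the chart's visible window; _fullHistory keeps every sample.
        if (_data[i].count > 500) {
            NSUInteger excess = _data[i].count - 500;
            [_data[i] removeObjectsInRange:NSMakeRange(0, excess)];
            [_chart removeNumberOfDataPoints:excess fromStartOfSeriesAtIndex:i];
        }
    }
    [_chart redrawChart];
}
```

The point of the split is that recording data and rendering data are separate concerns: the 500-point cap only limits what the chart holds on to, so you can keep as much history as you like in your own array (or write it to disk) without it costing you render-side memory.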