Discussion:
High CPU usage drawing XY Graph
crcragun
2008-06-26 18:10:06 UTC
Permalink
LabVIEW 7.1.1 and 8.5.

 

My application requires an XY Graph that allows the Operator to select which of many channels are plotted on the X and Y axes.  I have implemented a 10,000-point circular buffer that updates every 200 ms.  The problem occurs after testing is complete and the buffer gradually fills with noise for each of the two selected channels.  The result is a tight blob of 10,000 points, all connected with lines, on the XY Graph.  Since auto-scaling is enabled, the blob of data is maximized to fill the XY Graph.  In addition, the Operator controls when the XY plot is enabled/disabled.  The result of this scenario is excessive CPU usage that bogs down the whole computer and makes it nearly unresponsive to Operator selections.
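For reference, the buffering logic is roughly the following (a minimal Python sketch of the idea only, since the actual implementation is a LabVIEW VI; the class and method names are just placeholders):

    from collections import deque

    BUFFER_SIZE = 10000          # keep the last 10,000 samples per channel

    class XYBuffer:
        """Circular buffer: one fixed-length deque per acquired channel."""
        def __init__(self, n_channels, size=BUFFER_SIZE):
            self.channels = [deque(maxlen=size) for _ in range(n_channels)]

        def append_sample(self, sample):
            # One new reading per channel, added every 200 ms
            for ch, value in zip(self.channels, sample):
                ch.append(value)

        def xy(self, x_ch, y_ch):
            # The two Operator-selected channels, returned for X-vs-Y plotting
            return list(self.channels[x_ch]), list(self.channels[y_ch])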

 

The attached VI is a very simple (reduced) example of what I am experiencing.

 

Through some troubleshooting, I have learned the following.

The point style, interpolation, and line thickness all directly affect CPU usage.  A round point requires more CPU than a square point.  A large dot takes more CPU than a small dot.  A thick line takes more CPU than a thin line or no line.

In summary, the more pixels required to represent the graph, the higher the CPU usage.

 

Does anyone have any ideas of how to further reduce the CPU usage either by other XY Graph settings or by some method of intelligently selecting which points to plot?

 

Thank you.


CUP Hog - XY Plot.vi:
http://forums.ni.com/attachments/ni/170/335354/1/CUP Hog - XY Plot.vi
GerdW
2008-06-26 19:10:05 UTC
Permalink
Hi crcragun,
"the more pixels that are required to represent the graph, the higher the CPU usage" - that's true! :smileyvery-happy:
So your options are:
- obvious: draw fewer points (with thinner lines)! Can you really distinguish 10k points on this plot? Especially when they are located on just 5 x-values?
- also obvious: don't draw in each timeout case! Draw only when new points are added or plot options change...
StevenA
2008-06-26 19:40:08 UTC
Permalink
To me it does not seem unreasonable to have 10k points on a graph (the data in the example is only sample data, not real data).  It seems that regardless of the number of points, if the graph data takes up a significant portion of the plot area you see the same results: LV uses a lot of CPU to plot it.  If you have the same number of points following a more continuous line over a larger range, with a lot of empty space in the plot area, LV handles it without any trouble.
Is there any way to make this more efficient other than modifying the data and the range that data occupies?
crcragun
2008-06-26 20:10:05 UTC
Permalink
I understand that 10K points is a lot; however, there are reasons for this.  I can and will look into reducing the number of plotted points.
I should also make one thing clear: any time valid data is plotted and there is actual useful data displayed on the XY Graph, the CPU usage is very low.  When useful data is displayed, the data points are spread out over the whole plot, which is constantly auto-scaling.  The problem only seems to occur when many points fill the plot, forming a single large blob of lines and points.
Thanks
johnsold
2008-06-26 20:10:07 UTC
Permalink
Can you identify that "blob" by some simple measure such as standard deviation or max - min? If so, you could suppress the plot and have a boolean indicator to show "Invalid data" or something. Lynn
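For example, the test could be as simple as this (a rough Python sketch; the span threshold and function names are placeholders that would need tuning for the real signal levels):

    def is_blob(xs, ys, min_span=1e-3):
        """If both axes span less than min_span, the auto-scaled plot would
        collapse into a dense blob of noise, so suppress it instead."""
        return (max(xs) - min(xs)) < min_span and (max(ys) - min(ys)) < min_span

    # In the update loop:
    if is_blob(xs, ys):
        invalid_data_indicator = True      # show "Invalid data" on the panel
    else:
        invalid_data_indicator = False
        update_graph(xs, ys)               # placeholder for the graph write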
Ben
2008-06-26 20:10:09 UTC
Permalink
It sounds like the issue is when you have overlapping points.
Try using the property "defer front panel updates" to defer updates (T) before the data is presented to the graph and then undefer (F) after the update.
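In rough pseudocode terms (Python used here as a stand-in; the calls are placeholders for the LabVIEW "Defer Panel Updates" property node and the graph terminal):

    def update_graph_deferred(panel, graph, xs, ys):
        panel.defer_updates(True)        # T: stop redrawing the front panel
        try:
            graph.set_data(xs, ys)       # write the new XY data to the graph
        finally:
            panel.defer_updates(False)   # F: a single redraw happens here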
Ben
StevenA
2008-06-26 22:10:06 UTC
Permalink
I have experienced high CPU usage on XY graphs in the past... I know there are ways to try and manipulate the data you put on the graph to make it less intensive on your CPU, but I would like to see NI work on making this more efficient.  I ran the posted example on my machine (3 GHz dual-core Xeon with 3 GB RAM and a 256 MB Quadro FX 3500 video card) and it brought it to its knees.
johnsold
2008-06-27 13:10:07 UTC
Permalink
It may be an OS issue, at least in part, or a video driver. I ran the posted VI on my dual G5 (PPC) Mac. The CPU usage does increase with Interpolation on, but the program remains responsive. I ran a heavy-duty number cruncher simultaneously and did not notice any sluggishness in either VI. Other (non-LV) programs also have normal responsiveness while it runs. Both VIs are running while I type this. (It does not make my typing any better, though!) Lynn
Ben
2008-06-27 13:10:08 UTC
Permalink
Again, defer Front Panel updates before the data is presented to the graph.
(See the attached Perf_Graph.PNG below.)
Region A did not use defer FP updates.
Region B DID use defer FP updates
This is the code I used.
(See the attached Defer_FP_Updates.PNG below.)
Ben


Perf_Graph.PNG:
http://forums.ni.com/attachments/ni/170/335635/1/Perf_Graph.PNG


Defer_FP_Updates.PNG:
http://forums.ni.com/attachments/ni/170/335635/2/Defer_FP_Updates.PNG
crcragun
2008-06-27 16:40:05 UTC
Permalink
Ben,
The "Defer Front Panel Updates" is a great suggestion!&nbsp; I have used this&nbsp;property, but never in a situation like this where it is called twice every 200ms.&nbsp; Do you see a fundamental problem with calling this property with such frequency?&nbsp; In addition, the VI that I posted represents the simplest example that captures that displays the problem.&nbsp; My full application includes several other screens that are all open at the same time and include parallel running loops&nbsp;as well as communications with an RT.&nbsp; The VI that includes this Graph also has many Event states driven from front panel buttons.&nbsp; Will defering the front panel updates&nbsp;cause any issues with capturing&nbsp;any of these events?&nbsp; I&nbsp;am sure that I can evaluate your suggestion in my full application based upon CPU usage alone, but is there anything else that I should also look at to ensure that the evaluation is comprehensive?
&nbsp;
Lynn,
Since the various data channels provide a very wide range (~100dB) of signal levels, it would be tricky to detect at what point the graph is displaying meaningless data.
&nbsp;
Steve
I agree that it would be best if XY Graph could simply (some how) deal-with this situation.&nbsp; This would clearly rest at the feet of the LabVIEW developers.
Thanks for all of your feedback!!
Ben
2008-06-27 17:10:09 UTC
Permalink
1) defer only acts on the FP it is associated with. The others will update fine.
2) If you only present data to the graph when the data changes, instead of always sending the same data 5 times a second, the CPU demands will stay low.
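A minimal sketch of that idea in Python (the graph write is a placeholder; in LabVIEW this would just be a case structure around the graph terminal):

    last_plotted = None

    def maybe_update(graph, xs, ys):
        """Only push data to the graph when it differs from what is already
        shown; otherwise skip the redraw entirely."""
        global last_plotted
        new_data = (tuple(xs), tuple(ys))
        if new_data != last_plotted:
            graph.set_data(xs, ys)       # placeholder for the XY Graph write
            last_plotted = new_data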
Ben
crcragun
2008-06-27 17:10:12 UTC
Permalink
Thanks Ben,
The data being sent to the XY Graph is in reality always changing, although it may be by a very small amount.
Is it possible to defer updates for a specific object such as the XY Graph rather than for an entire front panel?
Cliff
altenbach
2008-06-27 17:10:13 UTC
Permalink
You can just place the graph terminal in a case structure and only update every Nth iteration of the loop. Use Quotient & Remainder to divide the iteration count by N and wire the remainder to your case selector. Place the graph update code in the "0" case and leave the default case empty.
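The same logic in a Python sketch (the loop, acquisition, and graph calls are placeholders standing in for the LabVIEW while loop and graph terminal):

    UPDATE_EVERY_N = 5                    # e.g. redraw once per second at 200 ms/iteration

    i = 0
    while running():                      # placeholder for the main while loop
        xs, ys = acquire_and_buffer()     # placeholder acquisition/buffer step
        _, remainder = divmod(i, UPDATE_EVERY_N)   # Quotient & Remainder
        if remainder == 0:                # the "0" case: update the graph
            graph.set_data(xs, ys)
        # default case: do nothing, the graph keeps its last picture
        i += 1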
crcragun
2008-06-27 17:40:12 UTC
Permalink
altenbach
If the defer panel update does not work as hoped, I plan on doing something similar to your suggestion.  I still want the graph to update every 200 ms, but with less data.  I plan on reducing the array size from 10K to 5K and sending every other point from the RT to the Host.  In the end I will probably continue to decimate the RT data until the CPU usage is reasonable.  In addition, changing to a thinner line dramatically reduces the CPU usage.
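The decimation itself is trivial; roughly (a Python sketch, with the factor as a placeholder):

    def decimate(points, factor=2):
        """Keep every factor-th point; factor=2 halves 10k points to 5k."""
        return points[::factor]

    xs_reduced = decimate(xs)   # every other X point sent from the RT to the Host
    ys_reduced = decimate(ys)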
Cliff
StevenA
2008-06-27 18:10:06 UTC
Permalink
There is no question that reducing the amount of data on the graph will help with the performance issue.  I think Ben's idea with the defer panel updates is an interesting one.
As I'm familiar with Cliff's project, let me give a little more perspective on the issue.  The challenge he is having is that this graph monitors continuous data from a machine, and the X and Y channels that are plotted against each other are selectable by the user on the fly, depending on what he needs to monitor.  The channels that can be chosen for the graph have very different ranges.  Keeping auto-scaling turned on is very nice so that the data can be properly displayed.  What happens is that if the data selected by the user goes to zero and all you have is some noise, the plot zooms in (auto-scaling) and turns into this "blob" of lines that the XY Graph really starts to choke on.  Yet at any instant there could be a large transient in the data that needs the resolution of 10k points to see the detail.  The graph has no problem plotting the 10k points when those points are spread out over a large range.
Programmatically reducing the number of data points when the data is near zero and turns into the dreaded "blob" is not trivial, because in some cases, depending on the channels selected, there is valid data in that range.
Thanks for everyone's input :smileyhappy:
Ben
2008-06-27 19:40:05 UTC
Permalink
"Taming the beast" is an art form.
Throttle the updates by presenting the data in 1-second batches.
When applying updates, check the new data's min/max against the previous min/max, do an "auto-scale once now" after the data is presented, then turn auto-scaling off (before the un-defer).
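Putting those pieces together in a Python sketch (the graph and panel calls, and the batch interval, are placeholders for the LabVIEW property and terminal writes):

    import time

    BATCH_SECONDS = 1.0
    last_update = 0.0
    last_extent = None        # (xmin, xmax, ymin, ymax) of the previous batch

    def present_batch(panel, graph, xs, ys):
        """One update per second; rescale only when the data extent changed,
        then leave auto-scaling off."""
        global last_update, last_extent
        now = time.monotonic()
        if now - last_update < BATCH_SECONDS:
            return                               # skip this batch
        panel.defer_updates(True)
        extent = (min(xs), max(xs), min(ys), max(ys))
        graph.set_data(xs, ys)                   # placeholder graph write
        if extent != last_extent:
            graph.autoscale_once()               # "auto-scale once now"
            last_extent = extent
        graph.set_autoscale(False)               # no continuous auto-scaling
        panel.defer_updates(False)               # the un-defer
        last_update = now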
Ben
