Discussion:
LabVIEW 8 performance: FGVs very, very slow. Stuck, need a solution ASAP.
ashm01
2006-01-31 11:10:44 UTC
Permalink
Hi,

 

I have a very STRANGE issue here. I migrated my working LabVIEW 7.1.1 RT code to LabVIEW 8.0 and noticed performance degradation beyond an acceptable level.

 

I use "RT Simple CPU Usage.VI" to check the CPU resources. I usually poll the FPGA cards in a time critical VI, (I have 2 FPGA cards). After polling I use a FGV to transfer it out into a normal priority thread and process it. In Labview 7.1 my CPU Usage never exceed 40%. However in labview 8 it?s a constant 100% . I have attached a screen shot of my 7.1 Vi's which have been migrated to 8.0.

 

The processing would be comparing values, storing to disk, etc. However, the FGVs in LabVIEW 8 are hogging the resources, and that seems to be the bottleneck (I disabled every other loop/task in order to come to this conclusion). I also tried using the so-called shared variables with RT FIFOs enabled, and the result is still the same. If I disable the reading of the FGVs, my idle usage with polling is 20%. This doesn't make sense to me; in theory, what works in LV 7.1 should work in 8.0.

 

Also, I cannot switch to LV7.1 once 8.0 is installed. Does anyone know the dependencies required to bring it back to 7.1 through the MAX "add/remove software"? 

 

 

Please advise.


NP-Reader.JPG:


TC-POLLFPGA's.JPG:
sbassett
2006-02-02 21:10:43 UTC
Permalink
ashm01,
The answer to your second question is that to move from 8.0 Real-Time to 7.1 you will need to uninstall the Network Variable Engine and the Variable Client Support from your RT device. This should allow you to revert to Real-Time 7.1.
For the first question, it is very interesting that it works in 7.1.1 and not in 8.0. Since using shared variables with an RT FIFO also gives a CPU usage of 100%, this makes me think the issue is most likely specific to your code. You may try the following:
1) In the timed loop, set the timing source to the 1 kHz clock and the period to one; this way you are not taking too much CPU and wasting the other 999 cycles
2) In fact, go ahead and just use a regular while loop as time-critical as opposed to using a timed loop. The clock rate will be as mentioned in step 1
3) Also, in the normal-priority loop, increase the wait time a little and observe the CPU usage
4) Use the shared variable with the RT communication wizard and do a simple read and write. Observe whether it goes to 100% or not
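Suggestion 2 above — pacing a plain while loop with a "Wait Until Next ms Multiple"-style delay instead of a timed loop — can be sketched in Python (LabVIEW itself is graphical; the helper names here are my own):

```python
import time

def paced_loop(period_ms, iterations, body):
    """Sketch of suggestion 2: a plain loop paced by sleeping until
    the next period boundary (the role LabVIEW's "Wait Until Next
    ms Multiple" plays), so the CPU is released for the remainder
    of each period instead of spinning."""
    period = period_ms / 1000.0
    deadline = time.monotonic() + period
    for i in range(iterations):
        body(i)                                # the loop's real work
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)              # yield the CPU
        deadline += period

samples = []
paced_loop(1, 5, samples.append)               # 1 ms period, 5 iterations
print(samples)                                 # -> [0, 1, 2, 3, 4]
```

The key point is the sleep: without it the loop consumes its whole period busy-waiting, which is one way a migrated timed loop can pin the CPU at 100%.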
Also, you may want to refer to the following link, which recommends how to prevent the CPU usage from going to 100%:
<a href="http://digital.ni.com/public.nsf/websearch/F4D776187EFCC30986256EFC007FC922?OpenDocument" target="_blank">http://digital.ni.com/public.nsf/websearch/F4D776187EFCC30986256EFC007FC922?OpenDocument</a>
Hope this helps; please let me know if you have more questions. Good luck.
Thanks,
Steven B.
ashm01
2006-02-03 12:40:33 UTC
Permalink
Steven,
I need to add to your comments on my second question. I uninstalled the Network Variable Engine and the Variable Client Support, but you would also need to downgrade NI-RIO to 1.4; only then will it allow you to install LV7.
However, this did not solve my problem, because a 7833R was not recognized in MAX (it needed NI-RIO 1.3, which magically disappeared as an option). Hence, I uninstalled all NI products, 7.1/8.0, and reinstalled. A few hours later:
The same old code is perfectly stable in 7.1.1, allowing me inter-thread transfer at 1 ms. So this may be an issue which someone may want to recreate on their systems.
Until this problem is solved, I cannot migrate to LV8.
Regards,
Ash
ashm01
2006-02-21 14:10:37 UTC
Permalink
It's been a while since I had any updates on this issue. I finally found some time to test the suggestions from the AEs and the forum replies.

Unfortunately, the results are very inconclusive. I tried two examples to isolate the problem.

1) A simple VI with a TC VI writing values through an FGV and an NPL picking them up. The RTSM shows a usage of 10-12%, so can FGVs be ruled out?

2) Based on the above attached example, using real live data from the 7831R & 7833R, my CPU usage is at 80% just for the polling. (I simply don't understand why the same code in LV 7.1.1 does not yield the same result.) It is next to impossible to locate this issue.

If I run my TC loop by itself, I see 5% usage in LV8. If the NPL is used, the CPU is at 80% AFTER I disable/uncheck the memory usage in the RTSM. Unless that option is unchecked, I can't get a CPU usage % display (NaN).

Below are the steps I have taken to ensure that I migrated correctly.

- First opened the FPGA migration utility to convert the FPGA VIs to 8.0.

- Converted them to 8.0.

- Opened my polling code and refreshed/reassigned the resource and VI.

- However, I got an error saying "Resources/Targets not found", because it did not create them under my RT target. (Go figure.)

- So I erased the FPGA targets from the host computer and recreated them under the RT target.

- Added the existing VIs, which resulted in missing I/O mapping.

- Reassigned/remapped all the I/O points for the FPGA VIs within the project.

- Updated the FPGA code with the new I/O alias mapping in the VI.

- Recompiled both these VIs for the right targets (7831R & 7833R).

- Reassociated the RT host VIs with the newly compiled FPGA VIs and targets.

- The above process yields 80% with the NPL VIs and only 5% in the TCL. (Go figure.)

If I have missed something or done something improperly, please advise. As of now, LV8 is still a no-go for me.
Regards,
Ashm01
ashm01
2006-02-03 04:10:35 UTC
Permalink
Hi Steven,

I am seriously contemplating reverting to RT 7.1, considering I don't have much time to do extra R&D. Anyway, to ensure that the problem was not in the code, I have spent three days without much progress changing the things listed below.


- Yes, I did change the timed loop to a regular while loop with a "wait until next ms" of 1 ms, and I played around with the normal-priority VI until I saw the CPU usage come down. Basically, I started the NPL at 100 ms and gradually came down. The conclusion was that anything less than 25 ms caused the CPU to peak at 100%. This latency is simply not acceptable considering what is required.

- I reinstalled LabVIEW 8 on a fresh PC to make sure that no version issues existed between 7.1 and 8.0; still no go.

- Mass compiled all VIs.

- Took the same example above and replaced the FGV with shared variables; the result is the same.

- Added benchmarks for calculating the time required to execute each part of the code, by getting the tick count before and after the code executes inside a sequence structure. The sad answer is that the difference was ZERO.

- Within the timed loops, I also checked whether "Finished Late" ever turned on. Answer: no.
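The tick-count benchmark in the list above can be sketched in Python (the helper name is my own); the LabVIEW version reads Tick Count (ms) before and after the code under test, with a sequence structure enforcing the ordering:

```python
import time

def benchmark_ms(func, *args):
    """Read a timestamp before and after the code under test and
    subtract, as the sequence-structure benchmark does with two
    Tick Count (ms) nodes."""
    t0 = time.perf_counter()   # tick before
    func(*args)
    t1 = time.perf_counter()   # tick after
    return (t1 - t0) * 1000.0  # elapsed milliseconds

print(f"{benchmark_ms(sum, range(100_000)):.3f} ms")
```

Worth noting: with a true millisecond tick, anything faster than 1 ms reads as zero, which is consistent with the "difference was ZERO" result; a sub-millisecond timer or many repetitions would be needed to resolve it.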

Anything else which I can do to investigate?
This one has really got me. Maybe I should have learned from prior experience that any LabVIEW x.0 release is buggy; I prefer the versions which have revisions.

Regards,
Ash
ashm01
2006-02-06 05:10:49 UTC
Permalink
sbassett
2006-02-07 19:10:43 UTC
Permalink
Ash,
Thanks for posting your code. I took a look at it and noticed that you are using the RT CPU Usage VIs that were developed for LabVIEW 7.1. I believe the VIs you are using come from the following link:
Programmatically Monitoring the CPU Usage of a LabVIEW Real-Time Target (ETS Only): http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=BEC1E4CCD3E15E28E034080020E74861&p_node=DZ52103
These VIs have most likely not been tested with LabVIEW 8.0 and could be the cause of the behavior you are seeing. I would suggest removing them from your code and trying the Real-Time System Manager in LabVIEW 8.0 to monitor your CPU usage. You can find the RTSM tool by going to Tools > Real-Time Module > System Manager.
Let me know if you get different results using the RTSM.
Steven B.
sbassett
2006-02-10 18:12:39 UTC
Permalink
Ash,
You are correct that the RTSM will add some overhead to monitor the CPU usage and memory, but that is expected. I tested one of the example programs distributed with LabVIEW RT 7.1 on LabVIEW RT 8.0 with the RTSM and found that they both gave the same CPU usage (about 6%). I would be curious to see whether this result would be the same on your RT device. I have attached the project I used in LabVIEW 8.0; the example can be found in the LabVIEW 7.1 Example Finder under Toolkits and Modules > Real Time > Communication > Functional Global Communication.vi
Thanks,

Steven B.
Message Edited by sbassett on 02-10-2006 11:53 AM


FuncGlobVar.zip:
http://forums.ni.com/attachments/ni/170/166842/1/FuncGlobVar.zip
Avi Harjani
2006-02-23 23:10:35 UTC
Permalink
Hello All!
I have been working on this issue with Steve and Rohit and following it. We set up the system with two R Series boards and compiled the two FPGA VIs. As shown in the attachment, with LabVIEW 8.0 the CPU usage is around 22%. This is when observing the host VI's front panel in LabVIEW Real-Time using the RTSM. When you look at the FPGA VIs' front panels and launch them, they additionally increase the CPU usage on the RT to about 60-70%. This is expected, as the FPGA front panel still takes resources from the RT CPU, and we do not recommend that customers view the front panels of FPGA VIs. The FPGAs are running at a very fast rate, and the real-time side cannot keep up with them. Please let us know if this is an issue. Also, did you try viewing the FPGA VIs' front panels in 7.1 along with the RT VI in the RTSM?
Thanks and Good luck
Best regards
Avi Harjani


CPU Usage.GIF:
http://forums.ni.com/attachments/ni/170/169581/1/CPU Usage.GIF
ashm01
2006-02-24 04:40:41 UTC
Permalink
Hi Avi,

To answer your question: no, I did not open the FPGA VIs' front panels, because that is not required. Ideally, your CPU usage seems correct, because I would assume I would see about the same in RT 7.1.1 (ballpark figure).

However, one issue does come to mind looking at your screenshot. Behind the RTSM window, I see references to the VISA resources. I did this differently: I had removed these references and recreated the FPGA VI references through the project (see the "Converted 7.1.1 code" screenshot). This was done because it seemed neater to have references through the project than a huge string constant.
See the attached screenshots of how I integrated everything into one project (see the sample project structure).
I may be wrong in my implementation, but I thought the main intent and advantage of a project structure is to integrate the complete gamut of all targets. Does having FPGA targets within the project impact the CPU usage, even though the panel is not opened?

After creating the FPGA targets in the project, I would open their references as shown in the "OpenReference" screenshot.

Please advise.

AshM01


Converted 7.1.1 code.GIF:
http://forums.ni.com/attachments/ni/170/169649/1/Converted 7.1.1 code.GIF


Project Sample.JPG:
http://forums.ni.com/attachments/ni/170/169649/2/Project Sample.JPG


OpenReference.JPG:
Avi Harjani
2006-02-28 23:40:45 UTC
Permalink
Greetings!

From the screenshots you posted, it is just fine creating it under folders and letting LabVIEW look up the resource. That is indeed the true benefit of the project structure. We set up a system similar to yours and did not find the CPU usage rise to 80%; it remained at 21%. Refer to the attached screenshots.

1) Did you test the devices and make sure they are not configured for emulator mode, and that you have the R Series boards configured in the PXI system? The reason is that when configured as an emulator, the CPU takes on the extra operation overhead, which bumps it to 80%.
2) I have attached the code in 8.0 (your application) we used to test the system out. Please refer to it.
3) Our recommendation is to use the same setup you have, make a simple application in a SCTL, and observe the CPU usage in the RTSM.

Thanks and hope this helps
Best regards
Avi Harjani


Issue-Ash.zip:
http://forums.ni.com/attachments/ni/170/170593/1/Issue-Ash.zip


LabVIEWReferenceFPGA-Folder.GIF:


LabVIEWReferenceFPGA.GIF:
ashm01
2006-03-03 07:40:40 UTC
Permalink
Steve,

Attached is the screenshot from MAX; I hope it helps. From what I can see, I have some additional drivers. Can they be relevant to the issue?

I doubt that the extra drivers could cause this, though.

Regards,
Ashm01
Message Edited by ashm01 on 03-03-2006 01:34 AM


Max.JPG:
ashm01
2006-03-07 07:10:42 UTC
Permalink
Steven,
I have done the following exercises and hope the results steer us in the proper direction.

- I got rid of the 4 extra drivers from the PXI and rebooted.

- Re-ran the same example we had issues with. No change in the result.

Then I shifted focus to the examples from the Example Finder. I used the "Digital port input and output for R Series" project.

- Since the target was a PCI R Series, I had to create a PXI target and then an FPGA target under it.

- Created a host VI, rebound the FPGA VI, and both read and wrote the input ports. The result was a stable 5% with the 7831R.

To troubleshoot our problems further, I then copied the same VI for the 7833R, recompiled it, created additional references for the second card, etc., and ran it. The result is attached. This is the first time I am using the 7831R and 7833R together; usually they are both the same (either both 7831s or both 7833s).
Regards,
AshM01


BasicIOHost.JPG:
ashm01
2006-03-01 11:40:33 UTC
Permalink
First of all, let me say I appreciate all your efforts so far. However, I don't have good news.

- I took the same code you had attached and opened the same project. I had to recompile the FPGA VIs, since the bitfiles were missing.

- Also, whenever we set the RIO targets as emulators, it shows up in parentheses. I also checked within the properties and ensured that emulation was not turned on.

- I also removed (physically) all third-party cards which use the VISA server, and I still see 80-90%.

Still 80-90%; I am really stumped as to what could be wrong.
How can I troubleshoot this issue further? I have installed LV8 on two different PCs, and both yield the same result.

See the attached screenshots. Also, reverting between LV8 and LV7.1.1 takes some time; I truly wish I had another PXI system where I could test everything without disturbing the 7.1.1 setup.

Regards,
Ashm01


Complete Picture.JPG:
http://forums.ni.com/attachments/ni/170/170685/1/Complete Picture.JPG


Complete Picture2.JPG:
http://forums.ni.com/attachments/ni/170/170685/2/Complete Picture2.JPG
Bassett Hound
2006-03-02 15:10:40 UTC
Permalink
Greetings!
Since we were able to run the same code with different results, I believe the best way to continue troubleshooting this problem is to focus on what the difference between our two setups is. I have posted a screenshot of the drivers used on our PXI-8186. Please view it and determine whether any of your drivers are different.
Regards,
Steven B.


8186_Software.JPG:
Bassett Hound
2006-03-03 17:40:42 UTC
Permalink
ashm01,
The drivers are most likely not the issue, but it should not hurt to remove them so as to make our systems the same. From my observation there are four drivers on your system that are different:
NI-IrDA Rt 1.0.2, NI-1394 External Drive Support 1.3.3.3.0, DAQmx OPC Support 1.0.0, and Language Support for LabVIEW RT 1.0.0.2
We may also need to shift the focus to another application so we can verify that we both receive the same CPU usage on an alternate RT/FPGA code.
Regards,
Steven
Duane Mattheisen
2006-03-30 21:40:09 UTC
Permalink
Hi,
I found the RT Simple CPU Usage.vi, but when I tried to run it with LabVIEW 7.1 it could not find "ni_emb.dll". Do you know where I could get this DLL?

Thanks, Duane
Bassett Hound
2006-03-31 21:10:14 UTC
Permalink
Hi Duane,
The RT Simple Usage VIs should be used on Real-Time targets. If you have installed RT, ni_emb.dll should be located at the following spot:
C:\Program Files\National Instruments\LabVIEW 7.1\vi.lib\addons\rt\ETS
I have attached the library file that contains all the RT Simple Usage VIs. Have you installed RT with your version of LabVIEW 7.1?
Regards,
S. Bassett


RT_CPU_Usage_71.llb:
http://forums.ni.com/attachments/ni/170/177162/1/RT_CPU_Usage_71.llb
ashm01
2006-04-11 14:40:08 UTC
Permalink
First of all, let me thank all the people who have tried to help.
Sorry for the LONG absence. Anyway, I have some good news and something unexpected.
After finally getting sufficient time to try the upgrade procedure again, I came across the following:

- Installed LV 8.0.1 updates and recompiled everything

- As previously mentioned, I am using third-party VISA-driven communications cards in RT. I decided to remove ALL the cards and ALSO the INF files associated with them.

- After adding the cards back individually (one by one) in LV8, the code seems to behave better.

Now the problem:
I use the Number to Boolean Array function in numerous places. What I have noticed is that it slows the code down to unacceptable levels. This did not hamper things much in the previous version. Is there anything different in this version? FYI, I am converting a U8 from my FPGA cards to a Boolean array feeding an array of LED indicators.

- If I break the code up into acquisition and display only, it seems to work fine, because there is no overhead of polling the communications cards.

- If I remove the Boolean conversion (this includes all communication polling etc.) from the code and just display the numeric values, the updates are quite fast. What gives?
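For reference, the conversion in question amounts to extracting eight bits, least-significant bit first; here is a Python sketch (the function name is my own) of what LabVIEW's Number to Boolean Array does for a U8:

```python
def u8_to_bool_array(value):
    """Equivalent of LabVIEW's "Number to Boolean Array" for a U8:
    eight Booleans, least-significant bit first."""
    return [bool((value >> bit) & 1) for bit in range(8)]

print(u8_to_bool_array(0b10100101))
# -> [True, False, True, False, False, True, False, True]
```

The bit extraction itself is cheap on its own; as the benchmarking later in this thread suggests, the cost may lie in redrawing the LED indicators rather than in the conversion.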

Appreciate your assistance.
Regards,
Ashm01
ashm01
2006-04-18 08:10:08 UTC
Permalink
Hi Steven,
The setup has changed a bit now: previously I had two PXI RIO boards; now it is just one 7831R coupled with two cRIO-9151 chassis holding 8 cRIO modules.
I will try to benchmark the execution time today. I was comparing it to the display update from the RT to the host. After disabling all the loops in the RT system (one by one), I could pinpoint the loop where I converted the FGV's U8 values to a Boolean array.

- I then tried typecasting and manipulating the array that way; it made a little difference, but not a satisfactory one.

- Since this loop is primarily used for display purposes, I slowed it down to 50 ms and it seems OK. (However, I would like to improve this.)

I have a few queries regarding LabVIEW 8:

- I mass compiled my VIs and got a few errors in the log file; it kinda concerns me. (Attached is the text file.)

- I miss the Save As (save with options) to a new LLB. Is this still possible in LV8, or do I need to create a build?

- If the above is possible, I can send you the piece of code.

Regards,
Ashm01


New Text Document.txt:
http://forums.ni.com/attachments/ni/170/179877/1/New Text Document.txt
ashm01
2006-04-18 13:10:10 UTC
Permalink
Steven,

I did the benchmarks, and the answer is negative: the Number to Boolean Array conversion does not take up time. However, what I noticed is that the front-panel updates become very slow and the CPU spikes up. I am assuming that embedded LV is trying to update the UI and it just can't keep up with the updates.

I can't seem to figure out what else could be wrong.

Regards,
Ashm01


CPUusage.JPG:


Num2bool.jpg:
Bassett Hound
2006-04-18 22:10:08 UTC
Permalink
Ashm01,

I'll first answer your previous questions. First, the compile errors you received when mass compiling are expected behavior. When mass compiling LabVIEW, the results window may indicate some "Bad VIs"; generally these mass-compile messages do not indicate a problem with the LabVIEW 8.0.1 Update installation. Some VIs that ship with LabVIEW will not compile during the mass compile (the FPGA examples, for instance, will not compile, and you wouldn't want to wait that long either) and will show up as bad VIs. Second, to create .llbs from the project explorer, all you need to do is right-click the build specification and create a Source Distribution. This gives you the same functionality as Save As > Source Distribution in 7.1.

Finally, the LabVIEW FPGA Module currently updates the front panel as fast as the development computer's operating system allows, even if the FPGA VI has not written to the indicator. This is most likely the reason you are seeing so much CPU usage. As a side note on FPGA utilization, every item on the front panel of an FPGA VI creates a register so that the host can use it to communicate. This process takes extra gates; if you do not need to communicate with the host, making the terminals into constants or global variables will save space on your FPGA device.

I hope this helps out.

Regards,

Steven B.
ashm01
2006-04-19 13:10:11 UTC
Permalink
Bassett Hound
2006-04-19 13:40:08 UTC
Permalink
Hey Ashm01,
Sorry for the confusion. The updates to the front panel occur only when you are in development mode and have the front panel of the FPGA VI open. In an actual application the FPGA front panel would not be open, as you mentioned. I assumed (incorrectly) that you were in development mode with the front panel open.
Hope that helps,
Steven
ashm01
2006-04-27 09:40:08 UTC
Permalink
Hey Simon,
I am trying to roll with the massive changes brought about by the upgrade to LabVIEW 8. They don't seem to make sense but do seem to make the RT run better. Go figure.
I have tried many things which have yielded slightly better results; HOWEVER, they still don't compare to RT 7.1.1.

- One of them was slowing the display updates to the front panel to every few hundred ms.

- Replacing the Number to Boolean Array VI with typecasting for the number-to-Boolean conversion.

- Trying to get rid of any strings in special structures, if you used any.

I have tried my code with and without FGVs, and after the above changes you can't tell the difference. This has turned into a whole separate project just to migrate existing code to LabVIEW 8!
Regards,
Ashm01
ashm01
2006-04-27 13:10:12 UTC
Permalink
Simon,
I used the following method:
I use FGVs to transfer data to the remaining VIs, and whenever I update the front-panel controls I take the while-loop iteration modulo 2 and update only on every 2nd iteration.
This somewhat reduced the updates. I have this loop running at 50 ms, so the data updates every 100 ms.
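The modulo throttle described above can be sketched like this (a Python analogy, since LabVIEW is graphical; the helper names are my own):

```python
def run_display_loop(iterations, every=2, update=print):
    """Modulo throttle from the post: the loop itself runs every
    iteration (every 50 ms in the original), but the front-panel
    update fires only on every Nth pass, halving the UI update
    rate relative to the loop rate."""
    for i in range(iterations):
        # ... acquire/process data on every iteration ...
        if i % every == 0:
            update(i)          # refresh indicators on this pass only

updated = []
run_display_loop(10, every=2, update=updated.append)
print(updated)                 # -> [0, 2, 4, 6, 8]
```

With a 50 ms loop and `every=2`, the indicators refresh every 100 ms, matching the figures in the post.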
Regards,
Ashm01
SimonBader
2006-05-05 07:10:08 UTC
Permalink
Hey Ashm01,
I don't use FGVs. I simply have a VI that runs on the RT target with the front panel open on my Windows development computer. Within the VI there are fast loops (100 us and 200 us) which update indicators and controls shown on the front panel. LabVIEW 8.0 tries to refresh front-panel items as fast as possible, which makes the CPU usage sit at almost 100%. I wrote a small test VI (without FPGA) that updates some front-panel items (66 of them) quite fast, and hit the same CPU problem.
Can I configure LabVIEW to only update the front panel every few hundred ms?
Simon


frontpanel.lvproj:
http://forums.ni.com/attachments/ni/170/183227/1/frontpanel.lvproj


frontpanel.vi:
http://forums.ni.com/attachments/ni/170/183227/2/frontpanel.vi

Loading...