BuckBrown wrote: I am using a DataGridView Control to represent a silicon wafer with 12,000 die on it
That might be considered abuse of that control. A grid control, any of them, is going to draw each cell to the screen. I don't know what your requirements are but the opposite of that approach is to draw the entire wafer in memory and then paint it to the screen. The difference in performance between those two approaches would be significant.
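A minimal sketch of what I mean (the control name, die layout, and colors here are all invented for illustration): build the whole map into a Bitmap once, and then OnPaint() becomes a single blit:

public ref class WaferPanel : public System::Windows::Forms::Panel
{
    System::Drawing::Bitmap^ waferImage;   // the whole wafer, drawn once

public:
    WaferPanel()
    {
        this->DoubleBuffered = true;   // reduce flicker when blitting
    }

    // Rebuild the in-memory image whenever the die data changes.
    void RebuildWafer(array<System::Drawing::Color>^ dieColors, int cols, int dieSize)
    {
        waferImage = gcnew System::Drawing::Bitmap(this->Width, this->Height);
        System::Drawing::Graphics^ g = System::Drawing::Graphics::FromImage(waferImage);
        for (int i = 0; i < dieColors->Length; i++)
        {
            int x = (i % cols) * dieSize;
            int y = (i / cols) * dieSize;
            System::Drawing::SolidBrush b(dieColors[i]);
            g->FillRectangle(%b, x, y, dieSize - 1, dieSize - 1);
        }
        delete g;
        Invalidate();   // one cheap repaint of the finished image
    }

protected:
    virtual void OnPaint(System::Windows::Forms::PaintEventArgs^ e) override
    {
        if (waferImage != nullptr)
            e->Graphics->DrawImage(waferImage, 0, 0);   // single blit
    }
};

Changing one die's status then means redrawing one rectangle into the bitmap and invalidating, not repainting 12,000 cells.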
-----
Hi Mike,
Yes, drawing the entire wafer in memory and then painting it to the screen is exactly what I want to do, but even with no Paint() or OnPaint() method in my application, the DGV still paints itself. I think Luc, in his response, has the right idea. My response to him explains why I am doing this.
Thanks
Buck
-----
I don't know. I think the way the DataGridView control is painted is a function of the control. Once I display the Form (which takes 10 seconds to paint the DGV control), I can click on another running app at the bottom of the screen, and when I click back on the Form containing the DGV, the control is not repainted all at once; it is still painted cell by cell: row 0 col 1, row 0 col 2, row 0 col 3, etc. I was hoping that by setting the CausesValidation property to false, no painting would be done until an Invalidate() was issued, but that is not the case. I'm sure some Windows guru might know how to do this, but as an old UNIX guy I try not to get into the intricacies of the Windows message pump. I'm going to spend the day researching the DGV control and GDI+, but I have a feeling I'm trying to use this control in a way the original designers were not anticipating.
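One trick I've already turned up in my research (just an assumption on my part that it applies here): the DGV's DoubleBuffered property is protected, so a tiny subclass can switch it on. It should cut flicker, though I doubt it removes the per-cell drawing cost.

public ref class BufferedGridView : public System::Windows::Forms::DataGridView
{
public:
    BufferedGridView()
    {
        // DoubleBuffered is a protected property inherited from Control
        this->DoubleBuffered = true;
    }
};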
Buck
-----
It is (as others mentioned) an abuse of the control. You should think about writing your own control: owner-drawn, with a data structure that fulfills your needs for status changes and selections. Then you will have better control of YOUR paint function and can optimize it (memory DC, and only draw the changes).
This sounds like a bunch of work but the result will be worth it.
Greetings from Germany
-----
I'm trying to mimic the pipe capability of a UNIX command line in Windows (2003 Ent. Ed.).
My unix shell script executes the following:
~~~~~~~~~~~~~~~~~~start~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mknod exp_pipe_dmp p
exp user/passwd@SID parfile=mydb.par &
gzip < exp_pipe_dmp > exp_sid.dmp.gz
~~~~~~~~~~~~~~~~~~~end~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Exp is an Oracle command-line utility that creates a (binary) database dump file.
We execute the exp command in the background, then immediately redirect "<"
the contents of the 'pipe' file to gzip, and gzip then creates its own
file, exp_sid.dmp.gz.
The importance of solving this problem: on UNIX (Solaris, Linux) the exp and the gzip execute simultaneously!
I have not been able to do this using Dave Roth's Win32 Perl packages, Win32::API,
or IPC::Run. I have turned to C, C++, and even C# to try to accomplish this, but have not been able to reproduce it.
I can create a named pipe with the OVERLAPPED flag. I can open the pipe using a simple client as provided by MSDN, but I cannot write to the pipe and read from it simultaneously - the exp process executes first, THEN gzip processes the pipe contents. I tried creating a stream file instead, and exactly the same thing occurs.
My goal is to save time. Any direction toward solving this would be a great help. I document all code with references from ALL sources and would identify contributors to this effort.
Thanks,
Tracy
-----
higgsbo wrote: to create a database dump file
what does that mean? Are you putting an image of the database into the zip file, like a backup or something?
-----
Check the parameters for creating the pipe; there are some values for bidirectional use. Or create two pipes, one for "read" and one for "write": 1=>2 and 2=>1.
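A sketch only (pipe name and buffer sizes are just examples):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Create the pipe duplex and overlapped, so one process can be
    // writing while another is reading.
    HANDLE hPipe = CreateNamedPipe(
        TEXT("\\\\.\\pipe\\exp_pipe_dmp"),              // pipe name
        PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,      // both directions, async
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
        1,                  // one instance is enough here
        64 * 1024,          // out buffer
        64 * 1024,          // in buffer
        0,                  // default timeout
        NULL);              // default security
    if (hPipe == INVALID_HANDLE_VALUE)
    {
        printf("CreateNamedPipe failed: %lu\n", GetLastError());
        return 1;
    }
    // Important: start the reader (gzip side) BEFORE waiting on the writer,
    // then do ConnectNamedPipe / ReadFile / WriteFile with OVERLAPPED structs.
    CloseHandle(hPipe);
    return 0;
}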
Greetings from Germany
-----
Hi,
Please consider the following code fragment in C++/CLI (I am quoting from memory)...
Assembly^ a = Assembly::LoadFile("MyTypes.dll");
Type^ myType = a->GetType("MyNamespace.MyClass");
Object^ obj = Activator::CreateInstance(myType);

IMyInterface^ itf = (IMyInterface^)obj;
The last line throws an invalid cast exception even though MyClass implements IMyInterface. Any idea what's going on and how to solve this problem?
SDX2000
-----
Where is "IMyInterface" located? Is it in "MyTypes.dll"? If not, is the assembly containing "IMyInterface" common to the current code and the loaded assembly? Are you using a "using namespace MyTypes;" somewhere in your code?
"We make a living by what we get, we make a life by what we give." --Winston Churchill
-----
Hi George,
Thanks for responding. IMyInterface is located in the same assembly (say MyLoader) that is trying to load MyTypes.dll; in other words, it's not located in MyTypes.dll.
A reference to MyLoader was added while compiling MyTypes.dll.
I am not using "using namespace MyTypes;" anywhere. Actually, I don't want to; doing so would defeat the purpose of interface-based programming.
You could think of this as a plugin-based application where the main application is written in C++/CLI and the plugins can be written in any language. It's not possible for me to add references to all present and future plugins while compiling MyLoader.
Regards,
SDX.
SDX2000
-----
Please note that two identical interfaces created in two different assemblies are not the same type. Thus, you will get casting errors.
Using namespaces does not defeat the purpose of interface programming. The .NET Framework is an example of that.
"We make a living by what we get, we make a life by what we give." --Winston Churchill
-----
George, I think I was a bit hasty in describing my problem. I agree with you that "Using namespaces does not defeat the purpose of interface programming", but that is not what I meant.
Thanks for your help anyway. I have found the reason why I was getting an invalid cast exception: it can be caused by missing assemblies! (Consider Assembly1::Class1 extends Assembly2::Interface1.) (Note: assemblies can be in the same folder and still be "missing"!) Refer to Suzanne Cook's blog: http://blogs.msdn.com/suzcook/archive/2004/06/02/debugging-an-invalidcastexception.aspx
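For anyone who hits this later, here is roughly the variant that avoids the duplicate-identity problem as I understand it from that blog (a sketch; it assumes IMyInterface lives in an assembly the host already references):

using namespace System;
using namespace System::Reflection;

Object^ LoadPlugin()
{
    // LoadFrom (unlike LoadFile) uses a binding context, so the
    // dependencies of MyTypes.dll resolve against assemblies already
    // loaded in the host, and IMyInterface keeps a single identity.
    Assembly^ a = Assembly::LoadFrom("MyTypes.dll");
    Type^ myType = a->GetType("MyNamespace.MyClass");
    Object^ obj = Activator::CreateInstance(myType);
    return obj;   // safe_cast<IMyInterface^>(obj) should now succeed
}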
The best way to accelerate a Macintosh is at 9.8m/sec-sec.
- Marcus Dolengo
-----
I'm writing a game using C++/CLI (.NET 2.0), taking full advantage of the .NET Framework for the ease of writing dialog boxes, menus, etc., but keeping my core logic and OpenGL code in "native" C++. Since my main window is a .NET window, I am using a Timer control to trigger the rendering cycle. Since I want to draw as fast as possible, the timer interval is set to 1 ms.
The test scene I'm rendering contains a measly 10,000 triangles. My framerate is a poor 64 fps running the executable directly (running in the IDE drops my frames to 5 fps). I don't know a lot about graphics cards, but I do know their performance is sometimes measured in millions of triangles per second. If you do the math, 10k triangles * 64 fps = 640,000 triangles per second. NOT GOOD.
I do most of my development on a notebook that has a screen refresh rate of 60Hz. I thought that might be the problem, so I switched over to my tower, which is running an Nvidia GeForce 5900XT and a refresh rate of 75Hz. Guess what? I still get the 64fps rendering rate.
Is this the best I can expect from a .NET-encompassed application? Would switching back to an older technology like MFC or even plain old Win32 programming increase my framerate? I do have a good C++ background with pointers and all that stuff, so I'm not afraid to roll up my sleeves if that's what it takes.
Thanks in advance for your help.
-----
I would strongly suggest you read my timers article.
Luc Pattyn [Forum Guidelines] [My Articles]
this weeks tips:
- make Visual Studio display line numbers: Tools/Options/TextEditor/...
- show exceptions with ToString() to see all information
- before you ask a question here, search CodeProject, then Google
-----
Very interesting article, and good demo application. That explains why I can't break the 65fps barrier using the timer control to trigger my rendering cycle. I've posted a response farther down this thread that explains why I was using a timer at all. I think your article is a good reason for getting rid of the timer and taking the approach listed in that other response.
-----
Xpnctoc wrote: Since I want to draw as fast as possible, the timer interval is set to 1 ms.
Even if you could get an accurate timer event at 1ms intervals, why would you ever need to redraw at that rate? Even a 20ms interval is way more than sufficient for something a human is going to view, and it would free up a LOT of CPU cycles for doing other (more useful) things.
Mark
Mark Salsbery
Microsoft MVP - Visual C++
-----
The purpose of the timer was not to achieve a certain drawing rate. It was only to use the .NET event-driven model to create a rendering cycle. Having an interval of 1ms was only to eliminate any delay between rendering cycles.
Suppose I have a simple scene with 5000 triangles. That's a piece of cake for even the cheapest graphics card. Add a 20ms delay in between, and we get 49-50 fps. Fine. But suppose the scene to be rendered is sufficiently complex that even on an infinite loop the graphics card could only crank out 30fps. That means we're taking 33ms to render a single frame. If we have to wait 20ms in between each 33ms rendering because of the higher timer interval, that gives us 53ms per frame, or 18-19fps. And THAT starts looking choppy.
The only work-around I've been able to find is to put a loop in the Form::Shown event handler as follows:
void MyRenderingWindow::Shown(System::Object^ sender, System::EventArgs^ e)
{
    // Spin for the lifetime of the form, rendering whenever data is ready.
    while (Form::Created)
    {
        if (bDataAvailable)
            RenderScene();
        Application::DoEvents();   // keep the UI responsive while we spin
    }
}
Eliminating the timer completely gives me a framerate of 80 - 110fps in a range of 8k-12k triangles on my notebook. That's still under 1 million triangles per second, but it's more comforting than the 58-64fps I was getting before. I haven't tested this solution on my tower with the better graphics card yet.
The solution above seems kind of weird and kludgy. I'm open to other suggestions of how to set up a high speed rendering loop, with or without timers.
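One alternative I'm considering is the Application::Idle pattern I've seen suggested elsewhere (untested on my code, so treat it as a sketch):

#include <windows.h>
using namespace System;
using namespace System::Windows::Forms;

static bool QueueIsEmpty()
{
    MSG msg;
    return PeekMessage(&msg, 0, 0, 0, PM_NOREMOVE) == 0;
}

// hooked up once, e.g. in the form constructor:
//   Application::Idle += gcnew EventHandler(this, &MyRenderingWindow::OnAppIdle);
void MyRenderingWindow::OnAppIdle(Object^ sender, EventArgs^ e)
{
    while (QueueIsEmpty())   // Idle fires once per queue drain...
        RenderScene();       // ...so keep drawing until a real message arrives
}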
-- modified at 17:18 Saturday 22nd September, 2007
-----
I've just read this thread and found it very interesting. I'm curious: Any solution in C++/CLI since 2007?
-----
Over the past couple of years I have found a number of factors that enhance or inhibit performance. With regard to this immediate posting, I did remove the timer in favor of a pedal-to-the-metal, untimed loop on the main application form's Shown() event, like this:
void FMain::Shown(System::Object^ sender, System::EventArgs^ e)
{
    while (Form::Created)
    {
        // render and pump messages here, as in my earlier post
    }
}
However, as I said, there are many other factors that I've learned about:
1. Running through Visual Studio is notably slower than running the executable directly. No doubt this is due to the debugging/tracing layer involved.
2. If your video card has V-sync, that can mess you up. It explains why my piece of junk notebook was cranking out 122 FPS while my $250 NVidia graphics card on my tower was still topping out at 75 FPS. The problem is that without V-sync, you can get weird artifacts like image "tearing".
3. At the time I made the original post, all my graphics code was using straight-up immediate-mode commands like glVertex3d(). Since then I have refactored my code to use vertex arrays. This makes for a significant gain -- especially with complex shapes like cylinders. After all, it's a waste to perform all those sin/cos calculations every cycle (a sketch follows below).
While V-sync limits the max frame rate, #1 and #3 allow the frame rate to remain maxed out at a much higher triangle count before things start to drop off.
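For the curious, here is roughly what the cylinder refactor looks like (simplified from my actual code, so treat it as a sketch):

#include <windows.h>
#include <GL/gl.h>
#include <math.h>
#include <vector>

static std::vector<float> g_verts;   // x,y,z triples for a quad strip

void BuildCylinderSide(int slices, float radius, float height)
{
    g_verts.clear();
    for (int i = 0; i <= slices; i++)
    {
        // sin/cos happen ONCE at build time, not every frame
        float a = 2.0f * 3.14159265f * i / slices;
        float x = radius * cosf(a);
        float z = radius * sinf(a);
        g_verts.push_back(x); g_verts.push_back(0.0f);   g_verts.push_back(z);
        g_verts.push_back(x); g_verts.push_back(height); g_verts.push_back(z);
    }
}

void DrawCylinderSide()
{
    // one call replaces a whole loop of glVertex3d() per frame
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &g_verts[0]);
    glDrawArrays(GL_QUAD_STRIP, 0, (GLsizei)(g_verts.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);
}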
-----
Dear Xpnctoc
Thank you for your answer. So, in conclusion, you were able to improve the performance. Still, I am interested in one thing: do you have the impression that execution of managed C++ code is still slower than execution of unmanaged native code?
Regs
U. Kant
-----
That depends on exactly what type of code.
As far as my original post goes, the answer is no, I don't believe it's any slower. But my game is using "mixed mode" programming, and all of my graphics calls are technically unmanaged. What was slowing me down was the use of a timer to regulate the rendering loop, which I never did in strictly unmanaged applications. In other words, I got myself confused.
There are, of course, managed graphics library wrappers (e.g., in C#). I haven't played with those much, but I would expect them to be a little slower, simply because they have to go through the extra step of converting their managed calls into unmanaged calls to the native OpenGL API.
One more thing to be aware of, though it may be a little off topic. The answer to your question outside of graphics programming can be a definitive "yes"! For instance, invoking the Sort() method on a System::Collections::Generic::List class is WAY slower than a hand-coded, native mode Quick Sort.
This is why I'm writing my game in mixed mode. I use .NET to make life easier as far as pop-up messages, dialog boxes, etc (if you've ever used MFC you know what a pain that can be). But all of my core gaming logic and graphics calls are written in unmanaged classes to avoid as much "middle-man" junk as I can.
-----
Dear Xpnctoc
Again, thank you for sharing your experience. You've answered both of my questions. The first thing I was interested in was whether embedding performance-critical code in native objects, and strictly separating this layer from the managed code, actually works without performance going down for some reason.
Thus I will go forward with my plan and implement all of my code in C++/CLI, except the high-performance algorithms in native C++. My strategy is very similar to yours: use .NET for all the stuff around it and native C++ for the critical stuff.
Cheers
UK
-----
Hi,
Please consider the following interface in C++ .NET...
public __interface IAnimal
{
    void Eat(String^ food);
};
When I try to implement this in a C# class...
class Dog : IAnimal
{
    ...
}
It gives me an error "error CS0509: 'Dog': cannot derive from sealed type 'IAnimal'"
What am I doing wrong here?
SDX2000
-----
Don't you want a managed C++ interface class instead?
public interface class IAnimal
{
    void Eat(String^ food);
};
Mark Salsbery
Microsoft MVP - Visual C++
-----
Thanks for pointing this out. I have used C++ and C# but I am new to C++/CLI. This works as desired.
SDX2000