|
I'm stuck on an architectural issue and the MS newsgroups are too slow, so here's hoping...
I have a licensing service (hosted by a Windows Service) that essentially uses the number of registered sponsors as the current number of licenses granted. This is done using signed XML, a unique ID (the computer's SID), and a number representing the number of licenses granted. After reading and verifying the license information, the service increments the granted-license count with each registered sponsor and decrements it when a sponsor unregisters*, keeping 0 <= n <= m, where n is the number of registered sponsors and m is the maximum number of licenses. If a client can't get a license, they're booted from the system (after a friendly error message).
* Now, the problem is that the only thing in the entire .NET Framework (I've searched all the extracted IL with regexes) that calls ILease::Unregister (where I would decrement the count) is ClientSponsor::Unregister. Nothing in the .NET Framework calls ClientSponsor::Unregister, however. This becomes a problem in any case because the server basically has to know when a sponsor is dropped, whether it has expired, quit, or been unloaded unexpectedly (perhaps from Environment::Exit after a fatal error, or an OS crash).
So, I must take the DCOM approach and poll the sponsor list that I keep track of (I implement my own ILease). This is not an unacceptable idea, since all of this will probably happen on a local network, but I would rather avoid it.
So, is there any way that the lease can know when a sponsor is dropped without GC? If the item is GC'd on the client, can I expect that the item in the list is GC'd eventually as well? I guess I'm just hoping for some ideas to solve this counter problem.
PS: The remoting interface is merely a marker. All the actual work is done by the lease/sponsor relationship since a Register/Unregister mechanism already exists.
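For reference, the counting logic I'm describing boils down to roughly this sketch (the class and member names are made up; the real thing sits behind my custom ILease):

using System;

class LicenseCounter
{
    private readonly int maxLicenses;   // m, read from the signed XML
    private int granted;                // n, the current number of registered sponsors

    public LicenseCounter(int maxLicenses)
    {
        this.maxLicenses = maxLicenses;
    }

    // Called when a sponsor registers; returns false if no license is free.
    public bool TryGrant()
    {
        lock (this)
        {
            if (granted >= maxLicenses)
                return false;   // client gets the friendly error and is booted
            granted++;
            return true;
        }
    }

    // Called when a sponsor unregisters (or is detected as dropped).
    public void Release()
    {
        lock (this)
        {
            if (granted > 0)
                granted--;
        }
    }
}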
Reminiscent of my younger years...
10 LOAD "SCISSORS"
20 RUN
|
|
|
|
|
Tom Barnaby posted the following message on http://www.dotnet247.com/247reference/msgs/25/128945.aspx:
<clip>
I use what I call a "Disposing Sponsor" in situations where I need timely cleanup of a remote object when its lease expires. This is a server-side sponsor that does NOT renew the lease but just calls Dispose on the sponsored object. Here is an example (lifted right out of my book):
using System;
using System.Runtime.Remoting.Lifetime;

// A server-side sponsor that never renews the lease; it just disposes the object.
class DisposingSponsor : ISponsor
{
    private IDisposable mManagedObj;

    public DisposingSponsor(IDisposable managedObj)
    {
        mManagedObj = managedObj;
    }

    public TimeSpan Renewal(ILease leaseInfo)
    {
        // Dispose the sponsored object and decline to renew the lease.
        mManagedObj.Dispose();
        return TimeSpan.Zero;
    }
}
Then you simply register this sponsor with the remote object. This can be done in a number of places, but a logical place is the remote object's InitializeLifetimeService method:
public override object InitializeLifetimeService()
{
    ILease leaseInfo = (ILease)base.InitializeLifetimeService();
    // Register a CustomerSponsor object as a sponsor
    // (CustomerSponsor is another sponsor from the book, not shown here).
    leaseInfo.Register(new CustomerSponsor());
    // Register a DisposingSponsor object
    leaseInfo.Register(new DisposingSponsor(this));
    // RegisterSponsors(leaseInfo);
    return leaseInfo;
}
HTH
Tom Barnaby
Author: "Distributed .NET Programming in C#"
www.intertech-inc.com
</clip>
I used this approach to solve a similar situation and it worked great. I hope this helps!
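For the license-count scenario above, a rough sketch of wiring it together might look like this (LicenseSession and its fields are hypothetical names, not from Tom's book):

using System;
using System.Runtime.Remoting.Lifetime;

// Hypothetical per-client remote object; disposing it releases one license.
public class LicenseSession : MarshalByRefObject, IDisposable
{
    private static int granted;                      // the granted-license count
    private static readonly object sync = new object();

    public LicenseSession()
    {
        lock (sync) { granted++; }
    }

    public override object InitializeLifetimeService()
    {
        ILease lease = (ILease)base.InitializeLifetimeService();
        // When the lease expires (client quit, crashed, or just went quiet), Dispose runs.
        lease.Register(new DisposingSponsor(this));
        return lease;
    }

    public void Dispose()
    {
        lock (sync) { if (granted > 0) granted--; }
    }
}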
-Guy
|
|
|
|
|
I'm wondering whether a 1.0 .NET App (C#) can work / run on a machine that has the 1.1 .NET Framework Runtime installed?
|
|
|
|
|
Unless an inappropriate .config file tells otherwise, the answer is yes. The application will start, and is very likely to run fine.
Note there is a difference between a machine with only the 1.1 CLR installed, and a machine with both CLRs installed.
|
|
|
|
|
What kind of differences exactly?
|
|
|
|
|
If both CLRs are installed, then the application will start using CLR 1.0, unless a .config file tells otherwise.
If only CLR 1.1 is installed, then the application will start using CLR 1.1.
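As an illustration, which runtime an application binds to can be steered with supportedRuntime entries in its .config file (the version strings below are the usual ones for 1.0 and 1.1; check them against the builds actually installed):

<configuration>
  <startup>
    <!-- Listed in order of preference; the first installed runtime wins. -->
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v1.0.3705" />
  </startup>
</configuration>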
|
|
|
|
|
|
We should make links like this more visible, both in the documentation and in the public newsgroups. Please check:
http://www.gotdotnet.com/team/changeinfo/default.aspx
There is a wealth of information there that talks about the various .config file changes needed to make your application run under the various Fx version scenarios. There's also a link that talks about various breaking API changes.
aL
Albert Ho
.NET Developer Evangelist
Microsoft - Norcal
|
|
|
|
|
There are some 7-8 known issues where applications written for 1.1 break when run on 1.0.
|
|
|
|
|
I am searching for a sample or doc on how to use IrDAListener!
I want to create an IrDA server application in C# with the
new .NET IrDA classes, but I can't find any docs!
Daniel
---------------------------
Never change a running system!
|
|
|
|
|
Daniel S. wrote:
new .NET IrDA classes
? Where do you find those?
Hey leppie! Your "proof" seems brilliant and absurd at the same time. - Vikram Punathambekar 28 Apr '03
|
|
|
|
|
|
Consider this quote from the .NET documentation.
"Value types are sealed, which means that no other type can be derived from them. However, you can define virtual methods directly on the value type, and these methods can be called on either the boxed or unboxed form of the type. Although you cannot derive another type from a value type, you might define virtual methods on a value type when you are using a language in which it is more convenient to work with virtual methods than with nonvirtual or static methods."
I want to create a method that acts on integer values, but I cannot find anywhere online or in the documentation how to accomplish this. To give an example, I want to do this:
Int32 i=0x0004;
i.ReverseByteOrder();
byte[] array=i.GetBytes();
where ReverseByteOrder and GetBytes are functions I define. Is this possible? The documentation implies yes, but am I misunderstanding it?
Any help would be appreciated. Thank you!
|
|
|
|
|
PepeTheCow wrote:
The documentation implies yes, but am I misunderstanding it?
The documentation doesn't imply that, but it isn't very straightforward about it either.
What it is saying is that, at the CLR level, a value type can define virtual methods, and those methods can be called on the boxed or unboxed form.
In C# you can define your own value type (a struct) and give it methods, but you can't mark a new method as virtual: virtual means a derived class can override the implementation, and to override you must first inherit from the struct, which isn't allowed. The only virtual methods you can override on a struct are the ones inherited from Object (ToString, Equals, GetHashCode). The documentation is clarifying why virtual is allowed at all on value types, and that is for the times when you are using your value type from a language in which a virtual method may be easier to use/call than a non-virtual or static one. Either way, you cannot add methods like ReverseByteOrder to the existing Int32 type; you would define them on your own type or as static helpers instead.
That's how I read it anyway
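For instance, the closest you can get in C# is a helper or your own wrapper struct, roughly like this sketch (the type and method names just mirror the question and are made up):

using System;

// A value type with its own methods; you still can't add methods to Int32 itself.
struct ReversibleInt32
{
    private int value;

    public ReversibleInt32(int value)
    {
        this.value = value;
    }

    // Return a copy with the byte order of the wrapped value swapped.
    public ReversibleInt32 ReverseByteOrder()
    {
        byte[] bytes = BitConverter.GetBytes(value);
        Array.Reverse(bytes);
        return new ReversibleInt32(BitConverter.ToInt32(bytes, 0));
    }

    public byte[] GetBytes()
    {
        return BitConverter.GetBytes(value);
    }

    // Overriding one of Object's virtual methods is allowed on a struct.
    public override string ToString()
    {
        return value.ToString();
    }
}

Usage would then look something like byte[] array = new ReversibleInt32(0x0004).ReverseByteOrder().GetBytes();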
James
"It is self repeating, of unknown pattern"
Data - Star Trek: The Next Generation
|
|
|
|
|
Currently I am making a multi-layer application, like Photoshop or Painter, on the .NET Framework using C#.
When I came to the point of implementing the multi-layer part, I got stuck. I've tried to use the .NET Framework library only, but it doesn't seem to work.
I create several bitmaps to store each layer's drawing, for double buffering to the view; that's OK.
The thing is I cannot get the desired effect on the view (the panels that contain the painting). The effect I want is:
If I draw on a deeper layer (panel), the strokes/shapes on the upper layers (panels) should form a mask that covers the strokes/shapes I am drawing.
I tried one method: draw to the buffer and invalidate the panel on every invocation of the MouseMove event handler (whenever mouse movement is captured), but it caused serious flicker.
Hope I've given a clear explanation of my problem. Could anyone give me advice on how to implement this?
Thanks
|
|
|
|
|
You have to create a region which excludes the shapes.
"Do unto others as you would have them do unto you." - Jesus
"An eye for an eye only makes the whole world blind." - Mahatma Gandhi
|
|
|
|
|
but I want the alpha color effect, too. A control's region only define a geometry shape.
I have almost realize the layer function using override OnPaintBackground() method.
Every stroke I draw, I draw it on the specific bitmap buffer offscreen( each layer is a offscreen bitmap )
and imediately refresh the control to call OnPaintBackground() method which in it I draw the offscreen bitmap buffers to screen.
the point is to use the OnPaintBackground() method, not the OnPaint() method. OnPaint() method seems to cause flicker, as OnPaintBackground() method has no flicker at all.
But I don't understand the inner reason why this would work, can anyone tell me why?
|
|
|
|
|
LynnSong wrote:
If I draw on a deeper layer (panel), the strokes/shapes on the upper layers (panels) should form a mask that covers the strokes/shapes I am drawing.
So you have one panel for each layer of your image?
You would be better off having one control, with a property on that control to tell it which layer (i.e. bitmap) should be modified when you 'draw' on it. Then in the constructor for that control set the double-buffer style bits (this.SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true); ).
Now you just need to make the mousedown/mousemove events modify the image and call Invalidate with the changed area, while OnPaint does the drawing (hopefully smartly, i.e. drawing no more than needs to be drawn).
The OnPaint would just go through each bitmap and draw it on the Graphics object passed in.
Optimizations left to you of course
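A rough sketch of such a control might look like this (LayeredCanvas and the member names are made up):

using System;
using System.Drawing;
using System.Windows.Forms;

// Hypothetical single control that owns all of the layer bitmaps.
public class LayeredCanvas : Control
{
    private Bitmap[] layers;
    private int currentLayer;    // which bitmap the 'drawing' operations modify

    public LayeredCanvas(int layerCount, Size canvasSize)
    {
        // Let the framework double-buffer the paint cycle for us.
        SetStyle(ControlStyles.DoubleBuffer |
                 ControlStyles.UserPaint |
                 ControlStyles.AllPaintingInWmPaint, true);

        layers = new Bitmap[layerCount];
        for (int i = 0; i < layerCount; i++)
            layers[i] = new Bitmap(canvasSize.Width, canvasSize.Height);
    }

    public int CurrentLayer
    {
        get { return currentLayer; }
        set { currentLayer = value; }
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        // Draw from the bottom layer up, so upper layers cover (mask) the lower ones.
        for (int i = 0; i < layers.Length; i++)
            e.Graphics.DrawImage(layers[i], 0, 0);
    }
}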
James
"It is self repeating, of unknown pattern"
Data - Star Trek: The Next Generation
|
|
|
|
|
I have done it almost the same way as you said.
The difference is that I override the OnPaintBackground() method instead of OnPaint() to redraw my offscreen layer buffers onto the client. I didn't use the Control.SetStyle() method; it seems that OnPaintBackground() does an excellent job for me.
So, is it the call to this.SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true); that stops the flickering? I am struggling with flicker these days, because what I am making is a paint program and also an application with a totally graphical (non-traditional Windows desktop looking) UI.
Another question: does 'call Invalidate with the changed area' in your reply mean that the call to SetStyle() enabled partial invalidation (which stops flicker), or should I do that work myself?
|
|
|
|
|
LynnSong wrote:
this.SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true);
This code sets up double buffering, which is similar to what you are already doing, in that you do all of your drawing to an off-screen buffer and then the framework draws it on the screen for you.
There is an added advantage to letting the framework do this: you get some speed increases because the framework ensures the underlying BitBlt to the screen works as fast as possible (compatible bitmaps, etc.).
Now to the issue of the flicker. Flicker is caused by taking an image, clearing it, then drawing another image on top of it. The eye picks up each of those three 'frames' and thus you see flicker. Usually this happens because OnPaintBackground erases the image with a single color, then OnPaint replaces that with the image. But from what you say, you are doing all of your drawing in OnPaintBackground, so as long as you don't draw to the Graphics object representing the screen until you are ready to erase what is there, you shouldn't be seeing flicker. Unless, of course, you call base.OnPaintBackground, which is going to cause flicker as it draws the solid background for you and then you draw your background.
There is one consideration when dealing with a DoubleBuffer'd control. You only get the DoubleBuffer when you do your drawing during the Paint event; at any other time you will be drawing directly to the screen, which negates the point of using the DoubleBuffer. This is why I mentioned calling Invalidate when you need something to be redrawn: Invalidate will cause OnPaint to be called, which uses the DoubleBuffer. As an optimization you can call Invalidate and pass in just the area that needs to be updated.
Even if you just have a dumb implementation of OnPaint (i.e. it just redraws everything) you should see some improvement by calling Invalidate with the changed areas, because the Graphics object shouldn't be drawing stuff outside of the ClipRectangle anyway, but you can make it better yet by having OnPaint only draw what needs refreshing (stuff within the ClipRectangle).
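As a rough illustration (continuing the hypothetical layered-control idea, with made-up field names), the mouse handling could invalidate only the dirty rectangle:

private Bitmap[] layers;     // one bitmap per layer
private int currentLayer;
private Pen pen;
private Point lastPoint;
private bool drawing;

protected override void OnMouseMove(MouseEventArgs e)
{
    base.OnMouseMove(e);
    if (!drawing) return;

    Point current = new Point(e.X, e.Y);
    using (Graphics g = Graphics.FromImage(layers[currentLayer]))
    {
        g.DrawLine(pen, lastPoint, current);
    }

    // Bounding box of the new segment, padded by the pen width.
    Rectangle dirty = Rectangle.Union(
        new Rectangle(lastPoint, new Size(1, 1)),
        new Rectangle(current, new Size(1, 1)));
    dirty.Inflate((int)pen.Width + 1, (int)pen.Width + 1);

    lastPoint = current;
    Invalidate(dirty);   // OnPaint then runs with ClipRectangle limited to this area
}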
HTH,
James
"It is self repeating, of unknown pattern"
Data - Star Trek: The Next Generation
|
|
|
|
|
base.OnPaintBackground
It's true this call will cause flicker.
What I have done up to now:
UI - I draw background segments on the client according to the client size. I want to put a size grip in the southeast corner of the client for when the user changes the client size. The code in OnPaint() redraws the background segments' bitmaps at the new location and with the new length (I use the trick of drawing four 1-pixel-high/wide bitmaps repeatedly in the border areas). Because there is more calculation in the redrawing procedure, it seems even OnPaintBackground() could not stop the flicker, so I use SetStyle(). I haven't finished the size grip control, so I don't know whether changing the client size continuously (along with continuous calculation in the redraw procedure) would still cause flicker even with ControlStyles.DoubleBuffer set to true.
The reason I paint the client with bitmaps rather than generating a single background bitmap is memory use.
If I used a single offscreen bitmap buffer to redraw the whole client, then at my desktop resolution of 1400x1050 my application (20 KB) in full-screen mode would take up 30 MB of memory in all.
Layer draw - I draw on two bitmap buffers and, when the client is invalidated, redraw them onto the client in order. When I use OnPaintBackground() to redraw, even continuous drawing of strokes (along with continuous invalidating) does not show even slight flicker. And drawing on buffers lets another tool in my application be made easily: the zoom tool. That is, the brush tool only knows the canvas size and draws on one rectangular area; the View (a control) calculates the coordinate transformation for the brush tool.
I still want to understand all these things more deeply. I tried this: under the current version of my application (about 28 KB), it starts up with a memory use of around 14 MB (does a .NET application on the CLR always take that much memory???); when enlarged to full screen (1400x1050), the memory use rises to about 15 MB. So what I want to know is: what has the .NET Framework done for me after I set ControlStyles.DoubleBuffer to true? And how does the underlying BitBlt you've mentioned work? It seems there is not a whole underlying bitmap being generated for buffering.
Thanks for your replies, by the way ;)
|
|
|
|
|
LynnSong wrote:
it starts up with a memory use of around 14 MB (does a .NET application on the CLR always take that much memory???)
Yes, this memory is used for a couple of things:
Number one is the Garbage Collector (GC): when you start your application the Framework allocates a largish chunk of memory which it uses when you create new objects.
Number two is memory used by the framework itself: in a basic Windows Forms application you are loading at least the following assemblies, no matter what is actually on the form.
mscorlib - houses the most basic classes of the framework
System.Xml - used to parse the application configuration files as well as the computer/user configuration files.
System.Drawing - as the name suggests the classes used for drawing are located here
and lastly: System.Windows.Forms - houses the Windows Forms classes
Looking at the file sizes of those dlls comes up to 5.42MB and that isn't including the various libraries used by the framework that make it all happen (fusion.dll, mscorwks.dll/mscorsvr.dll, mscorjit.dll, plus many others).
LynnSong wrote:
when enlarged to full screen (1400x1050), the memory use rises to about 15 MB
I don't think there is much you can do about this, 1400x1050 at 24bit color comes out at 4,410,000 bytes, if it is 32bit color then you're talking 5,880,000 bytes (approx. 4.2 and 5.6MB respectively).
What you can do is take advantage of the IDisposable pattern where you can; all (or nearly all) of the System.Drawing classes offer it, so you can free up valuable system resources ASAP.
C# offers a handy keyword (using) which wraps IDisposable, so if you need to make use of a Brush or a Bitmap within a single method you can use it and dispose of it easily.
For instance, if you need to draw something to your bitmap (using made up variables of course )
using (Brush brush = new SolidBrush(canvas.ForeColor))
using (Pen pen = new Pen(brush, canvas.PenWidth))
using (Graphics g = Graphics.FromImage(bitmaps[canvas.CurrentLayer]))
{
    g.DrawLine(pen, lastPoint, currentPoint);
}
But what about the GC you may ask?
The GC works well, but you never know exactly when the GC will do its work so when you are dealing with a scarce resource such as system handles or database connections you are far better off doing your work and returning those handles/connections as soon as possible.
LynnSong wrote:
How does the underlying BitBlt you've mentioned work?
At the heart of GDI+ (the System.Drawing stuff) and the .NET framework you still have Windows running it all, so you can count on most things you do going down to the basic levels at some point in time.
In this case GDI+ sits on top of the basic routines in Windows for drawing, GDI. So when you make calls into GDI+ at some point in time it is probably going to hit GDI code.
In this case when you want to draw one bitmap onto another the GDI routine is called BitBlt . BitBlt's function is to take pixels from one bitmap and transfer them to another, but in order to do that it may have to do some math wizardry on the source pixels in order to get it into the same format as the destination pixels. For instance if the source bitmap was a 16-bit color image and the destination image 24-bit color it couldn't just copy the memory from the source and place it in the destination, it would need to convert the memory to account for the different color depth.
One way to speed up the call to BitBlt is to make sure the two bitmaps are 'compatible', so BitBlt can just copy pixels rather than having to convert the pixels of one bitmap to another format before it can copy. This is what .NET does when you enable double-buffering; it makes sure the bitmap representing the screen and the bitmap representing the offscreen buffer are compatible.
Hope that makes sense; my explanations seem rather bad today.
James
"It is self repeating, of unknown pattern"
Data - Star Trek: The Next Generation
|
|
|
|
|
Thanks for your reply. I have almost settled the flicker problem and understand it quite clearly.
Yesterday, when I wanted to make several kinds of brushes for my paint application using GDI+, I got stuck again. It seems that GDI+ hasn't wrapped the Windows API completely, and it doesn't fit well with a strongly interactive paint program like the one I am making.
There are two problems:
1 - brush with alpha blending
Whenever the user draws a free line, it is actually drawn as a mass of short straight lines within the MouseMove event handler (these straight lines join the points that the MouseMove events pass in). So if the Pen is assigned a Color with alpha blending, then during each mouse-down state every short straight line will partly cover the last short straight line that was drawn. So after the mouse-up, the whole 'free line' has quite an ugly look, while what I want is a free line with a single alpha value so it has a clean and correct look. A line drawn in another mousedown-mousemove-mouseup round can cover the last 'free line', so that the intersection of the two 'free lines' is darkened; that's all right.
2 - eraser
I want to implement an eraser tool that could set pixels on the bitmap buffer to transparent, but I don't know how. It seems that GDI+ doesn't have an XOR operation to use; can I call GDI's XOR op? Is there some easier way to do this task?
Because this forum doesn't support images, I hope I've given a clear explanation of my problem.
If you want to discuss it further with me, maybe I can email you some images of my tests or even my code.
Thanks a lot
|
|
|
|
|
LynnSong wrote:
It seems that GDI+ hasn't wrapped the Windows API completely,
GDI+ is not a GDI wrapper, it is a whole new implementation.
LynnSong wrote:
It seems that GDI+ doesn't have an XOR operation to use; can I call GDI's XOR op?
GDI+ does not have one. You can call GDI's version by doing
IntPtr hdc = g.GetHdc();   // g is your System.Drawing.Graphics
// ... call the GDI routine you need on hdc via P/Invoke ...
g.ReleaseHdc(hdc);
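For instance, a rough sketch of an XOR blit done through P/Invoke might look like this (the Graphics parameters and class name are made up; SRCINVERT is GDI's XOR raster op):

using System;
using System.Drawing;
using System.Runtime.InteropServices;

class GdiXor
{
    // SRCINVERT XORs the source pixels with the destination pixels.
    private const int SRCINVERT = 0x00660046;

    [DllImport("gdi32.dll")]
    private static extern bool BitBlt(IntPtr hdcDest, int nXDest, int nYDest,
        int nWidth, int nHeight, IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop);

    // XOR 'source' onto 'destination' at (x, y).
    public static void XorBlit(Graphics destination, Graphics source,
        int x, int y, int width, int height)
    {
        IntPtr hdcDest = destination.GetHdc();
        IntPtr hdcSrc = source.GetHdc();
        try
        {
            BitBlt(hdcDest, x, y, width, height, hdcSrc, 0, 0, SRCINVERT);
        }
        finally
        {
            destination.ReleaseHdc(hdcDest);
            source.ReleaseHdc(hdcSrc);
        }
    }
}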
LynnSong wrote:
I want to implement an eraser tool that could set pixels on the bitmap buffer to transparent.
Do you mean actually transparent, or do you want to make the area erased revert back to the original bitmap?
If you mean actually transparent, you will have to modify the Alpha channel of the bitmap.
If you mean revert to the original, then you can keep a copy of the original version of the bitmap and copy pixels from the original to the new version wherever the eraser is used.
But for either of these, you cannot really depend on the GDI+ brush class. You will have to modify the pixels by your own code.
If you want to just erase to the background color, just use a brush of the same color as the background (you likely knew this already ).
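For the 'actually transparent' case, one rough sketch (assuming the layer bitmap is 32bpp ARGB; the class and method names are made up) is either to set individual pixels with Bitmap.SetPixel(x, y, Color.Transparent), or, for larger areas, to switch the Graphics into SourceCopy compositing so a transparent brush overwrites the alpha channel instead of blending with it:

using System.Drawing;
using System.Drawing.Drawing2D;

class Eraser
{
    // Punch a transparent hole into a 32bpp ARGB layer bitmap.
    public static void Erase(Bitmap layer, Rectangle area)
    {
        using (Graphics g = Graphics.FromImage(layer))
        using (Brush clear = new SolidBrush(Color.FromArgb(0, 0, 0, 0)))
        {
            // SourceCopy writes the brush's alpha into the bitmap rather than blending.
            g.CompositingMode = CompositingMode.SourceCopy;
            g.FillEllipse(clear, area);
        }
    }
}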
"Do unto others as you would have them do unto you." - Jesus
"An eye for an eye only makes the whole world blind." - Mahatma Gandhi
|
|
|
|
|
Actually there are two kinds of eraser I want to use.
One eraser is used in a tool called the selector; it selects a rectangular or elliptical region of the view to give a clip region for the drawing. The user can drag out a rectangle or ellipse frame from a point, so I need to 'erase' the previous rectangle/ellipse repeatedly. This could be done with an XOR command. There is an example in 'C# Windows Programming', but the author just said there is no XOR operation in GDI+ and used the background color to 'erase' the previous rectangle; he didn't give a method using XOR.
The other eraser is a true rubber for drawing; I mentioned it in my last reply. I want to make the points on the bitmap touched by the eraser totally transparent. Could this be done with the Bitmap.SetPixel() method, or should I use BitBlt to set the pixels to transparent?
|
|
|
|
|