|
hi
I'm new to C# and DataGrids and have a couple of questions that would help me along.
I create a connection to my data, a DataAdapter and then a DataSet. I fill the DataSet and then bind my DataGrid to it - this all works fine.
1. If the user double-clicks on a cell/row, I want to open another form, passing information about what was clicked. I can see how to pass the location of the click, but not the actual data itself. Do I get this from the DataGrid, or somehow from the DataSet?
2. Single/double-clicking on the grid should highlight the entire row, not just the cell - is there a way of doing this?
Many thanks!
barry
-- modified at 17:57 Wednesday 15th March, 2006
PS. I also need a definitive reference on using DataGrids - if one exists!
|
|
|
|
|
1. Have a look at the HitTest function. It takes a pixel coordinate and tells you which cell is positioned there. With this info you can use the indexer of the grid:
private void dataGrid1_Click(object sender, System.EventArgs e)
{
    Point screenPos = Cursor.Position;
    Point gridPos = dataGrid1.PointToClient(screenPos);
    DataGrid.HitTestInfo hti = dataGrid1.HitTest(gridPos);
    if (hti.Type == DataGrid.HitTestType.Cell)
        Console.WriteLine("Contents: " + dataGrid1[hti.Row, hti.Column]);
}
You could also use the MouseDown event. In this case you would get the relative grid pixel coordinates right in the event arguments.
2. Have a look at the Select and UnSelect functions of the DataGrid.
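A minimal sketch combining both answers, assuming the .NET 1.x-era System.Windows.Forms.DataGrid; the GridForm class and the wiring are mine, not the poster's code. In MouseDown the coordinates are already grid-relative, so you can hit-test directly, read the cell, and select the whole row:

```csharp
using System;
using System.Data;
using System.Windows.Forms;

// Hypothetical host form; only the MouseDown handler matters here.
public class GridForm : Form
{
    private DataGrid dataGrid1 = new DataGrid();

    public GridForm()
    {
        dataGrid1.Dock = DockStyle.Fill;
        dataGrid1.MouseDown += new MouseEventHandler(dataGrid1_MouseDown);
        Controls.Add(dataGrid1);
    }

    private void dataGrid1_MouseDown(object sender, MouseEventArgs e)
    {
        // MouseDown supplies grid-relative coordinates, so no
        // PointToClient conversion is needed here.
        DataGrid.HitTestInfo hti = dataGrid1.HitTest(e.X, e.Y);
        if (hti.Type == DataGrid.HitTestType.Cell)
        {
            object cellValue = dataGrid1[hti.Row, hti.Column]; // the data under the click
            dataGrid1.Select(hti.Row);                         // highlight the entire row
            // pass hti.Row / cellValue on to your second form here
        }
    }
}
```

The same idea works from the DoubleClick event; the row index and cell value are what you would hand to the second form's constructor.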
|
|
|
|
|
Is there a way to capture mouse input on a form made transparent via the TransparencyKey?
The same problem, a different kind of solution:
I have designed a custom user control with a transparent background using the following technique:
protected override void OnPaintBackground(PaintEventArgs e)
{
    // base.OnPaintBackground(e);
}

// add transparent property
protected override CreateParams CreateParams
{
    get
    {
        CreateParams cp = base.CreateParams;
        cp.ExStyle |= 0x20; // WS_EX_TRANSPARENT
        return cp;
    }
}
This effectively stops the program from drawing the background of the control, so the painted object is drawn on a transparent background. However, when I repaint the object in a different position on the control, the old image remains (because the background was not redrawn to erase the old image). The image changes with every mouse move, so I need an efficient way to erase the old image while preserving transparency.
Most methods of erasing the old image - i.e. erasing the control and starting with a clean slate - destroy the transparent effect.
Any thoughts anywhere?
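Not the poster's code, just a hedged sketch of one common fix: before each repaint, invalidate only the union of the image's old and new bounds, so the stale area is redrawn without clearing the whole control (this assumes the parent repaints behind the transparent control when that area is invalidated):

```csharp
using System;
using System.Drawing;

class DirtyRegionSketch
{
    // The region that must be repainted when an image moves from
    // oldBounds to newBounds: the union covers both erasing the old
    // image and drawing the new one. On the control you would call
    // Invalidate(DirtyRegion(oldBounds, newBounds)) from MouseMove.
    static Rectangle DirtyRegion(Rectangle oldBounds, Rectangle newBounds)
    {
        return Rectangle.Union(oldBounds, newBounds);
    }

    static void Main()
    {
        Rectangle dirty = DirtyRegion(new Rectangle(0, 0, 10, 10),
                                      new Rectangle(5, 5, 10, 10));
        Console.WriteLine(dirty); // {X=0,Y=0,Width=15,Height=15}
    }
}
```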
|
|
|
|
|
Just a bump to make sure this post isn't lost forever.
|
|
|
|
|
Microsoft is trying to help object developers close the gap between the relational world and the object world, and they call this "object modelling approach" LINQ.
Using "custom business entities" in enterprise programming with LINQ might seem like the best thing since sliced bread, but it isn't.
For my part, I have written my own mappers and used commercial OR/M tools.
They mostly provide the same thing:
__Read the table from the database.
___Put that into some HELPER thing (DAO, DAL, ORM).
____Put that into your custom business object.
_____Read from your business object (if you can).
I see NO REASON for this "unnecessary pull and push".
Can anybody explain to me why we shouldn't use a typed dataset instead of going to OR/M?
I hear people saying a typed dataset is really concrete and not flexible when requirements change.
You might even lose stuff when you have to regenerate it. It is far from perfect. But with typed datasets...
1. You can easily see design-time errors, and you get IntelliSense.
2. They also provide rich support for paging, sorting and all kinds of UI stuff.
3. For requirement changes, for performance, and also for JOINs, you should use VIEWs in your database model. You can also use VIEWs with typed datasets in ADO.NET 2.0 - a really nice graphical component.
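The "rich data for paging, sorting" point can be sketched with a plain DataTable and DataView; a typed dataset exposes the same members with compile-time column names. The Orders schema below is purely illustrative:

```csharp
using System;
using System.Data;

class DataViewSketch
{
    static void Main()
    {
        // An untyped DataTable; a typed dataset generates the same
        // structure with strongly typed row and column accessors.
        var orders = new DataTable("Orders");
        orders.Columns.Add("Id", typeof(int));
        orders.Columns.Add("Total", typeof(decimal));
        orders.Rows.Add(1, 30m);
        orders.Rows.Add(2, 10m);
        orders.Rows.Add(3, 20m);

        // Sorting and filtering for the UI without another database trip.
        var view = new DataView(orders) { RowFilter = "Total >= 15", Sort = "Total DESC" };
        foreach (DataRowView row in view)
            Console.WriteLine(row["Id"] + ": " + row["Total"]);
        // 1: 30
        // 3: 20
    }
}
```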
On the other hand, the "object modelling approach" makes NO sense. To access a group of data you have to create a custom collection, which is the WORST thing you can do in business software development in terms of performance (arrays are evil). For example, 12 million ORDERS can have 55 million ORDER_DETAILS and 5 million CUSTOMERS, and no LAZY LOADING can help you. Object models use RAM and CPU resources, which really matter for performance; the more complicated your object graph, the slower it gets. Some might say let's take the performance parameter out and think without it - those people see a "butterfly effect" on their project when they realize they have to rewrite the application all over again.
So I am really hard on OO, but I must admit that object-oriented modelling is a nice theory (...an idealism that does not work in practice).
If you really go into object domain modelling, you will end up with all these things, which are really not necessary...
You have to deal with
A. Identity maps (read objects only once)
B. Unit of Work (a transaction mechanism for your objects)
C. Topological sort (to get the right insert/update order)
D. Mappers (OR/M - buy it or build it... MS not shipping ObjectSpaces was a cool decision)
E. Repositories (a common place to retrieve your collections, if you can...)
F. Specification patterns / query objects (a filtering mechanism for objects; functors, anonymous methods, PoEAA)
G. Metadata techniques (some cool SQL reuse that makes the system slow, through code generation or reflection)
H. Design patterns like FACADE to sweep everything under the rug...
Do you think it is worth it?
Also, at the end of the day you will face the fact that you need performance. (As the users say, it is Google time! They just don't want to wait; time is more valuable than money...)
Looking carefully at your UI code-behind, you can easily find that every user interface in your application presents a different aspect of the data, and there is no ONE most important aspect. (Like you can't love your mum more than your dad...)
Object models are not built to present many aspects of the data: when designing them, OO people don't think in a tabular way, they think in an object way, which is slow because their logic eats RAM and CPU resources, and those are expensive. They are trying to reinvent the RDBMS idea at runtime. So every VIEW must be aligned differently for your business application's user interfaces for performance.
Yes, the typed dataset approach is not the BEST, but people should also know what they are getting into with the OR/M world.
The failure of OO developers is that they tend to see OO as a GOLDEN HAMMER. But business software development has so many aspects that it is not wise to build an "OO domain model" in which the world consists of objects. If you think about it, tables are everywhere in the business environment too. A table is an abstract way of organizing large amounts of business data. The simplest example: when you go to a train station you never see an OO chart of trains; what you see is a table with columns and rows telling you where to go and when.
I don't mean don't use OO - we should all use OO for some UI stuff - but it is just not a GOLDEN HAMMER that can solve every problem. LINQ, on the other hand, is nothing more than an OR/M tool replacing the old idea of ObjectSpaces (where is that again?...).
So why do I write all this stuff - to discourage the C# 3.0 team?
Of course not! It is a really nice attempt to close the gap between the RDBMS and objects. But theory and practice don't have to match in the daily life of a programmer.
Microsoft should invest time and money in typed datasets, not in OR/M idealism, where there is no logical end.
PS: Myth: OR/M vendors might claim that if you don't do OO you will end up with non-adaptable, spaghetti code.
:=) Adaptability is in your head, not in your code...
|
|
|
|
|
erdsah88 wrote: LinQ on the other hand is nothing more than an OR/M tool replacing the old idea of objectspaces.
Nope. If you attended the last PDC, you'd know that the LINQ folks, as well as big heads like Hejlsberg and Box, found that after much work on ObjectSpaces they wanted a different approach, and that approach is LINQ.
Firstly, LINQ is far more than database queries. In fact, it is general, unified data querying, whether that data comes from a database, a collection, an XML file - whatever data you can dig up.
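That generality is easy to sketch with LINQ to Objects and LINQ to XML - the same query operators over a plain array and an XML fragment (syntax as of later C# releases; the data here is made up for illustration):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class LinqSketch
{
    static void Main()
    {
        // LINQ to Objects: the operators work over any IEnumerable,
        // no database involved.
        int[] nums = { 5, 1, 4, 2, 3 };
        var small = nums.Where(n => n < 4).OrderBy(n => n);
        Console.WriteLine(string.Join(",", small)); // 1,2,3

        // LINQ to XML: the same querying style over an XML fragment.
        var doc = XElement.Parse("<people><p name='Ann'/><p name='Bob'/></people>");
        var names = doc.Elements("p").Select(p => (string)p.Attribute("name"));
        Console.WriteLine(string.Join(",", names)); // Ann,Bob
    }
}
```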
As for DLinq (that is, the database bindings for LINQ): combined with anonymous types it gives you a powerful and simple way to query data from a database, but it is by no means a silver bullet. In fact, I'd hesitate to call it an OR/M tool, as it decidedly has a different purpose. At the PDC, someone in the audience asked a question similar to yours - how to persist objects in a 1-to-1 fashion - and the LINQ folks responded that that is not the purpose of LINQ.
TheServerSide.NET has a good article on OR/M mapping[^]. One of my favorite quotes comes from that article: "ORM is the Vietnam of Computer Science; it's damnably easy to get into, and damnably hard to get back out of once you're in it, and you have a tendency to all of a sudden find yourself in the middle of a "situation" that's untenable and hard to live with. (For ORM, frequently this is complex querying, collections, and reporting.) I think Microsoft wants to make sure they've got a clear vision and possible exit strategy before they jump into this quagmire."
|
|
|
|
|
How do I open another form (window) from my main application's Form1 - for example, by clicking a button?
|
|
|
|
|
In your button's click event handler:
frmYourForm oForm = new frmYourForm();
oForm.Show();       // shows the form non-modally
// or:
oForm.ShowDialog(); // shows the form modally
Good luck!
- Doug
|
|
|
|
|
Thanks a lot.
I discovered that this code:
System.Windows.Forms.Form NewFormObject = new MyForm();
is valid too.
|
|
|
|
|
hi all
I am working with multiple forms in my application and I want to close some of them during the course of the application. I create a new object of the form and call its Close method, but this does not work, because it closes that newly created instance. How can I close the already-running instance of a form from another form?
please help
regards
vineet
|
|
|
|
|
When you create your forms, keep a reference to each one in your calling class. That reference is the object you'll want to close. When you create a new variable with new, you are just creating a new form object, unrelated to anything you already have open.
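The reference-identity point can be sketched without WinForms; the Document class below is hypothetical, standing in for a Form with a Close method. Closing works only on the instance you kept a reference to:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for a Form: Close marks this instance closed.
class Document
{
    public bool IsClosed { get; private set; }
    public void Close() => IsClosed = true;
}

class ReferenceSketch
{
    static void Main()
    {
        var openDocs = new List<Document>();
        var d = new Document();   // the "opened" form; keep this reference
        openDocs.Add(d);

        new Document().Close();   // closing a brand-new instance does nothing useful

        openDocs[0].Close();      // closing via the stored reference works
        Console.WriteLine(d.IsClosed); // True
    }
}
```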
- Doug
|
|
|
|
|
Is there a way to convert a VB.NET 2005 application to a C#.NET application? Thanks.
|
|
|
|
|
There are a few VB -> C# converters out there. I don't know how far SharpDevelop has progressed, especially for .NET 2.0, but you may be able to convert the majority of the code with it and then tweak the rest manually.
I'm afraid that's the only recommendation I have for .NET 2; I know of a few others for .NET 1, though.
The most reliable method would be to convert it manually (unless it's several million lines long).
Ed
|
|
|
|
|
You are almost always better off doing such things manually. A construct that may be great in VB could be a dog in C#.
Another option is to shove all the VB.NET stuff into its own assembly and not worry about converting it.
Anyone who thinks he has a better idea of what's good for people than people do is a swine.
- P.J. O'Rourke
|
|
|
|
|
Step 1: Ensure the VB app compiles with Option Strict and Option Explicit.
Step 2: Use the automatic converter in SharpDevelop 2.0. (Project > Convert > From VB to C#)
Step 3: Fix any compiler errors introduced by the conversion (e.g. errors caused by case-sensitivity)
Step 4: Carefully review the C# code
|
|
|
|
|
For years I have been hearing "can I convert from xxx" to C, C++, and now C#. Now the issue at hand: a VB.NET application to be converted to C#. My initial inclination is to ask what is gained by converting to C#. Don't get me wrong, I prefer any C variant to any VB variant. BUT with .NET, the choice of language really comes down to which style you prefer most: C#, VB, J# - I even hear there is a Python for .NET. At the end of the day, unless you have a lot of spare time, converting an application just to change style and format is a dubious endeavor. IF, however, you have a real need - say, your company has forbidden VB - then I would lean toward rewriting. Code generators and converters are decent once you get them tuned, but let's face it: if the whole point of converting the app is to get it into C#, I would not trust that to a conversion process.
Mike Luster
CTI/IVR/Telephony SME
|
|
|
|
|
There are a number of VB to C# converters out there, including ours (Instant C#). The code quality after conversion will be nearly identical to the code quality before conversion (after a few manual adjustments).
David Anton
www.tangiblesoftwaresolutions.com
Instant C#: VB to C# converter
Instant VB: C# to VB converter
Instant C++: C# to C++ converter and VB to C++ converter
Instant J#: VB to J# converter
Clear VB: Cleans up VB.NET code
Clear C#: Cleans up C# code
|
|
|
|
|
|
Hi all,
I can set the region of the tab control, and I can also set the region of the TabPage to any shape I want,
but I am wondering how to set the region of the tabs themselves.
If you can help, please show me the way, or give me a suitable URL.
Thanks
|
|
|
|
|
I have created a class to hold some related data (ResultsOfTest). I am reading a record from an Access file, loading the data into the members of the ResultsOfTest class, then inserting into a database.
This works properly, except I am consuming massive amounts of memory. My understanding was that the memory would be released when the newly created object went out of scope (which should be at the end of the while loop). This doesn't seem to be the case, so I stuck in a garbage-collection call every 1000 iterations. This doesn't seem to have any effect on the memory consumption.
while (readerAccess.Read())
{
    ResultsOfTest testResult = new ResultsOfTest();
    try
    {
        if (!readerAccess.IsDBNull(iTestIndex)) { testResult.TestID = Convert.ToDecimal(readerAccess.GetInt32(iTestIndex)); }
        if (!readerAccess.IsDBNull(iEmployeeIndex)) { testResult.EmployeeID = Convert.ToDecimal(readerAccess.GetInt32(iEmployeeIndex)); }
        if (!readerAccess.IsDBNull(iTestDateIndex)) { testResult.DateOfTest = readerAccess.GetDateTime(iTestDateIndex); }
        if (!readerAccess.IsDBNull(iReasonIndex)) { testResult.ReasonForTest = Convert.ToDecimal(readerAccess.GetInt32(iReasonIndex)); }
        if (!readerAccess.IsDBNull(iDeferredDateIndex)) { testResult.DeferredDate = readerAccess.GetDateTime(iDeferredDateIndex); }
        if (!readerAccess.IsDBNull(iReadDateIndex)) { testResult.ReadDate = readerAccess.GetDateTime(iReadDateIndex); }
        if (!readerAccess.IsDBNull(iResultIndex)) { testResult.TestResultCode = Convert.ToDecimal(readerAccess.GetInt32(iResultIndex)); }
        if (!readerAccess.IsDBNull(iCommentIndex)) { testResult.Comments = readerAccess.GetString(iCommentIndex); }
        iRecordsRead++;
    }
    catch (OleDbException odex)
    {
        logger.Error("Unable to read record from Access db. {0}", odex.ToString());
    }
    if (testResult.TestID > 0)
    {
        if (db.UpdateTestResults(ref testResult))
        {
            iRecordsWritten++;
        }
    }
    if ((iRecordsRead % 1000) == 0)
    {
        StatusScreen.SetStatus("Garbage collection in process...");
        GC.Collect();
    }
}
What can/should I do differently, so the program will release the un-needed memory?
Thanks,
Glenn
-- modified at 10:57 Wednesday 15th March, 2006 (placed pre inside the code marker)
|
|
|
|
|
Glenn E. Lanier II wrote: My understanding was the memory would be released when the new'ed object went out of scope (which should be at the end of the while loop). This doesn't seem to be the case, so I stuck in a garbage collection collect call every 1000 iterations. This doesn't seem to have any effect on the memory consumption.
Garbage collection occurs when it needs to. When objects go out of scope they merely become available for garbage collection.
If you are looking at the memory in the Task Manager then it will be showing the amount it has reserved from the operating system, not the actual amount in use. There are a number of performance counters you can look at for .NET applications. They will give you a more accurate picture of what is going on.
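A small sketch of that behaviour: an object that has gone out of scope is merely eligible for collection, and a WeakReference lets you watch it actually being reclaimed once a collection runs. The NoInlining attribute is just there to ensure the allocating method's local reference is dead before the collect:

```csharp
using System;
using System.Runtime.CompilerServices;

class GcSketch
{
    // Allocate in a separate, non-inlined method so the strong local
    // reference is gone by the time we force a collection.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference Allocate()
    {
        return new WeakReference(new byte[1024]);
    }

    static void Main()
    {
        WeakReference wr = Allocate();
        // The array is now only weakly reachable: eligible for
        // collection, but possibly still in memory until the GC runs.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine(wr.IsAlive); // False once a collection has run
    }
}
```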
ColinMackay.net
Scottish Developers are looking for speakers for user group sessions over the next few months. Do you want to know more?
|
|
|
|
|
Colin Angus Mackay wrote: If you are looking at the memory in the Task Manager then it will be showing the amount it has reserved from the operating system, not the actual amount in use. There are a number of performance counters you can look at for .NET applications. They will give you a more accurate picture of what is going on.
Such as?
I let this code run yesterday (without the GC.Collect()) on about 60000 records. I logged each update and noticed that while I was getting more than one insert per second initially, by record 20000 the insert rate had dropped (as had machine response time), and by the time I got to record 32000 I was getting an insert only every two to three minutes. Task Manager showed this process was using ~485 MB of memory. As soon as I killed the process, memory usage (again, per Task Manager) dropped almost immediately. I started the import again (skipping the already-imported records) and saw similar results.
I'm open to any suggestion(s) that will allow this code to run efficiently.
--G
|
|
|
|
|
When you're done using the data reader, call Dispose() on it. This will release some unmanaged resources and may also allow some managed objects to eventually be freed. The same goes for your command and your connection.
Another thing might be your call to db.UpdateTestResults(ref testResult). If you're storing your testResult somewhere, it won't be freed, obviously. So perhaps your large consumption of memory is due to having lots of ResultsOfTest objects lying around.
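The Dispose advice can be sketched with a minimal IDisposable; the Resource class below is hypothetical, standing in for a reader, command or connection. A using block calls Dispose deterministically, so cleanup doesn't wait for the garbage collector:

```csharp
using System;

// Hypothetical stand-in for OleDbDataReader/OleDbCommand/OleDbConnection.
class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

class DisposeSketch
{
    static void Main()
    {
        Resource r;
        using (r = new Resource())
        {
            // work with r; Dispose runs when the block exits,
            // even if an exception is thrown inside it
        }
        Console.WriteLine(r.Disposed); // True
    }
}
```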
Go check out the CLR profiler[^], or use a tool like Ants Memory and Performance profiler[^].
Tech, life, family, faith: Give me a visit.
I'm currently blogging about: Moral Muscle
The apostle Paul, modernly speaking: Epistles of Paul
Judah Himango
|
|
|
|
|