|
Are you using a DataSet as the container? Do you know how to extract the data?
int nCount = 0;
foreach (Control ctr in this.Controls)
{
    if (ctr is TextBox)
    {
        TextBox tb = ctr as TextBox;
        tb.Text = dataSet1.DataTable1[nCount].Text1;
        nCount++;
    }
}
This is an example of how to do it. Depending on how you read your data, it may differ from this example.
I hope I put you in the right direction.
|
I believe this is a better way of checking an object's type, as there is only the one cast followed by a null check...
foreach (Control control in Controls)
{
    TextBox textBox = control as TextBox;
    if (textBox != null)
    {
        // use textBox here
    }
}
Dave
BTW, in software, hope and pray is not a viable strategy. (Luc Pattyn) Why are you using VB6? Do you hate yourself? (Christian Graus)
|
I have an XML database that I am loading into a DataSet object in a C# 2008 Windows Forms project. I need to know how to access the individual fields and values. The actual database is not under my control, so I have to use it as given. I also need to be able to offer portions of the values for editing while hiding others, so I need to know how to access each individual field and value. Here is a very tiny mocked-up version of the XML:
<?xml version="1.0"?>
<database version="1.0">
  <DatabaseItem name="FirstTable">
    <DatabaseSubitem id="1">
      <DatabaseField name="Id" value="1" />
      <DatabaseField name="Freq" value="151955000" />
      <DatabaseField name="Mode" value="Mode:Selective" />
    </DatabaseSubitem>
    <DatabaseSubitem id="2">
      <DatabaseField name="Id" value="1" />
      <DatabaseField name="Freq" value="151955000" />
      <DatabaseField name="Mode" value="Mode:Selective" />
    </DatabaseSubitem>
    <DatabaseSubitem id="3">
      <DatabaseField name="Id" value="65535" />
    </DatabaseSubitem>
  </DatabaseItem>
  <DatabaseItem name="SecondTable">
    <DatabaseSubitem id="0">
      <DatabaseField name="access_p" value="500" />
    </DatabaseSubitem>
  </DatabaseItem>
  <DatabaseItem name="ThirdTable">
    <DatabaseSubitem id="0">
      <DatabaseField name="access_Short" value="500" />
    </DatabaseSubitem>
  </DatabaseItem>
</database>
If there is an easier way than the Dataset object to process the database, please let me know.
Any help would be most appreciated.
Thanks,
Bruce
|
It will create a table under the DataSet having three rows:
(1) Id
(2) Freq
(3) Mode
You can access the values easily by using
ds.Tables[0].Rows[0][0]
ds.Tables[0].Rows[0][1]
and so on......
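For reference, here is a minimal sketch of loading such XML with DataSet.ReadXml and walking the inferred DatabaseField table. The table and column names come straight from the XML sample above; the embedded string is just a cut-down stand-in for the real file, which you would load with ds.ReadXml(path) instead:

```csharp
using System;
using System.Data;
using System.IO;

class Demo
{
    // Reads the XML into a DataSet; ReadXml infers one table per
    // repeating element (DatabaseItem, DatabaseSubitem, DatabaseField).
    public static DataSet LoadDatabase(TextReader reader)
    {
        DataSet ds = new DataSet();
        ds.ReadXml(reader);
        return ds;
    }

    static void Main()
    {
        string xml =
            @"<?xml version=""1.0""?>
            <database version=""1.0"">
              <DatabaseItem name=""FirstTable"">
                <DatabaseSubitem id=""1"">
                  <DatabaseField name=""Id"" value=""1"" />
                  <DatabaseField name=""Freq"" value=""151955000"" />
                </DatabaseSubitem>
              </DatabaseItem>
            </database>";

        DataSet ds = LoadDatabase(new StringReader(xml));

        // Each DatabaseField element becomes a row with "name"/"value" columns.
        foreach (DataRow row in ds.Tables["DatabaseField"].Rows)
            Console.WriteLine("{0} = {1}", row["name"], row["value"]);
    }
}
```

Edits can go back through the same rows (row["value"] = ...), and ds.WriteXml can round-trip the result, though ReadXml/WriteXml won't necessarily preserve the original formatting exactly.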
|
Is there any other way..!
System.Diagnostics.Process.Start(url + "?/" + txtFolderName.Text + "&UserName=" + txtAuthUserName.Text + "&Password=" + txtAuthPassword.Text);
I also tried the
System.Diagnostics.Process.Start(string, string, SecureString, string)
overload, but it didn't work..!
Thanks..!
|
Are you attempting to launch a URL?
You need to pass the URL as an argument, not as the process name:
System.Diagnostics.Process.Start("Chrome.exe", "http://www.google.com");
|
Saksida Bojan wrote: System.Diagnostics.Process.Start("Chrome.exe", "http://www.google.com");
I need to pass a username and password for Windows authentication..!
like
System.Diagnostics.Process.Start("Chrome.exe", "http://www.google.com?username=xxx&password=XXX");
but in a secure way..!
|
So you want to pass the username and password within the URL,
for example:
http://userid:password@www.example.com
PS: I think I read somewhere that IE8 disabled this kind of URL for security reasons. I can't confirm this statement.
|
CoderOnline wrote: but in a secure way..!
That is not a secure way unless the site uses SSL. And even then it won't prevent people from stealing it with keyloggers and other tools. To be secure, the consumer needs to make sure his/her computer's software is up to date and regularly maintained.
|
Thanks for your info..!
I will look into that..!
|
hi
when I use this command for taking a picture:
wiaVideo[sendpic].TakePicture(out jpgFile);
the picture is automatically saved to this path:
C:\Documents and Settings\All Users.WINDOWS\Application Data\Microsoft\WIA
and after several uses of the program it throws this exception:
System.OutOfMemoryException
How can I disable the automatic saving of the picture in WIA?
thanks
|
hi all,
to compress a PDF file I use the code below:
byte[] bufferWrite;
FileStream fsSource;
FileStream fsDest;
GZipStream gzCompressed;
fsSource = new FileStream(@"C:\Invoice.pdf", FileMode.Open, FileAccess.Read, FileShare.Read);
bufferWrite = new byte[fsSource.Length];
fsSource.Read(bufferWrite, 0, bufferWrite.Length);
fsDest = new FileStream(@"C:\Invoice.zip", FileMode.OpenOrCreate, FileAccess.Write);
gzCompressed = new GZipStream(fsDest, CompressionMode.Compress, true);
gzCompressed.Write(bufferWrite, 0, bufferWrite.Length);
fsSource.Close();
gzCompressed.Close();
fsDest.Close();
The PDF file compresses successfully, but without the extension (.pdf)....... where is the problem?
|
zeeShan anSari wrote: pdf file compress successfully but without extension(.pdf) .......where is problem?
You named it .zip; just stop naming it .zip?
On the other hand, renaming a gzip file to .pdf doesn't (to my knowledge) suddenly make it a pdf file, it just makes your computer think it is.
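A round trip illustrates the point: the gzipped bytes have to be decompressed again before any PDF reader can use them. A minimal sketch with GZipStream, operating on an in-memory byte array rather than the actual Invoice.pdf:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class GZipRoundTrip
{
    // Compresses the input with GZipStream, then decompresses it again;
    // the result must equal the original bytes.
    public static byte[] RoundTrip(byte[] original)
    {
        using (MemoryStream packed = new MemoryStream())
        {
            using (GZipStream gz = new GZipStream(packed, CompressionMode.Compress, true))
                gz.Write(original, 0, original.Length);

            packed.Position = 0;
            using (GZipStream gz = new GZipStream(packed, CompressionMode.Decompress))
            using (MemoryStream unpacked = new MemoryStream())
            {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = gz.Read(buffer, 0, buffer.Length)) > 0)
                    unpacked.Write(buffer, 0, read);
                return unpacked.ToArray();
            }
        }
    }

    static void Main()
    {
        byte[] data = new byte[] { 1, 2, 3, 4, 5 };
        Console.WriteLine(RoundTrip(data).Length); // 5
    }
}
```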
|
Thanks......... you are right
|
harold aptroot wrote: (to my knowledge)
Are you not sure about it ?
|
No, are you? Some PDF readers may (in theory at least) decide to detect it and decompress the file before loading.
I wouldn't claim anything with full certainty without testing, and I'm not about to test every known PDF reader with gzipped PDFs.
|
Today I read the CutePDF feature list, and it seems PDF supports compression. But it does not use zip, rar, or any other container-based compression.
|
I use the following code for a single file upload
FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create("ftp://servername/filename.txt");
ftp.Credentials = new NetworkCredential("login", "password");
ftp.Method = WebRequestMethods.Ftp.UploadFile;
StreamWriter sw = new StreamWriter(ftp.GetRequestStream());
sw.Write(fileContent);
sw.Close();
Now - how do I upload multiple files in a single connection? Calling FtpWebRequest.Create with a different file name erases all the info about credentials, method, etc., so I must set these again, which is annoying; but I suppose that the connection must be established again as well, which is not only annoying but also inefficient. Is there any better way?
|
I have a system where I need to compare files to see if they are the same. Is doing something like CRC the fastest/best way? I could do a byte by byte comparison but that seems like it would be slower.
I'm going to be keeping a list of files in a database so if I could come up with a number I generate just once, that would be great because I could store it in the database and then compare from then on (I'll be adding files to the list as time goes on so I need to compare them as it goes.)
I'm writing in C# v2.0 (although if I have to I might be able to go to 3.5).
Any thoughts would be appreciated.
TIA - Jeff.
|
Using a CRC is one possible solution. But you need to know that the CRC can sometimes be the same for different file content. CRC is the fastest solution.
jbradshaw wrote: I could do a byte by byte comparison but that seems like it would be slower.
Byte by byte is super slow. Compare with blocks of bytes.
For example: read 1024 bytes from the first file, read 1024 bytes from the second file, then compare. Never do a 1-byte comparison; you would "kill the system".
PS: The only shortcut can be comparing the lengths first.
modified on Wednesday, December 2, 2009 12:07 PM
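The block-by-block idea above can be sketched like this (the 1024-byte buffer size and the file names in Main are arbitrary choices for the illustration, not anything from the original post):

```csharp
using System;
using System.IO;

class BlockCompare
{
    // Compares two files in 1024-byte blocks; bails out early on the
    // first mismatch, and uses the length check as the cheap shortcut.
    public static bool FilesEqual(string pathA, string pathB)
    {
        FileInfo a = new FileInfo(pathA);
        FileInfo b = new FileInfo(pathB);
        if (a.Length != b.Length)
            return false;

        using (FileStream fsA = a.OpenRead())
        using (FileStream fsB = b.OpenRead())
        {
            byte[] bufA = new byte[1024];
            byte[] bufB = new byte[1024];
            int read;
            while ((read = fsA.Read(bufA, 0, bufA.Length)) > 0)
            {
                // Read may return fewer bytes than requested, so fill bufB fully.
                int got = 0;
                while (got < read)
                {
                    int n = fsB.Read(bufB, got, read - got);
                    if (n == 0) return false;
                    got += n;
                }
                for (int i = 0; i < read; i++)
                    if (bufA[i] != bufB[i]) return false;
            }
        }
        return true;
    }

    static void Main()
    {
        File.WriteAllText("x.txt", "same content");
        File.WriteAllText("y.txt", "same content");
        Console.WriteLine(FilesEqual("x.txt", "y.txt")); // True
    }
}
```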
|
It depends: if the files can be edited by a malicious user then a CRC will not be good enough, since it's very easy to calculate collisions for them. SHA-2 would be OK, but there is still a chance that two different files will accidentally have the same hash (which follows from the fact that there are more different files than there are hashes, since a hash has a small fixed length); that chance is rather small for meaningful files though.
If it's extremely important that the files are actually the same (with zero chance of false positives) then there are no shortcuts and you'd have to compare them byte by byte (edit: but of course you should read the file block by block, as said above; it would be rather braindead to read just a single byte many times in a loop).
|
You have to define when files are "the same" to you.
In one way, if two files exist, they are always different: they may have identical creation date, modification date, length, and content; however, if they also have identical names, they must be residing in different folders or partitions. So be more specific.
Once defined, you can perform identity checking by checking the attributes that are relevant to your definition; for content it is wise to calculate (and probably store) some kind of hash; there is an infinite number of definitions and algorithms. Windows Explorer itself holds one 32-bit CRC for file content; ZIP files hold another one. Hashes and CRCs will be identical when content is identical, and they are very likely to be different for different content; when that isn't good enough, you need to compare all the bytes.
|
A byte-by-byte compare will likely take less time than calculating two CRCs or hashes (especially if the compare returns false early), so the only gain is if you store the CRC or hash for later. But you need to be sure that the file hasn't changed since you calculated its CRC or hash.
jbradshaw wrote: keeping a list of files in a database
Are you storing the actual file content? Or just the path and other information?
If you only store the path I wouldn't trust that the file has not been changed (or even deleted) so I wouldn't bother storing the CRC or hash.
Personally, I store the file content and a SHA1 hash.
<Anecdote>
My GenOmatic generates a file and I want to know if the new file matches the previous version. I considered storing a hash in the file and comparing, but quickly decided that it was too unreliable so I went with a string compare.
</Anecdote>
|
I know the files won't change once I have calculated the CRC for them.
All I really need is a unique number on the contents of the file so that later if I have two files, I can compare those numbers instead of having to do a byte by byte comparison.
Here's what I'm trying to do in a nutshell.
Somebody uploads a file to my website.
I store the file for long term storage ( just go with it ).
Somebody uploads another file to my website.
If the file is the same as any of the files I've already stored, don't store it again otherwise store the file.
Somebody uploads another file to my website.
If the file is the same as any of the files....
So really I want to make sure I only have distinct files on the server. I don't have a problem with it being uploaded multiple times, I'd just like to have it so that the file is retained(stored) only once.
I figured by doing some kind of CRC check, I could get the CRC when the file is uploaded and then check the DB for any other files with that CRC. If there are none, add it to the DB with the CRC and go on. If the file already exists, don't bother saving it and delete the file from the upload area.
TIA - Jeff.
|
Makes sense to me; I'd store a hash and file length.
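The scheme described above (hash plus file length as the dedup key) might look roughly like this; the dictionary stands in for the database table, and all file names are made up for the illustration:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class Dedup
{
    // Builds a key from the file length and its SHA1 digest; two files
    // with the same key are, for practical purposes, the same file.
    public static string FileKey(string path)
    {
        using (SHA1 sha = SHA1.Create())
        using (FileStream fs = File.OpenRead(path))
        {
            return fs.Length + ":" + BitConverter.ToString(sha.ComputeHash(fs));
        }
    }

    static void Main()
    {
        // Stand-in for the database: key -> path of the stored copy.
        Dictionary<string, string> stored = new Dictionary<string, string>();

        File.WriteAllText("upload1.txt", "payload");
        File.WriteAllText("upload2.txt", "payload"); // duplicate content

        foreach (string path in new string[] { "upload1.txt", "upload2.txt" })
        {
            string key = FileKey(path);
            if (stored.ContainsKey(key))
                Console.WriteLine(path + " duplicates " + stored[key]);
            else
                stored[key] = path; // keep this copy
        }
    }
}
```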
|