|
Greetings,
I am using Visual Studio, C# reporting with the Windows ReportViewer control, which pulls reports from an *.rdlc file. When I start my program it shows all records accurately, but when I add some records via text boxes and re-open the form it doesn't show the new values.
When I stop the solution, recompile, and run the application again, the new entries are there too.
Please suggest a method to re-bind or re-fill the data source or table adapter.
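A common fix (a sketch only; the table adapter, dataset, and ReportViewer names below are hypothetical and would match whatever the designer generated) is to re-fill the table adapter and refresh the viewer every time the form opens, instead of relying on the data loaded at application start:

```csharp
// Sketch: re-query the database and re-render the report in the form's
// Load handler, so rows added since startup show up when the form re-opens.
// Names (tblOrdersTableAdapter, myDataSet, reportViewer1) are hypothetical.
private void ReportForm_Load(object sender, EventArgs e)
{
    // Re-fill the typed dataset table from the database
    this.tblOrdersTableAdapter.Fill(this.myDataSet.tblOrders);

    // Tell the viewer to re-render with the refreshed data
    this.reportViewer1.RefreshReport();
}
```

The key point is that the *.rdlc report only renders whatever is in the in-memory dataset; nothing re-queries the database automatically, so the Fill has to be repeated each time the form is shown.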
Best Regards
|
|
|
|
|
how to make a table 20 to 50 to a dial member other person
|
|
|
|
|
Your question is just too unclear.
If you want to use Hindi, then go to the GIT and try posting the same question in that language.
|
|
|
|
|
|
Done. Thanks. My 5 for the suggestion.
|
|
|
|
|
Further to my answer, post your question over here[^].
|
|
|
|
|
Anyone know if/how I can get the CPU Serial Number in C# 4.5 under Windows 8?
Thanks
Everything makes sense in someone's mind
|
|
|
|
|
Why would you want to? Everyone turns it off.
In fact, it's normally turned off by default, and today's processors don't even implement it any more because of privacy concerns.
|
|
|
|
|
|
The problem with using that is that if the CPU ID is turned off, the field will be blank or say something generic. Manufacturers are not required to have providers fill in WMI objects.
Besides, only a couple of manufacturers implemented CPU ID, and AMD was NOT one of them.
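For what it's worth, the usual desktop way to read that field from managed code is the `System.Management` WMI wrapper (a sketch; as noted above, the value may well come back blank or generic, and this classic WMI route is not available from a Metro/WinRT app):

```csharp
// Sketch: query WMI for the ProcessorId field of Win32_Processor.
// Requires a reference to System.Management.dll.
// On many CPUs (and with CPU ID disabled) this prints an empty or
// generic value, which is exactly the problem described above.
using System;
using System.Management;

class CpuIdDemo
{
    static void Main()
    {
        using (var searcher = new ManagementObjectSearcher(
            "SELECT ProcessorId FROM Win32_Processor"))
        {
            foreach (ManagementObject mo in searcher.Get())
            {
                Console.WriteLine("ProcessorId: " + mo["ProcessorId"]);
            }
        }
    }
}
```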
|
|
|
|
|
As far as I know, Win32 is not available in Metro. You have to use WinRT.
What I need is a machine-specific unique Id.
Everything makes sense in someone's mind
|
|
|
|
|
There's no such thing in a single value. You'll have to build a value from data taken from various parts of the machine and registry: a hard drive serial number, maybe a CD/DVD drive serial number, the chassis serial number, the machine SID, and so on. If any of these values are missing, you still have the others. You combine all of this together using some algorithm or scheme and end up with what amounts to a pretty unique identifier.
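As a sketch of the "combine and digest" idea (the identifier strings below are placeholders for whatever hardware values you can actually collect on a given machine):

```csharp
// Sketch: combine several machine-specific strings into one identifier by
// concatenating them and hashing the result with SHA-256. The inputs here
// are placeholder values; in practice they would come from WMI, the
// registry, etc. Missing (null/empty) values are simply skipped, so the
// scheme degrades gracefully when a source isn't available.
using System;
using System.Security.Cryptography;
using System.Text;

class MachineIdDemo
{
    public static string BuildMachineId(params string[] parts)
    {
        var sb = new StringBuilder();
        foreach (string part in parts)
        {
            if (!string.IsNullOrEmpty(part))   // tolerate missing values
                sb.Append(part).Append('|');
        }

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }

    static void Main()
    {
        // Placeholder inputs; stable inputs give a stable 64-hex-char ID
        string id = BuildMachineId("DISK-SN-12345", null, "CHASSIS-987");
        Console.WriteLine(id);
    }
}
```

The hash just makes the combined value fixed-length and opaque; the uniqueness still depends entirely on how distinctive the collected inputs are.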
|
|
|
|
|
Ok, so...
1) Whatever scheme I use cannot rely on removable devices.
2) Any examples of doing this in C# through WinRT?
Everything makes sense in someone's mind
|
|
|
|
|
WinRT?? Nope. Too new.
Can I share the example I have? Nope, 'cause it doesn't belong to me.
|
|
|
|
|
I am trying to use a regex to replace all of the LF end-of-line chars in a file with CRLF. I noticed that when the LF is replaced with CRLF, the last char in the line is cut off. Is there something wrong with my syntax? For this particular file, all of the lines end with ! which is used as a segment delimiter. Thanks in advance for your help!
try
{
    string data = null;
    using (StreamReader srFileName = new StreamReader(FileName))
    {
        data = srFileName.ReadToEnd();
        data = Regex.Replace(data, "[^\r]\n", "\r\n");
    }
    using (StreamWriter swFileName = new StreamWriter(FileName))
    {
        swFileName.Write(data);
    }
}
|
|
|
|
|
You asked: Why is my regex cutting off the last char when doing a replace? Short answer: because that's what you asked it to do.
Longer (and more helpful) answer: Your Regex.Replace says "replace [any non-return char][linefeed] by [return][linefeed]", so of course it's eating up the character before the newline. What you need to do is a capture of the character so you can include it in the replacement expression.
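A minimal sketch of that capture approach (the group keeps the character the original pattern was consuming, and the `$1` backreference puts it back):

```csharp
// Sketch: capture the non-CR character before the LF in group 1,
// then re-insert it in the replacement via the $1 backreference.
using System;
using System.Text.RegularExpressions;

class CaptureDemo
{
    static void Main()
    {
        string data = "line one!\nline two!\n";
        string result = Regex.Replace(data, "([^\r])\n", "$1\r\n");
        Console.WriteLine(result.Contains("!\r\n"));   // the '!' survives now
    }
}
```

One remaining caveat: `[^\r]` still requires *some* character before the LF, so a line that is completely empty (or an LF at the very start of the file) would not be matched by this pattern.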
Cheers,
Peter
Software rusts. Simon Stephenson, ca 1994.
|
|
|
|
|
KimberlyKetchum wrote: when the LF is replaced with CRLF that the last char in the line is cut off
That is more or less what you ordered: "[^\r]\n" represents a single character that isn't \r, followed by \n, so assuming your input does not have any \r, you are replacing the last char of each line with \r.
A better approach would be a double replace:
- first replace \r\n by \n
- then replace \n by \r\n
The first step is a safety precaution in case the file already has \r\n somewhere.
BTW: why are you using a stream and its ReadToEnd method? That just doesn't make sense. Either you want a streaming operation (which would make a lot of sense here), or you want all the text at once, in which case File.ReadAllText() would be the obvious way.
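Putting both suggestions together (a sketch; note that plain string.Replace is enough here, no regex needed):

```csharp
// Sketch: normalize everything to LF first, then expand LF to CRLF,
// using the File convenience methods instead of manual stream handling.
using System;
using System.IO;

class NormalizeDemo
{
    public static string NormalizeToCrLf(string data)
    {
        data = data.Replace("\r\n", "\n");   // safety: collapse any existing CRLF
        return data.Replace("\n", "\r\n");   // now expand every LF
    }

    static void Main()
    {
        string fileName = Path.GetTempFileName();   // stand-in for the real file
        File.WriteAllText(fileName, "seg one!\nseg two!\r\nseg three!\n");

        File.WriteAllText(fileName, NormalizeToCrLf(File.ReadAllText(fileName)));

        Console.WriteLine(File.ReadAllText(fileName));   // every line now ends CRLF
        File.Delete(fileName);
    }
}
```

The two-step replace is idempotent: running it over a file that is already CRLF-terminated leaves the file unchanged.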
Luc Pattyn [My Articles] Nil Volentibus Arduum
Fed up by FireFox memory leaks I switched to Opera and now CP doesn't perform its paste magic, so links will not be offered. Sorry.
|
|
|
|
|
The simple solution is:
data = Regex.Replace(data, "\r?\n", "\r\n");
The question mark means that the \r need not be present, but if it is, it is included in the match.
|
|
|
|
|
Got a SQL table that has 7MB of data. Gotta suck it all down, and some offshore users are complaining about the performance. So I thought to compress the big column. If I dump the entire column (400 rows) to a text file and run it through 7zip, it goes down to 200k, which I'm happy with. When I do it row by row... I'm getting compression, but I end up with 900k of output??? If I write all my rows out to separate .txt files, I get 4 or 5MB of txt files. If I add them all to a 7z archive, the archive also ends up at 200k. So I know my compression code is working. The question is: why, when I add them all to one archive, is it 25% of the size? Is it doing larger-scale RLE or something when you have a bunch of files in the same archive?
|
|
|
|
|
Compression can only work when the data is statistically unbalanced; when all byte values and byte sequences would have the same probability, then there would be no way to get any compression.
Now adding larger chunks of data is likely to result in more compression, as there would be more of a statistical trend, hence more opportunity to compress.
Also, a lot of compression schemes have some overhead with a more or less fixed size (think of it like a dictionary describing the code words that will be used), so more data typically results in relatively smaller overhead. Part of the rationale here is that for smaller amounts of data, the compression ratio isn't all that relevant.
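The effect is easy to demonstrate (a sketch using DeflateStream from the BCL rather than LZMA, but the principle is the same): compressing many small, similar pieces separately loses the cross-piece redundancy and pays the per-stream overhead every time.

```csharp
// Sketch: compare compressing many similar strings one at a time versus
// concatenated into a single stream. The combined stream can exploit
// redundancy across pieces, so it comes out much smaller in total.
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class OverheadDemo
{
    public static int CompressedSize(byte[] input)
    {
        using (var ms = new MemoryStream())
        {
            using (var ds = new DeflateStream(ms, CompressionMode.Compress))
                ds.Write(input, 0, input.Length);
            return ms.ToArray().Length;   // ToArray is valid after close
        }
    }

    static void Main()
    {
        string piece = "function f() { return 'mostly similar javascript'; }\n";

        int separate = 0;
        var all = new StringBuilder();
        for (int i = 0; i < 400; i++)
        {
            string s = piece + i;   // slight variation per "row"
            separate += CompressedSize(Encoding.UTF8.GetBytes(s));
            all.Append(s);
        }
        int combined = CompressedSize(Encoding.UTF8.GetBytes(all.ToString()));

        // combined is far smaller than the sum of the separate sizes
        Console.WriteLine("separate: " + separate + ", combined: " + combined);
    }
}
```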
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
Well, let me give you more info then:
* table has ~400 rows
* schema is int, int, varchar(80), varchar(256), varchar(max), varchar(80), TimeStamp
- varchar(256) column is mostly NULL
- last varchar(80) column is generally NULL, but occasionally may have a < 16 char string in it...
- the varchar(max) column is what I am dealing with
* the varchar(max) column contains javascripts
* javascript size varies from about 700 bytes to 43k
* total size of all javascript written out to separate files is 4.8MB
* if I dump all those 400+ javascripts into a 7zip archive, the resultant archive is ~200k
* if I loop through the rows, compress each column in memory, and add up all the compressed sizes, I end up with 800k of data
I was expecting to end up with something around 200k???
I realize there is probably some header info included, but over 400 files, that shouldn't account for 600k diff.
That's why I was asking whether, when you throw a bunch of files into the same archive, it archives all the files as one chunk vs. individual files, which lets it compress "larger chunks" or whatever.
Trying to find a way to get rid of that extra 600k if possible.
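One way to claw back most of that 600k (a sketch of the idea, not the poster's actual code, and using DeflateStream for brevity rather than the LZMA SDK): write every row into one length-prefixed stream, compress that single stream once, and split it back apart after decompressing. This mimics 7zip's "solid" mode, at the cost of losing per-row random access.

```csharp
// Sketch: pack all rows into one stream (BinaryWriter length-prefixes
// each string), compress once so the dictionary is shared across rows,
// and unpack by decompressing and reading the strings back out.
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class SolidDemo
{
    public static byte[] PackRows(string[] rows)
    {
        using (var raw = new MemoryStream())
        {
            using (var writer = new BinaryWriter(raw, Encoding.UTF8, true))
            {
                writer.Write(rows.Length);
                foreach (string row in rows)
                    writer.Write(row);          // length-prefixed string
            }

            using (var packed = new MemoryStream())
            {
                using (var ds = new DeflateStream(packed, CompressionMode.Compress))
                {
                    byte[] bytes = raw.ToArray();
                    ds.Write(bytes, 0, bytes.Length);
                }
                return packed.ToArray();
            }
        }
    }

    public static string[] UnpackRows(byte[] packed)
    {
        using (var ds = new DeflateStream(new MemoryStream(packed), CompressionMode.Decompress))
        using (var reader = new BinaryReader(ds, Encoding.UTF8))
        {
            var rows = new string[reader.ReadInt32()];
            for (int i = 0; i < rows.Length; i++)
                rows[i] = reader.ReadString();
            return rows;
        }
    }

    static void Main()
    {
        string[] rows = { "var a = 1;", "var b = 2;", "var c = 3;" };
        string[] back = UnpackRows(PackRows(rows));
        Console.WriteLine(back[2]);
    }
}
```

The trade-off: to read any one row you must decompress the whole blob, so this only helps when the rows are always fetched together, as in the scenario described above.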
|
|
|
|
|
SledgeHammer01 wrote: if you throw a bunch of files into the same archive it archives all the files as one chunk vs. individual files
I have never met a compressor that does that; in apps such as 7zip and WinZip, each file is compressed individually, and can be extracted on its own, even after deleting some or all of the others.
SledgeHammer01 wrote: if I loop throw the rows and compress each column in memory ...
How do you compress that? which classes and/or algorithms are involved?
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|
7zip has a "solid" option that does that
|
|
|
|
|
Luc Pattyn wrote: I have never met a compressor that does that; in apps such as 7zip and WinZip,
each file is compressed individually, and can be extracted on its own, even
after deleting some or all of the others.
So I was kind of assuming that, since I can compress 400 .js files into a single .7z archive, compressing 400 .js files into 400 separate .7z files should produce roughly the same total size?
Luc Pattyn wrote: How do you compress that? which classes and/or algorithms are involved?
I downloaded the SDK from here http://www.7-zip.org/sdk.html[^], the 9.22 version. I'm using the native C# version. This is the code I'm using to compress the text blocks.
foreach (SoftDataSet.tblScriptRow row in ds.tblScript.Rows)
{
    int i = row.Text.Length;
    a += i;
    System.Diagnostics.Debug.WriteLine("BEFORE: " + row.Text.Length);

    SevenZip.Compression.LZMA.Encoder encoder = new SevenZip.Compression.LZMA.Encoder();
    SevenZip.CoderPropID[] propIDs =
    {
        SevenZip.CoderPropID.DictionarySize,
        SevenZip.CoderPropID.PosStateBits,
        SevenZip.CoderPropID.LitContextBits,
        SevenZip.CoderPropID.LitPosBits,
        SevenZip.CoderPropID.Algorithm,
        SevenZip.CoderPropID.NumFastBytes,
        SevenZip.CoderPropID.MatchFinder,
        SevenZip.CoderPropID.EndMarker
    };
    Int32 dictionary = 0x00800000;
    Int32 posStateBits = 2;
    Int32 litContextBits = 3;
    Int32 litPosBits = 0;
    Int32 algorithm = 2;
    Int32 numFastBytes = 128;
    string mf = "bt4";
    bool eos = false;
    object[] properties =
    {
        (Int32)(dictionary),
        (Int32)(posStateBits),
        (Int32)(litContextBits),
        (Int32)(litPosBits),
        (Int32)(algorithm),
        (Int32)(numFastBytes),
        mf,
        eos
    };
    encoder.SetCoderProperties(propIDs, properties);

    MemoryStream ms = new MemoryStream(row.Text);
    MemoryStream msOut = new MemoryStream();
    encoder.WriteCoderProperties(msOut);
    encoder.Code(ms, msOut, -1, -1, null);

    System.Diagnostics.Debug.WriteLine("AFTER: " + msOut.Position + " " + (msOut.Position * 100 / i) + "%");
    b += (int)msOut.Position;
}
System.Diagnostics.Debug.WriteLine("1: " + a + " 2: " + b);
If I use the above code to compress the 4.5MB single file, it does get down to 200k like the real 7zip app.
But with the above code doing each row by itself... 1: 4551167 2: 885162
So, somehow it's picking up 685k of extra crap, or???
EDIT: FYI, writing the coder properties to the output stream is only 5 bytes per row, so that only accounts for about 2k.
|
|
|
|
|
Thanks. I wasn't aware 7zip offered an API. I do see a dictionary and some "fast bytes" that tell me there is some overhead to be expected; a few KB wouldn't surprise me. I suggest you perform the little experiment I described in another post in this thread.
Luc Pattyn [My Articles] Nil Volentibus Arduum
|
|
|
|
|