Well, it's installing editing software that edits music. There's a button that lets them hear a sample of the program, which is what I need code for: the music playing in the installer. Sorry that my wording is a bit confusing.
lekira
leKira is my username here, but I'm known worldwide as Naiakoa. Call me by either of these.
lekira wrote: editing software that Edits music
Ah, that makes a lot more sense then.
Never underestimate the power of human stupidity
RAH
lekira wrote: Well it's installing editing software that edits music. There's a button that lets them hear a sample of the program, which is what I need code for: the music playing in the install
Wait a minute. You say you've created music editing software, but you need code to play music in the installer?? The code shouldn't be any different from what you're already using to play the music in your software. Or did you create music editing software that can't play the music it edits?
lekira wrote: but I want it to have a feature of playing music while it's installing.
WHY?? Most installations don't take that kind of time. It's going to be a bunch of work for what kind of benefit?? What are you going to play, elevator music?? You have to be VERY careful about the kind of music you play. Why? Because the quality, or annoyance, of the installer is going to give your customers a first impression of your app before they even double-click its icon.
I already know about that... I need code so that when the user clicks a button in the application, the music starts (they click it themselves), and another button so that, when clicked, they can just do the installation without the music. I don't force people to do stuff.
lekira
leKira is my username here, but I'm known worldwide as Naiakoa. Call me by either of these.
Hello,
I'm trying to implement a client-server system that downloads a file (as a byte[] array, through a stream) from an ASP.NET server. I'm fetching the file in chunks because, as reported in many other places, the server-side Response.WriteFile and Response.TransmitFile are not reliable solutions for sending big files.
The problem is that when I run the server on the local machine to test the system it works very well, but when I test it against the live site, the client receives, at a few random places in the stream, arrays of zeros (about 1000 bytes, but not the same length every time) mixed in with the valid data. Since this happens only over a remote connection, I suspect the data is being altered during transmission.
Should I validate the transferred data with something like a checksum for every chunk sent, and request that part again if it's invalid, or is there another solution?
the server code:
int bufferLength = 2048;
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + Request["document"]);
Response.AddHeader("Content-length", bytes.Length.ToString());
for (int i = 0; i < bytes.Length; i += bufferLength)
{
    if (Response.IsClientConnected)
    {
        if (i + bufferLength < bytes.Length)
            Response.OutputStream.Write(bytes, i, bufferLength);
        else
            Response.OutputStream.Write(bytes, i, bytes.Length - i);
        Response.Flush();
    }
    else
    {
        break;
    }
}
Response.Close();
client source:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("");
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
HttpWebResponse webResponse = (HttpWebResponse)request.GetResponse();
response = webResponse.GetResponseStream();
bytes = new byte[Convert.ToInt32(webResponse.Headers["Content-length"])];
for (int i = 0; i < bytes.Length; i += AppContext.BufferLength)
{
    try
    {
        if (i + AppContext.BufferLength > bytes.Length)
            response.Read(bytes, i, bytes.Length - i);
        else
            response.Read(bytes, i, AppContext.BufferLength);
        if (i + AppContext.BufferLength > bytes.Length)
            control.ReportProgress(i + AppContext.BufferLength, bytes.Length - i);
        else
            control.ReportProgress(i + AppContext.BufferLength, bytes.Length);
    }
    catch (Exception ex)
    {
        exception = ex.Message;
    }
}
response.Close();
No, the data is not altered during transmission; you are just reading it wrong.
The Read method returns the number of bytes that were actually read, and that can be less than the number of bytes requested. So you have to take care of the return value from the Read method, and use it to determine whether there is more data to read and how far to advance the index for the next read.
You should read until the Read method returns zero. That means it has reached the end of the stream.
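For illustration, a read loop that honors Read's return value might look like this (a sketch, not the original poster's code; the stream, expected length, and buffer size are stand-ins):

```csharp
using System;
using System.IO;

class ReadLoopDemo
{
    // Reads the stream until the buffer is full or the stream ends.
    // Read may return fewer bytes than requested; zero means end of stream.
    public static byte[] ReadAll(Stream response, int expectedLength, int bufferLength)
    {
        byte[] bytes = new byte[expectedLength];
        int offset = 0;
        while (offset < bytes.Length)
        {
            int read = response.Read(bytes, offset, Math.Min(bufferLength, bytes.Length - offset));
            if (read == 0)
                break;          // end of stream reached early
            offset += read;     // advance by what was actually read
        }
        if (offset != bytes.Length)
            throw new IOException("Connection closed before the whole file arrived.");
        return bytes;
    }
}
```

The runs of zeros the poster saw were simply stretches of the pre-allocated buffer that Read never filled, because the loop advanced by the full buffer length even when Read returned fewer bytes.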
Despite everything, the person most likely to be fooling you next is yourself.
Hi guys, how can I rename a file using dot notation? If I type the absolute path with the directory name then the file gets renamed, but using something like the code below actually moves the file away:
string src = @"C:\work\..\..\File.txt";
string dest = @"C:\work\..\..\Fil007.txt";
File.Move(src, dest);
What makes you think this will work?
Why won't you use the full path? It must be available in IO.FileInfo; why do you refuse to use it?
Never underestimate the power of human stupidity
RAH
Dude, the reason I wanted to do it this way is because I don't want to type the path. Think about it again: who likes it? It's long, cumbersome and complicated. Don't you think it would be nice if we could just leave that to the system and focus on which file we want to rename or move, rather than worrying about the absolute path? I wish Microsoft would adopt this kind of path pattern.
Dave has given the best answer: why are you typing at all, why not an open file dialog? The idea that the dot notation should work is ludicrous; how is the system to decide if there are multiple folders in C:\work\?
Never underestimate the power of human stupidity
RAH
There is no simple way for you to accomplish this. If you wish to do this you will have to create a loop that searches two layers deep into every folder looking for the specified file. It is much easier to accomplish if you have the exact filename/file path.
Regards,
Thomas Stockwell
Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Visit my Blog
That "dot" notation does not work how your example implies it does. The dots just mean one of two things. A path starting with a single dot, ".\" means "current directory". A path starting with two dots, "..\" means the parent directory to the "current directory".
In your case, the path starts at C:\work, then goes up one directory, to C:\, then up again, to C:\, then specifies the file, File.txt. The destination is also in the root, C:\Fil007.txt. So, what you essentially did, was rename the file.
Using "dotted notation" like this is not recommended because of the assumptions it makes about the "current directory".
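For illustration, Path.GetFullPath will show you what a dotted path resolves to before you hand it to File.Move (a minimal sketch):

```csharp
using System;
using System.IO;

class DotPathDemo
{
    // Resolves "." and ".." segments the same way File.Move will see them.
    // e.g. on Windows, Resolve(@"C:\work\..\..\File.txt") returns @"C:\File.txt":
    // C:\work -> up to C:\ -> up again (still C:\) -> File.txt in the root.
    public static string Resolve(string dottedPath)
    {
        return Path.GetFullPath(dottedPath);
    }
}
```

Printing the resolved path before the move makes it obvious where a dotted source or destination really points.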
I have a chart control, but what I need is a powerful chart control.
It should show about 14.5 million points without performance problems.
Is there any chart control that can show 14.5 million points?
I tried using a normal chart control; it takes a long time to refresh.
I tried using DirectX to draw lines from the 14.5 million points; that also takes a long time.
Does anybody know how to handle 14.5 million points?
I think you have a design problem; placing that many points on a chart is, well, pointless. Surely you can summarise the information into a smaller dataset without compromising the accuracy of the display.
Never underestimate the power of human stupidity
RAH
I disagree. I've run into a similar problem doing geostatistics. Try this scenario:
You have 1000 well logs with readings every 1/4 foot and an average interval of interest of 1000 ft (about 4000 points per well and about 4,000,000 points all together). Now you want to calculate the vertical variogram, so you define lag categories every 1/4 ft, giving lags of 0.25, 0.5, 0.75, 1.0, etc. up to the maximum, say 1000 ft, or about 4000 categories.
To calculate the variogram, for each lag category you take all possible combinations of points in each well that are separated by the given category distance range (sometimes the categories overlap, so there may easily be more points than you might expect), calculate the square of the difference, sum them all, then divide by the number of point pairs in each lag category. It's a standard calculation. That, of course, gives you an average variance vs. lag distance, the definition of a variogram, which is very smoothed.
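As an illustrative sketch of that standard calculation (the experimental semivariogram for one well of regularly spaced readings, with the conventional 1/2 factor; the readings array and lag count are stand-ins, not the author's code):

```csharp
using System;

class VariogramDemo
{
    // Experimental semivariogram for regularly spaced readings:
    // for each lag k (in samples), average the squared differences of all
    // pairs separated by exactly that lag, then halve it.
    public static double[] Semivariogram(double[] z, int maxLag)
    {
        double[] gamma = new double[maxLag + 1];
        for (int k = 1; k <= maxLag; k++)
        {
            double sum = 0.0;
            int n = z.Length - k;            // number of pairs at this lag
            for (int i = 0; i < n; i++)
            {
                double d = z[i + k] - z[i];
                sum += d * d;
            }
            gamma[k] = sum / (2.0 * n);      // classic 1/(2N) normalization
        }
        return gamma;
    }
}
```

For a perfectly linear log, the lag-k semivariance comes out to k squared over two, which is a handy sanity check.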
Now you want to see how much smoothing has been done and if there are some outliers that may be screwing up the calculations or represent bad data. The standard means of doing that is by plotting the variance of the individual points in a variance cloud. The total number of points is about 1000 * (4000 + 3999 + 3998 + ...) = 1000 * 3000 / 2 = 2,000,000 points. You graph the variances of each combination of points vs. lag distance to generate the variance cloud, so you end up having around 2,000,000 points on the graph.
Of course, that's all well and good, except when I find a strange outlier point, I want to know where it came from, so I need to be able to identify which points came from which wells at what combination of depth, so I can go back to the raw data and see if it makes sense.
In this example it was only 2 million points, but I can easily see where you can get many, many millions if you go to overlapping lag ranges, where point combinations may be in more than one lag range, and/or three dimensions rather than two.
To do that I use my own graphics routines and some heuristics to try to figure out which areas of the graph will be totally covered with points and only draw them once. In the case in question, I have no idea if that's possible. Another potential trick is to do the calculations only once (fast) and save them in memory (fast) so you can generate the graph without recalculating anything. That of course involves a trade-off between memory and speed, which may or may not be an option. If you start swapping memory to disk, it may be even slower.
Unfortunately, I haven't found a great way to speed up point selection from the cloud. My approach is to select an area on the cloud plot, then search through all the points to find out which ones fall within that area. If it has to hit the database every time, it's just slow, but it beats the heck out of doing it by hand!
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
Walt Fair, Jr. wrote: My approach is to select an area on the cloud plot,
I'm not debating the need for multi-million data points; I have an issue with attempting to display them. While I did not (and don't want to) grasp all of your explanation, I gather that one point is useless, as you need to select an area of the cloud to work with.
Never underestimate the power of human stupidity
RAH
OK, you did say that displaying that many points was probably a design problem, right? So how would you suggest designing a system with multi-millions of points and a need to show them to the user so that they can pick 1 or a few? How would you summarize the info and still allow single points to be selected without sacrificing speed? I'm certainly interested in learning something here and if it works, I can apply it immediately.
Mycroft Holmes wrote: I gather that 1 point is useless as you need to select an area of the cloud to work with.
No, I'd prefer to pick just 1 point, and find out where it came from, but the best I've been able to figure out how to do is pick a small area and then let the user refine from there.
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
While I don't do a lot of work with charts, I do a lot of point analysis. Our people are not interested in points that conform to the rules; they are after the exceptions. We spend most of our time refining the rules so the exceptions are made available to the user for further analysis.
Walt Fair, Jr. wrote: without sacrificing speed
Bloody big machines and some horrendous indexing on the tables. Some of our analysis runs take 30 minutes to 3 hours. It is a constant battle to get the best performance from the hardware and database; we are forever tweaking a process or changing an index. We have found that doing basic, regular maintenance on the database can give us excellent benefits. We do not have a trained DBA, so it is up to us developers to do the best we can.
Never underestimate the power of human stupidity
RAH
Mycroft Holmes wrote: we spend most of our time refining the rules so the exceptions are made available to the user for further analysis.
Wish that were possible!
Mycroft Holmes wrote: We have found that doing basic, regular maintenance on the database can give us excellent benefits.
Yep, same here. Except the stuff I do is supposed to be interactive. Sometimes we go for a cup of coffee while waiting to interact, though ...
CQ de W5ALT
Walt Fair, Jr., P. E.
Comport Computing
Specializing in Technical Engineering Software
Two problems: reduction and summary. Pick your partitioning technique (e.g. a quadtree for your 2D data) and then summarize distribution/density at the node level. Then when you render, you can perform appropriate culling for visibility and draw either the summary or the individual points.
Selecting from a set of millions of points, and then putting them on the screen, is a fairly common problem in game graphics (and there we draw triangles, texture/light/mangle them with pixel and vertex shaders, and still manage to hammer them out at 60 fps).
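As a minimal sketch of the summarize-then-render idea, using a fixed grid in place of a full quadtree (the bucket counts stand in for per-node density summaries; the grid size and bounds here are arbitrary):

```csharp
using System;

class DensityGridDemo
{
    // Buckets 2D points into a w x h grid of counts. A renderer can then
    // shade one cell per bucket instead of drawing millions of individual
    // points; a quadtree refines the same idea adaptively per node.
    public static int[,] Summarize(double[] xs, double[] ys,
                                   double xMin, double xMax,
                                   double yMin, double yMax,
                                   int w, int h)
    {
        int[,] counts = new int[w, h];
        for (int i = 0; i < xs.Length; i++)
        {
            int cx = (int)((xs[i] - xMin) / (xMax - xMin) * w);
            int cy = (int)((ys[i] - yMin) / (yMax - yMin) * h);
            if (cx == w) cx = w - 1;   // points exactly on the max edge
            if (cy == h) cy = h - 1;
            if (cx >= 0 && cx < w && cy >= 0 && cy < h)
                counts[cx, cy]++;
        }
        return counts;
    }
}
```

A renderer would draw the dense cells as filled blocks and fall back to individual points only where the count is small, which also keeps single-point selection feasible.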
stancrm wrote: It should show about 14.5 million points without performance problem.
First, how many pixels can your monitor show? (1280x1024 = 1.3 million.) You simply can't show all those points unless your monitor supports resolutions that no video card can generate.
Second, I don't care which charting library you use, you WILL see a lag graphing 14.5 million points.
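One common answer to this pixel limit is min/max decimation: collapse the series to one minimum/maximum pair per pixel column before handing it to the chart. A sketch, not tied to any particular charting library:

```csharp
using System;

class DecimateDemo
{
    // Collapses a long sample series to one (min, max) pair per pixel column.
    // Drawing a vertical line from min to max in each column looks the same
    // on screen as plotting every sample, at a tiny fraction of the cost.
    public static (double Min, double Max)[] Decimate(double[] samples, int columns)
    {
        var result = new (double Min, double Max)[columns];
        for (int c = 0; c < columns; c++)
        {
            int start = (int)((long)c * samples.Length / columns);
            int end = (int)((long)(c + 1) * samples.Length / columns);
            double lo = double.MaxValue, hi = double.MinValue;
            for (int i = start; i < end; i++)
            {
                if (samples[i] < lo) lo = samples[i];
                if (samples[i] > hi) hi = samples[i];
            }
            result[c] = (lo, hi);
        }
        return result;
    }
}
```

With 14.5 million samples and a ~1500-pixel-wide plot area, this reduces the draw list from millions of points to a few thousand line segments while preserving every visible spike.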
I know; if I show all those 14.5 million points, I see only a green rectangle.
But my customer wants it. It's hard to explain it to him...
Good luck with that. The only thing that has a chance of that kind of performance is DirectX and a good video card. Treat the graph like an object in a video game and you might get what you want.
I have also tried using DirectX. Up to about 2-3 million points it is OK, but after that you cannot see the lines anymore; you can only see the background. The lines are gone. I think DirectX also has a problem with that.