|
Thanks OriginalGriff for the information.
I altered my C# code to make the corrections you made, but C# is indicating errors in ClickCounter() next to the word public [method must have a return type] and also an error in ClickCounterBox [The name ClickCounterBox does not exist in the current context].
What must I do to correct this?
Brian
|
|
|
|
|
1) Read my code again, and compare it to yours!
2) Check your Design view and see what you named the textbox you are trying to display in.
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Hi OriginalGriff.
I'm still getting an error on the "ClickCounter()" code [next to the word public]
I've made the changes and this is my updated code.
namespace Click_Counter
{
    public partial class Form1 : Form
    {
        private int Clicker = 0;

        public Form1()
        {
            InitializeComponent();
        }

        public void button1_Click(object sender, EventArgs e)
        {
            ClickCounter();
        }

        public ClickCounter()
        {
            Clicker++;
            ClickCountBox.Text = "You clicked the button " + Clicker.ToString();
        }
    }
}
Brian
|
|
|
|
|
Member 14154627 wrote:
public ClickCounter()
You must declare a return type in all methods other than constructors and destructors, as OriginalGriff already showed you. It should be:
public void ClickCounter()
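For what it's worth, here is a minimal console-only sketch of the same pattern (no WinForms designer here, so the TextBox is replaced by Console output) showing that `void` is the return type the compiler is asking for:

```csharp
using System;

// Console sketch only; the real form's TextBox is replaced by Console output.
class ClickDemo
{
    private int Clicker = 0;

    // "void" is the return type: the method returns nothing,
    // but C# still requires every method to declare one.
    public void ClickCounter()
    {
        Clicker++;
        Console.WriteLine("You clicked the button " + Clicker.ToString());
    }

    static void Main()
    {
        var demo = new ClickDemo();
        demo.ClickCounter(); // prints "You clicked the button 1"
        demo.ClickCounter(); // prints "You clicked the button 2"
    }
}
```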
|
|
|
|
|
|
Thanks for the suggested book.
I managed to download a copy of it and it looks useful.
Brian
|
|
|
|
|
I think I have solved the problem by adding the word "void":
public void ClickCounter()
I remembered that you need to add the word "void" if a method does not return a value.
All seems to work OK now, thanks.
Brian
|
|
|
|
|
Well done!
You should be able to work most of those problems out just by looking at the error messages that VS gives you and a little thinking. But we all make these mistakes from time to time...
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
What is the best alternative to the Microsoft Bot Framework?
I mean, a bot developed with the Microsoft Bot Framework must be deployed on Azure, which is paid.
What are the free alternatives?
I want to develop a bot in .NET and host it in my own server/hosting environment.
cheers
--nitin
=====================================================
The grass is always greener on the other side of the fence
|
|
|
|
|
You will most likely find the answer by using a search engine: Google or Bing being "the best" (whatever that means).
|
|
|
|
|
AFAIK, there is no free alternative; it would be rather expensive to maintain a network like that and open it up for free.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Hello Team,
Good Morning!! We are using C# code, converted from Java, in a Script Task in SSIS in order to compare two report files; based on the missing units, we are trying to generate those files and place them in one folder.
We have four variables that we created in the SSIS package and are calling in the Script Task.
It gives an error as below:
- [ Dts Script Task has encountered an exception in user code ]
- [ Cannot load script for execution ]
Need some help in this regard.
|
|
|
|
|
This is not a good question - we cannot work out from that little what you are trying to do.
Remember that we can't see your screen, access your HDD, or read your mind - we only get exactly what you type to work with.
We have no idea what the Java code looked like, what the resulting C# code looks like, or what your script is doing! In short: we have no information at all, and no access to your system to get it...
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
UPDATE #2:
After going back to my original test number, and decoding it (literally) bit by bit, in each of the formats, before and after conversion, I'm baffled. The float->double conversion does exactly what I would do.
This makes it even more difficult to explain the outcome of my earlier test programs. The conversion itself seems accurate, but I'm clearly missing something. So, for now, I'm abandoning this post.
I'll come back and update it when I answer my own question. Though, that will probably be after I write an article explaining the ridiculous trivia of exactly what C# does with each of the floating-point formats. After which, I've really got to get a life
UPDATE #1:
After further consideration, I reassert that there is something uniquely, and inexplicably, terrible about the float->double conversion!
Some have suggested that this was something inherent in how floating-point numbers are stored and not something uniquely terrible about float->double conversions. After a little bit of convincing, I concede that my original description did not exclude this possibility. While I intentionally chose a number with six decimal digits of precision (the limits of IEEE 754 binary32), perhaps unintentional bias led me to choose numbers that were particularly susceptible to this issue.
So, to disprove my original premise, I wrote a new test program (included at the end of this post). This program generated random numbers with between one and six digits of precision. To avoid bias towards any one of the three floating-point formats, it calculates the "ideal" text from an integral value using only string manipulation to format it as floating-point.
It then counts the number of times the ToString for the assigned values (decimal, double, and float) and converted values (decimal->double, decimal->float, double->decimal, double->float, float->decimal, and float->double) differ from this ideal value.
After running the program for 1 million cycles, the results were as follows:
decimal: 0
double: 0
float: 0
decimal->double: 0
decimal->float: 0
double->decimal: 0
double->float: 0
float->decimal: 0
float->double: 750741
I reassert that there is something uniquely, and inexplicably, terrible about the float->double conversion! As mentioned, the code is included at the end of this post.
ORIGINAL:
OK, this one drove me nuts!
I was developing some test cases and took a short cut. I used a float to represent some simple test data. I made the assumption that, since it has the least precision, anything simple that can be accurately represented in this format could be accurately represented in double or decimal (following conversion).
Now, to be clear, the value I chose (123.456) was only 6 significant decimal digits. This is well within the limits of the IEEE 754 binary32 / single format for accurate representation.
So, I went on my merry way and a bunch of my tests failed! The specific tests that failed all involved float to double conversions.
Imagine my surprise! I hadn't intended to test C# itself, but it appears I had. Again, the chosen value (123.456) is well within the required IEEE 754 accuracy limits: 6 decimal digits for single and 15 decimal digits for double.
So, I conducted a bunch of tests. Of the six possible floating-point conversions (decimal->double, decimal->float, double->decimal, double->float, float->decimal, and float->double), only float to double yielded unexpected results. Comically, even casting the float first to decimal and then to double yielded correct results.
So, I'll easily work around this problem, but this seems to be a bug!
Is anyone familiar with the exact technique C# uses to convert these values? Is it a hardware instruction (on most platforms) or is there code? If the former, then maybe different platforms would yield different results?
Basically, I want to figure out who to blame (Microsoft or Intel)
In case anyone thinks I'm imagining this issue, below is the test code I used to determine the exact characteristics of the problem. After each Console.WriteLine, I include a comment with the output it produced.
using System;

namespace FloatingPointConversion
{
    public class Program
    {
        public static void Main(string[] args)
        {
            decimal decimalValue = 123.456M;
            double doubleValue = 123.456D;
            float floatValue = 123.456F;
            Console.WriteLine($"{nameof(floatValue)} = {floatValue}");                                    // floatValue = 123.456
            Console.WriteLine($"{nameof(decimalValue)} = {decimalValue}");                                // decimalValue = 123.456
            Console.WriteLine($"{nameof(doubleValue)} = {doubleValue}");                                  // doubleValue = 123.456
            Console.WriteLine($"(float){nameof(decimalValue)} = {(float)decimalValue}");                  // (float)decimalValue = 123.456
            Console.WriteLine($"(double){nameof(decimalValue)} = {(double)decimalValue}");                // (double)decimalValue = 123.456
            Console.WriteLine($"(decimal){nameof(doubleValue)} = {(decimal)doubleValue}");                // (decimal)doubleValue = 123.456
            Console.WriteLine($"(float){nameof(doubleValue)} = {(float)doubleValue}");                    // (float)doubleValue = 123.456
            Console.WriteLine($"(decimal){nameof(floatValue)} = {(decimal)floatValue}");                  // (decimal)floatValue = 123.456
            Console.WriteLine($"(double){nameof(floatValue)} = {(double)floatValue}");                    // (double)floatValue = 123.456001281738
            Console.WriteLine($"(double)((decimal){nameof(floatValue)}) = {(double)((decimal)floatValue)}"); // (double)((decimal)floatValue) = 123.456
        }
    }
}
using System;

namespace FloatingPointConversion
{
    public class Program
    {
        private static readonly decimal[] Divisors = new decimal[] { 1, 10, 100, 1000, 10000 };
        private static readonly string[] UnexpectedLabels = new string[]
        {
            "decimal", "double", "float", "decimal->double", "decimal->float", "double->decimal",
            "double->float", "float->decimal", "float->double"
        };
        private static readonly int[] UnexpectedCounts = new int[UnexpectedLabels.Length];
        private const int MaximumSixDigitNumber = 999999;
        private const int NumberOfAttempts = 1000000;

        public static void Main(string[] args)
        {
            RandomTest();
            for (int index = 0; index < UnexpectedLabels.Length; index++)
                Console.WriteLine($"{UnexpectedLabels[index]}: {UnexpectedCounts[index]}");
        }

        private static void RandomTest()
        {
            var random = new Random();
            for (int attempt = 0; attempt < NumberOfAttempts; attempt++)
            {
                int intValue = random.Next(MaximumSixDigitNumber + 1);
                int shift = random.Next(0, Divisors.Length);
                string idealText = GetIdealText(intValue, shift);
                decimal decimalValue = intValue / Divisors[shift];
                float floatValue = intValue / (float)Divisors[shift];
                double doubleValue = intValue / (double)Divisors[shift];
                if (decimalValue.ToString() != idealText)
                    UnexpectedCounts[0]++;
                if (doubleValue.ToString() != idealText)
                    UnexpectedCounts[1]++;
                if (floatValue.ToString() != idealText)
                    UnexpectedCounts[2]++;
                if (((double)decimalValue).ToString() != idealText)
                    UnexpectedCounts[3]++;
                if (((float)decimalValue).ToString() != idealText)
                    UnexpectedCounts[4]++;
                if (((decimal)doubleValue).ToString() != idealText)
                    UnexpectedCounts[5]++;
                if (((float)doubleValue).ToString() != idealText)
                    UnexpectedCounts[6]++;
                if (((decimal)floatValue).ToString() != idealText)
                    UnexpectedCounts[7]++;
                if (((double)floatValue).ToString() != idealText)
                    UnexpectedCounts[8]++;
            }
        }

        private static string GetIdealText(int intValue, int shift)
        {
            string text = intValue.ToString();
            if (shift == 0)
                return text;
            return TrimTrailingZeros(shift >= text.Length
                ? "0." + text.PadLeft(shift, '0')
                : text.Insert(text.Length - shift, "."));
        }

        private static string TrimTrailingZeros(string text)
        {
            int length = text.Length;
            while (text[length - 1] == '0')
                length--;
            if (text[length - 1] == '.')
                length--;
            return length < text.Length ? text.Substring(0, length) : text;
        }
    }
}
modified 4-Mar-19 0:36am.
|
|
|
|
|
Float (and Double) types are not directly convertible to Decimal due to the fact that they are held as binary values. See What Every Computer Scientist Should Know About Floating-Point Arithmetic[^] for the full explanation. Unless you specifically need to use floating-point types (e.g. for statistical analysis etc.), you should stay well clear of them. For financial applications, always use integer or decimal types.
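To see what "held as binary values" means in practice, here is a short sketch (mine, not Richard's) that prints the value a float actually stores once you ask for enough digits:

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 123.456F;

        // 7 significant digits round-trips a float, so it looks exact...
        Console.WriteLine(f.ToString("G7"));            // 123.456

        // ...but widening to double and asking for more digits reveals
        // the nearest binary32 value that was actually stored.
        Console.WriteLine(((double)f).ToString("G17")); // 123.45600128173828
    }
}
```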
|
|
|
|
|
UPDATE: Seems I'm simply wrong on my point below. I was misled by some less than clear wording in one part of the specification. That said, I still find it odd that float->double is the only one of the six possible C# floating-point conversions that is consistently less accurate in its apparent result. I tried a bunch of different values with this same consistent outcome. Something seems wrong to me.
I do understand your point. It is a great generalized warning for those unwilling to learn the specifics of the precise floating-point data type they are using.
If I were using something other than an IEEE 754 binary32 data type (float), or using more than six significant decimal digits, I would completely agree with you. However, since neither of these is the case here, I respectfully and completely disagree.
The behavior is simply inconsistent with the IEEE 754 specification. The specification of binary32 provides 23 explicit bits for the significand. This provides accuracy for a minimum of six significant decimal digits. The specification explicitly details that the representation of this number of significant decimal digits (six or fewer) must be exactly accurate.
In the case of binary64, which has 52 explicit bits for the significand, the specification requires accuracy to a minimum of 15 significant decimal digits.
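Those six- and fifteen-digit figures follow directly from the significand widths; a quick back-of-the-envelope check (my own sketch, not part of the original post):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Guaranteed decimal digits ~= floor(explicit significand bits * log10(2)).
        Console.WriteLine(Math.Floor(23 * Math.Log10(2))); // 6  (binary32 / float)
        Console.WriteLine(Math.Floor(52 * Math.Log10(2))); // 15 (binary64 / double)
    }
}
```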
modified 3-Mar-19 10:45am.
|
|
|
|
|
It depends on the number you start with. Since accuracy is not guaranteed with floats and doubles, you can get inconsistencies, because the number will often be an approximation. Floating point's strength is (was) its ability to represent very large or very small numbers with reasonable, but not absolute, accuracy. For most business applications it should, as I suggested earlier, be avoided like the plague.
Eric Lynch wrote: those unwilling to learn the specifics of the precise floating-point data type they are using. Or in many cases (see QA) those who are still being taught to use it.
|
|
|
|
|
Yeah, we cross-posted. I had already retracted my...I'll call it "point"? Regrettably, I misinterpreted the IEEE 754 specification. I can only wish that I had realized my mistake before posting
That said, I still suspect there is something less than ideal occurring with the float->double conversion. I'm working on some code to more rigorously explore the issue. I'll either prove or disprove my suspicions. I'll come back later and post whatever I find.
|
|
|
|
|
I just tried something similar in C++ and it produces the correct results. Now I am
|
|
|
|
|
I finished my new tests and updated my original post to include the results. I stand by my original premise. There is something uniquely, and inexplicably, terrible about float->double conversions.
The test randomly generated 1 million numbers with between one and six digits of precision. Of the six possible floating-point conversions, only the ToString for float->double ever differed from the "expected" text. It did so a whopping 750,741 times. I believe this is what they call "statistically significant"
|
|
|
|
|
OK, this is sadly nearing the obsessional. I've gone through the effort of decoding every dang bit in the IEEE 754 formats.
As near as I can tell, the float->double conversion is doing absolutely what I would expect of it. Though, it is still yielding a result that has the appearance of being worse.
I'm currently baffled...maybe double.ToString() is the culprit? Maybe it's something else entirely?
I'm going to give this a whole lot more thought tomorrow...at a decent hour. Though, since it has no practical impact on anything I'm actually doing, I should probably let it go. Regrettably, intellectual curiosity has a firm hold of me at this point
Below you'll find the output from my latest program, where I enter the text "123.456". "Single" / "Double" are conversions from the results of decimal.Parse. "Single (direct)" / "Double (direct)" are the results of float.Parse / double.Parse. The remainder are the indicated conversions of the results of a float.Parse.
Single: Sign=0, Exponent=6 (10000101), Significand=7793017 (11101101110100101111001)
123.456
Single (direct): Sign=0, Exponent=6 (10000101), Significand=7793017 (11101101110100101111001)
123.456
Double: Sign=0, Exponent=6 (10000000101), Significand=4183844053827191 (1110110111010010111100011010100111111011111001110111)
123.456
Double (direct): Sign=0, Exponent=6 (10000000101), Significand=4183844144021504 (1110110111010010111100100000000000000000000000000000)
123.456001281738
float->double: Sign=0, Exponent=6 (10000000101), Significand=4183844144021504 (1110110111010010111100100000000000000000000000000000)
123.456001281738
float->decimal->double: Sign=0, Exponent=6 (10000000101), Significand=4183844053827191 (1110110111010010111100011010100111111011111001110111)
123.456
After this much effort, I guess I'll eventually be forced to write an article on every useless bit of trivia I can find about all of these formats
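For anyone who wants to reproduce the decoding, here is a minimal sketch (not the poster's actual program) that pulls the sign, exponent, and significand out of a float:

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 123.456F;

        // Reinterpret the float's raw IEEE 754 binary32 bits as an int.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);

        int sign = (bits >> 31) & 1;                 // 1 sign bit
        int exponent = ((bits >> 23) & 0xFF) - 127;  // 8 exponent bits, bias 127
        int significand = bits & 0x7FFFFF;           // 23 explicit significand bits

        // Matches the "Single" line above: Sign=0, Exponent=6, Significand=7793017
        Console.WriteLine($"Sign={sign}, Exponent={exponent}, Significand={significand}");
    }
}
```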
|
|
|
|
|
To add to Richard's comments, you might get a surprise if you print the values to more digits of precision. There are literally billions of distinct numbers which print as 123.456 to 3 decimal places.
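A quick illustration of that point, with two values chosen arbitrarily for the example: distinct doubles can be indistinguishable once rounded to three decimal places.

```csharp
using System;

class Program
{
    static void Main()
    {
        // Two different doubles (arbitrary example values)...
        double a = 123.45551;
        double b = 123.45649;

        // ...that both print as 123.456 at three decimal places.
        Console.WriteLine(a.ToString("F3")); // 123.456
        Console.WriteLine(b.ToString("F3")); // 123.456
        Console.WriteLine(a == b);           // False
    }
}
```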
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Based on my misinterpretation of the IEEE 754 specification, which I foolishly shared in a response to Richard, it seems today is a day full of surprises for me
That said, I would not expect that converting from IEEE 754 binary32 to binary64 would make things apparently worse. I tried a bunch of different values. Of the six possible floating-point conversions, this one consistently yields the worst apparent outcome.
If anything, I would have expected converting binary64 to binary32 to have the worst apparent outcome.
|
|
|
|
|
I conducted some more rigorous experiments. This explanation doesn't match the experimental evidence.
I now feel somewhat certain, based on evidence from a million randomly generated numbers, that there is something uniquely, and inexplicably, terrible about float->double conversions.
I qualify it with "somewhat", because I refuse to be completely wrong, about the same thing, twice in a single day
|
|
|
|
|
This discussion from 2011 looks relevant:
What you get in the more precise representation (past a certain point) is just garbage. If you were to cast it back to a float FROM a double, you would have the exact same precision as you did before.
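The round-trip claim in that quote is easy to verify; a minimal sketch:

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 123.456F;
        double d = f;           // widening just appends zero bits: no precision gained
        float back = (float)d;  // narrowing recovers the original float exactly

        Console.WriteLine(back == f);           // True
        Console.WriteLine(back.ToString("G7")); // 123.456
    }
}
```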
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|