In C/C++, char is one byte, and whether it is signed is implementation-defined. (In my experience, if I'm remembering right, it is usually unsigned.) If you rely on particular values, for example checking a range that crosses 128, use signed char or unsigned char explicitly. See this article[^].
In C#/.NET, char is a distinct two-byte character type. Any arithmetic on it implicitly converts it to a numeric type, so the concept of sign does not apply. See here[^] and here[^].
Java is similar to C#/.NET: its char is also an unsigned 16-bit type, and arithmetic on it promotes to int.