I read in a book that char takes 1 byte on a 32-bit computer and that its range is -128 to 127.

Is -128 or 127 an ASCII value for a character? If yes, then 127 is OK, but what is -128? Please, can anyone help me understand?

Regards,
Arshad Alam
Posted

Basically, 1 byte means 8 bits, and every bit can store either 1 or 0. In the case of signed numbers, the first bit is used to store the sign (+ or -), leaving seven bits for the magnitude. So the maximum value that can be stored in 1 byte is +1111111 (binary), i.e. +127. Because computers use two's complement rather than a plain sign bit, the minimum is 10000000 (binary), i.e. -128, not -127. Now if you convert, you will have your answer.
 
Comments
CS2011 20-May-11 11:39am    
Univote ? Care to explain why ?
R. Erasmus 24-May-11 2:56am    
My 5, good answer.
CPallini 24-May-11 3:54am    
In two's complement representation (the one usually adopted), minimum number is 10000000b, i.e. -128d (the number 11111111b is -1d, see http://en.wikipedia.org/wiki/Two's_complement) .
As mentioned by John and Bob, a character does not have a sign. In general, character values start at zero for the NUL character and increase to whatever maximum can be held in the defined character type. In the type you mention here (one byte) the maximum value is 255, which allows for a total of 256 characters; the standard ASCII set occupies only the first 128 of those (0 to 127), with 128 to 255 used by various "extended" sets. Unicode (the default for C# and Java) uses two bytes per char, giving a total of 65536 values, and multi-byte encodings such as UTF-8 take up to four bytes per character. This paper[^] provides some more detailed information.
 
 
Comments
Albert Holguin 21-May-11 11:19am    
someone came around downvoting, here's my 5
Richard MacCutchan 21-May-11 14:23pm    
Thanks, but it's probably nobody of significance. Frankly I've given up worrying about such people, but I'll try and remember to return the favour when the occasion arises.

Here's an ASCII table[^], and as others have already stated, characters are unsigned.
 
Comments
Richard MacCutchan 24-May-11 5:01am    
I have no idea why this was downvoted, it's a useful piece of information for any beginner who is struggling with character values. Added a 5 to compensate.
Albert Holguin 24-May-11 9:13am    
thanks, it's that univoter that went around
In C/C++, char is one byte, and whether it is signed or not is up to the implementation (signed is the common default on x86 compilers, while some ABIs, such as ARM, make it unsigned). If you are relying on particular values, for example checking a range that crosses 128, use signed char or unsigned char explicitly. See this article[^].

In C#/.NET, char is a distinct two-byte character type. Any arithmetic on it implicitly casts it to a numeric type, so the concept of sign is not relevant. See here[^] and here[^].

Java is similar to C#/.NET.
 
 
Comments
Richard MacCutchan 20-May-11 8:44am    
C++ handles Unicode characters just as well as C#. It can also handle multi-byte characters as required for some of the Asian languages.
BobJanova 20-May-11 9:15am    
Yes, but not with the char type. wchar_t in C++ is a two byte character type, I believe.
Richard MacCutchan 20-May-11 9:56am    
You are right of course, but I think programmers now need to know about Unicode and Multi-byte.
arshad alam 20-May-11 9:35am    
maximum range is ok, but what is -128 minimum
CPallini 24-May-11 3:48am    
It doesn't depend on the implementation: 'char' is a signed 8-bit integer, 'unsigned char' is a unsigned 8-bit integer.
In C and C++ (depending on computer architecture and data model):

Table uses size in bits:
Data Type          LP32  ILP32  ILP64  LLP64  LP64
 char                 8      8      8      8     8
 short               16     16     16     16    16
 int32                             32
 int                 16     32     64     32    32
 long                32     32     64     32    64
 long long (int64)                        64
 pointer             32     32     64     64    64

In C#

List uses size in bytes:
sbyte  - 1
byte   - 1
short  - 2
ushort - 2
int    - 4
uint   - 4
long   - 8
ulong  - 8
char   - 2 (Unicode)
float  - 4
double - 8
bool   - 1


ALSO: check out http://en.wikipedia.org/wiki/Signed_number_representations[^]

AND: see CS2011's solution about "signed number representation" for all you'll ever need to know.

ALSO: The sizes I mentioned above are very dependent on the computer architecture and programming language.

READ: http://www.unix.org/whitepapers/64bit.html[^] for more info on the size/architecture topic.

POINT BEING: The size topic is a big one and there is a lot to take into consideration, hence the low scores on everyone's ratings. The best thing you could do is print the size of the particular data type to the screen and do some tests for yourself.
 
Comments
Albert Holguin 21-May-11 11:19am    
someone came around downvoting, here's my 5
R. Erasmus 23-May-11 2:45am    
Thanks.
Go here:

http://en.wikipedia.org/wiki/Integer_(computer_science)[^]

What you're talking about is a signed integer (-128 to 127).

A char type is actually represented by (can be cast to) an UNsigned integer (0 to 255).
 
Comments
arshad alam 20-May-11 9:36am    
I am asking what -128 means
Albert Holguin 20-May-11 10:19am    
that question doesn't make sense
#realJSOP 20-May-11 11:12am    
Google is your friend.
CS2011 20-May-11 11:26am    
Check my answer.Hope it helps you understand
Albert Holguin 21-May-11 11:18am    
someone came around downvoting, here's my 5
In Java, as all characters are Unicode, they are represented by integer values.
 
 

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


