Why are so many of the numbers I see signed when they shouldn't be?

by prelic   Last Updated July 17, 2017 02:05 AM

I see and work with a lot of software written by a fairly large group of people, and I very often see integer type declarations that strike me as wrong. The two cases I see most often are: using a regular signed integer where negative values are impossible, and declaring a full 32-bit word when a much smaller type would do. I wonder whether the second comes from the compiler aligning variables to the nearest 32-bit word boundary, but I'm not sure that's true in most cases.
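For illustration, here is a minimal C sketch of the two patterns I'm describing, assuming the fixed-width types from <stdint.h>; the variable names are hypothetical, not from any real code base.

    #include <stdint.h>

    /* Pattern 1: a count that can never go negative,
     * yet is declared as a plain signed int. */
    int retry_count = 0;          /* what I usually see   */
    uint32_t retry_count_u = 0;   /* unsigned alternative */

    /* Pattern 2: a value known to fit in 8 bits,
     * yet declared as a full 32-bit int. */
    int day_of_month = 17;        /* what I usually see   */
    uint8_t day_of_month_u8 = 17; /* smaller alternative  */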

When you declare an integer, do you usually choose its type with the size in mind, or do you just use the default "int"?



Answers (1)


Using a signed 32-bit int "just works" in all of these cases:

  • Loops
  • Integer arithmetic
  • Array indexing and sizing
  • Enumeration values

It's an easy choice to make. Picking any other integer type takes thought that most people don't want to spend on it. Standardizing on a common integer type makes everybody's life a bit easier. Most third-party libraries default to signed 32-bit integers, so choosing another integer type would be a hassle from a casting/converting standpoint.
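As a concrete illustration of the loop case, here is a minimal C sketch (my own example, not part of the original answer) of the classic pitfall that a plain signed int sidesteps: a reverse loop with an unsigned counter never terminates, because decrementing it past zero wraps around to a huge value instead of going negative.

    #include <stdio.h>

    int main(void) {
        int data[4] = {10, 20, 30, 40};

        /* With a signed counter, i >= 0 eventually becomes
         * false and the loop terminates as expected. */
        for (int i = 3; i >= 0; --i) {
            printf("%d\n", data[i]);
        }

        /* With an unsigned counter, i >= 0 is always true:
         * when i is 0 and gets decremented, it wraps around
         * to UINT_MAX instead of -1, so this loop would run
         * forever and index out of bounds.
         *
         * for (unsigned i = 3; i >= 0; --i) { ... }
         */
        return 0;
    }

The same wrap-around surprise shows up when mixing signed and unsigned operands in comparisons and arithmetic, which is part of the casting/converting hassle mentioned above.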

Samuel
July 17, 2017 02:04 AM
