int vs long when sizes are identical and issue with int32_t (from rule 6.3) in rule 10.1 example
#1
Rule 10.1 contains the following example:
Code:
s32a = s16a + (int32_t)20000; /* compliant */
Rule 6.3 suggests this definition of int32_t:
Code:
typedef signed int int32_t;
The addition expression is a complex expression per 6.10.5.
The underlying type of s16a is int16_t.
The underlying type of (int32_t)20000 is int16_t: since int32_t is defined as int, the "actual type" of this integral constant expression is (signed) int, and 20000 fits in int16_t. Per 6.10.4:
Quote:If the actual type of the expression is (signed) int, the underlying type is defined to be the smallest signed integer type which is capable of representing its value.
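To illustrate my reading of that clause (the constant 40000 below is a hypothetical example of mine, not taken from the Guidelines), the value of the constant, rather than the cast, determines the underlying type:
Code:
typedef signed short int16_t; /* Rule 6.3 style typedefs, assuming a 16-bit short */
typedef signed int   int32_t; /* and a 32-bit int                                 */

int32_t a = (int32_t)20000; /* 20000 fits in int16_t  -> underlying type int16_t */
int32_t b = (int32_t)40000; /* 40000 needs 32 bits    -> underlying type int32_t */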

The underlying type of the sum is then int16_t (not int32_t as the example seems to assume).
An expression with underlying type int16_t is then assigned to an int32_t object, which requires a conversion that changes the underlying type; because the sum is a complex expression, that conversion violates rule 10.1. Thus the example appears not to be compliant.
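Putting the steps together, this annotated version shows my reading of the example (the annotations are mine, not text from the Guidelines):
Code:
int16_t s16a;
int32_t s32a;

/* underlying type of s16a:           int16_t                           */
/* underlying type of (int32_t)20000: int16_t (20000 fits, per 6.10.4)  */
/* underlying type of the sum:        int16_t (complex expression)      */
/* the assignment converts a complex expression of underlying type      */
/* int16_t to int32_t, which appears to violate rule 10.1               */
s32a = s16a + (int32_t)20000;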

Is this example incorrect when int32_t is int?
It appears that the behavior would be different if int32_t were defined as long, because the integral constant expression rule would not apply. Is this intentional? How should this potential difference in the definition be handled?
If int and long are the same size, does a conversion between int and long violate rule 10.1, given that the rule only allows conversions to wider types and not to types of the same size?
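For comparison, here is a sketch of the alternative definition I have in mind (assuming a platform where int and long are both 32 bits; the reading of the underlying type is mine, not from the Guidelines):
Code:
typedef signed short int16_t;
typedef signed long  int32_t; /* alternative Rule 6.3 definition */

int16_t s16a;
int32_t s32a;

/* Here the cast is to long rather than int, so the integral constant  */
/* expression rule quoted above would not apply and the underlying     */
/* type of (int32_t)20000 would presumably remain int32_t.             */
s32a = s16a + (int32_t)20000;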