Welcome to Vigges Developer Community - Open, Learning, Share

0 votes
2.1k views
in Technique [技术] by (71.8m points)

How to detect wrong subtractions of signed and unsigned integer in C++?

I have legacy code performing a subtraction of a signed int and an unsigned int, with a cast of the result to a float. It gave the expected result with Visual Studio 6 through 2013. With Visual Studio 2017 (15.6.3) the result is no longer the expected one. I have simplified the code to this:

    unsigned int uint = 10;
    signed int sint = 9;
    signed int res = sint - uint;
    float fres = static_cast<float>(sint - uint);

res is -1 with every VS version I have tested. With VS 2013 and earlier, fres is also -1. With VS 2017, fres is 4.29496730e+09, that is to say UINT_MAX. I have found here that the VS 2017 result for fres is the one that conforms to the C++11 standard (if I understand correctly). The VS 2017 compiler does not issue any warning about this.
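
To spell out the conversion, here is a minimal sketch (identifiers renamed and values printed purely for illustration) of what the usual arithmetic conversions do to the expression:

    #include <iostream>

    int main() {
        unsigned int u = 10;
        signed int   s = 9;

        // Usual arithmetic conversions: s is converted to unsigned int,
        // so the subtraction is 9u - 10u, which wraps around to UINT_MAX.
        std::cout << (s - u) << '\n';                    // 4294967295
        std::cout << static_cast<float>(s - u) << '\n';  // ~4.29497e+09

        // Converting the unsigned result back to int yields -1 on
        // two's-complement platforms, which is why res still looks right.
        std::cout << static_cast<int>(s - u) << '\n';    // -1
    }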

How can I detect all the occurrences of such bad subtractions in my code base?



1 Answer

0 votes
by (71.8m points)

MSVC is not able to detect this even with /W4 /c or /Wall, so an additional linter is required; clang-tidy, for example, detects it (courtesy of Stephen Newell).

With the g++ compiler, the option you are looking for is -Wsign-conversion.
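
For reference, a minimal sketch of a translation unit that reproduces the issue; the file name and compiler invocations in the comments are illustrative, and the explicit cast in the second function is just one possible way to keep the subtraction signed:

    // sub.cpp -- illustrative file name.
    //
    // Example invocations (exact diagnostic text varies by compiler version):
    //   g++     -Wsign-conversion -c sub.cpp
    //   clang++ -Wsign-conversion -c sub.cpp
    // Both warn that the signed operand is implicitly converted to
    // 'unsigned int' in the subtraction below. clang-tidy can surface the
    // same diagnostic when -Wsign-conversion is among the compile flags it
    // sees (details depend on its configuration).

    float wrong(signed int sint, unsigned int uint) {
        // Flagged: the subtraction is performed in unsigned arithmetic.
        return static_cast<float>(sint - uint);
    }

    float intended(signed int sint, unsigned int uint) {
        // One possible fix: cast the unsigned operand so the subtraction is
        // signed (valid only when uint is known to fit in an int).
        return static_cast<float>(sint - static_cast<int>(uint));
    }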


...