Sign and magnitude
A signed integer representation that uses the most significant bit for the sign (0 for positive, 1 for negative) and the remaining bits for the magnitude. Sign and magnitude is conceptually simple but complicates arithmetic: addition and subtraction must special-case the sign bits, and zero has two encodings. It has been largely superseded by two's complement.
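As a concrete illustration, here is a minimal Python sketch (not tied to any particular hardware; the function names `encode_sm`/`decode_sm` and the 8-bit default width are illustrative choices) of encoding and decoding an 8-bit sign-and-magnitude value:

```python
def encode_sm(value, bits=8):
    """Encode an integer as a sign-and-magnitude bit pattern of the given width."""
    mag_bits = bits - 1
    if abs(value) >= 1 << mag_bits:
        raise OverflowError("magnitude does not fit in the available bits")
    sign = 1 if value < 0 else 0            # top bit holds the sign
    return (sign << mag_bits) | abs(value)  # remaining bits hold |value|

def decode_sm(pattern, bits=8):
    """Decode a sign-and-magnitude bit pattern back to a Python integer."""
    mag_bits = bits - 1
    sign = pattern >> mag_bits
    magnitude = pattern & ((1 << mag_bits) - 1)
    return -magnitude if sign else magnitude

print(f"{encode_sm(+5):08b}")  # 00000101
print(f"{encode_sm(-5):08b}")  # 10000101
```

Note that only the sign bit differs between +5 and -5; two's complement, by contrast, would encode -5 as 11111011.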
Real World
Early 1960s IBM mainframes used sign-and-magnitude representation, but engineers found the two-zero problem (both 00000000 and 10000000 meaning zero) caused comparison bugs, prompting the switch to two's complement in later designs.
Exam Focus
Compare sign-and-magnitude to two's complement by highlighting the double-zero problem and the extra hardware needed for subtraction.
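The double-zero problem above can be demonstrated directly. In this hedged Python sketch (the helper `sm_value` is an illustrative name, not a standard API), the patterns 00000000 and 10000000 differ as raw bits yet both decode to zero, so comparing raw patterns misjudges equality:

```python
POS_ZERO = 0b00000000  # +0 in 8-bit sign-and-magnitude
NEG_ZERO = 0b10000000  # -0: sign bit set, magnitude zero

def sm_value(pattern, bits=8):
    """Interpret a bit pattern as a sign-and-magnitude integer."""
    mag = pattern & ((1 << (bits - 1)) - 1)
    return -mag if pattern >> (bits - 1) else mag

print(POS_ZERO == NEG_ZERO)                      # False: raw patterns differ
print(sm_value(POS_ZERO) == sm_value(NEG_ZERO))  # True: both mean zero
```

This is the comparison hazard the Real World note describes: hardware (or code) that compares bit patterns directly treats the two zeros as unequal.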