In binary mathematics, a negative number must be represented by a special notation, because an n-bit number can only hold values from 0 to 2 to the power of n, minus one. Negative numbers are represented using signed notation. Remembering which numbers are signed and which are unsigned is the programmer's job, and the consequences of failing to make the distinction vary significantly depending on whether the number is negative at the time.
This is because a signed integer is negative when its most significant bit is set, with the remaining bits determining how far from zero the value is. Allow me to illustrate the difference:
Unsigned: 11111111 binary = 255 decimal
Signed: 11111111 binary = -1 decimal
Negative one? What gives? Negative numbers are stored in a form called 2's complement. The 1's complement is simply the inverse mask of the binary number - where there is a 1 put a 0, and where there is a 0 substitute a 1. The 2's complement is the 1's complement plus 1. This is what causes the number to flip around from the maximum positive value to -1, and continue going down. Most processors have a "carry" flag that is set when an addition produces a bit beyond the most significant bit (and a companion "overflow" flag for when the sign bit flips unexpectedly), and checking it lets you catch this problem - so when you see a number wrap around, the programmer is stupid twice.
Signed and unsigned integers represent the same value until the value exceeds 2 to the power of (n minus 1), minus one: 127 for 8-bit numbers, 32767 for 16-bit, and so on. Past that point the signed interpretation "wraps around" to a negative number. This is what causes the score in some video games to become negative when your score gets too high; even though the average video game never has any reason to display a negative score, the programmer used a signed integer. This sometimes simplifies programming, because some processor architectures do not provide instructions for dealing directly with a mix of signed and unsigned integers.