CS 208: Computer Organization & Architecture

Problems: floating point numbers and byte order

You may work with your classmates on these problems. If you work closely with another person, feel free to submit your work jointly, but please don't submit joint work from more than two people. Submit your solutions as a PDF via Moodle.

  1. (P&H 5th edition, problem 3.22) What decimal number does the bit pattern 0x0C000000 represent if it is an IEEE 754 single precision floating point number? (For problems 1-4, a C sketch for checking your conversions appears after this problem list.)
  2. (Problem 3.23) Write down the binary representation of the decimal number 63.25 assuming the IEEE 754 single precision format. (Give your final answer as an 8-digit hexadecimal number.)
  3. (Problem 3.24) Write down the binary representation of the decimal number 63.25 assuming the IEEE 754 double precision format. (Give your final answer as a 16-digit hexadecimal number.)
  4. Write down the binary representation of the decimal number 3.7 assuming the IEEE 754 single precision format. (Give your final answer as an 8-digit hexadecimal number.)
  5. The following questions concern IEEE 754 32-bit floating point numbers. By a "representable number," I mean a number whose exact value has a 32-bit IEEE 754 representation.
    1. What is the smallest positive integer that is not representable? (A brute-force check appears after this problem list.)
    2. What is the largest representable positive integer?
    3. What is the smallest positive normalized number?
    4. What is the largest positive denormalized number?
    5. In one of the textbook's earlier editions, a paragraph describes a pre-IEEE 754 floating point processor that lacked a guard bit. The paragraph describes a trick programmers commonly used to "compensate for the lack of a guard digit": writing "(0.5 - x) + 0.5" instead of "1.0 - x" in FORTRAN or C programs. Explain in detail why the lack of a guard digit makes the sensible "1.0 - x" fail while "(0.5 - x) + 0.5" succeeds.
  6. Suppose I store a letter to my sister in an ASCII file called letter.txt, and the letter begins "Dear Jody". Now suppose I write a C program that (1) opens letter.txt, (2) reads the first four bytes into an int variable k, and (3) prints k as a decimal number (via the statement printf("%d\n", k)). If I am using a computer with an Intel Pentium processor (little-endian), what output does this program produce? If I am using a computer with a Motorola 68060 processor (big-endian), what output does it produce? A sketch of this program appears after this list. (These chip examples require you to travel back to 1993, when Java was young and games on CD-ROM were the hot new thing. These days, most processors are either little-endian or configurable to go either way ("bi-endian").)
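
A few of these problems invite a machine check once you have done the work by hand. First, for problems 1-4, here is a minimal C sketch for confirming your conversions. It rests on an assumption the problems don't state but that holds for every mainstream compiler: float is IEEE 754 single precision and double is IEEE 754 double precision. It uses memcpy to reinterpret bits, which is well-defined in C where a pointer cast is not:

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Problem 1: decode a 32-bit pattern as single precision. */
    uint32_t bits = 0x0C000000;
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("0x%08" PRIX32 " decodes to %g\n", bits, f);

    /* Problems 2 and 4: encode a decimal value in single precision. */
    float g = 63.25f;  /* try 3.7f as well */
    uint32_t gbits;
    memcpy(&gbits, &g, sizeof gbits);
    printf("%g encodes as 0x%08" PRIX32 "\n", g, gbits);

    /* Problem 3: encode the same value in double precision. */
    double d = 63.25;
    uint64_t dbits;
    memcpy(&dbits, &d, sizeof dbits);
    printf("%g encodes as 0x%016" PRIX64 "\n", d, dbits);

    return 0;
}
```

Show your bit-level work by hand, of course; the program is only a way to confirm it.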
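Next, for problem 5, part 1: once you have derived an answer from the width of the single precision significand, this brute-force round-trip test will confirm it. The only assumption is that long can hold the answer, which it can, since long is at least 32 bits:

```c
#include <stdio.h>

int main(void) {
    /* Round-trip each positive integer through float; the first one
       that comes back changed is not exactly representable. */
    for (long n = 1; ; n++) {
        if ((long) (float) n != n) {
            printf("smallest non-representable positive integer: %ld\n", n);
            return 0;
        }
    }
}
```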
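Finally, here is one way to write the program described in problem 6, so there is no ambiguity about what it does. The file name comes from the problem; the sketch additionally assumes sizeof(int) == 4, which holds on both the Pentium and the 68060 (and on today's mainstream machines):

```c
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("letter.txt", "rb");   /* (1) open the letter */
    if (fp == NULL) {
        perror("letter.txt");
        return 1;
    }

    int k = 0;
    /* (2) read the first four bytes into k; which byte lands in the
       most significant position depends on the machine's byte order */
    if (fread(&k, 4, 1, fp) != 1) {
        fprintf(stderr, "letter.txt: short read\n");
        fclose(fp);
        return 1;
    }
    fclose(fp);

    printf("%d\n", k);                      /* (3) print k in decimal */
    return 0;
}
```

To answer the question, work out which of the four ASCII codes for 'D', 'e', 'a', and 'r' lands in the most significant byte of k on each machine.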