
HOMEWORK

  1. Quantitative Principles (due Mon, Sep 26, 11:59pm)
    • Please submit your answers on Sakai.  You will find the questions by clicking the “Tests & Quizzes” tab in the left pane.  Some clarifications are noted below (with a few formula reminders after the list); additional clarifications may be posted or updated in Sakai as needed.
    • Do the following textbook questions:
      • Exercise 1.1:  There are several different empirical models for calculating yield.  If you use Dingwall’s equation (Lecture 1, slide 43), assume alpha = 3 or 4.  State in your explanation which alpha you used; if you use a different model (e.g., Bose-Einstein), specify N.
      • Exercise 1.4
      • Exercise 1.7
      • Exercise 1.9:  Assume (unrealistically) that the system consumes the same amount of power while idling as while computing.
      • Exercise 1.11:  Failures in Time (FIT) represents the number of failures in a billion (10^9) hours.
      • Exercise 1.13:  For part (b), the “weighted average” means weighted arithmetic mean.
      • Exercise 1.14 parts (b) and (c) only.
      • Exercise 1.15
      • Exercise 1.16:  Assume that floating-point operations and data cache accesses consume 20% and 10%, respectively, of the program’s execution time in the original scenario before the enhancement.
      • Exercise 1.17:  In part (c), only the first application is being parallelized; in part (d), only the second one.
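    • Formula reminders for the clarifications above (a quick sketch only; confirm the definitions against the textbook and lecture slides):
      • FIT vs. MTTF:  FIT = 10^9 / MTTF, with MTTF measured in hours (equivalently, MTTF = 10^9 / FIT hours).
      • Weighted arithmetic mean:  (sum over i of w_i * T_i) / (sum over i of w_i), where the w_i are the weights and the T_i are the individual measurements.
      • Amdahl’s Law (relevant to Exercises 1.16 and 1.17):  Speedup = 1 / ((1 - f) + f/s), where f is the fraction of original execution time affected by the enhancement and s is the speedup of that fraction.
      • Yield models (the forms below are assumptions about which equations the slides use; check Lecture 1, slide 43):  the negative binomial model is Die yield = Wafer yield * (1 + D*A/alpha)^(-alpha), and the Bose-Einstein form is Die yield = Wafer yield / (1 + D*A)^N, where D is defects per unit area, A is die area, and N is the process-complexity factor.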
  2. Memory Hierarchy (due Wed, Oct 19, in class)
    • Do the following textbook exercises:
      • B.1, B.2, B.5, B.12
      • 2.8, 2.9:  The tool CACTI is available online.
      • 2.11
  3. Pipelining and Dynamic Scheduling (due Mon, Nov 14, in class)
  4. Instruction and Thread-Level Parallelism; Storage Systems (due Mon, Dec 5, in class)

LECTURES

  1. Introduction:  Fundamentals and Trends (Aug 24, 29)
  2. Quantitative Principles (Aug 31, Sep 7)
  3. Introduction to Caches (Sep 12)
    • Reading:  App. B.1-B.2
  4. Cache Optimizations (Sep 14, 19)
    • Reading:  App. B.3, Ch. 2.1-2.2
  5. Cache Coherence (Sep 21)
    • Reading:  Ch. 5.1-5.4
  6. Virtual Memory (Sep 26)
    • Reading:  App. B.4, B.5, Ch. 2.4
  7. Pipelining 1:  Basics (Sep 28)
    • Reading:  Appendix C.1
  8. Pipelining 2:  Structural and Data Hazards (Oct 3)
    • Reading:  Appendix C.2-C.3
  9. Pipelining 3:  Control Hazards (Oct 5, 10)
    • Reading:  Appendix C.2-C.5
  10. ILP 1:  Branch Prediction (Oct 10, 12)
    • Reading:  Ch. 3.3, 3.9 (pp. 203-206)
  11. ILP 2:  Scoreboarding (Oct 17)
    • Reading:  Appendix C.6-C.7
  12. ILP 3:  Tomasulo’s Algorithm (Oct 24)
    • Reading:  Ch. 3.4-3.5
  13. ILP 4:  Compiler Techniques (Oct 26)
    • Reading: Ch. 3.1-3.2
  14. Multiple Issue and Speculation (Oct 31)
    • Reading: Ch. 3.6-3.10
  15. Special Topic:  Emerging Technologies of Computation (Nov 7, 9)
    • Scroll down for a reading list by topic
  16. Thread-Level Parallelism and Multiprocessors (Nov 14)
    • Reading:  Ch. 3.12, Ch. 5.1
  17. Vector Processors (Nov 16)
    • Reading:  Ch. 4.1-4.3
  18. Storage and I/O (Nov 28)


Emerging Technologies of Computation: Reading list
