[3/29/99]
===============================================================================
* defstruct example - just go over this with explanations:
--------------------
;;; The toplevel structure, useful for adding information that appears on
;;; all components, such as name, serial number, etc.
(defstruct <component>)

;;; Define a generic function object with one input.
;;; Note that this does nothing with the types!
(defgeneric (compute-output (c <component>)))

;;; A subclass that has a constant boolean output.
(defstruct (<signal> <component>) (value <boolean>))

;;; The value of a signal component is its constant value.
(defmethod (compute-output (s <signal>))
  (get-signal-value s))

;;; Another object - a wire object.
(defstruct (<wire> <component>) (input <component>))

;;; Its output is simply the output of its input.
(defmethod (compute-output (w <wire>))
  (compute-output (get-wire-input w)))

;;; Now a gate - an and gate.
(defstruct (<and-gate> <component>) (input1 <component>) (input2 <component>))

;;; And the way that we get its output...
(defmethod (compute-output (a <and-gate>))
  (and (compute-output (get-and-gate-input1 a))
       (compute-output (get-and-gate-input2 a))))

;;; Test that this is all working:
(compute-output (make-and-gate (make-signal #t) (make-wire (make-signal #t))))
(compute-output (make-and-gate (make-signal #t) (make-wire (make-signal #f))))

;;; Now, instead of implementing an or-gate, we can simply generalize:
(defstruct (<op-gate> <component>) (op <function>))
(defstruct (<binary-gate> <op-gate>) (input1 <component>) (input2 <component>))
(defmethod (compute-output (b <binary-gate>))
  ((get-op-gate-op b)
   (compute-output (get-binary-gate-input1 b))
   (compute-output (get-binary-gate-input2 b))))

;;; This is the way we make an and gate - note that "and" is not a function.
(define (make-and-gate x y)
  (make-binary-gate (lambda (x y) (and x y)) x y))

;;; An or gate is similar.
(define (make-or-gate x y)
  (make-binary-gate (lambda (x y) (or x y)) x y))

;;; And now a unary gate:
(defstruct (<unary-gate> <op-gate>) (input <component>))
(defmethod (compute-output (u <unary-gate>))
  ((get-op-gate-op u)
   (compute-output (get-unary-gate-input u))))

;;; A not gate ("not" is a function).
(define (make-not-gate x) (make-unary-gate not x))

;;; And now we can test a composed implication circuit:
(define (make-implies-gate x y)
  (make-or-gate (make-not-gate x) y))
(define (test-imp b1 b2)
  (compute-output (make-implies-gate (make-signal b1) (make-signal b2))))
(test-imp #t #f)
(test-imp #f #t)
--------------------
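
With the binary-gate generalization, any boolean connective is now a
one-liner.  For example, here is a sketch of an xor gate (make-xor-gate is
a made-up name, not part of the original example; "xor" is not a built-in
function, so we spell it out):
--------------------
;;; A hypothetical xor gate, made just like make-and-gate above.
(define (make-xor-gate x y)
  (make-binary-gate (lambda (x y) (if x (not y) y)) x y))

(compute-output (make-xor-gate (make-signal #t) (make-signal #t)))  ; => #f
(compute-output (make-xor-gate (make-signal #t) (make-signal #f)))  ; => #t
--------------------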
===============================================================================
* Notes about defining methods:
  - All methods of a generic function must have exactly the same number of
    arguments.  You will get an error if you try to define a method for a
    generic with a different number of arguments.
  - It looks like you can define any number of methods, and they get added
    to the generic function object.  This is true, but in some cases you
    don't add a method but overwrite one.  Each method is characterized by
    the types of its arguments.  Here is an example:
    --------------------
    (defstruct <foo>)
    (defmethod (f (x <foo>)) 1)
    (define <bar> <foo>)
    (defmethod (f (y <bar>)) 2)
    --------------------
    In this case, the second defmethod form deleted the first method, and
    added the second.  This is because <foo> and <bar> are the same class,
    which can be tested with (eq? <foo> <bar>) yielding #t.
===============================================================================
* We saw the calculation of the "times" function's running time (if they
  didn't see it, write it now):
--------------------
(define (times a b)
  (cond ((zero? b) 0)
        (else (+ a (times a (dec b))))))
--------------------
--------------------
T(0) = c
T(b) = T(b-1) + c
     = T(b-2) + 2c
     ...
     = T(0) + b*c = (b+1)*c
=> "times" has a running time of O(b)
--------------------
"fast-times" had a better run-time:
--------------------
(define (double n) (* n 2))
(define (halve n) (/ n 2))
(define (fast-times a b)
  (cond ((= b 0) 0)
        ((even? b) (fast-times (double a) (halve b)))
        (else (+ a (fast-times a (- b 1))))))
--------------------
--------------------
T(0) = c
T(b) = T(b/2) + c   if b is even > 0
T(b) = T(b-1) + c   if b is odd
--------------------
[The following is not necessary to go over, maybe just mention that it's
here if people want more examples.]
In the lecture, we saw the easy case, where b is a power of 2.
--------------------
b = 2 ^ (log_2 b)
=> T(b) = T(b/2) + c
        = T(b/4) + c + c
        = T(b/8) + c + c + c
        ...
        = T(1) + (log_2 b) * c
        = T(0) + (1 + log_2 b) * c
        = (2 + log_2 b) * c
        = O(log b)
--------------------
Big-O is worst-case analysis: we see that the best case is O(log b), when
every call to fast-times results in the even case.  But we really should
consider the worst case - what if b is not a power of two?  In fact, what
if b was chosen so that every other call (there can't be more than every
other call) to fast-times results in the odd case (try 255)?  That is the
worst-case scenario: b starts out odd, and every division by two results
in an odd number again.
--------------------
T(b) = T(b-1) + c                         <b is odd>
     = T((b-1)/2) + c + c                 <(b-1) is even>
     = T((b-1)/2 - 1) + c + c + c         <(b-1)/2 is odd again>
     = T(((b-1)/2 - 1)/2) + c + c + c + c
     ...
     = T(1) + log_2(b)*c + log_2(b)*c
     = T(0) + c + 2*log_2(b)*c
     = O(log b)
--------------------
So, in the worst case, T(b) costs twice as much as in the best case,
because at least every other call is with an even number - so we still
have an overall complexity of O(log b).
===============================================================================
* That trick for fast programs can be applied in many places, even in the
  "repeated" function - replace this:
--------------------
(define (repeated f n)
  (if (zero? n)
    identity
    (method (x) ((repeated f (- n 1)) (f x)))))
--------------------
by:
--------------------
(define (fast-repeated f n)
  (cond ((zero? n) identity)
        ((even? n) (fast-repeated (double f) (halve n)))
        (else (compose (fast-repeated f (- n 1)) f))))
--------------------
only now "double" operates on functions (try guessing its definition):
--------------------
(define (double f) (lambda (x) (f (f x))))
--------------------
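
As a quick sanity check of fast-repeated (a sketch; "identity", "compose"
and "halve" are assumed here with their obvious definitions, since the code
above takes them for granted):
--------------------
;;; Assumed helpers:
(define (identity x) x)
(define (compose f g) (lambda (x) (f (g x))))
(define (halve n) (/ n 2))

;;; Applying a function n times:
((fast-repeated (lambda (x) (+ x 1)) 10) 0)  ; => 10
((fast-repeated (lambda (x) (* x 2)) 5) 1)   ; => 32
--------------------
The saving is in the construction: fast-repeated builds the n-fold
composition out of only about log_2(n) double/compose steps instead of n.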
===============================================================================
* A quick review + explanations on the meaning of complexity:

First of all, remember that complexity is an estimation of the order of
growth - you shouldn't bother yourself with minor things such as constants.
There may be a big practical difference between a program and one that runs
in double the time, but such a constant factor does not change the order of
growth (wait two years and you'll have 10x faster computers...).  Complexity
is the "order of growth" - an indication of the _change_ in resources
implied by changes in the problem size.

O(1) - Constant growth: resource requirements do not change with the size
  of the problem.

O(log n) - Logarithmic growth: _multiplying_ the problem size implies a
  _constant increase_ in the resources.  For example: if Time(n) = log(n),
  then Time(2n) = log(2n) = Time(n) + log(2), and
  Time(6n) = log(6n) = Time(2n) + log(3), etc.

O(n) - Linear growth: multiplying the problem size _multiplies_ the
  resources by the same factor.  For example: if Time(n) = Cn, then
  Time(2n) = 2Cn = 2Time(n), and Time(4n) = 4Cn = 2Time(2n), etc.  Hence,
  the resource requirements grow _linearly_ with the problem size.

O(n^c) - Polynomial growth: multiplying the problem size _multiplies_ the
  resources by _a power_ of that factor.  For example: if Time(n) = n^c,
  then Time(2n) = (2n)^c = Time(n)*(2^c), and
  Time(4n) = (4n)^c = Time(2n)*(2^c), etc.  Linear growth is the specific
  case where c=1.

O(C^n) - Exponential growth: any _constant increment_ in the problem size
  _multiplies_ the resources by a constant factor.  For example: if
  Time(n) = C^n, then Time(n+1) = C^(n+1) = Time(n)*C, and
  Time(n+2) = C^(n+2) = Time(n+1)*C, etc.

These explanations are the intuition behind the patterns we saw in class:

  Pattern 1:  T(n) = T(n-k) + c,  T(0) = c   is O(n)
  Pattern 2:  T(n) = T(n/k) + c,  T(0) = c   is O(log n)
  (where k, c are constants)

Of course, we have:

  O(1) < O(log n) < O(n) < O(n*log n) < O(n^c) < O(2^n)

Programs with exponential run-time and more are officially considered a
nightmare.
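
To make the difference between O(b) and O(log b) concrete, here is a small
sketch (not from the lecture - the "calls" counter and the counting-* names
are made up for this demonstration) that counts the recursive calls of the
two multiplication functions:
--------------------
;;; Global counter, used only for this demonstration.
(define calls 0)

;;; "times" and "fast-times" from above, each bumping the counter once per
;;; call.
(define (counting-times a b)
  (set! calls (+ calls 1))
  (cond ((zero? b) 0)
        (else (+ a (counting-times a (- b 1))))))
(define (counting-fast-times a b)
  (set! calls (+ calls 1))
  (cond ((= b 0) 0)
        ((even? b) (counting-fast-times (* a 2) (/ b 2)))
        (else (+ a (counting-fast-times a (- b 1))))))

(set! calls 0)
(counting-times 3 255)       ; => 765
calls                        ; => 256 - linear in b
(set! calls 0)
(counting-fast-times 3 255)  ; => 765
calls                        ; => 16 - about 2*log_2(255), the worst case
--------------------
===============================================================================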