HANDOUT

* Mathematical Induction

 

Today's topics:

* Iterative and Recursive Processes

* Induction as a Reasoning Tool

Part of a theme: FORMAL TOOLS for *understanding* our programs

 

 

----------------------------------------------------------------------

We have seen

* how to write Dylan expressions

* how to evaluate them (Substitution Model)

* how to define procedures, even recursive (=self-referential) ones.

Today, we dig a little deeper:

1. Processes generated by procedures

2. Correctness of the values they compute

Start developing models for reasoning about whether a procedure

computes the desired answer, building on the Substitution Model.

 

----------------------------------------------------------------------

 

 

Here are two multiplication procedures that compute a*b by adding up
b copies of a (note the type <integer> of b):

(define (times-1 <function>)
  (method ((a <number>) (b <integer>))
    (if (= b 0)
        0
        (+ a (times-1 a (- b 1))))))

(define (times-2 <function>)
  (method ((a <number>) (b <integer>))
    (bind (((iter <function>)
            (method ((c <integer>) (result <number>))
              (if (= c 0)
                  result
                  (iter (- c 1)
                        (+ result a))))))
      (iter b 0))))

 

BIND is used for binding local variables in general (not just to
methods). It temporarily associates name(s) with values within the
body of the form. BIND is a SPECIAL FORM, which does not follow the
normal evaluation rule.
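
For example, here is a minimal sketch of BIND with ordinary values (not
from the course code; it just reuses the binding-clause shape seen in
times-2, and x and y are purely illustrative names):

(bind (((x <integer>) 3)      ;; bind x to 3
       ((y <integer>) 4))     ;; bind y to 4
  (+ x y))                    ;; body evaluates to 7; x and y then disappear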

 

Note: ITER's first argument counts down from b by 1, while its second
argument counts up from 0 by a.

----------------------------------------------------------------------

 

Let's trace through a computation:

(times-1 6 3)

(+ 6 (times-1 6 2))

(+ 6 (+ 6 (times-1 6 1)))

(+ 6 (+ 6 (+ 6 (times-1 6 0))))

(+ 6 (+ 6 (+ 6 0)))

(+ 6 (+ 6 6))

(+ 6 12)

18

There is a whole slew of DEFERRED OPERATIONS:

* All the +'s that haven't been done yet (here 3 of them; in general b
  of them, so the pending expression grows with b).

On the other hand,

(times-2 6 3)

(iter 3 0)

(iter (- 3 1) (+ 0 6))

(iter 2 6)

(iter 1 12)

(iter 0 18)

18

First arg counts down by 1, second arg counts up by 6.

Note that there are no operations waiting to happen on return.

 

----------------------------------------------------------------------

Both times-1 and times-2 are *syntactically* recursive procedures

-> they refer to themselves in the text (= code) of the procedure

BUT:

times-1 generates a RECURSIVE PROCESS:

-> Each call generates deferred operations.

-> This means it uses more space the longer it runs,

* Which will eventually exhaust the available memory.

times-2 generates an ITERATIVE PROCESS

-> No deferred operations

-> Constant amount of space -- no operations waiting to happen

times-1 uses the system to keep track of intermediate *computations*.

times-2 uses an explicit STATE VARIABLE (result)

-> keeps track of intermediate values.

KEY POINT:

times-2 is TAIL RECURSIVE

-> The last thing it does is call itself,

-> and there's nothing left for it to do once that call returns.

* That means you don't need to return the value to the previous call
of iter! You can return it directly to iter's original caller, times-2.
(Contrast times-1: its recursive call sits inside (+ a ...), so an
addition is still pending when that call returns -- not a tail call.)

[Note: something can be tail-recursive without calling itself directly!

Discussed in section, w/ bind-methods]

Some languages optimize tail calls so that no extra space is used for
each recursive level. NOODLLE does this if you select the "Tail
recursion" button. You should do this so that your programs run
efficiently!

 

(Semi-formal) definition:

A function is tail recursive if, when you evaluate it under the RSM
(substitution model), the expressions you write down take only a
constant amount of space (reading left to right). The times-2 trace
above is an example: every line has the same fixed size, whereas the
times-1 trace grows with b.

----------------------------------------------------------------------

 

 

Dylan has *no* special iteration constructs:

while, loop, for, etc.

We just use tail recursion to generate iterative processes
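
For instance, summing the integers 1 through n can be written as an
iterative process in exactly the style of times-2 (a sketch; sum-to is
a hypothetical name, not a course-provided procedure):

(define (sum-to <function>)
  (method ((n <integer>))
    (bind (((iter <function>)
            (method ((c <integer>) (total <integer>))
              (if (= c 0)
                  total
                  (iter (- c 1)
                        (+ total c))))))
      (iter n 0))))

(sum-to 4) generates (iter 4 0), (iter 3 4), (iter 2 7), (iter 1 9),
(iter 0 10), and finally 10 -- constant space, with total as the state
variable.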

 

----------------------------------------------------------------------

 

 

Next problem:

How do we determine whether times-1 computes the right answer?

There are far too many possibilities for us to check them all.

Does it always work? NO. The procedure clearly fails for b < 0 (b only
moves further from 0, so the recursion never terminates).

We'll use MATHEMATICAL INDUCTION and the Substitution Model to reason

about values.

KEY IDEA: Show the equivalence of a Dylan program/expression
and some mathematical statement about its value (a ``specification''
or ``contract'' that describes what the program computes). Then we can
think about the program in terms of its contract, rather than its
implementation. This lets us think more abstractly, and also depend
on the given implementation to meet this abstraction. For example,
the contract for times-1 reads: for any number a and any integer
b >= 0, (times-1 a b) = a*b.

Here's induction:

* *The* basic proof method for CS

* Wake up each day and wonder, "what am I gonna do induction on

today?"

* Induction almost exactly matches recursion.

First, look at the case of N = the whole numbers = {0, 1, 2, 3, ...}

Suppose that we have some property P[n] which we could ask of a

whole number

e.g.,

P1[n] is "n is even"

P2[n] is "n is the product of some number of primes"

P3[n] is "n is the sum of four squares"

and we want to prove that P holds for all n's.

a. BASIS or BASE CASE:

Prove that P holds for 0 (the smallest element of the set N).

b. INDUCTION: (to be precise, weak induction – we’ll do strong in section)

Prove for any m in N that, IF P holds for m

THEN P holds for m+1 as well.

 

Notes:

The basis shows that P holds for the smallest element (or elements)
of the set, which is 0 for N, but may be something else for other sets.

So,

a. gives us P[0]

b. means P[0] => P[1], so we have P[1]

b. means P[1] => P[2], so we have P[2]

...

 

CONCEPTUALLY:

"Climbing a ladder":

The basis step shows we can get onto the bottom rung of the ladder.

The induction step shows we can get from one rung to the next.

"Knocking over dominos":

The basis step: first domino falls.

The induction step: if the Nth domino (and all before it) falls, so does the (N+1)st.

Induction has a recipe. We expect to see it in your proofs if you want full credit!

INDUCTION RECIPE:

* What variable n are you doing induction on?

* What is the property P[n]?

* Prove base case, typically P[0]

* Assume P[m], prove P[m+1] (sometimes we write this with n instead of m)
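
For instance, here is the recipe applied to a small claim (a worked
example, not one of the assigned problems):

VARIABLE: n, a whole number
P[n]: 0 + 1 + ... + n = n(n+1)/2
BASIS: P[0] says 0 = 0*(0+1)/2, which is true.
INDUCTION: Assume P[m], i.e., 0 + 1 + ... + m = m(m+1)/2.  Then
    0 + 1 + ... + m + (m+1) = m(m+1)/2 + (m+1) = (m+1)(m+2)/2,
which is exactly P[m+1]. So P[n] holds for every whole number n.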

----------------------------------------------------------------------

See the handout on induction for some examples of mathematical induction

problems, and also for some other examples of inductive reasoning about

recursive procedures.

----------------------------------------------------------------------

 

 

Now, let's try an inductive proof that (times-1 a b) = a*b for b >= 0.

[The equivalence of a Dylan program and a mathematical statement.]

Note: induction on b, not a (why?)

Note:

* You will be asked to do this on prelim #1 and on the final.

* Your proof must use both the induction hypothesis and the

substitution model to be valid (why?)

>> Gory detail just to show we *can* <<

Look at the function. It even *looks* like an induction:

(method ((a <number>) (b <integer>))
  (if (= b 0)
      0                            ;; <-- Basis, when b = 0
      (+ a (times-1 a (- b 1)))))  ;; <-- Induction step: times-1 defined in
                                   ;;     terms of times-1 at a smaller argument.

VARIABLE: b, whole number

P[b]: (times-1 a b) = a*b

BASIS:

(times-1 a {0}) by the substitution model is

(if (= {0} 0) 0 ...) is

{0}

and that is right, as a*0 = 0.

INDUCTION:

Assume that (times-1 a b) = a*b

Show that (times-1 a b+1) = a*(b+1)

(times-1 a b+1)

==>

(if (= b+1 0)
    0
    (+ a (times-1 a (- b+1 1))))

==>

;; b+1 can't be 0, since b is a whole number

(+ a (times-1 a (- b+1 1)))

==>

(+ a (times-1 a b))

==>

;; by induction hypothesis

(+ a {a*b})

==>

a + a*b

=

a*(b+1)

So, we've just shown that (times-1 a b) computes a*b for any b>=0 and

any number a.

Note that we just obtained infinitely many results!
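
A similar argument works for times-2; here is just a sketch (it is not
carried out in this handout). The natural property to prove concerns
the helper ITER:

VARIABLE: c, a whole number
P[c]: (iter c result) = result + a*c, for every number result
BASIS:
    (iter {0} result) ==> result  (by the substitution model),
    and result + a*0 = result.
INDUCTION:
    Assume (iter c result) = result + a*c for every result.
    (iter c+1 result)
    ==> (iter (- c+1 1) (+ result a))    ;; c+1 can't be 0
    ==> (iter c {result + a})
    ==> {(result + a) + a*c}             ;; by the induction hypothesis
    =   result + a*(c+1)
Taking c = b and result = 0 then gives (times-2 a b) = (iter b 0)
= 0 + a*b = a*b for any whole number b >= 0.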

----------------------------------------------------------------------

 

 

This degree of effort is excessive for all but the simplest programs.

Later we will look more at checking that a procedure's computation
meets certain criteria -- a "specification"

* Saying what you want is a real challenge.

Automatically generating such "correctness" proofs is an active area

of research.

* Important for critical applications

- Medicine -- people die when X-ray machines die

- Aircraft control

- Reactor control

- Banking

- ...

 

----------------------------------------------------------------------

 

 

TODAY'S BIG IDEAS:

* A syntactically *recursive* procedure can generate either a

recursive or an iterative (=tail-recursive) process.

The issue is, are there deferred operations?

* Induction is used to prove things about "inductively defined sets"

like the whole numbers.

* Induction together with a model of evaluation (the substitution

model) can be used to show that a procedure meets some spec, that

is, "is correct"

New special form: BIND