Using Programmed Instruction

Helen Lester Plants

Department of Theoretical & Applied Mechanics, West Virginia University

April 13, 1967


Most of us are aware that we are working in a difficult era in engineering education. The information explosion demands that we teach more material and the population explosion demands that we teach more students, but the number of credit hours in the curriculum and the number of working hours in the day remain unchanged. As a result we all teach more material less thoroughly, and give more students less individual attention than we did ten years ago. We all worry about it, more or less, but nothing much gets done about it.

Perhaps the solution for our problem is to realize that our old educational methods, excellent though they may have been, are no longer adequate, and to devise and investigate new methods to augment or replace them. One such new idea in education is programmed instruction.

Programmed instruction has been around long enough that most people have heard a bit about it, but have never actually come to grips with producing it or using it. At West Virginia University we have done both. We have found programmed instruction to be effective - sometimes amazingly so - and we have found that it has given us new insight into the problems and techniques of education.

Programmed Learning: An Evolving Definition

Before we can do much talking about the use of programmed learning materials it would be well to establish firmly what we are talking about. It is fairly easy to find definitions of programmed learning in the literature - too easy, in fact, since one finds definitions rather than a definition. As a result, two people discussing programmed learning are often talking about slightly different things. In fact, the very definition of programmed instruction seems to be evolving as our knowledge of programming increases.

Apparently the first generally accepted notion of programmed learning was that it was some sort of educational scheme making use of a set of rules, and even gimmicks. Consequently, anything that was written in a certain format - usually question and answer - was automatically tagged programmed material. Some of it was programmed, and some wasn't.

The basis of the early programs was behavioral psychology - actual laboratory experiments to see whether teaching affected the behavior of subjects in a way which could be objectively measured. The "rules" and format of early programs grew out of such tests as the most effective methods for affecting behavior were identified. Thus we began to hear about immediate confirmation, active responses, short learning episodes and all the rest. This was also the era of much hardware in the form of teaching machines and very peculiar looking books, and the people said, and to a certain extent still say, "So this is programmed instruction."

It is true that most programmed instruction has some of the previously mentioned characteristics. However, the point which determines whether or not a piece of material is a program is whether it sets out to change the learner's behavior in a specific objectively measurable fashion. If it tries to do that, it is a program. If it succeeds, it is a good program. Its format is irrelevant.

Another fairly common notion of programming is that it is a method of manipulating the student's behavior so that his energies are channeled into the correct paths. This view of programming sees it as a long series of actions on the part of the student which are closely supervised by the programmer. This supervision may be quite sophisticated, taking advantage of the newest in audio-visual aids and the latest thing in computers as the student is led along an educational path tailored to his exact needs. The display of cleverness in supervision can be quite dazzling and impressive so that many people think of programming as a sort of remote control teaching process.

Certainly some programs make use of just such techniques, but it is not the techniques that make them programs. Unless it can be proven that specific objectively measurable changes in behavior will take place as a result of the course of action, it is not a program.

Parenthetically, an example of something that looks like a program but isn't is the famous dialogue between Socrates and the slave boy which results in the boy having proved the Pythagorean theorem. As you probably remember, Socrates, by carefully constructed questions, elicits each step of the proof from the boy. In doing so Socrates certainly makes use of the latest ideas about eliciting the desired response and reinforcing it. He controls the subject's responses masterfully and there are many who would say, "Aha, the first recorded instance of programmed instruction." Unfortunately, it is no such thing. Socrates omitted a post-test. To have produced a program he would have had to have shown that at the end of the dialogue the boy was able to do something - i.e., produce an unprompted proof of the Pythagorean theorem - that he had not been able to do before. Socrates, of course, would have been horrified at the suggestion that he was teaching the boy anything, and indeed he wasn't. Looking at the dialogue from the point of view of the classroom teacher, most of us will agree that the slave boy would have flunked the post-test flat. Since no learning was demonstrated, the dialogue was not a program.

So the question remains, "What is programmed instruction?" The answer that currently seems best and that we find most useful is that programmed instruction is instruction designed to achieve specific scholastic goals and proven by test to do so. The goals must be specified in such a manner that it can be proven that they have been attained.

Objectives and Post-Tests

The specification of the goals of programmed instruction involves two stages - definition of objectives and design of post-tests. Before going further it would be well to take a closer look at objectives and post-tests.

The objectives of a program are, rather obviously, what the programmer hopes to teach by the program. Objectives, however, must be clearly defined in terms of observable actions on the part of the student. We cannot say we want the student to understand a certain body of material. We must say that we wish him to demonstrate his understanding in a specific fashion. We could require that he work problems involving specific tasks; we could require him to write definitions or proofs; we could ask him to fill in blanks; or we could ask him to do all of these things and more. The important thing is that the objectives be stated in terms of actual measurable actions on the part of the student and that these objectives be such that all observers will be in agreement as to whether or not they were met.

An example of an unsatisfactory objective might be "The student will know the alphabet." How do we know he knows it? If he doesn't choose to demonstrate his knowledge we can never determine what he knows. Furthermore you may feel that your child knows his alphabet because he can recite it; but his teacher may feel he does not know it because he writes several letters backwards. Thus, if we are not careful to spell out our objectives in precise behavioral terms, different observers may not agree as to whether or not the objectives are met.

Satisfactory objectives for the same bit of instruction could be "The child will recite the alphabet." or "The child will properly identify all letters of the alphabet when they are shown him." or "The child will, upon hearing a letter named, write the script character for it." Any of these might be construed as "knowing the alphabet" although they require very different skills. It is evident to any observer whether or not any of these has been met.

Properly defined behavioral objectives are half the battle in any piece of programmed instruction.

After the objectives come the post-tests. Post-tests are simply the instruments by which the programmer measures the degree to which the program has succeeded in meeting the objectives. Post-tests can be very simple or very complex but to be valid they must correspond to the objectives of the program.

There are two types of tests which are called post-tests. This is unfortunate because they are quite different.

The first type might be called a design post-test and is very detailed. It tests the student on every step of a learning sequence. The simple ideas which are included in the final objective are as carefully examined as is the final objective itself, with the result that if the student fails to learn it is possible to see exactly where the program was inadequate. If one gave a post-test consisting of complex problems and the student solved them all, he could be assumed to have mastered all the simple ideas as well as the complexities. But if he missed all the problems it would be impossible to tell where his trouble lay. The elaborate post-test used in the design of a program is a diagnostic test to determine where the program does not teach. The design post-test tests the program.

After a program has been proven by reiterated testing with a design post-test, a shorter post-test becomes practical. It is now possible to give a few critical items and by the student's satisfactory performance on these items to infer that he is capable of all the work that preceded them. These are the post-tests that are usually included with published programs and they differ little from ordinary tests. The program's capacity to teach has already been proven. The point at issue is whether or not the individual student has learned. The final post-test tests the student.

The final post-test can lead to misleading conclusions if the type of error made is not given careful consideration. Sometimes programs are better than the post-tests make them look at first.

An example of this sort of thing occurred on the last test given our programmed dynamics sections. The subject matter of the test was force, mass and acceleration relationships for rigid bodies. The results of the test were disappointing. The grades were not really bad but were certainly not as good as we had come to expect.

The individual units had post-tested reasonably well. An error analysis of the tests showed us what had happened. Almost every student in the class was either unable to draw correct free-bodies or to calculate moments of inertia for geometric bodies. Some could do neither. Of course these topics had been assumed prerequisite knowledge since they had been taught in statics. The errors stemming from the actual subject matter of the programs had not been very numerous at all.
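The bookkeeping behind such an error analysis is simple enough to sketch. The fragment below is only an illustration, not anything we actually used: the category labels, the tally routine, and the sample data are all invented, and it assumes each missed item has already been classified by topic.

```python
from collections import Counter

# Hypothetical mapping from error topic to its source: the programmed
# material itself, or skills assumed as prerequisites from statics.
ERROR_SOURCE = {
    "force-mass-acceleration": "program",
    "free-body diagram": "prerequisite",
    "moment of inertia": "prerequisite",
}

def tally_errors(papers):
    """Count errors by source over a batch of graded papers."""
    counts = Counter()
    for errors in papers:              # one list of error topics per paper
        for topic in errors:
            counts[ERROR_SOURCE.get(topic, "other")] += 1
    return counts

# Invented sample data: three students' error lists.
papers = [
    ["free-body diagram"],
    ["moment of inertia", "free-body diagram"],
    ["force-mass-acceleration"],
]
print(tally_errors(papers))  # Counter({'prerequisite': 3, 'program': 1})
```

A tally of this kind makes it immediately visible whether the trouble lies in the program or in what the program took for granted.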

The Design of Instruction

Programmed instruction is designed instruction with the word "design" used in the engineering sense. The programmer attacks the problem of educational design exactly as he would a problem in machine design.

First the engineering designer defines as carefully as he can the function of the thing which he is designing. What will it do? Next he sets up a set of performance specifications. It must do exactly this under exactly these conditions. Computed stress must be less than this, deformations must not exceed that.

The educational designer does the same thing. First he decides exactly what his program will teach. He determines exactly what he wants the student to be able to do when he finishes the program. He defines this terminal behavior as carefully as he can and calls it his objectives. In determining his objectives he makes certain that he sets them up in such a way that their accomplishment can be demonstrated by actual test. Next he turns his attention to the construction of his post-tests. These are the specifications he is going to meet. He is going to develop a program such that when a student has completed it he will be able to pass the post-tests. Nothing less can be considered a satisfactory design.

When the engineering designer has decided upon function, his next step is to actually design his device. The next step for the educational designer is to write his program. As the engineering designer calls upon his knowledge of mathematics and physical science and incorporates previously proven elements into his design, so the educational designer calls upon his knowledge of psychological principles and makes use of teaching techniques of proven efficiency. It is at this point that both designs begin to take on familiar shapes. The car begins to look like other cars and the program begins to take on the format we associate with programs.

The engineer's next step may be to build and test a prototype. If it does not fulfill its function and meet his specifications, he will redesign and test again and redesign and test again until it does meet them. He will examine each part of his design - each bearing and bushing - until he is certain that each component is functioning properly.

The programmer does exactly the same thing. He tests his design by trying it on a student and then post-testing the student. If the student fails the post-test, so does the program. It must be rewritten and rewritten again until each frame produces the desired response and until the post-test is passed.

When the engineer has achieved a machine that works well under laboratory conditions he is ready for a field test. If the results are favorable it is, at last, time to go into production.

The last step for the programmer is the same. He has produced an educational design that meets specifications when used on individual students or on a class or two. Now he must put it into a large scale class-testing phase, preferably in several schools. If it still meets specifications, if the students are still passing the post-tests, then it is ready for publication.

As engineers we are all aware that such a design procedure works for engineering problems. It works just as well for educational problems. A program that has been properly designed, tested, and validated will teach what it sets out to teach. Students will pass the post-tests and the objectives will be met. You can rely on it.
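For readers who like to see a process written out explicitly, the test-and-revise cycle can be caricatured as a small loop. Everything in the sketch below is a hypothetical stand-in (the frame records, the trial data, the revision step); it is meant only to make the cycle concrete, not to describe how any actual program was built.

```python
# Each frame records the response it is supposed to elicit from the student.
program = [
    {"prompt": "State Newton's second law.", "expected": "F = ma"},
    {"prompt": "Solve F = ma for a.",        "expected": "a = F/m"},
]

def trial_run(program, student_answers):
    """Return indices of frames that failed to elicit the desired response."""
    return [i for i, (frame, answer) in enumerate(zip(program, student_answers))
            if answer != frame["expected"]]

def revise(program, missed):
    """Stand-in for rewriting: here we simply flag the offending frames."""
    for i in missed:
        program[i]["needs_rewrite"] = True
    return program

# One (invented) trial: the second frame missed its desired response,
# so it is flagged for rewriting before the next trial and post-test.
missed = trial_run(program, ["F = ma", "a = m/F"])
program = revise(program, missed)
print(missed)  # [1]
```

The loop is repeated, trial after trial, until no frame misses its response and the post-test is passed.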

Using Programmed Instruction as a Supplement

For the last five years we have been using programs in undergraduate mechanics courses at West Virginia University. We have found out that they do work and we are finding out how to use them to best advantage.

Our first attempt at programmed instruction was a branching program on writing shear and moment equations for beams. In retrospect it was a pretty poor program. It was given to the students to use for home study in place of homework problems. Considering that it consisted of some seventy pages, the instructor was somewhat reluctant to face the class at its next meeting. However, instead of being disgruntled the students were delighted. So was the instructor! The students were actually asking questions about the material customarily covered in the third lecture on the subject. They already knew the first two lectures!

From that day on, the pattern for adjunct programs was set. We had discovered a way to teach drill topics efficiently. We no longer needed to use class time for repetitious work designed to build skills in problem solving. Neither did we have to tunnel through mountains of homework to determine how well the skills were developing. We could simply hand out a program, tell the students when it would be due and give a ten minute quiz on that date. The performance on the quiz would be high and the students would know the material well enough to go on.

Figures 1A and 1B show the results of post-tests on two adjunct programs. The actual tests given were similar to but not identical with those shown on the figures. Since three different sections were involved it was not practical to use identical tests. However, we saw practically no variation in grades from section to section.


Figure 1A. Post-Test Scores on Mohr's Circle for Stress. Three T-Th sections in 1964-65


Figure 1B. Post-Test Scores on Mohr's Circle for Strain. Three T-Th sections in 1964-65


We immediately identified the drill topics in Mechanics of Materials, programmed them, and applied the time thus saved to other work. We found that for the first time in several years we were actually getting through all the material in the course outline.

We also made our first discovery about using programs. It was simple. You post-test. If you omit the quiz over the unit on a specified area, the students will either put off doing the unit indefinitely or just plain not do it. The quiz doesn't have to be hard, it doesn't have to be long, it just has to be inevitable. Furthermore it should be taken from the exact material covered in the unit and it should count on the student's grade. (We usually count it as a homework problem.) This is necessary to make the student feel that doing the programmed work is important and a real part of the course, not just something thrown in as an afterthought.

The programs that were developed and used in the Mechanics of Materials course were adjunct programs. That is, they were designed to be used as supplements to a regular textbook but not to any particular text. A good program of this nature should fit in and be helpful when used with any standard text in the area. Neither does the use of such programs require any special lecture technique. To get the greatest advantage from the programs, however, the lecturer must learn to trust them, and not feel that he must cover the programmed material in his lectures.

Programming an Entire Course

Nothing succeeds like success and before long students were asking for more programs, particularly in Dynamics. At about that same time we became a part of the ASEE programmed learning project whose purpose was to produce engineering programs. Consequently a start was made on programming dynamics.

We tried the same approach that we had been using in Mechanics of Materials. Certain topics were selected and programs were written and tested. This time we employed a linear program with occasional branching loops. Once again, the students were delighted but this time the programmer was not. Having learned a bit more about interpreting post-tests, the programmer could not avoid a very painful conclusion. The reason for most of the errors on the post-test lay not in the programs but in the material the student was supposed to know before he began the unit. The only sensible thing to do was to start at the very beginning and do the whole thing.

Consequently in October 1965 we began to look toward an entirely programmed dynamics course. In the spring semester of last year we had programmed enough to teach the first seven weeks. Last fall the programs ran into the twelfth week. This semester, by making use of some material from Dr. Clyde Work of Michigan Technological University, we were able to offer a completely programmed course.

This is how we do it. At the beginning of the term the programs, twenty of them this term, are checked out to the students. Each student also receives a schedule of post-tests that tells him when each unit will be post-tested. On the appointed day the post-test is given, the papers are collected, and the post-test discussed. The post-test grades are posted as quickly as possible. At appropriate intervals, hour tests are given. The hour tests aim at determining whether or not the students are integrating the ideas from the individual units into a coherent whole. Any student whose average at the end of the term is 75 may take a C without final examination. If his average is 85 he may have a B without a final. All students with averages below 75 must take the final and any student who thinks he can get a better grade by taking the final may do so. It is not possible to make an A without taking the final.
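As we read them, the end-of-term rules amount to a simple threshold test, and the sketch below writes them out that way. It assumes the stated averages of 75 and 85 are meant as minimums; the function name and the wording of the results are ours, not part of the course materials.

```python
def end_of_term_standing(average):
    """Apply the course rules described above (75 and 85 read as minimums)."""
    if average >= 85:
        return "may take a B without the final (the final is required for an A)"
    if average >= 75:
        return "may take a C without the final, or take it for a better grade"
    return "must take the final examination"

for avg in (92, 80, 70):
    print(avg, "->", end_of_term_standing(avg))
```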

As it stands it is a pretty uneven affair. Some programs have been class-tested and revised three times and are getting pretty good. The newest are being used for the first time in our classes right now. (Work, however, has used the latest units in his classes and found them fairly effective, so that all material currently in use at WVU has been class-tested somewhere prior to its use in this term's class.)

We have discovered that teaching a fully programmed course is very different from teaching a course with adjunct programs. It is exceedingly effective, but it changes the teacher's role completely.

We had our first indication of its effectiveness rather early in the game. In selecting the material to program, the writer at first chose not to cover those topics which had seemed to be rather effectively covered in lecture in the past. In other words, she taught a series of programs pieced together by what she considered her best lectures over rather easy topics. In due time there was a test. To the teacher's embarrassment, practically all points lost on the test were on material from the lectures. By the next term the gaps were filled with programs.

At about this time it became possible to make some comparisons of programmed and unprogrammed classes. Rather than attempting to use a current class as a control, the current classes were compared to past dynamics classes. The figures show some of our results. By going through the file of old tests we were able to reuse a test of some years before and compare the results. Considering the large number of tests we had to choose from, we considered the likelihood of a student having actually studied the test he took to be quite small. The tests were probably slanted somewhat against the current students since they used some unfamiliar terms of phrase and symbol that were the legacy of whatever text we were using the first time the test was given.


Figure 2. Comparison of scores on matched tests.


Figure 3. Comparison of scores on matched tests.


Since our first all-programmed class is still in progress it is not possible to give comparative data on it. However, the next figure shows a comparison of class averages on hour examinations between the partially programmed classes and the average of all unprogrammed sections taught by the writer since we began teaching vector dynamics. Each class took four hour exams during the semester. An interesting thing to note is what happened to the classes when the programs ran out.

 


Figure 4. Average Examination Scores of Programmed and Unprogrammed Classes


In addition to the sort of data shown in the figures we learned a lot of things that cannot be reduced to numbers.

We have found that programs do not effect any saving in the student's time. In fact he probably spends a bit more time on his dynamics than he would otherwise. However, the time is spent more effectively since he does not make numerous unsuccessful tries at the same problem as he probably would on the ordinary lecture homework schedule. Since he is constantly reinforced by getting right answers he puts in the time fairly willingly.

The programs seem to have done most for the worst students. The students with the lower grade point averages often report very long times spent on the units. However, by being able to go slowly, they do learn the material as is evidenced by the post-tests. Apparently if a student has sufficient intelligence to have passed the pre-requisites and is willing to invest the time he needs to work through all the programs he can be assured of passing dynamics. It appears that the poor student is not so much incapable of learning as he is incapable of learning fast enough to keep up. The programs fix that if he wants them to.

One early discovery was that given a textbook and programs, students will generally ignore the textbook. Our early programs in dynamics were considered as adjunct programs to be used in addition to a textbook. Quite soon it was apparent that the students were not using the text at all. If they were taxed with this they readily admitted it, saying they didn't have time to do the text and the programs. Since the programs were designed to take two to three hours per assignment this sounds quite sensible except when one realizes that the reading assignment took only ten to fifteen minutes. We had expected the students to use the book as a concise summary of the programmed material but we couldn't get them to do so.

Another thing we observed was a sort of inertia effect. Most students seem to have a deep distrust of anything new and rather resent the first few programs, feeling, perhaps, that they are being used as guinea pigs. After working through a couple of programs their suspicions are allayed and almost all become mildly enthusiastic about them. A few get to be real fans. Comments that have gotten back to us range from, "They are awfully long and tedious to work through but if you do, you can't help learning it." to "The thing I can't get over is how much I actually know."

There are a few, however, who are never converted but remain solidly against programs to the last. One investigator at another school has told us that he has found such holdouts to be either at the top or the bottom of the class. That makes good sense and could be explained easily. The only trouble is that in our experience the "holdouts" have been rather firmly in the middle of our C range. Our feeling is that it is probably simply a matter of individuality. These holdouts run about one to a section and since they do presentable work despite their feelings in the matter, there doesn't appear to be much reason to worry about them. After all there is probably one student in any section who can't stand his particular teacher anyway.

Shortly after our discovery that the programs were replacing the textbook we discovered they were also replacing the lecturer. That was unnerving, to say the least. The old lecture starter, "Any questions?" drew a blank. No one ever seemed to have any. If the lecturer insisted on explaining anyway, the class attitude was one of polite patience; they already knew the material. Post-testing bore them out. They did know it.

So what does the lecturer do in a programmed class? The answers vary.

Dr. Work, in his programmed dynamics course at Michigan Technological University, has one answer. He simply meets his programmed classes to give tests and return them. Otherwise his students are on their own. He has not yet reported his results in detail but was finding them quite satisfactory when last heard from.

At West Virginia University our approach has been a bit different. After taking care of the details of post-testing, the instructor simply goes a bit beyond the material in the programs, offering different viewpoints and applications, demonstrating more difficult topics, and so forth. There is a sort of gentlemen's agreement that this is enrichment material and not to be tested on, since it is outside the objectives of the course as set down in the programs. This sort of class has resulted in some rather spirited discussions and seems to be of considerable interest to both the students and the teacher.

This format seems to make for excellent student-teacher relationships. The writer, for instance, finds that she now knows almost all her students by name without particularly trying to do so. Also many more students take advantage of her office hours than in past times. All in all, we seem to do a better job of teaching because we know the minimum necessary information has already been taught. That frees both teacher and students to consider those things which interest them most.

Next fall we plan to try a new way to teach with programs. Rather than the usual three-times-a-week schedule, the class will meet weekly for a three-hour session. During each session we will post-test the assigned programs, have a discussion period and spend some time on a simple dynamics experiment or two. We hope to save some of our starting and stopping time and also to foster a healthy, hands-on experimental attitude in our students. We feel that this may result in more effective use of class time than we are now making.

(Note: This experiment was never conducted.)

We have found out a lot more about the effectiveness and ineffectiveness of various teaching techniques than we can possibly discuss today but all in all we have been greatly strengthened in our notions of the effectiveness of programmed instruction.

Programs for You?

Perhaps some readers would like to try a bit of programmed instruction on their own classes and are wondering how to get started. The best course would probably be to start just as West Virginia University did, with an adjunct program or two. Find an area in the course where it seems that a program would help and try one out. It will probably save a couple of lectures, so that the teacher will find himself looking around for more programs, just as we did.

In selecting a program, the starting point for the teacher is exactly the same as for the would-be programmer - he must define his objectives. He must decide what he really wants his students to be able to do when they finish his course and what he considers satisfactory proof of that ability. If any of his objectives fall in the concrete area of knowledge or information, rather than the more abstract realm of wisdom, such objectives are fit topics for programmed learning.

Having gotten his own personal objectives firmly in hand, the next thing for the teacher to do is to hunt for a programmer who has exactly the same objectives. He won't find one. Just as with textbooks, the only program that exactly suits any teacher will be the one he writes himself. But, just as with textbooks, he may find one that fits his needs well enough.

How does one tell if a program is what he needs? Look at the post-tests the author provides. They will probably be composed of the terminal questions from his design post-tests, although he may actually include his design tests. At any rate, if his post-tests look about like the test questions you usually ask, then you can be pretty sure his objectives coincide quite well with yours.

How can one be sure a given program will teach? One can't, but there are some things to look for. First, of course, the very fact that the author suggests post-tests implies a certain confidence on his part in the ability of the students to perform satisfactorily on them. Better yet, the author may actually publish validation data with his programs. Or he may not, since it is not always published even when available. If it is not published as a part of the program, the overall results of validation tests may be described in the foreword or preface of a published program. If he can find no evidence of an effort at validating a program the teacher's attitude should be quite skeptical, but if its objectives coincide well with his, it is probably worth a limited try. It has been the experience of several workers that even rather bad programs often teach fairly well and are welcomed by the students.

The one thing this writer would not suggest doing in deciding about a program is to try to read through it. Programs are not meant to be read. They are meant to be done by naive subjects. That means that they progress by such baby steps that they are pure agony to read if you already know the material. It seems that it is almost a truism that in order to be good a program must appear trivial - to an expert in the field, that is.

Most of the so-called trivial frames will have been inserted, however, to help the student fill in details which tests showed were not nearly so evident as the programmer had previously thought them. In one of the writer's programs, one particular frame seemed the obvious outgrowth of the one before it - until the first trial run. The subject failed to see the relation. A couple of frames were added to clarify the relation. The next subject failed to see the relation. A couple of frames were added and it was tried again. Again the next subject failed. This was kept up until it had been tried five times and nine frames had been added to clarify the relation that had seemed obvious to the teacher. Now it was obvious to the student and painfully so to the teacher.

One may look at a few frames of a program to get the feel of it, but he should never try to read it through. He'll never make it and he'll never use a program. He should rely instead on what can be found out about the objectives of the author and how well he was able to accomplish them.

If anyone does decide to try out a program and wants to know how well it taught, let him look carefully at the post-tests his students write. He should note the exact kind of errors his students make and see whether they indicate a poor knowledge of the programmed material or of the material prerequisite to the program. He may find himself programming the prerequisites.

Available Programs in Mechanics

This is a particularly good time to get started with programs since the ASEE is, at this moment, actively soliciting volunteers to aid in the validation of the programs produced as a part of its Programmed Learning Project. Details of the project and information on how to obtain programs may be obtained from the February 1967 issue of the ASEE Journal.

In the specific area of mechanics there are about thirty different programmed units in various areas of dynamics. Dr. Work has done eight to ten of these and the writer has done the remainder. There is some overlap as each author is aiming at a fully programmed dynamics course of his own. Each topic is developed in such a way that you may use just one or several as you choose.

All these programs are unpublished but are expected to be available through the ASEE. There is also at least one published program covering some topics from strength of materials. Several publishers have plans for programs in statics.

Conclusion

The claims made for programmed learning are many. In our experience they are mostly true.

Students, by using programmed materials, have been able to perform very well in our mechanics courses. Each is able to work at his own pace, review as much as is needful and be confident of his results. If he is willing to invest as much time as the programs require he can be virtually assured of a passing mark.

For the teachers the advantages have been a considerable reduction in routine work and a certainty of a reasonable performance by the students. It has freed the faculty to devote class time to those non-routine elements of teaching which characterize education at its most interesting and best.

Those of us who have used it, both as students and teachers, at West Virginia are convinced that programmed instruction is the educational design of the future!