Article: Exploring students’ understanding of the concept of algorithm: levels of abstraction

How do we know whether our students are beginning to think like computer scientists? Is it possible to distinguish abstraction levels in their thinking about core concepts? Moreover, which research methods are best suited to investigate these topics?

To answer such questions, Computer Science Education (CSE) research can build on the research experience and theories of the closely related field of Mathematics Education. In the framework of Skemp and his successors in Mathematics Education research [2], a level of abstraction has several interpretations. As in the research described in [1], we will focus on the interpretation called the process-object duality: a next level is reached if a student can interpret processes and relations between objects as a new kind of object; the process or relation becomes an object itself. If a student has reached a certain level of thinking, the lower levels still exist and are incorporated; lower levels can be evoked if necessary.

We constructed a questionnaire containing seven items. The starting item is given as follows:

0. Give your definition of ‘algorithm’.

The other six items have a different format. They share a general introduction: ‘Mark whether you agree or disagree with the following proposition and give a supporting argument.’ Answering only ‘agree’ or ‘disagree’ is thus not sufficient. If necessary, the option ‘both are possible’ can be chosen, provided that an argument is given. Only a student who cannot answer the question because of lack of knowledge should choose ‘I don’t know’. Each item is followed by the four alternatives Agree, Disagree, Agree and disagree are possible, and I don’t know, plus space for supporting argumentation.

It is mainly the supporting argumentation that is used for further analysis, rather than the chosen answer itself; this differs from the usual use of multiple-choice questions. The six propositions are as follows:

1. An algorithm is a program, written in a programming language.

2. Two different programs written in the same programming language can be implementations of the same algorithm (see the sketch after this list).

3. The correctness of an algorithm can generally be proven by testing the implementation on cleverly selected test cases.

4. A suitable quantity to measure the time needed for a certain algorithm to solve a certain problem is the time needed in milliseconds.

5. The complexity of a problem is independent of the choice of the algorithm to solve it.

6. For every problem it is possible that algorithms will be discovered in the future that are an order of magnitude more efficient than the algorithms known today.
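To make proposition 2 concrete, here is a minimal sketch (our illustration, not taken from the article): two Python functions that are clearly different programs, one recursive and one iterative, yet both implement the same algorithm, Euclid’s algorithm for the greatest common divisor.

```python
# Two different programs, one algorithm: both functions realize
# Euclid's gcd algorithm; only the control structure differs.

def gcd_recursive(a: int, b: int) -> int:
    """Euclid's algorithm, expressed recursively."""
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a: int, b: int) -> int:
    """The same algorithm, expressed as a loop."""
    while b != 0:
        a, b = b, a % b
    return a

# Distinct program texts, identical behavior: gcd(1071, 462) = 21.
assert gcd_recursive(1071, 462) == gcd_iterative(1071, 462) == 21
```

A student reasoning at the program level might deny the proposition (different text, hence different algorithm), whereas a student at the object level can separate the algorithm from its textual realizations.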

We expected to find four levels of abstraction in the answers:

1. Execution level: the algorithm is a specific run on a concrete specific machine; its time consumption is determined by the machine.

2. Program level: the algorithm is a process, described in a specific executable programming language; time consumption depends on the input.

3. Object level: the algorithm is not connected with a specific programming language; it can be viewed as an object (versus a process); while constructing an algorithm, the data structure and the invariance properties are used; meta-properties such as termination and ‘patterns’ (algorithmic modules) are relevant; time consumption is considered in terms of order of magnitude as a function of the input (see the sketch after this list).

4. Problem level: the algorithm can be viewed as a black box; the perspective of thought is ‘given a problem, which type of algorithm is suitable?’; problems can be categorized according to suitable algorithm types; a problem has an intrinsic complexity.
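The difference between the levels with respect to time consumption can also be illustrated with a small sketch (again our own illustration, with arbitrarily chosen functions and input size). At the execution level, ‘how fast is this search?’ is answered with milliseconds on a concrete machine; at the object level, with an order of magnitude as a function of the input size.

```python
# The same question about time consumption, answered at two levels.
import time
from bisect import bisect_left

def linear_search(xs, target):
    # Inspects elements one by one: O(n) comparisons.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # Repeatedly halves a sorted range: O(log n) comparisons.
    i = bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = list(range(1_000_000))

# Execution level: a concrete number of milliseconds, tied to this
# machine, this input, and this particular run.
t0 = time.perf_counter()
linear_search(xs, 999_999)
print(f"linear search: {(time.perf_counter() - t0) * 1000:.1f} ms")

# Object level: machine-independent growth rates. For n = 1,000,000,
# linear search needs about n comparisons; binary search about
# log2(n), roughly 20, whatever the milliseconds happen to be.
```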

Questionnaires were administered several times during the same year to Bachelor students from three year groups; about 200 students participated in total. Several raters scored and discussed the student argumentations of a random sample until scoring agreement was acceptably high. The answering levels on the various items were then combined per student into a median student answering level, as we could not assume equal distances between the level scores.
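As a minimal sketch of this aggregation step (the item scores below are hypothetical, chosen only for illustration), the median respects the ordinal character of the level scores where a mean would not:

```python
# Combining one student's per-item level scores (ordinal, 1-4) into a
# median answering level; a mean would wrongly assume equal distances
# between levels.
from statistics import median

item_levels = [2, 3, 2, 2, 3, 3]  # hypothetical scores for one student
print(median(item_levels))         # 2.5
```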

The main findings were:

·  The argumentations, as measured with the constructed instrument, were mainly at levels 2 and 3; only a few were at level 1 or 4.

·  Within a year group, the answering level generally increased during the year.

·  For successive year groups the answering level was generally higher.

·  Little relation was found between subject test results and answering level.

·  The reliability of the instrument was good (the scores given by several raters correlated well enough).

An important validity question remained: did we really measure the abstraction level of thinking? The argumentations we analyzed consisted of only a few lines of text. Could it be that students sometimes merely reproduced standard definitions, giving the false impression that they really understood the terms used? It is this kind of question that typically calls for qualitative research, because it is the thinking process one is interested in.

The results are too preliminary to conclude that teachers should take more account of the specific algorithmic thinking levels we found. The validity question posed above needs to be investigated. Further research into algorithmic thinking levels would also be required – at other institutions and at other moments in the curriculum – as well as research into other computer science concepts and into the relations between thinking levels for different concepts.

References

[1] Hazzan, O. Reducing Abstraction Level when Learning Computability Theory Concepts. Proceedings of ITiCSE, Aarhus, 2002, pp. 156-160.

[2] Tall, D. & Thomas, M. (Eds.). Intelligence, Learning and Understanding in Mathematics: A Tribute to Richard Skemp. Post Pressed, Flaxton, 2002.

Author 1: Jacob Perrenet [email protected]
Author 2: Jan Friso Groote [email protected]
Author 3: Eric Kaasenbrood [email protected]

Article Link: http://portal.acm.org/citation.cfm?id=1067467&coll=ACM&dl=ACM&CFID=72174581&CFTOKEN=78331598
