Evaluation of projects and MSc theses

The result of the project and MSc thesis work is evaluated according to the criteria that are normally used for scientific and technological work such as conference contributions, journal papers and similar. The technological and scientific challenge posed by the task should be taken into account, as well as which results may be expected within the time corresponding to the ECTS credit load of the work.

Criteria

  • Scientific and technological challenge and results
    • Challenge: Is the task an application of known methods to a standard class of problems, or are any of these new? Have similar tasks been dealt with before? What are the requirements for abstraction and innovation? A simple task shall never be given a top grade.
    • Substance of the results: Is something implemented and working? Are simulations carried out and/or analytical results obtained? Has the student produced a specific, unambiguous design, or has a thorough trade-off between alternatives, based on sound criteria, been performed? It should be taken into account that working towards concrete results may require time to get equipment, software, etc. configured and up and running.
    • Broadness: What are the requirements for maturity and overview of the field? How broad are the issues addressed and what is their scientific/technical/disciplinary span?
  • Significance and originality
    • Novelty: Does the work give new facts, ideas or insights? Are there innovative elements?
    • Relation to the state of the art: Has the candidate shown sufficient insight into and overview of the problem domain? Does the manuscript include representative references to other work within the domain? Is the candidate able to put his/her own work into a wider context and relate it to the work of others? The references and bibliography are important in this context.
    • Utility: Is the work practically or theoretically useful? With respect to this criterion, be aware that it is the relevance of the work with respect to the formulation of the task that is evaluated, not the relevance of the (given) task itself.
    • Autonomy: What has the candidate achieved by him/herself from the given task/problem? To get a proper evaluation, the censor must be informed about how the problem formulation has evolved through the work and about the support given to the student.
  • Methodological quality
    • Method: Is any method used in the work and (if relevant) is there an explicit choice of method? Are suitable formalisms chosen and used?
    • Methodological reliability: Are the methods used and the investigations done sufficient to support the conclusions?
    • Logical consistency: Are there contradictions in the report? Do the observations support the conclusions? Are there alternative interpretations of the observations? Is the basis for the conclusions (e.g. the observations) complete?
    • Procedural quality: Is the working procedure well documented, and has any quality assurance with respect to the work/results been carried out? When the task is to develop a (software) system, the software quality (in a wide sense, documentation included) should be taken into account.
  • Presentation
    • Structure: Is the report written in a manner that makes it easy for the reader to get an overview of the starting point/objectives, what has been done and the conclusions/results, and to maintain this overview throughout the reading? Does the report contain the necessary elements such as abstract/summary, table of contents, introduction, etc.?
    • Clarity: Is it easy/possible/difficult/impossible to follow and understand what is written? Are proper references given to the sources from which information is obtained? Are the references in the list complete?
    • Information density: Is the relation between content and volume satisfactory? How far apart are the “golden nuggets”? Is information about details unnecessary for the progression of the reading put into appendices?
    • Style: Does the candidate make a distinction between essentials and details? Is the work boring to read? Is there an overview chapter/section (cf. structure) which makes the work more accessible?
    • Illustrations and tables: Are illustrations and tables clear, reasonably “self-contained” and informative? Is there duplication between text and illustrations/tables? Could some of the text be better presented/conveyed in the form of illustrations or tables?

The evaluation

All experience shows that the final evaluation is best based on an overall judgment of the quality of the work. We recommend that equal weight be put on the four main groups of criteria:

  • Significance and originality
  • Scientific and technological challenge and results
  • Methodological quality
  • Presentation

There is no similar recommendation for weighting of the sub-criteria.
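
As a purely illustrative sketch, and not part of the official procedure, the recommended equal weighting of the four main groups could be computed as shown below. The group scores are hypothetical percentages chosen only for the example.

  # Illustrative sketch only: equal weighting of the four main criteria groups.
  # The scores are hypothetical percentages (0-100) for a fictitious thesis.
  group_scores = {
      "Significance and originality": 80,
      "Scientific and technological challenge and results": 72,
      "Methodological quality": 68,
      "Presentation": 85,
  }

  # Equal weight on each main group; sub-criteria carry no prescribed weights.
  overall = sum(group_scores.values()) / len(group_scores)
  print(f"Overall score: {overall:.1f} %")  # e.g. 76.2 %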

These recommendations are embedded in the evaluation form that should always be used in the evaluation of an individual project or thesis at the Department of Telematics.

With respect to sub-criteria such as challenge, novelty and utility, it is mainly the responsibility of the supervisor/tutor to ensure that the problem/task formulation is relevant and that it will be possible for the candidate's work to meet these criteria. The candidate is normally not in a position to do this when the problem/task is chosen.

It is advised that the censor and the supervisor prepare individual, independent evaluation forms before the grading meeting. Signed final forms, or, if preferred, a common form, shall be handed in to the department administration as the result of the grading. The supervisor may use the form in his/her feedback to the student.

The description of the grades for master's theses is available on the intranet.

The grade scale using percentage points is described below. This scale is applicable to all theses submitted after March 31, 2014.

Grade   Percentage range
A       89 - 100
B       77 - 88
C       65 - 76
D       53 - 64
E       41 - 52
F        0 - 40
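
For illustration only, the mapping from percentage points to letter grades in the table above can be written as a small helper. The function name is made up for this sketch, and the treatment of non-integer scores falling between two ranges (e.g. 76.5) is an assumption, since the table only lists whole numbers.

  def letter_grade(percentage: float) -> str:
      """Map a percentage score (0-100) to a letter grade per the scale above."""
      if percentage >= 89:
          return "A"
      if percentage >= 77:
          return "B"
      if percentage >= 65:
          return "C"
      if percentage >= 53:
          return "D"
      if percentage >= 41:
          return "E"
      return "F"

  print(letter_grade(76))  # C
  print(letter_grade(88))  # B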

Quality assurance

The evaluation should be quality assured according to the following procedure (in Norwegian).



