Conditions and recommendations for the use of AI-based applications in teaching and assessment

For a long time, the creation of knowledge in an academic context was a task reserved for humans. Guided by the rules of good scientific practice, researchers continue to pursue knowledge today: data, information and the results of reflection or scholarly discussion are examined from different perspectives, critically questioned or falsified, with the aim of arriving at new insights. At universities, knowledge production and knowledge transfer go hand in hand; exchange between people is as fundamental to this as engagement with information.

In academic work, a variety of tools are used to facilitate, accelerate and improve the work and to widen its scope. In the technical field, applications that make use of artificial intelligence have recently been added. These represent a new quality in the development of digital tools: owing to their comprehensive functionality, they will change academic work and pose new challenges for higher education institutions. Alongside potentially disruptive effects, these new AI tools offer diverse potential for studying and teaching, for example when chatbots are used as idea generators, personal tutors or structuring assistants (further didactic tips and links can be found on the AI overview page of the Teaching Service). Higher education communities worldwide are called upon to find a constructive way of dealing with these innovations.

As a first step, the following recommendations are intended to provide legal orientation for the teaching and examination context. All other dimensions of the use of AI-based tools will be addressed elsewhere and will need to be considered on an ongoing basis over the next few years.

Please note: In order to respond as promptly as possible to the expected further development of AI services, the aspects listed on this page will be revised and updated regularly.

AI use in courses

  • General information

AI use in assessment

  • General information
  • Originality
  • Declaration of Originality
  • Making AI content identifiable
  • Responsibility for AI content
  • Agreement on AI arrangements for exams
  • Three variants for agreements on AI arrangements for assessment
  • Assessment
  • Contact persons

AI use in courses

Integrating AI into teaching and studies can be meaningful and rewarding, particularly with regard to developing a working culture and the proper handling of these innovative technologies. AI can, for example, be used as a tool, addressed as an object of learning, or reflected upon against the background of the subject. Courses are designed individually by teachers, in terms of both content and didactics, to meet specific competence goals; accordingly, the teaching and learning objectives pursued in the various courses are diverse. Whether, and if so how, the achievement of these objectives is promoted or hindered by the use of AI-based applications must therefore be decided by the respective teacher for each course and its concrete teaching context. Each subject discipline and each course must thus clarify how AI can be used as an aid and to what extent the possibilities of artificial intelligence are suitable as an object of learning.

It must be kept in mind that the use of AI-based applications cannot currently be required of students at Leuphana: only tools that have been checked for IT security, data privacy and other requirements (e.g. the AI provider's terms of use) in accordance with data protection law, and that are consequently provided centrally, may be used in teaching. Leuphana is working to create the conditions for AI-based tools to be used meaningfully by teachers and students in the future.

Information on didactic application examples, good practices and the limits of AI tools can be found on our website on AI in university teaching. There you will also find an overview of the regular training courses and workshops offered by various service units at Leuphana, where teachers can also exchange ideas with one another.

AI use in assessment

Texts and other works (products), such as images, videos or music, which students create as part of their studies are the outcome of a complex learning process and often the results of (initial) academic work. The way in which learning processes are completed and results are produced is subject to criteria of integrity and ethically correct conduct (cf. the Rules of Good Academic Practice at Leuphana).

The presence and availability of AI affects the design of assessment requirements. It is recommended to design assessment requirements and tasks in such a way that they can be completed with sufficient quality only by the persons being assessed themselves. This should reduce, or at best avoid, uncertainties and the need for clarification in assessment processes.


Originality

The guiding principle for assessed work is that it must be produced independently by the person being assessed, using only approved aids. The possibility of using AI tools must therefore be considered in connection with the required originality of the work. Whether and how AI-based applications may be included depends, on the one hand, on the permissibility defined by the examiner and, on the other hand, on the type of use and, in accordance with good scientific practice, on transparency about that use. In short, it is a matter of ethically correct behaviour and of taking responsibility for one's own learning process in light of the assessment requirements.

Having one's own work, especially work submitted for assessment, done exclusively or predominantly by others (humans or digital tools) violates these criteria and is therefore inadmissible. Such an examination performance would accordingly be graded 'not sufficient' (5.0) or, in the case of an ungraded examination or study performance, 'not passed' (e.g. RPO-BA § 16 para. 4 sentence 1). At the same time, using new digital technologies in line with good academic practice, combined with comprehensive acknowledgement, is in principle conceivable in individual sub-processes, for feedback on manageable sub-tasks, or in discussing individual aspects that contribute to the examination of content. In this way, new insights can be gained in the research and orientation phase, and feedback on products can inform revisions. Insofar as this relates to editing and revising one's own work, it is to be regarded as a component of academic work and can contribute to its professionalisation.

Declaration of Originality

With the declaration of originality, authors of academic papers confirm that they have prepared them themselves, in accordance with the principles of good academic practice. The requirements for the declaration of originality, which students must enclose with their assessed work, are defined in § 7 para. 9 of the Framework Examination Regulations (Rahmenprüfungsordnung, RPO):

In any written work [...] that is not written under supervision, all passages taken verbatim from publications or other sources must be identified as such, with the citation given in direct connection with the quotation. A citation must likewise be given for paraphrases from publications or other sources. The written work must contain a signed declaration that the work - in the case of group work, the correspondingly marked part of the work - has been written independently, that no sources or aids other than those indicated have been used, and that all passages taken verbatim or in spirit from other sources have been marked as such.

§ 7 para. 9 RPO, cited above, also applies to the use of AI tools. Unlike traditional sources, for which standardised citation methods exist, there is as yet no uniform convention for how the use of AI applications should be acknowledged.

Making AI content identifiable

A particular challenge in identifying AI applications and their use is that AI outputs cannot be reproduced, and the 'source' therefore cannot be verified directly: even with an identical request (prompt), the outputs of AI applications always differ in form, content and scope.

In order nevertheless to ensure verifiability, examiners can ask their students to attach, for example, the corresponding AI records, i.e. the input and output of the AI application used, to the submitted work, so that the correct integration and application of the AI can be checked and the work can be assessed.

Responsibility for AI content

As with other academic sources, responsibility for adopting AI-generated content (e.g. texts, images, program code) lies with the respective user (student or teacher). Users must independently check and verify its quality and cannot blame the information source for errors in the content.

Agreement on AI arrangements for exams

In order to make the possibilities and conditions for the voluntary use of AI tools in examinations transparent, teachers are recommended to agree with their students, at the beginning of the respective course, on the rules for the use of AI-based applications (in short: AI arrangements for assessments) in the examination belonging to the course, and to make these arrangements transparent.

The aim of the AI arrangements for assessments is to create the greatest possible transparency and clarity for teachers and students regarding the use of AI-based applications right at the start of a course.

Teachers describe the AI arrangements they have chosen in relation to the teaching-learning objectives. It is recommended to explain these arrangements to students in the course and to give them the opportunity to comment. This also serves as a joint exercise for teachers and students in building a culture of responsible and honest handling of AI in teaching and assessment.

Three variants for agreements on AI arrangements for assessment

The following three basic variants can be used to develop rules for the use of AI tools. They are suggestions that can be adopted as they stand, but can also be tailored, extended or modified for the respective subject discipline or course.

  • Variant 1 can be used for courses that do not impose any restrictions on the use of AI-based tools.

→ Template for Variant 1

  • In Variant 2, the use of AI-based applications is permitted or excluded for certain subtasks. To define these subtasks, the following types of action can be distinguished, for example:
    • Knowledge-generating actions: researching, reading, finding topics, developing questions, acquiring and concretising knowledge
    • Text-structuring actions: outlining and structuring the work
    • Linguistically oriented actions: formulating, translating and linguistically revising one's own text

→ Template for Variant 2

  • In Variant 3, the use of AI-based applications is generally not permitted, as it would counteract the achievement of the teaching-learning objectives.

→ Template for Variant 3


Assessment

The assessment and evaluation of a performance must be carried out by the teachers themselves. Examiners may make use of human and technical assistance when assessing work, provided that the limits set by assessment law, copyright law and data protection law are observed. It is essential that examiners review such preliminary evaluations, make the final assessment themselves and take responsibility for it.

Contact persons

  • Didactic questions on the use of AI in teaching and examinations: Teaching Service
  • Administrative and legal questions on the use of AI in examinations: contact persons in the Student Service (in German)
  • Conflicts and problems: Ombudspersons for students and teachers