
Academic Integrity and Artificial Intelligence webinar

The QCAA developed this webinar to help teachers and schools deepen their understanding of academic integrity and academic misconduct given the recent developments in AI. The webinar focused on ethical scholarship in the Queensland Certificate of Education (QCE) system.

The webinar provides school leaders with further approaches to complement the guidance in the QCE and QCIA policy and procedures handbook v4.0.

The webinar includes presentations from:

  • Associate Professor Christine Slade, an expert in academic integrity at the University of Queensland
  • Scott Adamson, Dean of Teaching and Learning and Principal’s Delegate at All Hallows’ School.

The topics covered were student motivation to act with integrity or engage in misconduct, the effect of generative AI on student decision-making, and a school-based response.

Introduction

Jo Butterworth
Executive Director, Curriculum Services Division, QCAA

Welcome, everyone, and thank you for attending this Academic Integrity and Artificial Intelligence webinar.

Before we begin this afternoon, the QCAA would like to acknowledge the traditional custodians of the land on which we meet today. We pay our respects to their elders and their descendants who continue cultural and spiritual connections to country, and we extend that respect to Aboriginal and Torres Strait Islander people here today. We thank them for sharing their cultures and spiritualities and recognise the important contribution of this knowledge to our understanding of this place we call home.

Today's webinar is the first in a series of three webinars exploring the opportunities and considerations of responding to and using generative AI technology. By the end of this webinar, we aim to have developed an understanding of ethical scholarship and why recent developments in artificial intelligence present new opportunities and considerations for schools.

We are joined this afternoon by Dr. Christine Slade, Associate Professor at the University of Queensland's Institute for Teaching and Learning Innovation and expert in academic integrity. Christine will share some of the contemporary research associated with ethical scholarship that is important to understand as we respond to the use of artificial intelligence in our schools. With her colleagues, Christine has contributed to UQ’s response to developing an academic integrity action plan and delivered the TEQSA-funded National Academic Integrity workshops. TEQSA is the Tertiary Education Quality and Standards Agency, which you will hear Christine refer to a few times today.

We will also hear from Scott Adamson, Dean of Teaching and Learning, as well as principal’s delegate, chief and lead marker and confirmer. Scott has a keen interest in digital pedagogies, change management and e-learning and is responsible for his school's assessment policy.

Since November last year, we've heard a lot about the opportunities and risks associated with generative AI, in particular ChatGPT. ChatGPT was the fastest growing app in history, reaching over 100 million users by January of this year, and it now has over 1 billion users. In comparison, TikTok took nine months and Instagram two and a half years to reach 100 million users. ChatGPT and other emerging technologies have opened a myriad of possibilities that may revolutionise the way we work and study. We know that ChatGPT is a chatbot that can hold human-like conversations and provide feedback on a report, Bing Chat can summarise an online document for easier reading, and Bard can write an email in a style of choice.

So while the emergence of generative AI presents new opportunities to support teaching and learning, it has also renewed our focus on academic integrity and assessment practices. For the QCAA this is a key area of interest, as we seek to ensure that all students, schools and parents maintain confidence that our young people receive fair recognition for the work they have done. To support schools across Queensland, we have new artificial intelligence guidance documents on our website, and we've updated both the student and the teacher academic integrity courses.

This webinar provides another opportunity to help us understand how to support our students as we respond to emerging AI in the context of academic integrity. After we have heard from both of our presenters, we hope to have time to address a few questions at the end. Now it is my pleasure to hand over to Christine.

Presentation 1

Christine Slade
Associate Professor, Institute for Teaching and Learning Innovation, University of Queensland

Thanks, Jo. It's really good to have everyone here. I am the Lead of Assessment and Academic Integrity in the institute, and this year has certainly been a roller-coaster ride of understanding.

I have been conscious that, as an academic, I can access a whole lot of literature that perhaps you can't in your schools, so I wanted to provide you with some access. This slide shows the resources available from the presentation. I've gathered together those that are open source and put them in a Google Drive folder, which you can access now if you want to via the QR code or the short URL.

Okay, next slide, please. The first thing I wanted to say is: what is academic integrity? It's a term we use quite freely, and it can be confusing because it covers two things. There are the values of honesty, trust, fairness, respect, responsibility and encouraging learning, the aspirational side. And there are the things we do, which we would call assessment security: tightening the requirements around our assessment so that we can limit student cheating, or know when cheating has occurred. For a long time it was quite confusing to use that one term, academic integrity, for both. We can be thankful to Professor Phill Dawson from CRADLE at Deakin, who introduced the idea of separating these so that we have different terms for the different functions, which can help you with your articulation in discussions around this topic.

The second thing I wanted to say is that academic integrity only applies while students are studying: for your students that's quite a long time, while for ours it might only be three or four years. But it is not separate from their personal or professional integrity. I see these now, probably even more so with genAI coming on the scene, as integrated: one whole literacy, or ethical position, that we need to have. Thank you, next slide.

So what are the challenges? Well, you can see from the slide, we have a few. In higher education we probably have six or seven listed in our policies. But I did have a look at the QCAA handbook, which I think nicely articulates all the different types of behaviours that count as academic misconduct. So if you haven't looked at something like that for a while, have a little read, because it covers things like cheating in supervised conditions, so obviously you think immediately of exams.

Collusion, well, that's been around for a long time: students doing something together, tipping over the collaboration line and ending up submitting the same work.

Contract cheating: we talk about that a lot in higher education, and I'll explain why soon, but you also have it in the handbook. Contract cheating is paying someone, or getting a third party, to do the work for you. You could think of genAI here: if you put something into chat, take the whole answer, make it the answer to your assignment and submit it, then somebody else has done the work for you. That's what we call contract cheating. The third party can be a family member or a fellow student, but the ones we are particularly concerned about in our field are the online cheating sites, and I'll talk a bit about those in a little while.

Obviously, copying work is easy to do, or at least the temptation is. There's also data fabrication, and giving or receiving unauthorised information, trying to get a quick look before the exams open. Identity impersonation is obviously quite serious, and then we have the perennial plagiarism. Thank you.

I won't labour this one, but these come from TEQSA. Jo mentioned who TEQSA is: the higher education regulatory body. We did national workshops for them across the country in 2019–20, and there was some very good information about academic integrity in those slides. This is one of them, and I have given you a link to them in the resource folder.

But just so you can see, we've had some sensations in the sector. Probably the most pronounced was the MyMaster scandal, where the press announced one morning that over 1000 higher education students had used that commercial contract cheating site and paid people to write their assignments. That led into a whole lot of other things happening and raised awareness about the challenges.

TEQSA then felt the need to regulate what we report, because we needed to understand what information students were getting, how we were recording things, and so on. And we've had one case of exam impersonation reported by SBS. You can have a look at these slides later; there will be a copy. All to say, genAI is very sensational, but we have had a few other sensations previously, ones that are probably more on the negative side, whereas genAI has a very positive side as well. Thank you.

And just to add to that, because you'll notice that slide only went to 2018: of course 2022 was the big year of DALL-E 2 for better imaging, GitHub Copilot for the coders, and of course ChatGPT, which we are all learning about. It was because of that public accessibility that it became such an alarming sensation. Thank you.

What are the risks? There are quite a few, of course. Universities and schools should be worried about their reputation: if they have a big scandal, it doesn't look good for new students or for existing students. There's the risk of underqualified graduates in society, and for us that's a serious thing. You hear the stories of the engineer who can't build a bridge or the doctor who isn't trained to deal with patients, so that's a big concern.

Inside our institutions, we have threats to the culture of honesty. We don't want to always harp on about dishonesty, because we have many, many honest students and we want to respect the culture they have. Cheating does affect the morale of academics: they find it difficult that their students are cheating rather than learning, and it takes quite a bit of work if they have to investigate any of it. And of course there are equity issues as well. The question really comes down to: is the student who's done the work the one who should be getting credit for it, are they the same person? Thank you.

Now, I was asked to talk a little about who cheats, and the lists you can see come from the research; it's not me just making up a list, because I know some people don't like the first item they see there, which is male or younger students. But this is what researchers have been investigating. Tracey Bretag, who was a very well-known academic integrity researcher in Australia, did a very large study, and these are some of the things that came out of it; you'll see her name against several of them.

So for various reasons, which I may talk about later, male students and younger students tend to cheat more. We know, for example, if you look down to engineering students, that's often a cohort of concern: they do a lot of group work, so they fall into collusion; they talk a lot with peers and can tend to cheat. Business students were another category, and international students. Note that we're not targeting international students; we're conscious that language barriers, rather than the culture they come from, are often the problem. Thank you.

There are lots of reasons why students cheat or plagiarise. The first slide there is about psychological state: what is it about a person that might give them the inclination to cheat? There are a few things: low conscientiousness, which we might call laziness; anxiety, which I think is a big one these days; low self-control, so they can't plan what they're going to do or prioritise. But it's not only the lazy students or those who can't organise themselves; there are also students who are highly competitive and want to get better marks. Impulsivity, low confidence or poor resilience: there's quite a bit of research on those too. Thanks, Anthony.

The next slide is probably more encouraging, and this would be the majority of students. Why do students decide not to cheat? Next slide. This comes from an Australian study; it is in the open-source resource list I put in the folder. These students, and I'd say the majority of students, know why they study.

For ours, they know why they're at university; perhaps in schools that might be a little more difficult, but they do know what they want to achieve. They also have a morality, a compass inside them, that knows what is a good thing to do and what isn't, and they appreciate the norms of the society in which they study. They have respect for their teachers: they're building a relationship with that teacher, and so they're not afraid to ask for support. The other interesting one was that they could get extensions on submitting their assessment if they needed them.

They did fear detection and consequences, so they could see the consequences of their actions and weren't stuck in the moment. And they didn't really trust anybody else to do their assessment task for them. Because of all of these things, they didn't see an opportunity to cheat. Thank you.

Now, in this couple of slides I wanted to point out something that doesn't apply to every student, but it is an interesting thing we have published about: situational ethics. Yes, there are hardened cheaters, who have probably come through your institutions as well as other places to us, or who develop these habits; they will cheat and see the opportunity to cheat everywhere they can.

But we are more concerned, I guess, about a bigger body of students who might be persuaded to cheat. In the example on this slide, basically it's when no absolute conviction about right and wrong is evident, and depending on the pressure placed on the student or the type of cheating presented to them, they may choose to cheat. There's an example there from a paper about first-year engineering students: 52 of them were asked which behaviours they consider cheating. Copying homework from another student? 96% of them said yes, that's definitely cheating. But when they were asked about accessing an online solution manual to do homework problems, only 2% said that was cheating. Yet you can see how easy it would be to just copy the answers.

Next slide, please. So what is artificial intelligence? The other concept I was going to mention is the vulnerability of students to cheating messages, and we just need to be mindful that the process of investigation, and any consequences imposed on students, should be educative for them. Now let's turn to artificial intelligence. The little rectangle there, 'natural language processing', is really the only part of artificial intelligence we're looking at right now: the natural language processing models, and of course the most common of those is ChatGPT. That is what set off all of this sensationalism. Thank you. Next slide.

These are my own observations, because I've been doing this work in the genAI space since the beginning of the year. Honestly, I think for all of us, everybody involved, the teachers at the schools, the principals, the policymakers, all of us in higher ed, it's like being trapped in a complicated maze sometimes, and you just wish there was one simple way out; or driving in the rain, where you can't quite see the clearer picture of how we're getting through; or climbing a mountain that just keeps going and going.

But what I want to stress here is that we are all in the same space, all trying to grapple with the concepts and the knowledge around genAI together. What I would call that space, from my background, is 'liminality': the uncertainty of being stuck in that in-between space. And I want to give you confidence today that the responses will become clearer over time, especially if we all work together and share our knowledge. Next slide, please.

The other thing I think would be helpful for you is to find ways to maintain currency. The pace of all this is very fast, but you're at this webinar now, so hopefully we can give you something that will help you address this in your own context. There are other ways too: meeting with colleagues, reading, whatever works for you and doesn't take too much of your time, so you can stay current with what's actually happening in this space. And the last slide is a reminder that I have put resources in the folder for you; again, if you want to access them now, you can use the QR code or the URL. Thank you.

Transition

Jo Butterworth

Thank you, Christine, for sharing your research and, in particular for us, highlighting the importance of viewing academic integrity holistically. I love the way you talked about integrity being academic, professional and personal, and I think that sets a lot of the context for all of us in our educational settings too. Before we hand over to Scott, it's worth noting that while we appreciate the opportunities you spoke about around generative AI, it's also important to understand what motivates our students during that learning journey, and therefore how we as teachers might respond to guide them. We now welcome Scott Adamson, who is going to share the work he has led in his school to prepare guidelines that support students and teachers to safely and ethically use AI for its educative purpose. Thank you, Scott, I will hand over to you now.

Presentation 2

Scott Adamson
Dean of Teaching and Learning, All Hallows’ School

Many thanks, Jo. And thank you, everyone, for joining us this afternoon; I do appreciate the time you're devoting to this. I just want to say at the outset that I'm very glad to share our school's response so far as we've been working through and engaging with generative AI. It's not so much an exemplar as a collection of points for consideration as you grapple with this significant disrupter in education, and it's quite clear it's not going anywhere anytime soon. I'm going to talk a little about our responses and approaches, the writing of our guidelines, and some of the supporting timeline, and I hope you can take some value from that.

There is no denying that generative AI is here to stay, and it's permeating everything we do: how we live, work, learn and interact with each other. Our challenge is how we learn about it, how we embrace its affordances and become knowledgeable about its limitations, so that we can be effective and ethical users of it.

So we're really thinking about the purpose behind the actual uses of this technology. One might say that we as teachers have a moral imperative to educate our students to work with it, given that the society they are walking into when they leave our school gates at the end of Year 12 is one where they'll be living inside the world of generative AI.

On the next slide, thanks, Anthony, I want to talk about how students might receive this. Our first reaction may have been to assume students are going to use this to cheat, and that's certainly something they would take issue with. In particular, they want us to know that, for the vast majority of them, their motivation is about learning. So I'm just warning people not to get caught up in the hype; understand what our students' actual motivations are, because they are here to learn.

One of the questions we considered when thinking about our school culture was about the classroom community and thoughtful assessment design, and we know that the QCAA has a range of resources for us to consider there. But we were also thinking a lot about how we collaborate, inside our classrooms and in our community: how we've developed that academic culture, and how we've built both vertical and horizontal supports, between students as well as with our teachers.

We've been looking at how we plan to provide time in class for students to grapple with and complete assignments, to have one-to-one, face-to-face feedback with their teachers, and to grow in their confidence and competence, so they are not feeling some of those pressures.

As Christine said before, for students to make poor choices about cheating, it's really about motivation, which may include fear, a lack of time and disinterest, but also means and opportunity, and ChatGPT and other generative AI tools provide a lot of those means and opportunity. But motive is the thing students don't want us to question: they do want to learn. They're human beings, and one of the insatiable things about human beings is that we all want to learn.

We also spoke about using thoughtful dialogue and thoughtful language with our students in class. Praising results is one thing, praising effort is another, and better still is praising resilience when students have found something difficult and picked themselves back up or had to work through it. That disposition is something we've been focusing on.

We've also recognised the points of student overload. This is something else Christine spoke to before: providing in-class time for this work to occur also helps us know our students. There's also the opportunity for extensions, when students have got themselves in a bit of a pickle and have to address that in some way, and how we respond compassionately and give them an opportunity to re-engage rather than go on to make a poor choice. Thanks, Anthony.

Now, just for a little context, I wanted to look at some of the learning supports, and their attributes, that students have been using for a long time. Collusion is nothing new; neither is plagiarism. I remember when the internet was first a thing, when I was at high school myself, and it was thought to be the end of everything: the democratisation of knowledge and people no longer having to remember anything. There were similar concerns when Wikipedia came out, and I'm sure there were similar concerns when Encyclopaedia Britannica appeared.

Educational institutions and communities have been grappling with these things for a long time, and there is a time and a place for all these different learning supports. Whether it's Google Bard, Bing or ChatGPT, generative AI has a role in there, but it also has limitations that go with it.

So I think it's important to recognise with our staff that this is just one thing in a whole plethora of different learning supports, which are there and can be used for good rather than evil. We'll pop over to the next slide, thanks, Anthony.

This is a curve you may have seen before: the Dunning-Kruger curve. It's a little like being in that pit of proximal development and learning, being in the learning pit. Perhaps you can identify where you are on it. If you haven't seen it before, you might still recognise the experience: you encounter something new, you see a bit about its affordances, you have to learn more, and as you learn more you realise just how little you know about how it operates, how it works and its implications. You go down this deep path of trying to determine how it fits with your work, your principles and how you actually work with your students. I see it as akin to moving from being unconsciously incompetent to developing enough understanding to become consciously competent in using and applying it inside classrooms and with students.

Perhaps you can place yourself on there; maybe you're coming out the other side now and ready to delve in and have those conversations with students. The point is really a reality check for us all: all of our teachers need to move through that process, they may not realise it at first, and we might see a reactive, fight-or-flight sort of response from some of our teachers when they first encounter it. Thank you, Anthony.

Some of the reflections on initial implementation of ChatGPT might be things like this: not trusting it at first, then quickly realising that so much of it is about effective prompts, and that if you can write effective prompts you can get more out of the tool itself. Quality prompt writing is one of the first areas teachers and students may focus on developing. The third point is about understanding its value and knowing that this is something we all need to approach and work with. Right, thanks, Anthony.

Ultimately, we arrived at a position after many months of talking and comparing. We set up a chat channel within our staff, with teachers and leaders all interacting, sharing understandings, readings and the specific scenarios they had tried to put through it, to then develop our approach. And it really became this: we recognise that it has a use, we need to be critical in its use, and it's part of digital literacy. Academic integrity is heavily involved as well; it's a conversation that goes with it. So if we want to use this, we need to look at how we use it ethically, responsibly and effectively inside our classrooms.

Now I just have a couple of slides with some approaches for teachers and approaches for students. There's probably nothing new here that you haven't seen before, and as you'll see, I've tried to identify the authors some of this information has been borrowed from; this is just a subset of them. One example is the recent ACARA submission to the inquiry into the use of generative artificial intelligence in the Australian education system. It identifies that teachers may mitigate risk by not only modelling the appropriate use of AI but also amplifying the messages of engaging with AI for beneficial purposes and critically evaluating the risks when choosing to do so. So we're not just using it for the sake of using it, we're using it as a tool for further learning, and critical evaluation is one of those very things that makes humans human. There's a little saying, 'just because you can doesn't mean you should', and I think that's something for us to keep in mind when working with the potential of generative AI: we want to select the right tool for the purpose. We've always done that as teachers; it's part of what we do.

If we have a look at some of the approaches for students, thanks, Anthony, you'll see that typically you'd work with your teachers in the first instance and then start dipping your toes into the water of working with students in that way. Here's a range of different opportunities to use it. There are immense numbers of ways people go about it, and in open conversations with our teachers it's amazing the creativity they come up with, applications you hadn't thought of. Often it's the contrasting, contrary or flawed responses that are the great learning tools. So it's not just about perfection; it's about finding the areas where you can have good learning and teachable moments.

In the final part of my presentation I'd like to talk more about our approach and the guidelines we've written. One thing I've been reflecting on in the last couple of days in preparation for this is not letting great be the enemy of good: we could always write better and better guidelines and engage with it better when we have more information, but at some point we've all got to jump in with both feet. So I'd recommend that having the conversations and starting the ball rolling is well worth doing, and bring people on the journey with you. No one professes to have all the answers about where this is going; prior to November of last year, a lot of us didn't see it just around the corner like it has been.

Our approach, as you can see there, has been about trying to empower students to use AI effectively, and we've been trying to bring our teachers and our students on that journey. Looking at some of the elements of the guidelines, I'm not going to share a seven-page document with you now, but what's most important are the elements that go into it, including the rationale and the moral imperative for use. Some of the quotes I've used before have helped guide where we're going. From there, there are sample approaches for teachers and students, and greater context about how generative AI might sit amongst all the other tools that students and learners can use. Every element you see in here has a purpose inside the guidelines.

One of the supporting dependencies you'll see there is our breaches of academic integrity guidelines. We updated those at the same time, thinking in particular about collusion and plagiarism, and especially the element of significant contributions of help from other sources. That's why we included using an agent such as generative AI in there. So we've been having students quote or acknowledge it if they are going to use it as a source of any sort, rather than presenting it as their own contribution.

In terms of supporting timelines, just to give you an idea of the time involved in working this through with our staff: we've been working at it for eight or nine months now, and over that period we've taken the time to bring people with us. We've tried to be collaborative, to share a range of opportunities for people to engage and familiarise themselves; we've had a few departmental meetings and several full staff meetings engaging with it. We've had champions, and some people working to help share best practice. So it's moving into that phase of wider implementation now after about eight months of engagement. Do we have everybody on the journey? Probably not. Do we have most people on the journey? I would say yes. And from the get-go, we took the time to determine our rationale for how we would approach it as a school.

Those are some of the key points about our guidelines and the considerations that have led to where we are and how we're supporting this at the moment. I've got two more slides. This one is the conclusion from our guidelines, and you'll see our key points: AI applications can enrich student learning; our schools need to adopt an open-minded and adaptable stance towards AI; the development of digital literacy goes hand in hand with academic integrity and is a responsibility of all of our teachers; and these guidelines are provided for teachers and students to be able to use AI responsibly. So those are some of the key elements there.

For my final slide, I just want to talk a little about knowing your why. Standing back, like any technology, digital or not, we need the pedagogy to take the lead; we don't want the cart before the horse. We want to think about the purpose of our learning, become competent users, and provide opportunities for our students to actively engage with it, be critical of it and grow to be competent with it as well. That's the conclusion of the points I'd like to share. I'm very happy to respond to things in the Q&A, and I can see Brendan's question there. In terms of context about our school, I'm in an independent school, so we don't have some of the supports of other sectors; we've been writing our own guidelines and leaning heavily on the QCAA, which has now been providing a range of guidelines like that. I am in a metropolitan school, but the way I want to talk about this is school agnostic. These are just some reflections from me as a leader working with a range of teachers, some of whom are fully on board with this and all over it, and others coming on the journey with everybody else; they've all worked through that pit of learning on the way. Okay, those are my key points. I do thank you for your time, and I'll pass back to Jo. Thank you.

Q&A

Jo Butterworth

Thanks, Scott, that was very insightful, and I think the chat window has indicated that too. We are very appreciative that you've been so willing to share that information with everyone. I particularly loved hearing about how you have considered both staff and students; we are all going to be using generative AI into the future. Developing guidelines like that to assist your community and develop that culture of academic integrity is really important, and I also really appreciate your focus on empowering students in that space. Thank you very much. We now have a small window of time to ask some questions of Christine and Scott. A couple of questions came through even prior to the meeting, and I'm more than happy for people to add questions in the chat window so we can ask them as well. I've taken a couple of the questions asked in the lead-up to this meeting to start off. Christine, the first one is a question for you, and I can see your video is on. We've been talking about academic integrity for a while, and you've highlighted that the universities have been talking about it for some time. But how has generative AI required us as educators to think differently about the authentication strategies that we use with students?

Christine Slade

Very good question, thanks. I see a lot of the authentication challenges as similar to contract cheating at the moment. If you use Turnitin or another text-matching service, you can usually see plagiarism quite clearly; it's very easy to see what a student's done. But if you take something that's personalised, which genAI output is, and which contract cheating and ghost-writing are because the student has bought it, then it's much more difficult. And I must say TEQSA also has help for this. We discovered over time with contract cheating that there were properties of the document, things about the document itself, that indicated something was wrong, and that took collaborative effort from researchers and others to understand. I can see genAI is a bit like that.

For example, people discovered early on, though I'm sure it will be corrected later, that referencing was a problem: the tool was making references up, things like that. Other examples I've heard are that the output is too generic; it doesn't understand the discipline area you're asking the students to write to. In time, I think there'll be other properties. It just doesn't make sense in places, and it depends, as Scott said, on how good the student is at prompting. Given all that, I think it comes back to the mainstream authentication you can do, and usually that's about adding something or taking something away. If you want to know that this student has actually done this, you can allow some time in class to start the piece of work, or do more drafting than we do, have them hand in a draft. You're getting an idea, one, that the students started on time without panicking and haven't left it to the end, and you're also seeing their style. You actually have much more of an advantage than we do in knowing your students, because we have classes of up to 1000 students. Things like that, and checking in with them about the rationale of what they've written. There's a whole lot of material out there. So we think it's similar to contract cheating; I think that will help people get clues about the types of things that give you an indication the work is not authentic to that student. I know that doesn't answer all of your question.

Jo Butterworth

It is true, we do have authentication strategies in the handbook, and it's something we talk about a lot with schools, and schools talk a lot with us as well. So that was a very good answer. One of the resources we provide schools, and I might pass this question on to Scott, is the academic integrity course. It has obviously become a big discussion piece within schools: how do we use this resource to build that understanding of academic integrity? Scott, may I pass this question over to you in terms of how you use the course within your setting?

Scott Adamson

Yeah, absolutely, thank you. In terms of using the academic integrity course, which is inside the student portal, we've had the students sign up for it. We register our students in Year 10, so we know they're registered inside the portal. Then in Year 11, at the very start of the year, we open it up for them. I do a little assembly presentation with the Year 11s about the importance of academic integrity, and I'm currently working on how that will look now with OpenAI, Google Bard, Bing and so on all being there, so I'm going to cover those together. Then I open the course for a period of about three weeks, ask students to take the time to work through it, and do a weekly check until I'm satisfied I've seen all of those students. I can't profess to have got every single one of them, and some need more chasing than others; that's a natural part of working with a cohort of students. But it's something we take very seriously and we impress that on them, taking the time to have that assembly at the start of Year 11. Then we know we've got that covered, and we can refer back to it if any issues creep up or anyone needs particular reminders.

Jo Butterworth

Thanks, Scott, we really appreciate that too. It's certainly a conversation for the earliest stages of Units 1, 2, 3 and 4, particularly for seniors, but from the QCAA perspective we're also looking at resources, because we know that the education piece about academic integrity needs to come far earlier than Years 11 and 12.

This might be a Scott question as well. We obviously have parents make contact with us, and some teachers have also talked about parents contacting schools and wanting to be part of that education process. So I wonder if either Christine or Scott has some ideas about how we might engage parents in this space and this conversation, and what they need to know.

Scott Adamson

That's great. Well, I'm happy to talk to that first and pass over to Christine if she has any further comment as well. I would say parents are going to come through the same pathway we as teachers have: at first, if they're hearing the narrative of what you read in the newspaper, it's all doom and gloom. What we recognise is that they are also coming on that journey. So a bit of a drip feed of newsletter articles, with some really good, pointed examples of how these tools can be applied in positive ways, as well as some of their fallibility. I think great examples of how it can be used with teachers and with students can help allay parents' fears as well. We haven't yet got to the point of having a parent information session, but we have had that drip feed of information going to our parents. So we're partway down that journey, and we want our students to be very much the ambassadors of how that looks and to be able to have sensible conversations about it at home too.

Jo Butterworth

Well, and that's about empowering the students, isn't it?

Just before you leave us, Scott and Christine, I wonder if you could each share a couple of takeaways: Scott, from your work on the guidelines, and Christine, from your research on academic integrity. A couple of takeaways for the group to go forward from this point and use this information. Scott, I might hand over to you first.

Scott Adamson

Okay, thank you, Jo. On reflection, some of the things I've taken away come back to that statement I made before: don't let great be the enemy of good. You have to make a decision to jump in, start engaging with it and encourage others to as well. No one's going to have the perfect solution or the perfect response, and it is a changing landscape. Even if we look at what's happened since November, the different iterations from GPT-3.5 to GPT-4 and where it's going, it's about to explode into essentially everything. My understanding is it's inside Snapchat now as well; most students don't know how to turn that off, and they're effectively inviting generative AI into conversations. So it's out there and we've got to learn to deal with it; in fact, we have that moral imperative to deal with it. It doesn't mean it's the end of learning. My big takeaway really is that it's like any other tool: we use it for its purpose, and we know enough about it to know where it can be applied, where it can't, and its failings as well as its affordances, so we're best equipped to do that. So let's encourage the conversations. Thank you.

Christine Slade

That's good, I agree. I don't think we should be fearful; I think we need to be quietly confident. It's a bit like COVID and Zoom: how many of us have the story of what we did wrong, and everyone was very accepting, because we were all learning together. Have the coffee with the person who might know a bit more or who can show you how to start. It's always an interesting exercise to first ask Chat, 'Who am I?' and see what it comes out with; often it's not correct. The other one is to take one of your assessment tasks, put it in and see if it can answer it. They're quite interesting exercises, but it's all collaborative; we're all learning together. So get together with somebody else, or get your group in your discipline area of the school, and start working together. Don't be intimidated, have a go.

Summary

Jo Butterworth

Thanks, Christine and Scott, they're great words of advice. It's also a perfect segue to remind people, if you remember right back to the very first slide, that this webinar is the first in a three-part series. In the next one we're going to talk about assessment design and what we need to consider in that space around generative AI, so I hope people can join us for that as well.

I would like to thank Scott and Christine for their generous time this afternoon and for sharing their wise words and practice with us. I hope people have been able to take some key takeaways from the webinar today.

I take two really important things away too, which I jotted down as you were speaking. The first is knowing our students. Scott and Christine both spoke about knowing our students, particularly as generative AI emerges and given the rapid pace at which it is changing. The most important thing is to empower the students and know those students in our classrooms.

The other is academic integrity. While that is a term that might be unfamiliar in some contexts as students come through the senior years, being able to safely and ethically use these tools, which really provide an opportunity, while staying aware of the risks in that space, is what we want to encourage coming out of this webinar and moving forward. Thank you for joining us, everyone, and we look forward to seeing you at the next webinar. Thank you very much.

More information

Academic Integrity and Artificial Intelligence webinar, August 2023

Other academic integrity resources
