The issue with coursework is: how on earth do you prove that the student actually knows anything if they are allowed to use every tool available? Whether we like it or not, some jobs require people to actually know stuff. I wouldn’t want my teacher asking AI for the answer to everything, or a surgeon googling how to fix me in the middle of surgery. Exams have their place, especially as coursework has issues with cheating, which is why coursework was removed from a lot of subjects in the GCSE reforms.
I think education needs to adapt, but it needs to be extremely careful with how it does it.
I was going to post almost the exact same thing. There are definitely ways you can use these models to make education better not worse.
And if people will be using this sort of AI in their working lives, which looks likely, then education should be encouraging its use and getting ahead of understanding how people can use it.
University essays aren’t really about knowing things, in my opinion. They are, or should be, more about conducting unique research to explicate a point of view or conclusion.
ChatGPT responses are always generic. There are lots of things a human can do to turn a generic response into something unique and valuable that only a human could produce.
I mean, I agree it has to be carefully done, but I think there are ways. And as you say, where knowledge needs to be remembered it should be tested by exams rather than coursework; I think that’s really always been the case (this was a basic part of assessment-method theory when I studied it 15 years ago).
This is so bizarre because if you look at the evolution of a lot of the Open University’s courses and curriculums, exams are being phased out completely in favour of assignments.
Just take a look at the relatively new cybersecurity track:
Not a single examination on the required components for the degree.
Go back a decade and the then-equivalent courses on this degree mostly featured an examination at the end of each course.
I suppose at that level there are probably fewer resources available to cheat with, and coursework is probably more reflective of how you’d use the subject in a job (computing is one subject where coursework is probably the best way to assess it). For GCSE maths, you could just put a question into an online calculator and get an answer. Even a lot of maths at uni can be done like that.
For subjects like English, it isn’t difficult to find good-quality essays about whatever you want to talk about and copy them, either.
Which makes sense. In many fields, remembering things is not very important anymore because the knowledge is easily available. The ability to understand, analyse and apply the knowledge is more important. Being in an exam room cut off from search engines, your previous work, chat rooms where you can discuss with peers, etc., doesn’t really replicate a skill set that will be useful in a professional environment.
Unfortunately the problem with schools is that the Tories just love rote memorisation. Every now and again the Times publishes an article about how ‘shocking’ it is that most students can’t put a date on the start of the Boer War or whatever.
Actually choosing the right assessment method for something is really complicated, but the Open University (who I have studied with) are excellent at it.
To be fair, the GCSEs and A-levels did increase the amount of application of knowledge so it wasn’t just remembering facts, although you wouldn’t get far if you didn’t know anything.
I’m pretty involved with some of the lower-level maths stuff at the OU, and the new modules are all assignment-based too, so no exams. I think some have little online computer-marked quizzes, which honestly are easy to cheat at, but short of getting help from another human who’s familiar with the curriculum and has a good mathematical understanding, I’d be very surprised if you could cheat the assignments.
Now, I’m not sure what ChatGPT can do in this regard yet, but the assignments are WolframAlpha-proof at the very least, given what’s being tested and what’s asked.
I do maths and almost all of it is through exams and class tests. The mathematical computing modules are either 100% coursework, or mostly coursework with a 12 hour exam (or assignment, not really sure) at the end. Due to the level of computational knowledge involved, it really isn’t difficult to cheat on these.
We also do the odd module with 10% or 20% coursework, some of which you can cheat on as they are just “write your answer here” and a computer will mark it.
Yeah, the OU’s assignments are a bit more involved than just giving the answer. In fact the correct answer counts for very little in terms of mark share.
If your methodology is sound, and your working is in the right ballpark, and presented correctly, you’re 80% of the way there, and so you’ll get most of the marks.
I’ve personally been pushing quite heavily, with what little influence I have, to ditch exams in math. I think it’s a bad way to do things for almost everything, mostly because I was a very gifted math student, and although I earned a first-class degree with distinction later in life, I left high school with a B because the exams foiled me.
I think modules like the OU’s MU123 set a better standard for how math should be taught and assessed. I’m a bit biased, but I think it’s fantastic.
Also: I just asked it to write an essay on something I wrote an essay about at uni:
“Write an essay for a philosophy degree explaining how Schopenhauer’s theory of Will challenges Kant’s subject/object distinction”
The result is frankly terrible. Poorly formatted, in some places outright incorrect, and, most importantly, it completely misses the point. Maybe you need to be a lot smarter with prompts, but if this is the standard, I’m not convinced it poses a massive threat to higher ed yet.
I did my first two years with open-book exams, so you had to focus more on reasoning in your answers. I personally felt like I learnt very little. I could use the notes (which you would also have in coursework), so I never felt any pressure to learn anything. I definitely wasn’t the only one.
No method of assessment will suit everyone, but different unis offer different things so people will have choice. GCSE and A-Level less so, but at least GCSE has foundation tier for those who are less confident.
I’ve asked it a few of my questions (after we had the answers back; I didn’t cheat, haha) and it got most of the working out spot on (for both normal maths stuff and coding). For some reason it struggled to get the actual answer, but all the code was correct, so you could run it in Python yourself to get there.
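The actual assignment questions aren’t shown here, so as a purely hypothetical illustration of that last point: suppose the model correctly writes a Newton’s-method routine for approximating √2 but then misreports the final number in its prose. Running the code yourself recovers the real answer (the function name and question are my own invention, not from the course):

```python
# Hypothetical illustration: the model's code/working is right,
# its final stated number is wrong, so run the code yourself.

def newton_sqrt(a, x0=1.0, iterations=20):
    """Newton's method for f(x) = x^2 - a: iterate x -> (x + a/x) / 2."""
    x = x0
    for _ in range(iterations):
        x = (x + a / x) / 2
    return x

answer = newton_sqrt(2)
print(answer)  # converges to ~1.41421356...
```

Twenty iterations is far more than needed; Newton’s method roughly doubles the number of correct digits each step, so the result agrees with √2 to machine precision.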
I’d argue that says more about the course materials than how it was assessed!
Unless of course the goal of an examination is to act as a memory recollection test first and foremost. But I don’t think that’s necessarily right or fair. It’s also not what will be expected beyond education, unless you plan to work as an expert witness.
I think looking up notes is generally fine, because you’ll be doing that in the workplace too. No mathematician can remember every single mathematical formula, and will always have to look it up. So what I think matters more is testing how well a student can understand, apply, and use the formula in various contexts, and I think assignments do that far better than an exam does.
To give you an idea of what’s expected with that OU module, for instance, here’s one of my answers for part of an assignment question from when I took it for a test drive during its first run, which must be coming up on a decade ago now (doesn’t feel like it’s been that long)!
Makes me feel old, because now I’m realising things I keep describing as happening a decade ago are now 15 years ago in 2023!
In any case, I’m dubious ChatGPT would be able to formulate the answer and present the working as expected. It might give you the right answer 40% of the time though, which would earn you 1/4.
Edit: I imagine it’d be able to answer the hotel/pub question with relative ease to net both marks though.
I saw this shortcut for iOS published on reddit earlier this week;
Gave it a little test at the weekend, and while it needs a paid account, you can go pretty heavy on it using the GPT-3.5 model very cheaply.
Useful for pulling up some reference info formatted in a clear way, or many of the things suggested by the author of the post.
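The shortcut itself isn’t reproduced here, but for anyone curious what it’s doing under the hood, here’s a minimal sketch of the kind of request it sends to OpenAI’s chat completions endpoint (the endpoint URL and `gpt-3.5-turbo` model name are the real ones from that era; the `build_request` helper is my own, and you’d still need to POST this with your API key):

```python
import json

# OpenAI's chat completions endpoint (requires an API key to actually call).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo"):
    """Build the JSON body a shortcut like this would POST (hypothetical helper)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarise the key points of this text as bullet points.")
print(json.dumps(body, indent=2))
```

Per-token pricing on GPT-3.5 is what makes “going pretty heavy on it” cheap compared with a flat subscription.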
Today I’ve been dabbling with it [chatGPT more generally] to create some content for some internal training, basically summarising info I can’t be bothered to type fully - and it’s a huge time saver.
I’m not shocked at someone entering AI images, but I am shocked that a pretty poor image won. Taking the image at face value, it’s overly processed and unnatural, but that seems to be what some of these prizes are actually looking for. Knowing the source of the photo, I then took a closer look at the hands, and… well, there’s definitely one iffy hand in there.