To test this, I pasted the professor's own sample code into one of these detectors, and it flagged the code as 73% AI-generated. I'm curious to see what they'll say now.
If your programming professor doesn't know that AI detectors are unreliable and that reliably detecting AI-generated content is essentially impossible, you might want to consider transferring to a different school.
Agreed. If he genuinely believes that AI detectors are reliable, he's not up to date with current technology and should know better.
That same professor once said that all software, no matter how sophisticated, is error-prone.
I just tested it with a 100-word essay. It came up as 100% AI-generated. Then I added the word “fucking” to one of the sentences, and suddenly it scored 0% AI.
The professor faces a dilemma: either admit that AI detectors are not 100% accurate, which would explain away the verdict on their own sample code, or concede that the sample code is indeed AI-generated, thereby vouching for the detectors' accuracy.
You might have missed the professor’s likely actual stance: he knows AI detectors are unreliable, but having a clear “don’t use AI” policy gives him a way to fail students who are obviously using AI. As someone experienced in coding, he can easily tell when a beginner hasn’t written their own code and can prove it by quizzing them on their rationale and thought process.
His claim that AI detection is “100% accurate” is probably a bluff to scare students into not cheating, which saves him the hassle of pursuing academic-dishonesty cases.