“What are the positives and negatives of using ChatGPT (and other AI) in post-secondary?”
This is a question I need to answer for an essay competition, and while I do have ideas of my own (and from my professor, when I asked for his opinion), I was hoping some of you here had insights to add.
Is it ethical that I ask for your aid? I don’t want to overstep. I would not use anyone’s name/usernames at all in this essay, at most I will cite sources on the matter.
While I think my current ideas about the pros and cons are solid (more cons than pros, in my opinion), I want to know if I missed anything.
If needed, I will add what ideas I’ve come up with so far but for now I’ll leave that out.
Edit: I was tempted to post this in the “Ask Lemmygrad” community, but I think that’s more of an educational community about communism specifically, so I’ll stick to asking here.
for me the arguments would be mostly negative because:
- using it does not train your research skills
- using it does not train your creative and academic writing skills
- it is often just wrong when synthesizing text
so to me those are major cons in an educational context. some positives would perhaps be:
- it is useful as a phrase bank, as it can quickly give ideas on how to put words together.
- it is alright at giving direction when starting research, sort of like wikipedia
that’s all i can think of so far
Don’t forget about the privacy and copyright concerns: scraping the internet for training data, copyrighted or not, and also logging every input for the same purpose (and probably others).
A pretty significant con in my opinion.
Copyright is one of the cons I have written down but I never thought of the privacy issues with AI…
Most people don’t. Convenience is more important than privacy for most people, so I can’t blame you.
I’m just a paranoid tech geek. So this is usually the first and strongest concern for me.
Honestly, being on this forum has got me on the privacy paranoia train, so I get it. Sometimes I forget how many aspects of life/the internet invade privacy. Like now: I had no idea that AI like ChatGPT invaded privacy.
I think AIs are one of the most privacy invading things right after social media platforms.
These are all great, thank you for this!
My preferred way of thinking about these chatbots is that they’re effectively just on-demand peers with quick Google skills to chat with. Just like humans they can be confidently wrong a lot or have incomplete information or presentation, but also just like humans they can help you explore your ideas and give you quick insight.
Besides all the technical cons (blatant disregard for copyright law and it being randomly racist sometimes), I don’t think they’re particularly bad. You just have to keep in mind that they’re about as trustworthy as your local arrogant lab intern. Usually you’re already required to source your claims in higher-education work anyway.
Main issue right now is that the current favourite implementation seems to be specifically trained to almost never admit to not knowing something.
Training data comes from Americans, so that makes sense.
Just like humans they can be confidently wrong a lot or have incomplete information or presentation, but also just like humans they can help you explore your ideas and give you quick insight.
very very good stuff, thank you!
For more general information on LLMs I recommend this blog: https://simonwillison.net/ The post about prompt injection is especially good.
Oh this is going to be helpful since I don’t know much about LLMs.
It’s been a while since I skimmed it, but this article is a good source on the topic.
I love getting reading material so thank you!