CHATGPT


AuroraSaab

Legendary Member
To answer Milzy's question about propaganda and misinformation, it's been shown quite a bit that you can get it, and other systems, to be racist and so on fairly quickly. I don't know about dangerous but some of the reports about it are concerning, like answering Yes when asked Should I kill myself?

https://www.google.com/amp/s/amp.th...eply-sorry-for-offensive-tweets-by-ai-chatbot

It's going to have an impact on journalism and students writing fairly quickly. Sports Illustrated have just sacked their CEO for using articles written by AI, with AI generated photos for the made up journalists profiles:

https://www.independent.co.uk/news/...insohn-sports-illustrated-fired-b2463446.html
 

stowie

Active Member

Microsoft made a bit of an error with their chatbot in that it was open for all to use and fed the dialogue back into the training process. Plus, I suspect, they used a much smaller training dataset than ChatGPT, which meant that if a significant portion of the training data was racist or otherwise inappropriate, the output reflected it. ChatGPT has a huge dataset, plus interaction with the tool is gated by sign-up (and, I suspect, policed far more rigorously than Microsoft's was).

There is quite a lot of research around bias in AI. Fundamentally it is a result of bias in the dataset, which in turn reflects bias in how the dataset was assembled and, ultimately, bias in society more generally. A better-known result was an AI guessing people's jobs from photos, showing a strong bias based on ethnicity.

I already use ChatGPT at work, typically where I have to write a long and involved email or document and, on reading back, it is clunky and disjointed. ChatGPT has worked quite well at times to produce a better-constructed mail. I view this as editing rather than creating, similar to asking a colleague's opinion on a mail before sending. I think it will be a long time (possibly never) before I feed ChatGPT the original email or document scope and just send off whatever it deems a suitable reply.

ChatGPT is astonishing, but underlying all the complex models is still the same premise. The model is providing responses based upon probabilities derived from previous data. Wider context and societal nuances will at best be skewed to the norms that were implicit in the training data. I think a great example of this is when I use it to write emails to my boss. They end up sounding very enthusiastic and corporate with a lot of buzz-words. This might play OK in the US, but comes across as hugely sarcastic to the British ear. A nuance that I find hilarious and my boss finds slightly irritating. ChatGPT is so good that I can ask it to dial back the enthusiasm and it does then produce something more aligned with a UK email, but where is the fun in that?
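The "probabilities derived from previous data" premise can be illustrated with a deliberately toy bigram model (a simplified sketch with a made-up three-line corpus; real LLMs use neural networks over vast corpora, but the underlying idea is the same): whatever pattern dominates the training data dominates the output, which is exactly why skewed data gives skewed responses.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: the model only "knows" what these lines contain.
corpus = [
    "the quick brown fox",
    "the quick brown dog",
    "the slow brown dog",
]

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "dog" follows "brown" twice in the corpus, "fox" only once,
# so the model's "best" continuation simply mirrors the data's skew.
print(most_likely_next("brown"))
```

Asked to continue "brown", it says "dog" purely because the corpus leans that way; nothing about correctness or nuance enters into it, which is the point being made above.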
 
I don't have a Twitter. What does it do?

[Attached: nine screenshots, IMG_4172–IMG_4182]

PK99

Regular
I've tried asking ChatGPT about a few things I know a lot about, and where I also know there is much wrong information out on the interweb.

E.g. how to prune wisteria and how to grow alstroemeria, plus other technical questions about nuclear safety. (Two past career tracks...)

In every case much of the answer provided has been sound. But, key elements have been missing or just plain wrong. At best, in university exam terms, the answers would score a 2.ii or a poor 2.i ... nowhere near a first.
 

AndyRM

Elder Goth
It's a bit early to be using the dug as a heat source. When winter properly kicks in, that's the time.
 