Non-paranoid AI thread

cycloneG

Well-Known Member
Mar 7, 2007
15,131
15,164
113
Off the grid
This is the only slippery slope I worry about. Now that management sees the shortened lead time they will expect it every time.

It certainly lets us take on more projects, which has increased our profits and salaries :). We just hired two more people this week because of the increased work.
 
  • Wow
Reactions: tman24

GBlade

Well-Known Member
Mar 9, 2014
731
409
63
There are between 100 and 1,000 trillion synapses in the brain. ChatGPT-4 is rumoured to have 0.5 trillion parameters, each roughly equivalent to a synapse. In 20 years we are likely to see AI models just as capable as a person in a wide variety of uses.
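
As a rough back-of-the-envelope check on those figures (and treating one parameter as loosely comparable to one synapse, which is itself a big assumption), a few lines of Python show how wide the gap still is:

# Rough comparison of the figures quoted above (both are estimates/rumours).
synapses_low = 100e12      # ~100 trillion synapses (low estimate)
synapses_high = 1000e12    # ~1,000 trillion synapses (high estimate)
gpt4_params = 0.5e12       # rumoured GPT-4 parameter count

print(f"Brain has roughly {synapses_low / gpt4_params:.0f}x to "
      f"{synapses_high / gpt4_params:.0f}x more synapses than GPT-4 has parameters")
# -> roughly 200x to 2000x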
 

pourcyne

Well-Known Member
Feb 19, 2011
7,678
9,061
113
There are a TON of companies that take free government data on various whatever, repackage it, and sell it. Sometimes with little to no analysis or clean-up. Just zero value add, other than someone didn't know it was out there free...

Yup. Like all the weather reports that come from NOAA.
 

qwerty

Well-Known Member
SuperFanatic
SuperFanatic T2
Apr 3, 2020
6,219
8,801
113
59
Muscatine, IA
I know there is already a thread about AI in the Cave, but it is filled with paranoia about SkyNet. I also don't think a reasonable discussion about ChatGPT and AI needs to be political.

I am not worried about "evil AI" taking over, AKA "SkyNet". I am worried about us becoming dependent on bug-riddled AI, which seems likely. More and more people are growing dependent on things like ChatGPT without realizing the limitations and that it can be just plain wrong a lot.


My ChatGPT says that post is uncytely
 

HFCS

Well-Known Member
Aug 13, 2010
67,811
55,005
113
LA LA Land
There are between 100 and 1,000 trillion synapses in the brain. ChatGPT-4 is rumoured to have 0.5 trillion parameters, each roughly equivalent to a synapse. In 20 years we are likely to see AI models just as capable as a person in a wide variety of uses.

I’m always curious how some read this with comfort, some with concern, and their ages.

It applies even more to some other topics.
 

KnappShack

Well-Known Member
May 26, 2008
20,284
26,158
113
Parts Unknown
I’m always curious how some read this with comfort, some with concern, and their ages.

It applies even more to some other topics.

With how humanity seems to have a need to destroy itself, and how humans tie their worth to work, it's impossible for me not to see the potential disaster.

That's before any hiccups to the technology are mixed in.
 
  • Agree
Reactions: HFCS

TitanClone

Well-Known Member
SuperFanatic
SuperFanatic T2
Dec 21, 2008
2,546
1,672
113
I know there is already a thread about AI in the Cave, but it is filled with paranoia about SkyNet. I also don't think a reasonable discussion about ChatGPT and AI needs to be political.

I am not worried about "evil AI" taking over, AKA "SkyNet". I am worried about us becoming dependent on bug-riddled AI, which seems likely. More and more people are growing dependent on things like ChatGPT without realizing the limitations and that it can be just plain wrong a lot.

The bolded part is essentially the same argument, just different points. If the AI is so good, when does it take over? Compared to: if the AI is enticing enough, how far can we go without becoming too dependent, at which point we're screwed and it has effectively taken over anyway?

At the end of the day regulation will need to be strong, but it will be tough. How do we handle misinformation at a greater scale than we already fail to? How do we handle the economy as more and more jobs are swept up? How do we encourage learning when answers come from something even simpler than a quick Google search?
 

Angie

Tugboats and arson.
Staff member
SuperFanatic
SuperFanatic T2
Mar 27, 2006
28,206
12,927
113
IA
I am fine using it for things like "write me fifteen headings for this report about X that use Y in the title." Things that don't depend on a lot of facts and figures, or nuance, but rather just aggregate options.
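
For anyone curious what that looks like in code, here's a minimal sketch using the OpenAI Python client (the model name and prompt wording are just placeholders, not a recommendation):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask for candidate headings only; a human still picks and edits the winners.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Write fifteen candidate headings for a report about X "
                   "that use Y in the title. Return them as a numbered list.",
    }],
)
print(response.choices[0].message.content)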
 

Jer

Opinionated
Feb 28, 2006
22,693
21,081
10,030
I think it has a lot of uses, and 99% of the AIs out there aren't nearly as capable as is being claimed. They're more like dynamic search engine 2.0s, or grammar and writing learning tools.

With that said... It's now being denied, so who knows if it actually happened, but there was a report last week that was alarming if true...

The US Air Force tested an AI-enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone, so the drone decided that the human operator was an obstacle to its mission and attacked him.
 

DarkStar

Well-Known Member
Sep 15, 2009
6,365
7,144
113
Omaha
There are between 100 and 1,000 trillion synapses in the brain. ChatGPT-4 is rumoured to have 0.5 trillion parameters, each roughly equivalent to a synapse. In 20 years we are likely to see AI models just as capable as a person in a wide variety of uses.
Wait till quantum computers become a thing. This argument will not age very well.
 

DarkStar

Well-Known Member
Sep 15, 2009
6,365
7,144
113
Omaha
I am fine using it for things like "write me fifteen headings for this report about X that use Y in the title." Things that don't depend on a lot of facts and figures, or nuance, but rather just aggregate options.
And the general population will lose more critical thinking skills, making them more vulnerable to deepfakes and to people using these tools to scam them, or worse.
 

DarkStar

Well-Known Member
Sep 15, 2009
6,365
7,144
113
Omaha
I think it has a lot of uses, and 99% of the AIs out there aren't nearly as capable as is being claimed. They're more like dynamic search engine 2.0s, or grammar and writing learning tools.

With that said... It's now being denied, so who knows if it actually happened, but there was a report last week that was alarming if true...
Reminds me of the book I, Robot.

What safeguard logic can you program into a computer that will prevent it from determining that humans are the greatest threat to humanity?
 

nrg4isu

Well-Known Member
SuperFanatic
SuperFanatic T2
Dec 29, 2009
1,886
3,044
113
Springfield, Illinois
Wait till quantum computers become a thing. This argument will not age very well.

I'm of the opinion that quantum computers are much like fusion reactors - mind blowing in theory but will be mired in "development" for my entire life. Maybe someday they're viable, but I've lived long enough to doubt that that day will be anytime soon. AI is kinda in the same boat. It's here and it's real, but AI has its limits. As a software developer, I'm not buying into the current AI hype/scare.
 

cyco2000

Well-Known Member
Nov 5, 2007
1,328
198
63
And the general population will lose more critical thinking skills, making them more vulnerable to deepfakes and to people using these tools to scam them, or worse.
Or susceptible to never believing AI is wrong. I don't see either scenario as good.
 
  • Like
Reactions: DarkStar

UnCytely

Well-Known Member
SuperFanatic
Sep 24, 2017
3,297
3,471
113
Council Bluffs, Iowa
People are using AI to train AI.

"Using AI-generated data to train AI could introduce further errors into already error-prone models. Large language models regularly present false information as fact. If they generate incorrect output that is itself used to train other AI models, the errors can be absorbed by those models and amplified over time, making it more and more difficult to work out their origins"


I am not worried about "evil AI" nuking civilization. I am worried about a sophisticated but bug-ridden AI turning all of the traffic lights in a city green all at once (for example), but only for a minute or two, and because it was so transient it becomes difficult to dive into the mountains of data that trained the AI to find out why it happened.
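
As a toy illustration of the amplification worry in that quote (completely made-up numbers, not a model of any real training pipeline):

# Toy simulation: each generation trains partly on the previous generation's
# output, so a fraction of its errors carries over on top of new ones.
error_rate = 0.02        # assumed error rate of the first model
carryover = 0.6          # assumed fraction of inherited errors that stick
fresh_errors = 0.02      # assumed new errors introduced each generation

for generation in range(1, 6):
    print(f"generation {generation}: error rate ~{error_rate:.3f}")
    error_rate = error_rate * carryover + fresh_errors
# The rate creeps upward each generation, and by then nobody can easily say
# which training set the surviving errors originally came from.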
 

Gonzo

Well-Known Member
Mar 10, 2009
23,525
25,845
113
Behind you
I guess I don't understand how it's ok for someone to pull up ChatGPT and type in... "write a blog about this topic that focuses on area 1, area 2, and area 3 as it relates to my profession" and then post that blog with their name on it. I understand that the output is pretty amazing, but putting your name on content that you didn't generate is fukked up.
 
  • Agree
Reactions: DarkStar

Mr Janny

Welcome to the Office of Secret Intelligence
Staff member
Bookie
SuperFanatic
Mar 27, 2006
41,177
29,491
113
I guess I don't understand how it's ok for someone to pull up ChatGPT and type in... "write a blog about this topic that focuses on area 1, area 2, and area 3 as it relates to my profession" and then post that blog with their name on it. I understand that the output is pretty amazing, but putting your name on content that you didn't generate is fukked up.
Agreed. That doesn't seem like something you can truly say that you wrote.

But I had to write a job description for a new position that I'm hiring for, and chatGPT gave me a pretty awesome framework to model it on. I made quite a few changes and tweaks to fit exactly what I was looking for, but I have to say that it did a pretty remarkable job.
 

exCyDing

Well-Known Member
Nov 29, 2017
4,315
7,639
113
Agreed. That doesn't seem like something you can truly say that you wrote.

But I had to write a job description for a new position that I'm hiring for, and chatGPT gave me a pretty awesome framework to model it on. I made quite a few changes and tweaks to fit exactly what I was looking for, but I have to say that it did a pretty remarkable job.
That’s exactly how I’ve been using it. It does maybe 80% of the work, but I always end up rephrasing a few things and doing a final edit before sending anything off.
 

Gonzo

Well-Known Member
Mar 10, 2009
23,525
25,845
113
Behind you
Agreed. That doesn't seem like something you can truly say that you wrote.

But I had to write a job description for a new position that I'm hiring for, and chatGPT gave me a pretty awesome framework to model it on. I made quite a few changes and tweaks to fit exactly what I was looking for, but I have to say that it did a pretty remarkable job.
Exactly, I think it'd be perfect for things like that. I've never seen a job description with a byline.