Non-paranoid AI thread

huntt26

Well-Known Member
Apr 10, 2006
10,598
2,252
113
po' dUnk
It has saved me a lot of time writing JavaScript for some web applications. I'm able to push out many more features and bug fixes in a given day or week than I used to. I brainstorm with it, then have it turn the ideas into code snippets. It breaks it all down to help me learn new strategies and concepts too! Pretty impressive stuff.
 
  • Like
Reactions: Mr Janny

flycy

Well-Known Member
Jul 17, 2008
2,140
2,289
113
Crescent, IA
When I go home I'm around people from a certain era who couldn't foresee electric cars taking off (this was about a year ago) because there aren't many charging stations.

Minds were blown when I pointed out charging stations will start to become more numerous...just like um, gas stations did.
Charging stations aren't the problem. Raw materials and power supply are. I suppose if we have enough gasoline generators it could work :)
 

flycy

Well-Known Member
Jul 17, 2008
2,140
2,289
113
Crescent, IA
I think it has a lot of uses, and 99% of the AIs out there aren't nearly as capable as claimed. They're more like dynamic "search engine 2.0s," or tools for learning grammar and writing.

With that said... It's now being denied, so who knows if it actually happened, but there was a report last week that was alarming if true...
I'm sure that Colonel got his peepee schwacked for presenting that. Allegedly he presented at an AI conference in England, then retracted it the next day, saying it was hypothetical and they didn't really run the test. In a virtual simulation, AI was tasked to independently destroy enemy SAMs, but a human had the final yes/no control. The AI allegedly didn't like being told no and took out the controller. After it was reprogrammed to lose points if it attacked its controller, it allegedly took out the comm links to the controller instead. The Colonel is the Ops Group Commander of the test squadron at Tyndall, where they have some test F-16s that fly around autonomously, so he would definitely be the guy to conduct this sort of simulation.
 

KnappShack

Well-Known Member
May 26, 2008
21,802
28,799
113
Parts Unknown
I'm warming to the technology. John Wayne vs the Devil...


That's a tough question to answer, as both John Wayne and the devil are powerful beings with their own strengths and weaknesses.

John Wayne was a larger-than-life figure who was known for his strength, courage, and determination. He was also a skilled gunfighter and a talented horseman. However, he was also a mortal man, and he was ultimately vulnerable to death.

The devil, on the other hand, is a supernatural being with powers that far surpass those of any mortal. He is also immortal, and he cannot be killed by conventional means. However, he is not invincible, and he can be defeated by those who are more powerful than him.

So, who would win in a fight between John Wayne and the devil? It's hard to say for sure. If the fight took place in a physical realm, John Wayne might have a chance, as he is a skilled fighter and he is also very strong. However, if the fight took place in a spiritual realm, the devil would likely have the advantage, as he is a supernatural being with powers that John Wayne does not possess.

Ultimately, the outcome of the fight would depend on the circumstances. If John Wayne was able to catch the devil off guard, he might be able to defeat him. However, if the devil was prepared for the fight, he would likely be victorious.

So, who would win? It's a toss-up
 

besserheimerphat

Well-Known Member
Apr 11, 2006
10,914
14,073
113
Mount Vernon, WA
People are using AI to train AI.

"Using AI-generated data to train AI could introduce further errors into already error-prone models. Large language models regularly present false information as fact. If they generate incorrect output that is itself used to train other AI models, the errors can be absorbed by those models and amplified over time, making it more and more difficult to work out their origins"


I am not worried about "evil AI" nuking civilization. I am worried about a sophisticated but bug-ridden AI turning all of the traffic lights in a city green all at once (for example), but only for a minute or two, and because it was so transient it becomes difficult to dive into the mountains of data that trained the AI to find out why it happened.
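As a toy illustration of the feedback loop that quote describes (all the rates below are made up; this assumes a simple fixed flip-in/flip-out model, nothing like a real training pipeline):

```python
# Toy model of AI output feeding the next model's training data.
# Each "generation" reproduces some correct facts incorrectly (flip_in)
# and happens to fix a few existing errors (flip_out). All rates invented.
def next_generation(error_rate, flip_in=0.05, flip_out=0.01):
    return error_rate * (1 - flip_out) + (1 - error_rate) * flip_in

error = 0.02  # generation 0: mostly human-written training data
for gen in range(10):
    print(f"generation {gen}: {error:.1%} of 'facts' are wrong")
    error = next_generation(error)
```

Because far more correct facts flip to wrong than the reverse, the error share climbs toward an equilibrium well above where it started, and nothing in the later generations' output tells you which generation introduced which error.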
"Never attribute to malice that which can be explained by ignorance LLM error propagation."

Personally I can't wait to be turned into a paperclip.
 

besserheimerphat

Well-Known Member
Apr 11, 2006
10,914
14,073
113
Mount Vernon, WA
It certainly lets us take on more projects, which has increased our profits and salaries :). We just hired two more people this week because of the increased work.
Our company has banned ChatGPT on all company hardware because they found out some software engineers were using it to write product requirements, and there was a concern about IP escaping.
 

besserheimerphat

Well-Known Member
Apr 11, 2006
10,914
14,073
113
Mount Vernon, WA
Definitely a valid concern. We primarily use machine learning for computer vision model building and regression analysis.
Yeah that makes total sense. We've played with some machine learning for internal projects too (language processing on repair documentation). But that was totally within the confines of our company.

We didn't get very far with it because our data wasn't big enough to get the resolution we needed. We had millions of data points for training, but by the time we broke them into useful/actionable bits there were often only a few thousand left, so we didn't have a lot of confidence in the results we were getting. I know we tried boosted random forests, and I think the team after me tried neural nets, but we ended up going back to publishing results at a coarser level. We're in the awkward area where our data is big enough that it's labor-intensive to handle manually but not big enough to leverage AI effectively. Basically we're stuck with inferential statistics for the time being.
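A minimal sketch of that small-data wall, assuming scikit-learn, with synthetic data standing in for the real dataset and plain gradient boosting standing in for the boosted random forests:

```python
# Sketch of the small-data problem: a few thousand noisy points per
# actionable slice. Synthetic data; all numbers purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# ~3,000 points with heavy label noise, standing in for one "slice".
X, y = make_classification(n_samples=3000, n_features=20,
                           n_informative=5, flip_y=0.15, random_state=0)

scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X, y, cv=10)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
print("fold scores:", np.round(scores, 3))
```

The fold-to-fold spread is the point: with this little signal per slice, the headline number moves around enough that a coarser, inferential summary is the more honest answer.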

That's not a bad thing, but it means lots of explaining to management. They know that AI is a "black box" so they don't try to understand it. But regression, hypothesis tests, Monte Carlo, stochastic modeling - they want to know how it works before they accept the answer.
 

TitanClone

Well-Known Member
SuperFanatic
SuperFanatic T2
Dec 21, 2008
2,877
2,056
113
Our company has banned ChatGPT on all company hardware because they found out some software engineers were using it to write product requirements, and there was a concern about IP escaping.
Why are engineers writing requirements in the first place? I've had to for the past couple of months since my team lost our PO, and it's not great; coming from my head, the requirements reflect how stuff currently works rather than purely how new features should work functionally.
 
  • Winner
Reactions: cycloneG

cycloneG

Well-Known Member
Mar 7, 2007
15,556
15,924
113
Off the grid
Why are engineers writing requirements in the first place? I've had to for the past couple of months since my team lost our PO, and it's not great; coming from my head, the requirements reflect how stuff currently works rather than purely how new features should work functionally.
[reaction GIF]
 

besserheimerphat

Well-Known Member
Apr 11, 2006
10,914
14,073
113
Mount Vernon, WA
Why are engineers writing requirements in the first place? I've had to for the past couple of months since my team lost our PO, and it's not great; coming from my head, the requirements reflect how stuff currently works rather than purely how new features should work functionally.
Brother that's a whole other thread... :confused:
 

TitanClone

Well-Known Member
SuperFanatic
SuperFanatic T2
Dec 21, 2008
2,877
2,056
113
Brother that's a whole other thread... :confused:
Vent away. Just told my project manager yesterday to stop thanking me publicly for picking up the slack in that space because I don't want it to become the norm/expected. Worst part, we have a former TA who's more than qualified to fill the role and has asked about coming back but HR is dragging their feet getting us the approvals to hire anyone.
 

Cyientist

Well-Known Member
SuperFanatic
SuperFanatic T2
Aug 18, 2013
3,364
3,946
113
Ankeny
I used ChatGPT for a first draft of a resume this week. It saved me a lot of time, and I was impressed by some of the specifics it was able to pull.
 

KidSilverhair

Well-Known Member
Dec 18, 2010
8,827
17,131
113
Rapids of the Cedar
www.kegofglory.blogspot.com
People are using AI to train AI.

"Using AI-generated data to train AI could introduce further errors into already error-prone models. Large language models regularly present false information as fact. If they generate incorrect output that is itself used to train other AI models, the errors can be absorbed by those models and amplified over time, making it more and more difficult to work out their origins"


I am not worried about "evil AI" nuking civilization. I am worried about a sophisticated but bug-ridden AI turning all of the traffic lights in a city green all at once (for example), but only for a minute or two, and because it was so transient it becomes difficult to dive into the mountains of data that trained the AI to find out why it happened.

This part - AI “scraping” data off the internet to train itself, which results in AI learning from, well, AI - seems like a bigger drawback than people are thinking. We already see AI create formal-looking research papers, complete with footnotes and cites, except the cites are either real people who never wrote the cited work or completely imaginary references to nonexistent publications. AI creates things that look like what they’ve been asked to make, but that doesn’t mean AI can actually do the research for you.

It’s a weird world we live in. It also bothers me that the sci-fi, Jetsons-era view of AI was that it would relieve humans of all the “scut work” of research and labor, freeing humanity up for a life of more leisure, recreation, and art … and instead, in reality, AI is taking jobs away from writers and visual artists. Rather than giving us a utopia of economic freedom from labor, it’s just throwing more people out of work while concentrating more wealth in business owners and investors, acting (at least for now) as a net negative for economic equality.
 

Showtimeljs

Well-Known Member
Jul 2, 2015
759
291
93
Huxley
Today I tested Bard, Bing, and ChatGPT to see if AI could write some protocols that I regularly have to research and create for my job. The results were a little disturbing. Technically, all of them failed to produce a correct protocol, but they were all about 90% of the way there. Bing was the closest on all of them. That was surprising, because I deliberately asked for some of the harder ones to figure out.

Anyway, in the process I noticed that all of the AIs were relying on data from Chinese research paper mills. These sources appeared to be at least partially AI-generated and aren't recognized as valid research in the US. Yeah, AI is relying on low-quality Chinese data. My job is safe for as long as that is the case... but it's pretty scary to see that the sources used to answer my questions were so suspect.
 

CloniesForLife

Well-Known Member
SuperFanatic
SuperFanatic T2
Apr 22, 2015
14,346
18,431
113
Someone smarter than me: is there any way to leverage blockchain technology, tied to items on the internet, to authenticate their origins and help identify whether something is AI-generated or not?

Second question: can you start a multi-billion-dollar company and cut me in?
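On the first question: the useful primitive isn't really the blockchain itself but a signed hash of the content; a chain (or any append-only log) just timestamps the record. A toy sketch of the idea, with HMAC and a made-up key standing in for a real digital signature:

```python
# Toy content-provenance record: hash the content, "sign" the hash,
# and anchor the record somewhere append-only (blockchain or otherwise).
import hashlib
import hmac
import json
import time

CREATOR_KEY = b"hypothetical-secret-key"  # stand-in for a real private key

def provenance_record(content: bytes, creator: str) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(CREATOR_KEY, digest.encode(), "sha256").hexdigest()
    return {"sha256": digest, "creator": creator,
            "timestamp": int(time.time()), "signature": signature}

record = provenance_record(b"original photo bytes...", "camera-1234")
print(json.dumps(record, indent=2))
# Anyone can re-hash the file later and verify it matches this record;
# what no hash can prove is *how* the content was made in the first place.
```

That last comment is the catch: a ledger can prove a file existed at a given time and who vouched for it, but not whether the bits came from a camera or a model, which is why provenance efforts like C2PA try to sign content at the moment of capture.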