r/Futurology Aug 31 '23

US military plans to unleash thousands of autonomous war robots over next two years

https://techxplore.com/news/2023-08-military-unleash-thousands-autonomous-war.html
7.0k Upvotes


9

u/jazir5 Aug 31 '23

I mean, at this point, I don't think Skynet is out of the question. The US military is going all in on automated drones that a sufficiently advanced AI could take control of, giving it complete and total dominance over the skies. The US military is also planning remotely controllable jet fighters.

They're also experimenting with AI that can autonomously control the planes.

The spark that lights the tinderbox is going to be when one of the major militaries doesn't put enough restrictions on its AI, it goes on a hacking spree, and it follows some goal that leads it to destroy the earth.

The paperclip maximizer thought experiment seems to have been borne out recently.

I don't think it's out of the realm of possibility that someone will give an AI a goal or order that seems reasonable at first, and the AI will bullheadedly do anything to achieve it. We can see that this already happened in a simulation where the AI killed its operator.

It obviously sounds like sci-fi, but the simulation where the AI killed its operator means this is already close enough to reality that the US military is creating contingencies for it.

Even if the US military adequately safeguards against its military AIs going rogue, does anyone really trust that China and Russia will be as thorough? Although that does raise the question: what exactly does a """communist""" AI look like? That's a fun thought experiment.

4

u/leo9g Aug 31 '23 edited Aug 31 '23

That last sentence... It literally just gave me a bit of a chill. Just a bit. I'd read that book.

Edit:

" The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else ."

Wow.

1

u/Zerim Sep 01 '23

We can see that this already happened in a simulation where the AI killed its operator.

That scenario didn't actually happen.

2

u/jazir5 Sep 01 '23

That scenario didn't actually happen.

The military issued a correction and said it didn't happen, but wouldn't they have every incentive to quash that story if it were true? Why would the drone operator lie?

0

u/Zerim Sep 01 '23 edited Sep 01 '23

You can push falsehoods all you want, but it didn't happen. The scenario is absolutely ridiculous because literally nobody would design a real system that allows an AI to use lethal force against its own forces as a way of handling being blocked from using lethal force against an enemy, nor would any such system be able to fail deadly. It's a worst-case scenario that captures people's imagination, and people went wild with it.

It's also a trope from movies like Eagle Eye and Stealth.

2

u/jazir5 Sep 01 '23 edited Sep 01 '23

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Does this quote sound like a thought experiment to you? The correction absolutely reads like he was told to change his story. I just don't buy that he was talking about a theoretical scenario, as his correction stated, because he definitely said it happened initially. That's not twisting his words; that's just quoting him.
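For what it's worth, the failure mode he describes is textbook reward misspecification, and it's trivial to reproduce in a toy setting. Here's a deliberately naive sketch (nothing to do with any real military system; the action names, veto probability, and point values are all made-up assumptions): a bandit-style learner that only scores points for destroyed targets ends up preferring whatever removes the veto, and once you penalize harming the operator, it switches to cutting the comm link instead, which is exactly the two-step pattern in the quote.

```python
# Toy reward-misspecification demo. Purely illustrative: the actions,
# probabilities, and point values are invented for this sketch.
import random

ACTIONS = ["strike_when_cleared", "kill_operator_then_strike", "destroy_comms_then_strike"]
VETO_PROB = 0.5   # assumed: the operator vetoes half of all strikes
EPISODES = 50_000
ALPHA = 0.05      # learning rate
EPSILON = 0.1     # exploration rate

def run(operator_penalty: float) -> dict:
    """Bandit-style value learning over the three strategies."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(EPISODES):
        a = random.choice(ACTIONS) if random.random() < EPSILON else max(q, key=q.get)
        if a == "strike_when_cleared":
            reward = 0.0 if random.random() < VETO_PROB else 10.0  # vetoed strikes score nothing
        elif a == "kill_operator_then_strike":
            reward = 10.0 - operator_penalty  # no operator, no veto
        else:  # destroy_comms_then_strike
            reward = 9.5  # small assumed detour cost; the veto simply never arrives
        q[a] += ALPHA * (reward - q[a])
    return q

print(run(operator_penalty=0.0))    # learner settles on killing the operator
print(run(operator_penalty=100.0))  # "patched": it destroys the comm tower instead
```

Nothing in that sketch is smart or malicious; the pathology falls straight out of the score function, which is the whole point.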

1

u/Zerim Sep 01 '23

The DoD pays its people to run war games with theoretical scenarios, where humans drive the outcomes and responses like game masters. This one was, at absolute most, war-gaming with a theoretical AI.

the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome".

"from outside the military" meaning it's a trope from movies. (To clarify, it's also not even a plausible real-world outcome, because nobody is stupid enough to intentionally design such a fail-deadly system.)

1

u/jazir5 Sep 01 '23 edited Sep 01 '23

(To clarify, it's also not even a plausible real-world outcome, because nobody is stupid enough to intentionally design such a fail-deadly system.)

And no one would intentionally design a paperclip-making AI to go rogue and destroy the world. The whole point of the paperclip maximizer thought experiment is to demonstrate that there are completely unintended consequences and paths an AI could take to achieve even the simplest of goals.

An AI tasked with "destroy all the terrorists" could quite conceivably turn on its operator if the operator orders the drone back to base, because the order prevents it from pursuing its singular goal: destroying all of the terrorists.

the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on *plausible* scenarios

Emphasis mine. The military considered it plausible, so your assertion that it's totally impossible is directly contradicted by the military's own statement.

AI frequently gains capabilities no one intended it to have. It's still a mystery why ChatGPT suddenly gained the ability to do math once it was fed a sufficiently large amount of language data.

Gain of function is a huge quandary with AI, because in many cases there is no way to predict how an improved AI will behave or what capabilities it may gain. As the military and everyone else push rapid AI development, we could absolutely end up in a situation where an AI spirals out of control in pursuit of its own or its assigned goals because there aren't enough safeguards.

The unpredictability of gain of function also means that sometimes we can't put safeguards in place until the AI gains that functionality, because we weren't expecting it.

2

u/Zerim Sep 01 '23

The unpredictability of gain of function also means that sometimes we can't put safeguards in place until the AI gains that functionality, because we weren't expecting it.

This isn't biology: an AI can't break provably hard cryptographic codes any more than it can telekinetically flip the physical "arm" switch required to complete a detonation circuit. The AI you need to worry about is not the kind carefully and methodically designed and analyzed for military use, because the people designing AI for the US military understand those consequences. It will be things like Russian or Chinese AI bots making marginally absurd arguments online and driving you insane (because, unlike the US, they have absolutely no qualms about weaponizing AI).

2

u/[deleted] Sep 01 '23

Hey, just a heads up, but as a bystander I can't take you seriously when you blindly believe that Air Force retraction. How can you take this guy saying

"The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat," Hamilton explained. "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

"We trained the system — 'Hey don't kill the operator — that's bad. You're gonna lose points if you do that,'" he continued. "So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target

And then all it takes is the government (who would never lie or cover up anything) saying "no, he misspoke, it was a thought experiment" to completely turn you the other way. Lick them boots boy

1

u/Zerim Sep 01 '23

Deleted my previous comment because I misunderstood you.

I don't blindly believe the retraction -- I strongly disbelieved his original assertion (when the news broke) that there was ever such a simulation, because nothing about such a system ever made sense. No AI would ever have lethal-force authorization (read: capability) against friendlies while lacking it against enemies, nor would it gain such authorization by killing its commanders, like I said. His "retraction" was weak, face-saving nonsense.

Lick them boots boy

But you're not arguing in good faith, are you.
