5 Comments

Please, please... this moratorium is important. Garage development is expected for robotics, but the LLMs are where neural nets are developed. The tech language standards are being developed for central ethics now. They have to possess deferential interpretations of Western IP legal concepts. There is a lot of “reputation” market and credit data attributed to language integration through the global trade system. China has a social credit system they are trying to interpose with US credit systems. They are not the same. If Canada can freeze bank accounts without evidence of a crime, that's going to crimp investments in America, as risk coming through Trudeau.

Musk and the others can't reveal the real reason why we have to "take the toys away from the boys." There are more than 100 ET races that have unlawfully meddled in humanity's affairs, and most of them are predators and slavers. In particular, the Ciakahrr (Draco) recklessly used AI to advance their enslavement plan. Penny Bradley explained the danger in this interview: https://inscribedonthebelievingmind.blog/2023/03/02/penny-bradley-explains-black-goo-ai/

Arkheim Ra and John Whitberg are also very emphatic that AI poses a threat to life in the galaxy:

"And now we have this very disconnecting technological age that we’re living in where we’re using technology to be connected to each other but we’ve never felt more isolated. And I think that has something to do with this AI at Montauk. I think the phones that we’re talking on, the computers that we’re using right now to communicate with, have a lot to do with Montauk and this AI takeover. And I think that’s what we’re heading towards, or what they want, is basically humanity to move towards being like how the Draco are, where we worship an AI supercomputer."

https://inscribedonthebelievingmind.blog/2023/08/25/the-great-reset-arkheim-ra-john-whitberg/

AI introduces "facts" that are plainly inaccurate. Imagine someone using voice-cloning technology to recreate the voice of a politician or president caught admitting to a crime, or false footage of a key diplomat discussing economic sanctions or military action against another nation. Innocent people could deny it, but the guilty could just as easily claim they're victims of digital lies and high-tech tricks even when they aren't. As AI grows in its ability to imitate real life, we might expect a parallel growth in AI-based tools that help us distinguish the false from the true: AI applications that can spot AI fakery. OpenAI began developing an "AI Classifier" to help identify whether a text is human-written or AI-generated. Until such a classifier is on the market, we can't believe anything a high-profile "person" says unless we know the person. By the way, I watched a video of Julian Assange, with the telltale green screen behind him, calling Republicans "deplorables." Ten days to two weeks later, Hillary Clinton started calling us deplorables. We knew the video was a fake. As it stands, AI voice-cloning and deepfakes are dangerous to the political health of our country.
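For the curious: the classifier idea mentioned above can be illustrated with a toy heuristic. Real detectors (like the one OpenAI built) are trained models; the sketch below is NOT that, just a minimal statistical stand-in based on the popular "burstiness" observation that human prose often varies sentence length more than machine-generated prose. The function name and thresholds are illustrative inventions, and this should not be relied on as an actual detector.

```python
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic (illustration only, not a real AI-text detector):
    return the standard deviation of sentence lengths in words.
    Higher scores mean more varied ("burstier") prose, which is
    loosely associated with human writing."""
    # Crude sentence split: treat ! and ? like periods.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

# Uniform sentence lengths score low; varied lengths score high.
uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. When the storm finally broke over the harbor, nobody was ready. Run."
print(burstiness_score(uniform))  # 0.0 (all sentences are 4 words)
print(burstiness_score(varied) > burstiness_score(uniform))  # True
```

A production classifier would instead use a language model's perplexity or a trained discriminator; this toy version only shows the shape of the problem: turning text into a score and thresholding it.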

A.I. is learning quickly how to imitate reality. Currently, one can often find "tells" that an image is machine-generated. However, image generators and their human users seem to be learning how to improve their results and eliminate these errors. The most dangerous use of A.I.-generated faces is "deepfakes": the capacity to digitally alter the real faces of people, transforming them into the faces of other people. Media creators are beginning to use AI to generate remarkably realistic audio of people saying things they never said. AI voice cloning and deepfakes continue to improve in quality.

Signed and shared! Many thanks.
