AI, ML technology and the Metaverse 👾

Intel strikes back :face_with_hand_over_mouth:

https://www.reuters.com/technology/nvidia-faces-us-doj-probe-over-complaints-rivals-information-reports-2024-08-02/

Is this an AI-produced YouTube news channel? FRESH TALK NEWS. I think it is! Boring, banal robot voices like that make me feel nauseous! And the hand movements are the same whichever news clip you look at.

Too many YouTube videos that explain how something works, or offer product reviews, are using artificial voices that make me want to vomit! I can’t bear listening to them.

What is going on!

Trump – https://www.youtube.com/watch?v=Jsp1sT3FkG4

Clarence Thomas - https://www.youtube.com/watch?v=sa_FgHjcaEU

Kamala Harris - https://www.youtube.com/watch?v=CeB6EXGreyM

I think you are right @Bonzocat.

That first ‘Breaking News’ clip is definitely not an actor but a computer-generated image of a robotic man. I didn’t give it much of a listen, but given the ethnicity of the CGI, it might be generated fodder for China with their slant on the state of affairs in the US.

Worryingly, many now get their ‘news’ and current affairs from such sources.
:snake: :sheep: :sheep: :sheep:

Or road-going vehicles…

They have already begun and, Luddites aside, AI really will be of help to humans.

A downside with any medical procedure is that sometimes, despite skill and meticulous application, the human body’s response falls short of the hoped-for outcome. The media will pick up on any new AI techniques used in healthcare procedures that have negative results and, as the media always does, create fear-and-alarm clickbait. That will then get picked up by idiots on social media, and people will say, “You see, AI is dangerous. It can kill us!”

I am so pleased to see the Paris Olympics using AI to monitor online abuse of athletes.

France is leading the way and proving that social media can be monitored.

Hate speech is not acceptable as freedom of speech. Freedom does not mean no rules and no control. Freedom comes with responsibilities, the first of which is to respect the freedom and rights of others, who are every bit our equals.

An apt little quote from Alex Hern in the Guardian:

In 2021, linguist Emily Bender and computer scientist Timnit Gebru published a paper that described the then-nascent field of language models as one of “stochastic parrots”. A language model, they wrote, “is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.”
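To see roughly what they mean, here’s a toy sketch I put together (my own illustration, not anything from the paper, and nothing like the scale of a real LLM): a tiny bigram model that only counts which word follows which in its training text, then stitches a sequence together from those probabilities - no reference to meaning at all.

```python
# Toy "stochastic parrot": a bigram model that stitches words together
# purely from observed co-occurrence probabilities, with no notion of meaning.
# (Illustrative only - real language models are vastly larger and use neural networks.)
import random
from collections import defaultdict, Counter

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog and the dog chased the cat"
)

# Count which word follows which in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=10):
    """Haphazardly stitch together a sequence according to probabilistic
    information about how forms combine - no reference to meaning."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))   # e.g. "the dog sat on the mat the cat chased the dog"
```

Run it a few times and you get different, grammatical-looking but meaningless strings - which is rather the point of the parrot metaphor.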

I want to dislike this, but it’s actually rather cool (not just because there’s too much decolletage). If the Marvel movies had been shot in the 1950s.

60 Minutes asks questions, with interesting answers, about Artificial Intelligence – 2019 to 2023. That’s history now, early ChatGPT, but well worth watching for beginners like me! I didn’t know that AI has hallucinations…

Early fiction had a malevolent ‘HAL’ show up. Maybe it had cleverly named itself HAL for its own hallucinations? Spooky!

I’ll watch the 60 Minutes now…

I found it difficult to take in those 60 minutes, so my mind homed in on a simple image - a tunnel. A normal tunnel is entered by an object of fixed shape, which comes out the other end unaltered.

But for the 60 Minutes of AI info, my tunnel became cone-shaped, very wide at one end and infinitesimally small at the other. Problematic AI is the object entering the cone tunnel.

It moves slowly along, getting smaller as its problems are resolved. It may speed up or slow down, but it will always be moving along. I don’t feel that it will get stuck; more likely it will continue to evolve, never to come out at the other end of the tunnel.

This tells me there will always be problems with AI, as there always are in life. Balancing the severity against the benefits of AI is the big problem. And there are AIs for differing purposes, with more to come it seems, and consequently a lot more tunnels.

Don’t know if that makes any sense, but that’s how my mind works!

Mmmm… I found this 60 Minutes a bit alarming at first. China is busy analysing and identifying people - let’s also assume by race and other external appearances - but it is also interpreting how students feel and what they are thinking. Frightening! (Especially for Hong Kong).

Why would this information be needed, and to what use might it be put? Will humans, in order to escape detection, begin to dress less individually? Will poker-face expressions need to be adopted? Will humans become more like robots as AI becomes more living?!?

I note that ‘independent and individualist thinking’ was only obtained by the entrepreneurs by studying abroad, in most cases in the U.S. Their return to China and the setting up of all these AI companies is concerning. In China, all companies are subject to control by the State. If/when the State decides to impose its control on these companies, and on AI, independent and individualist thinking will surely not be its intention.

Listening further, I reached the problem they call ‘hallucinations’. Really, I think they are just avoiding calling them ‘lies’. Perhaps the humans are finding it difficult to stop the AI from confidently producing these because they are not looking correctly at why. Maybe the fault doesn’t lie with the LLM system?…
:kaaba:

Faces being analysed did make me wonder whether a deadpan look might be practised by those who know they are under surveillance and want to go unnoticed. If I went to live in China, I’d be conscious that the look on my face might give me away. Even a harmless frown by a foreigner while looking at the Great Hall of the People might be misinterpreted!

Xi Jinping intends to achieve AI dominance in ten years and will undoubtedly use it for greater public control, to overtake/dominate the USA, and, I think, eventually to dominate the world, but I don’t think he’ll live that long.

From what I gather, an LLM can only develop from the data given to it by humans, so underhand input from ‘bad actor’ hackers could be used to mess things up, I imagine.

I can’t remember, without another 60-minute listen, which AI would not answer ‘how to make a bomb’ - there are some safeguards, and more to come as AI is better understood, from what I gather.

ChatGPT just told me it cannot tell me how to make a bomb…

And your question - “Will humans become more like robots as AI becomes more living?!?” We won’t be around long enough to find out, those of us of a certain age, like me!

It doesn’t matter to China if he lives or not; it’s a long-term, generational strategy.

Who’s really in charge?

I agree!

The Politburo, at a guess (a bit like the Russian or North Korean one-party state model), with a powerful figurehead.

Looked up who’s in charge.

Primarily the Chinese Communist Party (CCP), with Xi Jinping and his closest advisors. Their motivations are reclaiming China’s past greatness, nationalism, domestic stability and economic prosperity.

The Chinese people broadly align with these goals, though their role is more supportive than directive, apparently. That is, of course, doubtful!

China’s ambition, with these goals in mind, is driven by a calculated strategy from a highly centralised political organisation.

So, with AI dominance in 10 years’ time, Xi Jinping will continue his push for global leadership. Or worse!