AI wrap-up - Friday June 2nd
There is so much AI news each week that it was taking over our digital pānui (newsletter), so we decided to pull that content out into a separate post each week.
There were three big themes in the AI news this week: we're all going to die, AI is creating demand for computer chips, and we should be worried about mis-, dis- and mal-information.
It is worth starting with the more positive news - Bill Gates is speculating that AI technology will be teaching our tamariki (children) literacy skills within 18 months! He acknowledges the technology still has a way to go to achieve this, but at the current rate of change 18 months isn't that far away!
It's good to hear there is a potential silver lining coming very soon, when other headlines right now read like this: "Headteachers warn 'bewildered' schools need more help to cope with rapid advances in AI".
We’re all going to die
In a new open letter by leading AI experts (more scientists this time), we are being warned that we risk extinction from AI: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This article has a more in-depth summary, with insights into the good and bad aspects of these letters and the dramas surrounding them.
If you have been watching this space closely, you will already be aware of ChaosGPT, described as an API accessible through the paid version of ChatGPT that has a mandate (apparently) to destroy humanity. Given the founders of OpenAI are signatories to the open letter above (and others) calling for regulation, I can only speculate that ChaosGPT was created to provoke discussion - I read one commentator describe it as a tongue-in-cheek experiment. But to be fair, I haven't looked too hard. This article provides a good argument for the need for regulation - either self-regulation or government - using ChaosGPT as its basis.
This article tells us that ChatGPT has either a sinister side or a sense of humour on this topic. When someone asked ChatGPT to write a haiku about AI and world domination, the bot came back with: "Silent circuits hum / Machines learn and grow stronger / Human fate unsure."
Conversely, we learned this week that ChatGPT can not only write a pretty good debate speech, it also told a Cambridge University debate team that it poses no threat to jobs or humanity.
And - scientists used AI to discover a new superbug-killing antibiotic, so maybe it's not all doom and gloom yet.
Demand for Computer Chips
Computational power is another really big winner of the AI research and development investment boom we have been seeing. Nvidia joined the $1 trillion club this week amid record demand for computer chips. Nvidia is the ninth company to hit $1 trillion, and this article goes on to tell us that only five others still hold this value - Apple, Microsoft, Alphabet, Amazon and Saudi Aramco. Note who is missing from this list, by the way.
"Nvidia's graphics processing units, or GPUs, are critical to generative AI platforms like OpenAI's ChatGPT and Google's Bard. The company has historically been a leader in the so-called discrete or stand-alone GPU field, but until recently, many consumers thought of GPUs as primarily used for intensive gaming." Nvidia is not the only one experiencing this boom - Advanced Micro Devices (AMD) and Taiwan Semiconductor Manufacturing Company (TSMC) have also seen massive demand uplifts in recent months.
However, there is also a looming shortage of this type of chip. It's explained in this podcast.
Mis-, Dis- and Mal-Information
Twitter has pulled out of the voluntary EU disinformation code, which seems to have come as a surprise to most commentators as the new Digital Services Act starts to come into effect on August 25th - when the designated Very Large Online Platforms (VLOPs) need to file their first risk assessment reports.
Former Green MP Gareth Hughes suggests that if we want "To combat AI this election, we need to rediscover the art of conversation". His article covers everything from the National Party using AI to generate images - and how this practice is not the same as using a stock image - through to the topic of the headline: "We need more avenues for ordinary people to have their opinions heard above the virtual shouting of online discourse."
This article, "Can a 'lonely guy in a garage' use AI to run misinformation campaigns which outdo the Russians?", from legal research platform Lexology, provides a great explanation of the threats generative models pose, and a really good list of ways we can protect ourselves. Worth a read.
Radio NZ's The Detail investigated the likely use of AI this election year.
Other AI News
Earlier this week, a judge in the USA ordered that all AI-generated content must be declared and checked by humans if it is to be used in his court - stopping short of banning it altogether. This came about after another case in which a lawyer submitted a brief citing six non-existent judicial decisions produced by our friendly AI, ChatGPT. It feels like some general guidance on when not to use AI technology needs to be issued.
Adobe is taking photo editing to the next level with their own AI investment.
The US and EU are drafting an AI code of conduct they want all democracies to sign, and say it will be released within a few weeks.
As ever, there is so much more news than I can possibly share here. Feel free to send me links to articles worth sharing with others. Ngā mihi, Vic