AI Wrap Friday June 30th
In a rare week where tickets to Taylor Swift concerts and political scandals dominated our mainstream headlines, there weren't as many "AI will take all of our jobs / increase our productivity / doom us all" articles for a change.
Speaking of cutting jobs though, this article should have civil servants worried! It predicts half of their roles will be automated thanks to AI by 2030! Now that is soon.
Sticking with political things, the Spinoff wrote this explainer for us on how to spot an AI-generated political advert (for now).
Women in AI Ethics
What really grabbed my attention this week, though, was this article from Women in AI Ethics, which provides a timeline of achievements by women in the AI space who are often overlooked by the media in favour of Silicon Valley founders and not celebrated enough.
The article has a few gems in it: a link to their 100 Brilliant Women in AI Ethics list, which is updated annually; a list of achievements by brilliant women in this space over the last 10 years; and this priceless quote:
“As powerful and wealthy men in Silicon Valley pontificate about the future existential risks of AI, while simultaneously profiting from AI wars, many women and others from marginalized communities around the world are fighting an uphill battle to keep humanity safe from the harms of recklessly developed and deployed AI.”
Gender bias in ChatGPT / generative AIs
One from Gene this week is the gender bias of ChatGPT. There are loads of articles out there with examples, and a range of Twitter threads, but I found this article with screenshots demonstrating the bias built into the model, and I proved the same by testing it myself.
Then I came across this article showcasing GirlfriendGPT (AI girlfriends), which made me shudder.
Back to the serious side and some analysis. Bloomberg's graphics on this site, "Humans Are Biased. Generative AI Is Even Worse", are a tiny bit mind-blowing. They analysed text-to-image outputs from Stability AI's Stable Diffusion, OpenAI's DALL-E, and other tools like them, generating 300 images of representative workers for 14 jobs typically considered high paying and 7 jobs typically considered low paying. What they found isn't in itself surprising: men with lighter skin tones primarily holding the high-paying roles, and more women and darker skin tones in the lower-paying jobs. They then compare the AI output against actual workforce data and, you guessed it, the AI bias is worse than reality. Powerful visualisations, worth taking a look at.
They also have an interview discussing this that you can watch.
Knowing we have gender bias already built into generative AI solutions is one thing; the next question is how we are going to address it.
For the data folks: Snowflake's acquisitions and new partnership with Nvidia could make it easier to build generative AI applications.
Free for one user: Helplook.net makes some bold claims, like being able to build a help centre in 5 minutes, and it's free to use, but only for one user, and I haven't tried it.
OpenAI is planning a personal assistant for work that runs on ChatGPT, but will it put them at odds with funder Microsoft?
AI for evil?
From the BBC this week, two articles on the darker side of AI:
To amuse you this Friday
A ditty on AI shared via the Digital Technology Teachers Association message board (yes, it's on Facebook, and I know many of you don't love that platform, but it's kinda funny).
I used Gencraft to generate today's image. My text prompt was "sunny day in Wellington, New Zealand capital city, showing the harbour". You get 10 free generations per day, you can choose any style, and it's really easy to use.
Thanks to everyone who shared links with me. They didn't all make it into this edition, but they will in future weeks. Keep them coming: [email protected]

Kia pai tō rā,
Vic